NASA Astrophysics Data System (ADS)
Metz, P. D.
A FORTRAN computer program called GROCS (GRound Coupled Systems) has been developed to study 3-dimensional underground heat flow. Features include the use of up to 30 finite elements or blocks of Earth which interact via finite difference heat flow equations, and a subprogram which sets realistic time- and depth-dependent boundary conditions. No explicit consideration of moisture movement or freezing is given. GROCS has been used to model the thermal behavior of buried solar heat storage tanks (with and without insulation) and serpentine pipe fields for solar heat pump space conditioning systems. The program is available independently or in a form compatible with specially written TRNSYS component TYPE subroutines. The approach taken in the design of GROCS, the mathematics it contains, and the program architecture are described. Then, the operation of the stand-alone version is explained. Finally, the validity of GROCS is discussed.
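The block-interaction idea in the abstract can be illustrated with a toy lumped-parameter update. This is only a sketch: GROCS's actual conductances, capacities, and boundary-condition subprogram are not described in the abstract, so every value below is hypothetical.

```python
# Minimal sketch of "blocks of earth" exchanging heat through an explicit
# finite-difference step (illustrative; not the GROCS equations).

def step_temperatures(T, conductance, capacity, dt):
    """Advance block temperatures by one explicit time step.

    T           : list of block temperatures
    conductance : conductance[i][j] = thermal conductance between blocks i, j
    capacity    : heat capacity of each block
    dt          : time step
    """
    n = len(T)
    T_new = list(T)
    for i in range(n):
        flow = sum(conductance[i][j] * (T[j] - T[i]) for j in range(n) if j != i)
        T_new[i] = T[i] + dt * flow / capacity[i]
    return T_new

# Two blocks at different temperatures relax toward a common value.
T = [10.0, 30.0]
G = [[0.0, 1.0], [1.0, 0.0]]   # hypothetical symmetric conductances
C = [1.0, 1.0]                 # hypothetical heat capacities
for _ in range(200):
    T = step_temperatures(T, G, C, 0.05)
```

With equal capacities the update conserves total heat while the temperature difference decays geometrically, which is the qualitative behavior such block models are built to capture.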
The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education
Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki
2012-01-01
Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759
Inouye, Joshua M; Lin, Kant Y; Perry, Jamie L; Blemker, Silvia S
2016-02-01
The convexity of the dorsal surface of the velum is critical for normal velopharyngeal (VP) function and is largely attributed to the levator veli palatini (LVP) and musculus uvulae (MU). Studies have correlated a concave or flat nasal velar surface to symptoms of VP dysfunction including hypernasality and nasal air emission. In the context of surgical repair of cleft palates, the MU has been given relatively little attention in the literature compared with the larger LVP. A greater understanding of the mechanics of the MU will provide insight into understanding the influence of a dysmorphic MU, as seen in cleft palate, as it relates to VP function. The purpose of this study was to quantify the contributions of the MU to VP closure in a computational model. We created a novel 3-dimensional (3D) finite element model of the VP mechanism from magnetic resonance imaging data collected from an individual with healthy noncleft VP anatomy. The model components included the velum, posterior pharyngeal wall (PPW), LVP, and MU. Simulations were based on the muscle and soft tissue mechanical properties from the literature. We found that, similar to previous hypotheses, the MU acts as (i) a space-occupying structure and (ii) a velar extensor. As a space-occupying structure, the MU helps to nearly triple the midline VP contact length. As a velar extensor, the MU acting alone without the LVP decreases the VP distance 62%. Furthermore, activation of the MU decreases the LVP activation required for closure almost 3-fold, from 20% (without MU) to 8% (with MU). Our study suggests that any possible salvaging and anatomical reconstruction of viable MU tissue in a cleft patient may improve VP closure due to its mechanical function. In the absence or dysfunction of MU tissue, implantation of autologous or engineered tissues at the velar midline, as a possible substitute for the MU, may produce a geometric convexity more favorable to VP closure. In the future, more complex models will
Werner, Heron; Rolo, Liliam Cristine; Araujo Júnior, Edward; Dos Santos, Jorge Roberto Lopes
2014-03-01
Technological innovations accompanying advances in medicine have given rise to the possibility of obtaining better-defined fetal images that assist in medical diagnosis and contribute toward genetic counseling offered to parents during the prenatal period. In this article, we show our innovative experience of diagnosing fetal malformations through correlating 3-dimensional ultrasonography, magnetic resonance imaging, and computed tomography, which are accurate techniques for fetal assessment, with a fetal image reconstruction technique to create physical fetal models. PMID:24901782
3-Dimensional Topographic Models for the Classroom
NASA Technical Reports Server (NTRS)
Keller, J. W.; Roark, J. H.; Sakimoto, S. E. H.; Stockman, S.; Frey, H. V.
2003-01-01
We have recently undertaken a program to develop educational tools using 3-dimensional solid models of digital elevation data acquired by the Mars Orbiter Laser Altimeter (MOLA) for Mars as well as a variety of sources for elevation data of the Earth. This work is made possible by the use of rapid prototyping technology to construct solid 3-dimensional models of science data. We recently acquired a rapid prototyping machine that builds 3-dimensional models in extruded plastic. While the machine was acquired to assist in the design and development of scientific instruments and hardware, it is also fully capable of producing models of spacecraft remote sensing data. We have demonstrated this by using MOLA topographic data and Earth-based topographic data to produce extruded plastic topographic models which are visually appealing and instantly engage those who handle them.
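The pipeline from elevation data to an extruded-plastic model ultimately reduces to writing a triangulated surface in a format a rapid-prototyping machine accepts. A minimal sketch (not the authors' actual toolchain) that triangulates a small elevation grid into ASCII STL text:

```python
# Sketch: turn a tiny elevation grid into an ASCII STL surface. Each grid
# cell is split into two triangles; normals are written as (0, 0, 1)
# placeholders since most slicing software recomputes them.

def grid_to_stl(heights, name="topo"):
    """heights[i][j] = elevation at grid point (i, j); returns ASCII STL."""
    lines = [f"solid {name}"]
    rows, cols = len(heights), len(heights[0])
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = (i, j, heights[i][j])
            b = (i + 1, j, heights[i + 1][j])
            c = (i, j + 1, heights[i][j + 1])
            d = (i + 1, j + 1, heights[i + 1][j + 1])
            for tri in ((a, b, c), (b, d, c)):
                lines.append("  facet normal 0 0 1")
                lines.append("    outer loop")
                for x, y, z in tri:
                    lines.append(f"      vertex {x} {y} {z}")
                lines.append("    endloop")
                lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A single 2x2 cell produces two facets.
stl_text = grid_to_stl([[0, 1], [1, 2]])
```

A real topographic model would also need side walls and a base to form a closed solid; this fragment shows only the top surface.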
NASA Technical Reports Server (NTRS)
Gibson, S. G.
1983-01-01
A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.
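Parametric bicubic patches of the kind used here can be evaluated one coordinate at a time by applying a cubic rule along each parameter direction. A minimal sketch using Bézier-style de Casteljau evaluation; the report's exact patch basis is not stated in the abstract, so the Bézier form is an assumption:

```python
# Sketch: evaluate a bicubic parametric patch (per coordinate) by nested
# de Casteljau steps. ctrl is a 4x4 grid of scalar control values; a full
# surface point would evaluate x, y, z control grids separately.

def de_casteljau(points, t):
    """Evaluate a cubic Bezier curve (4 control values) at parameter t."""
    while len(points) > 1:
        points = [(1 - t) * p + t * q for p, q in zip(points, points[1:])]
    return points[0]

def bicubic_patch(ctrl, u, v):
    """Apply the curve rule along v in each row, then along u."""
    return de_casteljau([de_casteljau(row, v) for row in ctrl], u)

flat = [[1.0] * 4 for _ in range(4)]
ctrl = [[float(i + j) for j in range(4)] for i in range(4)]
```

A Bézier patch interpolates its corner control points, which gives a quick correctness check at (u, v) = (0, 0) and (1, 1).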
Incorporating 3-dimensional models in online articles
Cevidanes, Lucia H. S.; Ruellas, Antonio C. O.; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz
2015-01-01
Introduction The aims of this article were to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Methods Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. Results All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article’s online version for viewing and downloading using the reader’s software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader’s software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. Conclusions When submitting manuscripts, authors can
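Of the file formats listed (.obj, .ply, .stl, .vtk), .obj is the simplest: a surface model is a vertex list followed by 1-based face records. A minimal illustrative writer, not tied to any particular viewer:

```python
# Sketch: serialize a triangle mesh to Wavefront .obj text
# (vertices first, then faces; .obj face indices are 1-based).

def write_obj(vertices, faces):
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += ["f " + " ".join(str(i + 1) for i in face) for face in faces]
    return "\n".join(lines) + "\n"

# A single triangle.
obj_text = write_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

Registered label maps exported this way (or in the richer .vtk format actually used in the study) can then be loaded by the reader's software of choice for overlays and distance maps.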
From 2-dimensional cephalograms to 3-dimensional computed tomography scans.
Halazonetis, Demetrios J
2005-05-01
Computed tomography is entering the orthodontic specialty as a mainstream diagnostic modality. Radiation exposure and cost have decreased significantly, and the diagnostic value is very high compared with traditional radiographic options. However, 3-dimensional data present new challenges and need a different approach from traditional viewing of static images to make the most of the available possibilities. Advances in computer hardware and software now enable interactive display of the data on personal computers, with the ability to selectively view soft or hard tissues from any angle. Transfer functions are used to apply transparency and color. Cephalometric measurements can be taken by digitizing points in 3-dimensional coordinates. Application of 3-dimensional data is expected to increase significantly soon and might eventually replace many conventional orthodontic records that are in use today. PMID:15877045
NASA Astrophysics Data System (ADS)
Bachche, Shivaji; Oka, Koichi
2013-06-01
This paper presents a comparative study of various color space models to determine the most suitable one for the detection of green sweet peppers. Images were captured using CCD and infrared cameras and processed with Halcon image processing software. An LED ring around the camera neck was used as artificial lighting to enhance the feature parameters. For the color images, the CIELab, YIQ, YUV, HSI, and HSV color space models were evaluated, whereas grayscale processing was used for the infrared images. Among the color models, HSV gave the highest percentage of green sweet pepper detections, followed by HSI, as both express color in terms of hue/lightness/chroma or hue/lightness/saturation, which are often more relevant for discriminating fruit within an image at a specific threshold value. Overlapping fruits, or fruits covered by leaves, were detected better with the HSV color space model because the reflection feature of fruits had a higher histogram response than that of leaves. The IR 80 optical filter failed to distinguish fruits in the images, as the filter blocks useful feature information. Computation of the 3D coordinates of the recognized green sweet peppers was also conducted, in which the Halcon software provided the location and orientation of the fruits accurately. The depth accuracy along the Z axis was examined: a camera-to-fruit distance of 500 to 600 mm was found to yield precise depth estimates when the baseline between the two cameras was maintained at 100 mm.
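The HSV-thresholding idea can be sketched with the standard RGB-to-HSV conversion: a pixel counts as "green" when its hue falls in a green band and its saturation and value are high enough. The band and thresholds below are illustrative placeholders, not the values used in the study.

```python
import colorsys

# Sketch of HSV color thresholding for green-fruit detection.
# colorsys expresses hue in [0, 1], with pure green near 1/3.

def is_green(r, g, b, hue_band=(0.20, 0.45), min_s=0.3, min_v=0.2):
    """r, g, b in [0, 1]; returns True if the pixel falls in the
    (hypothetical) green hue band with sufficient saturation and value."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return hue_band[0] <= h <= hue_band[1] and s >= min_s and v >= min_v
```

The saturation and value floors are what let such a rule reject gray or dark pixels that happen to have a greenish hue, which is one reason hue-based spaces outperform raw RGB thresholds for this task.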
Unification of color postprocessing techniques for 3-dimensional computational mechanics
NASA Technical Reports Server (NTRS)
Bailey, Bruce Charles
1985-01-01
To facilitate the understanding of complex three-dimensional numerical models, advanced interactive color postprocessing techniques are introduced. These techniques are sufficiently flexible so that postprocessing difficulties arising from model size, geometric complexity, response variation, and analysis type can be adequately overcome. Finite element, finite difference, and boundary element models may be evaluated with the prototype postprocessor. Elements may be removed from parent models to be studied as independent subobjects. Discontinuous responses may be contoured including responses which become singular, and nonlinear color scales may be input by the user for the enhancement of the contouring operation. Hit testing can be performed to extract precise geometric, response, mesh, or material information from the database. In addition, stress intensity factors may be contoured along the crack front of a fracture model. Stepwise analyses can be studied, and the user can recontour responses repeatedly, as if he were paging through the response sets. As a system, these tools allow effective interpretation of complex analysis results.
Computer-assisted 3-dimensional anthropometry of the scaphoid.
Pichler, Wolfgang; Windisch, Gunther; Schaffler, Gottfried; Heidari, Nima; Dorr, Katrin; Grechenig, Wolfgang
2010-02-01
Scaphoid fracture fixation using a cannulated headless compression screw and the Matti-Russe procedure for the treatment of scaphoid nonunions are performed routinely. Surgeons performing these procedures need to be familiar with the anatomy of the scaphoid. A literature review reveals relatively few articles on this subject. The goal of this anatomical study was to measure the scaphoid using current technology and to discuss the findings with respect to the current, relevant literature. Computed tomography scans of 30 wrists were performed using a 64-slice SOMATOM Sensation CT system (resolution 0.6 mm) (Siemens Medical Solutions Inc, Malvern, Pennsylvania). Three-dimensional reconstructions from the raw data were generated by MIMICS software (Materialise, Leuven, Belgium). The scaphoid had a mean length of 26.0 mm (range, 22.3-30.7 mm), and men had a significantly longer (P<.001) scaphoid than women (27.8±1.6 mm vs 24.5±1.6 mm, respectively). The width and height were measured at 3 different levels for volume calculations, resulting in a mean volume of 3389.5 mm³. Men had a significantly larger (P<.001) scaphoid volume than women (4057.8±740.7 mm³ vs 2846.5±617.5 mm³, respectively). We found considerable variation in the length and volume of the scaphoid in our cohort. We also demonstrated a clear correlation between scaphoid size and sex. Surgeons performing operative fixation of scaphoid fractures and corticocancellous bone grafting for nonunions need to be familiar with these anatomical variations. PMID:20192143
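The abstract says width and height were measured at three levels "for volume calculations" but does not give the formula. One plausible reconstruction, offered purely as an illustration, treats each cross-section as an ellipse and integrates the areas along the bone with the trapezoidal rule:

```python
import math

# Hypothetical volume estimate from width/height measured at a few levels:
# each cross-section is approximated as an ellipse (area = pi/4 * w * h)
# and areas are integrated along the length by the trapezoidal rule.
# The study's actual volume formula is not stated in the abstract.

def volume_from_sections(levels):
    """levels: list of (position_mm, width_mm, height_mm), ordered by position."""
    areas = [(z, math.pi / 4 * w * h) for z, w, h in levels]
    vol = 0.0
    for (z0, a0), (z1, a1) in zip(areas, areas[1:]):
        vol += 0.5 * (a0 + a1) * (z1 - z0)
    return vol

# Sanity check with a "cylinder": constant 10 x 10 mm section over 20 mm.
v = volume_from_sections([(0, 10, 10), (10, 10, 10), (20, 10, 10)])
```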
Development and Validation of a 3-Dimensional CFB Furnace Model
NASA Astrophysics Data System (ADS)
Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti
At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation, and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results from industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of 3-dimensional process modeling, providing a chain of knowledge that is fed back into phenomenon research. Knowledge gathered in model validation studies, together with up-to-date parameter databases, is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to modeling the combustion and the formation of char and volatiles for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat, and biofuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace, and they are used together with lateral temperature profiles at the bed and in the upper parts of the furnace to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed by the lower-furnace design and the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents
MT3D: a 3 dimensional magnetotelluric modeling program (user's guide and documentation for Rev. 1)
Nutter, C.; Wannamaker, P.E.
1980-11-01
MT3D.REV1 is a non-interactive computer program written in FORTRAN to do 3-dimensional magnetotelluric modeling. A 3-D volume integral equation has been adapted to simulate the MT response of a 3D body in the earth. An integro-difference scheme has been incorporated to increase the accuracy. This is a user's guide for MT3D.REV1 on the University of Utah Research Institute's (UURI) PRIME 400 computer operating under PRIMOS IV, Rev. 17.
Particle trajectory computation on a 3-dimensional engine inlet. Final Report Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kim, J. J.
1986-01-01
A 3-dimensional particle trajectory computer code was developed to compute the distribution of water droplet impingement efficiency on a 3-dimensional engine inlet. The computed results provide the essential droplet impingement data required for the engine inlet anti-icing system design and analysis. The droplet trajectories are obtained by solving the trajectory equation using the fourth order Runge-Kutta and Adams predictor-corrector schemes. A compressible 3-D full potential flow code is employed to obtain a cylindrical grid definition of the flowfield on and about the engine inlet. The inlet surface is defined mathematically through a system of bi-cubic parametric patches in order to compute the droplet impingement points accurately. Analysis results of the 3-D trajectory code obtained for an axisymmetric droplet impingement problem are in good agreement with NACA experimental data. Experimental data are not yet available for the engine inlet impingement problem analyzed. Applicability of the method to solid particle impingement problems, such as engine sand ingestion, is also demonstrated.
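The trajectory integration described above can be sketched in miniature: a droplet is dragged toward the local air velocity and its state is advanced with classical fourth-order Runge-Kutta. The uniform flowfield and the drag time constant below are illustrative stand-ins for the potential-flow solution and droplet physics used in the real code.

```python
# Sketch: RK4 integration of a droplet relaxing toward the air velocity.

def rk4_step(state, dt, deriv):
    """One classical 4th-order Runge-Kutta step for state' = deriv(state)."""
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def droplet_deriv(state, air_velocity=(10.0, 0.0), tau=0.05):
    """state = [x, y, vx, vy]; Stokes-like drag relaxes the droplet velocity
    toward the air's with (hypothetical) time constant tau."""
    x, y, vx, vy = state
    ax = (air_velocity[0] - vx) / tau
    ay = (air_velocity[1] - vy) / tau
    return [vx, vy, ax, ay]

state = [0.0, 0.0, 0.0, 0.0]   # droplet released at rest at the origin
for _ in range(1000):           # integrate 1 s with dt = 0.001 s
    state = rk4_step(state, 0.001, droplet_deriv)
```

For this linear drag law the exact solution is known (vx = U(1 - e^(-t/tau))), so the integrator's accuracy is easy to verify before attaching a real flowfield.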
Using 3-dimensional printing to create presurgical models for endodontic surgery.
Bahcall, James K
2014-09-01
Advances in endodontic surgery, from both a technological and a procedural perspective, have been significant over the last 18 years. Although these technologies and procedural enhancements have significantly improved endodontic surgical treatment outcomes, there is still an ongoing challenge in overcoming the limitations of interpreting a preoperative 2-dimensional (2-D) radiographic representation of a 3-dimensional (3-D) in vivo surgical field. Cone-beam computed tomography (CBCT) has helped address this issue by providing a 3-D enhancement of the 2-D radiograph. The next logical step in improving presurgical 3-D case assessment is to create a surgical model from the CBCT scan. The purpose of this article is to introduce 3-D printing of CBCT scans for creating presurgical models for endodontic surgery. PMID:25197746
Gálvez, Jorge A; Gralewski, Kevin; McAndrew, Christine; Rehman, Mohamed A; Chang, Benjamin; Levin, L Scott
2016-03-01
Children are not typically considered for hand transplantation for various reasons, including the difficulty of finding an appropriate donor. Matching donor-recipient hands and forearms based on size is critically important. If the donor's hands are too large, the recipient may not be able to move the fingers effectively. Conversely, if the donor's hands are too small, the appearance may not be appropriate. We present an 8-year-old child evaluated for a bilateral hand transplant following bilateral amputation. The recipient forearms and model hands were modeled from computed tomography imaging studies and replicated as anatomic models with a 3-dimensional printer. We modified the scale of the printed hand to produce 3 proportions, 80%, 100% and 120%. The transplant team used the anatomical models during evaluation of a donor for appropriate match based on size. The donor's hand size matched the 100%-scale anatomical model hand and the transplant team was activated. In addition to assisting in appropriate donor selection by the transplant team, the 100%-scale anatomical model hand was used to create molds for prosthetic hands for the donor. PMID:26810827
NASA Astrophysics Data System (ADS)
Yang, Wenming; An, Hui; Amin, Maghbouli; Li, Jing
2014-11-01
A 3-dimensional computational fluid dynamics modeling study was conducted on a direct-injection diesel engine fueled by biodiesel, using the multi-dimensional software KIVA4 coupled with CHEMKIN. To accurately predict the oxidation of the saturated and unsaturated components of the biodiesel fuel, a multicomponent advanced combustion model consisting of 69 species and 204 reactions, combined with detailed oxidation pathways of methyl decanoate (C11H22O2), methyl-9-decenoate (C11H20O2), and n-heptane (C7H16), is employed in this work. In order to better represent the real fuel properties, the detailed chemical and thermo-physical properties of biodiesel, such as vapor pressure, latent heat of vaporization, liquid viscosity, and surface tension, were calculated and compiled into the KIVA4 fuel library. The nitrogen monoxide (NO) and carbon monoxide (CO) formation mechanisms were also embedded. After validating the numerical simulation model by comparing the in-cylinder pressure and heat release rate curves with experimental results, further studies were carried out to investigate the effect of combustion chamber design on the flow field, and subsequently on the combustion process and performance of the biodiesel-fueled diesel engine. Research was also done to investigate the impact of fuel injector location on the performance and emissions formation of the engine.
3-dimensional orthodontics visualization system with dental study models and orthopantomograms
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ong, S. H.; Foong, K. W. C.; Dhar, T.
2005-04-01
The aim of this study is to develop a system that provides 3-dimensional visualization of orthodontic treatments. Dental plaster models and the corresponding orthopantomogram (dental panoramic tomogram) are first digitized and fed into the system. A semi-automatic segmentation technique is applied to the plaster models to detect the dental arches, tooth interstices, and gum margins, which are used to extract individual crown models. A 3-dimensional representation of the roots, generated by deforming generic tooth models to the orthopantomogram using radial basis functions, is attached to the corresponding crowns to enable visualization of complete teeth. An optional algorithm that closes the gaps between the deformed roots and the actual crowns using multiquadric radial basis functions is also presented, which is capable of generating a smooth mesh representation of complete 3-dimensional teeth. The user interface is carefully designed to achieve a flexible system with as much user friendliness as possible. Manual calibration and correction are possible throughout the data processing steps to compensate for occasional misbehavior of the automatic procedures. By allowing users to move and rearrange individual teeth (with their roots) on a full dentition, this orthodontic visualization system provides an easy and accurate way of simulating and planning orthodontic treatment. Its capability of presenting 3-dimensional root information with only study models and an orthopantomogram is especially useful for patients who do not undergo CT scanning, which is not a routine procedure in most orthodontic cases.
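The radial-basis-function deformation used above works by fitting an interpolant to displacements known at landmark points and evaluating it anywhere else. A 1-D sketch with the multiquadric basis mentioned in the abstract (the paper's 3-D setup is analogous; the tiny dense solver is included only to keep the example self-contained):

```python
# Sketch: multiquadric RBF interpolation of landmark displacements.

def multiquadric(r, c=1.0):
    """Multiquadric basis; c is a (hypothetical) shape parameter."""
    return (r * r + c * c) ** 0.5

def solve(A, b):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c2 in range(col, n + 1):
                M[r][c2] -= f * M[col][c2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c2] * x[c2] for c2 in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, values):
    """Fit weights so the interpolant reproduces `values` at points `xs`."""
    A = [[multiquadric(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, values)
    return lambda x: sum(wi * multiquadric(abs(x - xi)) for wi, xi in zip(w, xs))

# Known displacements at three landmarks; the fit reproduces them exactly.
f = rbf_fit([0.0, 1.0, 2.0], [0.0, 0.5, 0.1])
```

Because the multiquadric system is nonsingular for distinct points, the interpolant passes exactly through the landmark data, which is what makes RBFs attractive for warping a generic root model onto patient-specific crowns.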
The aim of this project is to develop three-dimensional computer simulations for aerosol transport and deposition in human respiratory tract. Three-dimensional CFPD (computational fluid and particle dynamics) modeling is a powerful tool to obtain microscopic dose information at l...
Creating 3-dimensional Models of the Photosphere using the SIR Code
NASA Astrophysics Data System (ADS)
Thonhofer, S.; Utz, D.; Jurčák, J.; Pauritsch, J.; Hanslmeier, A.; Lemmerer, B.
A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of magnetic features such as sunspots, pores, or smaller elements like single flux tubes seen as magnetic bright points. The SIR code is an advanced inversion code that retrieves physical quantities, e.g. the magnetic field, from Stokes profiles. Based on this code, we developed a program for the automated inversion of Hinode SOT/SP data and for storing the results in 3-dimensional data cubes in the form of FITS files. We obtained models of the temperature, magnetic field strength, magnetic field angles, and LOS velocity in a region of the quiet Sun. We give a first discussion of those parameters with regard to small-scale magnetic fields and of what can be obtained and learned in the future.
3-Dimensional Marine CSEM Modeling by Employing TDFEM with Parallel Solvers
NASA Astrophysics Data System (ADS)
Wu, X.; Yang, T.
2013-12-01
In this paper, a parallel implementation is developed for forward modeling of 3-dimensional controlled-source electromagnetic (CSEM) data using the time-domain finite element method (TDFEM). Recently, greater attention has been paid to the mechanisms of hydrocarbon (HC) reservoir detection beneath the seabed. Since China has vast ocean resources, the search for hydrocarbon reservoirs is significant for the national economy. However, traditional seismic exploration methods face a crucial obstacle in detecting hydrocarbon reservoirs beneath a seabed with complex structure, owing to relatively high acquisition costs and high-risk exploration. In addition, the development of EM simulations typically requires both deep knowledge of computational electromagnetics (CEM) and the proper use of sophisticated techniques and tools from computer science. The complexity of large-scale EM simulations often demands large memory, because of the large amount of data, or long solution times for matrix solvers, function transforms, optimization, etc. The objective of this paper is to present a parallelized implementation of the time-domain finite element method for the analysis of three-dimensional (3D) marine controlled-source electromagnetic problems. First, we established a three-dimensional background model from the seismic data; electromagnetic simulation of marine CSEM was then carried out using the time-domain finite element method on an MPI (Message Passing Interface) platform, allowing fast detection of hydrocarbon targets in the ocean environment. To speed up the calculation, SuperLU_DIST, the MPI version of SuperLU, is employed in this approach. To represent the three-dimensional seabed terrain realistically, the region is discretized into an unstructured mesh rather than a uniform one in order to reduce the number of unknowns. Moreover, high-order Whitney
Zopf, David A.; Mitsak, Anna G.; Flanagan, Colleen L.; Wheeler, Matthew; Green, Glenn E.; Hollister, Scott J.
2016-01-01
Objectives To determine the potential of integrated image-based Computer Aided Design (CAD) and 3D printing approach to engineer scaffolds for head and neck cartilaginous reconstruction for auricular and nasal reconstruction. Study Design Proof of concept revealing novel methods for bioscaffold production with in vitro and in vivo animal data. Setting Multidisciplinary effort encompassing two academic institutions. Subjects and Methods DICOM CT images are segmented and utilized in image-based computer aided design to create porous, anatomic structures. Bioresorbable, polycaprolactone scaffolds with spherical and random porous architecture are produced using a laser-based 3D printing process. Subcutaneous in vivo implantation of auricular and nasal scaffolds was performed in a porcine model. Auricular scaffolds were seeded with chondrogenic growth factors in a hyaluronic acid/collagen hydrogel and cultured in vitro over 2 months duration. Results Auricular and nasal constructs with several microporous architectures were rapidly manufactured with high fidelity to human patient anatomy. Subcutaneous in vivo implantation of auricular and nasal scaffolds resulted in excellent appearance and complete soft tissue ingrowth. Histologic analysis of in vitro scaffolds demonstrated native appearing cartilaginous growth respecting the boundaries of the scaffold. Conclusions Integrated image-based computer-aided design (CAD) and 3D printing processes generated patient-specific nasal and auricular scaffolds that supported cartilage regeneration. PMID:25281749
TP Clement
1999-06-24
RT3DV1 (Reactive Transport in 3 Dimensions) is a computer code that solves the coupled partial differential equations describing reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in the groundwater head distribution. The RT3D code was originally developed to support contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants, including benzene-toluene-xylene mixtures (BTEX) and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other type of user-specified reactive transport system. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
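The split between a transport solver and a pluggable reaction module can be illustrated in one dimension: an explicit upwind advection step followed by a user-defined kinetic term, here simple first-order decay of the kind used for chlorinated solvents. Grid, velocity, and rate constant are illustrative, and this operator-split toy is far simpler than RT3D's implicit reaction solver.

```python
import math

# Sketch: 1-D advection (explicit upwind) + user-defined reaction step.

def advect_upwind(C, velocity, dx, dt):
    """Explicit upwind advection; inflow boundary concentration is zero."""
    cfl = velocity * dt / dx
    return [C[i] - cfl * (C[i] - (C[i - 1] if i > 0 else 0.0))
            for i in range(len(C))]

def first_order_decay(C, k, dt):
    """User-defined reaction module: dC/dt = -k C, integrated exactly."""
    return [c * math.exp(-k * dt) for c in C]

C = [0.0] * 50
C[0] = 1.0                     # a slug of contaminant in the inlet cell
for _ in range(60):            # CFL = 0.5, total simulated time 30
    C = advect_upwind(C, velocity=1.0, dx=1.0, dt=0.5)
    C = first_order_decay(C, k=0.1, dt=0.5)
```

After 60 steps the slug has moved about 30 cells downstream while its total mass has decayed by very nearly exp(-0.1 * 30), which is a useful analytic check before swapping in more complex kinetics.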
Maschio, Federico; Pandya, Mirali; Olszewski, Raphael
2016-01-01
BACKGROUND The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modeling printer. MATERIAL AND METHODS Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (Up plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. RESULTS The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. CONCLUSIONS Plastic models generated using the low-cost 3D printer UPplus2® provide dimensional accuracies comparable to other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward the better accessibility of rapid prototyping technologies in the medical field. PMID:27003456
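The two accuracy metrics reported above are simple to compute from paired distance measurements. A minimal sketch, with made-up distances rather than the study's data:

```python
def accuracy_metrics(reference_mm, model_mm):
    """Mean absolute difference (MAD, in mm) and mean dimensional error
    (MDE, in %) between reference distances and the corresponding
    distances measured on a printed replica."""
    pairs = list(zip(reference_mm, model_mm))
    mad = sum(abs(r - m) for r, m in pairs) / len(pairs)
    mde = 100.0 * sum(abs(r - m) / r for r, m in pairs) / len(pairs)
    return mad, mde

# Illustrative distances only (not the study's measurements).
ref = [10.0, 20.0, 40.0]
mod = [10.4, 19.8, 40.2]
mad, mde = accuracy_metrics(ref, mod)
```

Note that MDE weights each absolute error by the reference length, which is why the study's MDE drops sharply for distances greater than 12 mm.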
Swanson, Jordan W.; Mitchell, Brianne T.; Wink, Jason A.; Taylor, Jesse A.
2016-01-01
Background: Grading systems of the mandibular deformity in craniofacial microsomia (CFM) based on conventional radiographs have shown low interrater reproducibility among craniofacial surgeons. We sought to design and validate a classification based on 3-dimensional CT (3dCT) that correlates features of the deformity with surgical treatment. Methods: CFM mandibular deformities were classified as normal (T0), mild (hypoplastic, likely treated with orthodontics or orthognathic surgery; T1), moderate (vertically deficient ramus, likely treated with distraction osteogenesis; T2), or severe (ramus rudimentary or absent, with either adequate or inadequate mandibular body bone stock; T3 and T4, likely treated with costochondral graft or free fibular flap, respectively). The 3dCT face scans of CFM patients were randomized and then classified by craniofacial surgeons. Pairwise agreement and Fleiss' κ were used to assess interrater reliability. Results: The 3dCT images of 43 patients with CFM (aged 0.1–15.8 years) were reviewed by 15 craniofacial surgeons, representing an average of 15.2 years of experience. Reviewers demonstrated fair interrater reliability with average pairwise agreement of 50.4 ± 9.9% (Fleiss' κ = 0.34). This represents significant improvement over the Pruzansky–Kaban classification (pairwise agreement, 39.2%; P = 0.0033). Reviewers demonstrated substantial interrater reliability with average pairwise agreement of 83.0 ± 7.6% (κ = 0.64) distinguishing deformities requiring graft or flap reconstruction (T3 and T4) from others. Conclusion: The proposed classification, designed for the era of 3dCT, shows improved consensus with respect to stratifying the severity of mandibular deformity and type of operative management. PMID:27104097
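Average pairwise agreement, the simpler of the two reliability measures used above, is just the fraction of rater pairs that assign the same grade, averaged over subjects. A small sketch with hypothetical raters and grades:

```python
from itertools import combinations

def pairwise_agreement(ratings):
    """Average pairwise agreement across raters.
    ratings[r][s] is the category assigned by rater r to subject s."""
    agree = 0
    total = 0
    for a, b in combinations(range(len(ratings)), 2):
        for x, y in zip(ratings[a], ratings[b]):
            agree += (x == y)
            total += 1
    return agree / total

# Three hypothetical raters grading four mandibles on the T0-T4 scale.
raters = [
    ["T1", "T2", "T3", "T0"],
    ["T1", "T2", "T4", "T0"],
    ["T1", "T3", "T4", "T0"],
]
```

Fleiss' κ then corrects this raw agreement for the agreement expected by chance, which is why the two numbers reported in the abstract differ.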
Role of the Animator in the Generation of 3-Dimensional Computer Generated Animation.
ERIC Educational Resources Information Center
Wedge, John Christian
This master's thesis investigates the relationship between the traditional animator and the computer, as computer animation systems allow the animator to apply traditional skills with a high degree of success. The advantages and disadvantages of traditional animation as a medium for expressing motion and character are noted, and it is argued that the…
Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies
NASA Technical Reports Server (NTRS)
Reyhner, T. A.
1982-01-01
An analysis was developed and a computer code, P465 Version A, was written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.
NASA Astrophysics Data System (ADS)
Kushner, Mark J.; Grapperhaus, Michael J.
1996-10-01
Inductively Coupled Plasma (ICP) reactors have the potential for scaling to large area substrates while maintaining azimuthal symmetry or side-to-side uniformity across the wafer. Asymmetric etch properties in these devices have been attributed to transmission line properties of the coil, internal structures (such as wafer clamps) and non-uniform gas injection or pumping. To investigate the origins of asymmetric etch properties, a 3-dimensional hybrid model has been developed. The hybrid model contains electromagnetic, electric circuit, electron energy equation, and fluid modules. Continuity and momentum equations are solved in the fluid module along with Poisson's equation. We will discuss results for ion and radical flux uniformity to the substrate while varying the transmission line characteristics of the coil, symmetry of gas inlets/pumping, and internal structures. Comparisons will be made to experimental measurements of etch rates. *Work supported by SRC, NSF, ARPA/AFOSR, and LAM Research.
NASA Astrophysics Data System (ADS)
Gumus, Kutalmis; Erkaya, Halil
2013-04-01
In terrestrial laser scanning (TLS) applications, it is necessary to take into consideration the conditions that affect the scanning process, especially the general characteristics of the laser scanner, the geometric properties of the scanned object (shape, size, etc.), and its spatial location in the environment. Three-dimensional models obtained with TLS allow the geometric features and relevant magnitudes of the scanned object to be determined indirectly. In order to assess the spatial location and geometric accuracy of a 3-dimensional model created by terrestrial laser scanning, it is necessary to use measurement tools that give more precise results than TLS. Geometric comparisons are performed by analyzing the differences in distances, angles between surfaces, and cross-section measurements between the 3-dimensional model created with TLS and the values measured by other devices. The performance of the scanners and the size and shape of the scanned objects are tested using reference objects whose sizes are determined with high precision. In this study, the important points to consider when choosing reference objects were highlighted. The steps from processing the point clouds collected by scanning, through regularizing these points, to modeling in 3 dimensions were presented visually. In order to test the geometric correctness of the models obtained by terrestrial laser scanners, sample objects with simple geometric shapes, such as cubes, rectangular prisms, and cylinders made of concrete, were used as reference models. Three-dimensional models were generated by scanning these reference models with a Trimble Mensi GS 100. The dimensions of the 3D models created from the point clouds were compared with the precisely measured dimensions of the reference objects. For this purpose, horizontal and vertical cross-sections were taken from the reference objects and the generated 3D models, and the proximity of
Siler, Drew L; Faulds, James E; Mayhew, Brett
2013-04-16
Geothermal systems in the Great Basin, USA, are controlled by a variety of fault intersection and fault interaction areas. Understanding the specific geometry of the structures most conducive to broad-scale geothermal circulation is crucial to both the mitigation of the costs of geothermal exploration (especially drilling) and to the identification of geothermal systems that have no surface expression (blind systems). 3-dimensional geologic modeling is a tool that can elucidate the specific stratigraphic intervals and structural geometries that host geothermal reservoirs. Astor Pass, NV USA lies just beyond the northern extent of the dextral Pyramid Lake fault zone near the boundary between two distinct structural domains, the Walker Lane and the Basin and Range, and exhibits characteristics of each setting. Both northwest-striking, left-stepping dextral faults of the Walker Lane and kinematically linked northerly striking normal faults associated with the Basin and Range are present. Previous studies at Astor Pass identified a blind geothermal system controlled by the intersection of west-northwest and north-northwest striking dextral-normal faults. Wells drilled into the southwestern quadrant of the fault intersection yielded 94°C fluids, with geothermometers suggesting a maximum reservoir temperature of 130°C. A 3-dimensional model was constructed based on detailed geologic maps and cross-sections, 2-dimensional seismic data, and petrologic analysis of the cuttings from three wells in order to further constrain the structural setting. The model reveals the specific geometry of the fault interaction area at a level of detail beyond what geologic maps and cross-sections can provide.
Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code
NASA Technical Reports Server (NTRS)
Weinberg, B. C.; Mcdonald, H.
1980-01-01
There is considerable interest in developing a numerical scheme for solving the time dependent viscous compressible three dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.
A 3-dimensional DTI MRI-based model of GBM growth and response to radiation therapy.
Hathout, Leith; Patel, Vishal; Wen, Patrick
2016-09-01
Glioblastoma (GBM) is both the most common and the most aggressive intra-axial brain tumor, with a notoriously poor prognosis. To improve this prognosis, it is necessary to understand the dynamics of GBM growth, response to treatment and recurrence. The present study presents a mathematical diffusion-proliferation model of GBM growth and response to radiation therapy based on diffusion tensor imaging (DTI) MRI. This represents an important advance because it allows 3-dimensional tumor modeling in the anatomical context of the brain. Specifically, tumor infiltration is guided by the direction of the white matter tracts along which glioma cells infiltrate. This provides the potential to model different tumor growth patterns based on location within the brain, and to simulate the tumor's response to different radiation therapy regimens. Tumor infiltration across the corpus callosum is simulated in biologically accurate time frames. The response to radiation therapy, including changes in cell density gradients and how these compare across different radiation fractionation protocols, can be rendered. Also, the model can estimate the amount of subthreshold tumor which has extended beyond the visible MR imaging margins. When combined with the ability to estimate the biological parameters of invasiveness and proliferation of a particular GBM from serial MRI scans, it is shown that the model has potential to simulate realistic tumor growth, response and recurrence patterns in individual patients. To the best of our knowledge, this is the first presentation of a DTI-based GBM growth and radiation therapy treatment model. PMID:27572745
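Diffusion-proliferation glioma models of this family are commonly built on the Fisher-KPP equation. A minimal 1-dimensional, isotropic sketch with illustrative parameters (the paper's DTI-based model replaces the scalar diffusivity with a direction-dependent tensor field):

```python
import numpy as np

def fisher_kpp_step(c, D, rho, dx, dt):
    """One explicit time step of the Fisher-KPP equation
    dc/dt = D * d2c/dx2 + rho * c * (1 - c), the core of many
    diffusion-proliferation glioma growth models. c is normalized
    cell density, D a diffusivity, rho a proliferation rate."""
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    return c + dt * (D * lap + rho * c * (1 - c))

# Illustrative parameters only: D in mm^2/day, rho in 1/day.
x = np.linspace(0, 100, 201)                  # 100 mm domain, dx = 0.5 mm
c = np.where(np.abs(x - 50) < 2, 0.5, 0.0)    # small initial tumor seed
for _ in range(2000):                          # 20 days with dt = 0.01 day
    c = fisher_kpp_step(c, D=0.1, rho=0.2, dx=0.5, dt=0.01)
```

The characteristic traveling-wave behavior (an infiltrating front of speed 2·sqrt(D·rho)) emerges from this equation, which is what lets serial MRI scans constrain D and rho for an individual tumor.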
NASA Astrophysics Data System (ADS)
Zamora, A.; Gutierrez, A. E.; Velasco, A. A.
2014-12-01
2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point constrained optimization method, we improve the 2-D and 3-D models of Earth structures representing known density contrasts mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible), thereby reducing the solution space.
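Bound constraints are what remove the geologically unfeasible solutions. As a hedged stand-in for an interior-point solver, a projected-gradient least-squares loop on a toy linear gravity problem (the kernel matrix, bounds, and density contrasts below are all invented for illustration):

```python
import numpy as np

def bounded_inversion(A, g, lb, ub, steps=20000):
    """Projected-gradient least squares: minimize ||A x - g||^2 subject
    to lb <= x <= ub. A crude stand-in for the interior-point method of
    the abstract, shown only to illustrate how bound constraints shrink
    the solution space of an otherwise ill-posed inversion."""
    lr = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size below 1/Lipschitz
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * (A.T @ (A @ x - g))       # gradient step
        x = np.clip(x, lb, ub)              # project onto the bounds
    return x

# Toy problem: g = A @ rho, A mapping block density contrasts (kg/m^3)
# to surface gravity readings. Not a real gravity kernel.
rng = np.random.default_rng(0)
A = rng.random((20, 5))
rho_true = np.array([0.0, 300.0, 0.0, -150.0, 0.0])
g = A @ rho_true
rho = bounded_inversion(A, g, lb=-200.0, ub=400.0)
```

Because the true contrasts lie inside the bounds and the toy system is overdetermined, the constrained solver recovers them; in a real inversion the bounds instead select among the many models that fit the data equally well.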
Interactive 3-dimensional segmentation of MRI data in personal computer environment.
Yoo, S S; Lee, C U; Choi, B G; Saiviroonporn, P
2001-11-15
We describe a method of interactive three-dimensional segmentation and visualization for anatomical magnetic resonance imaging (MRI) data in a personal computer environment. The visual feedback necessary during 3-D segmentation was provided by a ray casting algorithm, which was designed to allow users to interactively decide the visualization quality depending on the task requirement. Structures such as gray matter, white matter, and facial skin from T1-weighted high-resolution MRI data were segmented and later visualized with surface rendering. Personal computers with central processing unit (CPU) speeds of 266, 400, and 700 MHz were used for the implementation. The 3-D visualization upon each execution of the segmentation operation was achieved on the order of 2 s with a 700 MHz CPU. Our results suggest that 3-D volume segmentation with semi-real-time visual feedback could be effectively implemented in a PC environment without the need for dedicated graphics processing hardware. PMID:11640960
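The ray casting idea can be illustrated with a toy orthographic caster that marches rays through a volume and records the first-hit depth. This is a deliberately simple sketch on synthetic data, far simpler than the paper's renderer:

```python
import numpy as np

def cast_rays(volume, step=1.0, threshold=0.5):
    """Minimal orthographic ray caster: march each ray along the z axis
    and return the depth of the first voxel whose value exceeds
    `threshold` (a crude isosurface hit test). Rays that never hit
    anything keep depth = inf."""
    depth = np.full(volume.shape[:2], np.inf)
    for z in range(volume.shape[2]):
        hit = (volume[:, :, z] > threshold) & np.isinf(depth)
        depth[hit] = z * step
    return depth

# Tiny synthetic volume: a bright radius-3 sphere inside an 8x8x8 grid.
g = np.indices((8, 8, 8)) - 3.5
vol = (np.sum(g**2, axis=0) < 9).astype(float)
d = cast_rays(vol)
```

A depth map like `d`, shaded by surface normal, is the kind of fast visual feedback that lets a user judge segmentation quality interactively without dedicated graphics hardware.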
A 3-dimensional Navier-Stokes-Euler code for blunt-body flow computations
NASA Technical Reports Server (NTRS)
Li, C. P.
1985-01-01
The shock-layer flowfield is obtained with or without viscous and heat-conducting dissipations from the conservation-law form of the fluid dynamics equations using a shock-fitting implicit finite-difference technique. The governing equations are cast in curvilinear-orthogonal coordinates and transformed to the domain between the shock and the body. Another set of equations is used for the singular coordinate axis, which, together with a cone generator away from the stagnation point, encloses the computation domain. A time-dependent alternating direction implicit factorization technique is applied to integrate the equations with local time increments until a steady solution is reached. The shock location is updated after the flowfield computation, while the wall conditions are implemented into the implicit procedure. Innovative procedures are introduced to define the initial flowfield, to treat both perfect and equilibrium gases, to advance the solution on a coarse-to-fine grid sequence, and to start viscous flow computations from their corresponding inviscid solutions. The results are obtained from a grid no greater than 28 by 18 by 7 and converged within 300 integration steps. They are of sufficient accuracy to start parabolized Navier-Stokes or Euler calculations beyond the nose region, to compare with flight and wind-tunnel data, and to evaluate conceptual designs of reentry spacecraft.
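The alternating direction implicit (ADI) factorization mentioned above is a classic technique: each time step is split into one-dimensional implicit sweeps, each requiring only tridiagonal solves. A textbook Peaceman-Rachford sketch for the 2-D heat equation on a Cartesian grid (not the code's curvilinear, shock-fitted formulation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, alpha):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on an n x n
    grid with zero Dirichlet boundaries; alpha = dt / (2 * dx^2). Each
    half step is implicit in one direction and explicit in the other."""
    n = u.shape[0]
    m = n - 2
    sub, dia, sup = np.full(m, -alpha), np.full(m, 1 + 2 * alpha), np.full(m, -alpha)
    v = u.copy()
    for j in range(1, n - 1):   # half step 1: implicit in x
        d = u[1:-1, j] + alpha * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        v[1:-1, j] = thomas(sub, dia, sup, d)
    w = v.copy()
    for i in range(1, n - 1):   # half step 2: implicit in y
        d = v[i, 1:-1] + alpha * (v[i - 1, 1:-1] - 2 * v[i, 1:-1] + v[i + 1, 1:-1])
        w[i, 1:-1] = thomas(sub, dia, sup, d)
    return w

# Demo: one step of diffusion from a point source (alpha is illustrative).
u = np.zeros((11, 11))
u[5, 5] = 1.0
u = adi_step(u, alpha=0.2)
```

The appeal, then as now, is that the factored implicit operator stays stable at large local time steps while costing only O(n) per sweep.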
2015-01-01
Summary This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments. PMID:26734515
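The 1-dimensional SLS element underlying the Q3D array has a closed-form stress-relaxation response, which is the textbook behavior the abstract refers to. A small sketch with illustrative parameters:

```python
import math

def sls_relaxation(t, k0, k1, eta, strain=1.0):
    """Stress response of a standard linear solid (spring k0 in parallel
    with a Maxwell arm: spring k1 in series with dashpot eta) to a step
    strain applied at t = 0:
        sigma(t) = strain * (k0 + k1 * exp(-t / tau)),  tau = eta / k1.
    The instantaneous stiffness k0 + k1 relaxes toward k0."""
    tau = eta / k1
    return strain * (k0 + k1 * math.exp(-t / tau))

# Illustrative parameters only: relaxation time tau = eta / k1 = 2 units.
k0, k1, eta = 1.0, 2.0, 4.0
```

An array of such elements, each relaxing independently under the tip's local indentation, is what gives the Q3D model its upward-curving repulsive force branch at negligible computational cost.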
Solares, Santiago D.
2015-11-26
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
3-Dimensional Modeling of Capacitively and Inductively Coupled Plasma Etching Systems
NASA Astrophysics Data System (ADS)
Rauf, Shahid
2008-10-01
Low temperature plasmas are widely used for thin film etching during micro- and nano-electronic device fabrication. Fluid and hybrid plasma models were developed 15-20 years ago to understand the fundamentals of these plasmas and plasma etching. These models have significantly evolved since then, and are now a major tool used for new plasma hardware design and problem resolution. Plasma etching is a complex physical phenomenon, where inter-coupled plasma, electromagnetic, fluid dynamics, and thermal effects all have a major influence. The next frontier in the evolution of fluid-based plasma models is where these models are able to self-consistently treat the inter-coupling of plasma physics with fluid dynamics, electromagnetics, heat transfer and magnetostatics. We describe one such model in this paper and illustrate its use in solving engineering problems of interest for next generation plasma etcher design. Our 3-dimensional plasma model includes the full set of Maxwell equations, transport equations for all charged and neutral species in the plasma, the Navier-Stokes equation for fluid flow, and Kirchhoff's equations for the lumped external circuit. This model also includes Monte Carlo based kinetic models for secondary electrons and stochastic heating, and can take account of plasma chemistry. This modeling formalism allows us to self-consistently treat the dynamics in commercial inductively and capacitively coupled plasma etching reactors with realistic plasma chemistries, magnetic fields, and reactor geometries. We are also able to investigate the influence of the distributed electromagnetic circuit at very high frequencies (VHF) on the plasma dynamics. The model is used to assess the impact of azimuthal asymmetries in plasma reactor design (e.g., off-center pump, 3D magnetic field, slit valve, flow restrictor) on plasma characteristics at frequencies from 2–180 MHz. With Jason Kenney, Ankur Agarwal, Ajit Balakrishna, Kallol Bera, and Ken Collins.
NASA Astrophysics Data System (ADS)
Goldstein, L.; Prasher, S. O.; Ghoshal, S.
2004-05-01
Non-aqueous phase liquids (NAPLs), if spilled into the subsurface, will migrate downward, and a significant fraction will become trapped in the soil matrix. These trapped NAPL globules partition into the water and/or vapor phase, and serve as continuous sources of contamination (e.g. source zones). At present, the presence of NAPL in the subsurface is typically inferred from chemical analysis data. There are no accepted methodologies or protocols available for the direct characterization of NAPLs in the subsurface. Proven and cost-effective methodologies are needed to allow effective implementation of remediation technologies at NAPL contaminated sites. X-ray Computed Tomography (CT) has the potential to non-destructively quantify NAPL mass and distribution in soil cores due to this technology's ability to detect small atomic density differences of solid, liquid, gas, and NAPL phases present in a representative volume element. We have demonstrated that environmentally significant NAPLs, such as gasoline and other oil products, chlorinated solvents, and PCBs possess a characteristic and predictable X-ray attenuation coefficient that permits their quantification in porous media at incident beam energies, typical of medical and industrial X-ray CT scanners. As part of this study, methodologies were developed for generating and analyzing X-ray CT data for the study of NAPLs in natural porous media. Columns of NAPL-contaminated soils were scanned, flushed with solvents and water to remove entrapped NAPL, and re-scanned. X-ray CT data were analyzed to obtain numerical arrays of soil porosity, NAPL saturation, and NAPL volume at a spatial resolution of 1 mm. This methodology was validated using homogeneous and heterogeneous soil columns with known quantities of gasoline and tetrachloroethylene. NAPL volumes computed using X-ray CT data were compared with known volumes from volume balance calculations. Error analysis revealed that in a 5 cm long and 2.5 cm diameter soil
An Explicit 3-Dimensional Model for Reactive Transport of Nitrogen in Tile Drained Fields
NASA Astrophysics Data System (ADS)
Hill, D. J.; Valocchi, A. J.; Hudson, R. J.
2001-12-01
Recently, there has been increased interest in nitrate contamination of groundwater in the Midwest because of its link to surface water eutrophication, especially in the Gulf of Mexico. The vast majority of this nitrate is the product of biologically mediated transformation of fertilizers containing ammonia in the vadose zone of agricultural fields. For this reason, it is imperative that mathematical models, which can serve as useful tools to evaluate both the impact of agricultural fertilizer applications and nutrient-reducing management practices, are able to specifically address transport in the vadose zone. The development of a 3-dimensional explicit numerical model to simulate the movement and transformation of nitrogen species through the subsurface on the scale of an individual farm plot will be presented. At this scale, nitrogen fate and transport is controlled by a complex coupling among hydrologic, agricultural and biogeochemical processes. The nitrogen model is a component of a larger modeling effort that focuses upon conditions typical of those found in agricultural fields in Illinois. These conditions include non-uniform, multi-dimensional, transient flow in both saturated and unsaturated zones, geometrically complex networks of tile drains, coupled surface-subsurface-tile flow, and dynamic levels of dissolved oxygen in the soil profile. The advection-dispersion-reaction equation is solved using an operator-splitting approach, which is a flexible and straightforward strategy. Advection is modeled using a total variation diminishing scheme, dispersion is modeled using an alternating direction explicit method, and reactions are modeled using rate law equations. The model's stability and accuracy will be discussed, and test problems will be presented.
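The operator-splitting strategy described above can be sketched in one dimension. Here first-order upwind stands in for the TVD advection scheme and a single first-order decay stands in for the full nitrogen chemistry; all parameters are illustrative:

```python
import numpy as np

def adr_split_step(c, u, k, dx, dt):
    """One operator-split step for dc/dt = -u dc/dx - k c:
    an upwind transport sub-step followed by an exact reaction
    sub-step. A minimal sketch of the splitting approach, not the
    model's TVD / alternating-direction-explicit implementation."""
    # Transport sub-step (first-order upwind; u > 0, zero inflow).
    c_adv = c.copy()
    c_adv[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
    c_adv[0] = c[0] - u * dt / dx * c[0]
    # Reaction sub-step: exact solution of dc/dt = -k c over dt.
    return c_adv * np.exp(-k * dt)

# Illustrative nitrate pulse advecting and decaying (arbitrary units).
x = np.linspace(0, 10, 101)
c = np.exp(-((x - 2.0) ** 2))
mass0 = c.sum() * 0.1
for _ in range(100):                    # t = 5 with dt = 0.05
    c = adr_split_step(c, u=0.5, k=0.05, dx=0.1, dt=0.05)
```

Splitting keeps each sub-problem simple and lets each process use its best-suited solver, at the cost of a splitting error that vanishes as dt shrinks.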
Ku, L.P.; Kolibal, J.G.; Liew, S.L.
1985-09-01
The computational models of the TFTR constructed for the radiation transport analysis for the Q approx. 1 demonstration are summarized and reviewed. These models can be characterized by the dimensionality required to describe the geometry, and by the numerical methods of solving the transport equation. Results obtained with these models in the test cell are compared and discussed.
Andriani, Frank; Garfield, Jackie; Fusenig, Norbert E; Garlick, Jonathan A
2004-01-20
We have developed novel 3-dimensional in vitro and in vivo tissue models that mimic premalignant disease of human stratified epithelium in order to analyze the stromal contribution of extracellular matrix and basement membrane proteins to the progression of intraepithelial neoplasia. Three-dimensional, organotypic cultures were grown either on a de-epidermalized human dermis with pre-existing basement membrane components on its surface (AlloDerm), on a Type I collagen gel that lacked basement membrane proteins or on polycarbonate membranes coated with purified extracellular matrix proteins. When tumor cells (HaCaT-II4) were mixed with normal keratinocytes (4:1/normals:HaCaT-II4), tumor cells selectively attached, persisted and proliferated at the dermal-epidermal interface in vitro and generated dysplastic tissues when transplanted to nude mice only when grown in the presence of the AlloDerm substrate. This stromal interface was permissive for tumor cell attachment due to the rapid assembly of structured basement membrane. When tumor cells were mixed with normal keratinocytes and grown on polycarbonate membranes coated with individual extracellular matrix or basement membrane components, selective attachment and significant intraepithelial expansion occurred only on laminin 1 and Type IV collagen-coated membranes. This preferential adhesion of tumor cells restricted the synthesis of laminin 5 to basal cells where it was deposited in a polarized distribution. Western blot analysis revealed that tumor cell attachment was not due to differences in the synthesis or processing of laminin 5. Thus, intraepithelial progression towards premalignant disease is dependent on the selective adhesion of cells with malignant potential to basement membrane proteins that provide a permissive template for their persistence and expansion. PMID:14648700
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2001-12-01
We present preliminary results from a 3-dimensional fault interaction model, with the fault system specified by the geometry and tectonics of the San Andreas Fault (SAF) system. We use the forward model for earthquake generation on interacting faults of Fitzenz and Miller [2001] that incorporates the analytical solutions of Okada [85,92], GPS-constrained tectonic loading, creep compaction and frictional dilatancy [Sleep and Blanpied, 1994, Sleep, 1995], and undrained poro-elasticity. The model fault system is centered at the Big Bend, and includes three large strike-slip faults (each discretized into multiple subfaults); 1) a 300km, right-lateral segment of the SAF to the North, 2) a 200km-long left-lateral segment of the Garlock fault to the East, and 3) a 100km-long right-lateral segment of the SAF to the South. In the initial configuration, three shallow-dipping faults are also included that correspond to the thrust belt sub-parallel to the SAF. Tectonic loading is decomposed into basal shear drag parallel to the plate boundary with a 35 mm yr-1 plate velocity, and East-West compression approximated by a vertical dislocation surface applied at the far-field boundary, resulting in fault-normal compression rates in the model space of about 4 mm yr-1. Our aim is to study the long-term seismicity characteristics, tectonic evolution, and fault interaction of this system. We find that overpressured faults through creep compaction are a necessary consequence of the tectonic loading, specifically where high normal stress acts on long straight fault segments. The optimal orientation of thrust faults is a function of the strike-slip behavior, and therefore results in a complex stress state in the elastic body. This stress state is then used to generate new fault surfaces, and preliminary results of dynamically generated faults will also be presented. Our long-term aim is to target measurable properties in or around fault zones, (e.g. pore pressures, hydrofractures, seismicity
Pashazadeh, Saeid; Sharifi, Mohsen
2009-01-01
Existing 3-dimensional acoustic target tracking methods that use wired/wireless networked sensor nodes to track targets based on four sensing coverage do not always compute the feasible spatio-temporal information of target objects. To investigate this discrepancy in a formal setting, we propose a geometric model of the target tracking problem alongside its equivalent geometric dual model, which is easier to solve. We then study and prove some properties of the dual model by exploiting its relationship with algebra. Based on these properties, we propose a four coverage axis line method based on four sensing coverage and prove that four sensing coverage always yields two dual correct answers, of which usually one is infeasible. By showing that the feasible answer can only sometimes be identified by a simple time test method such as the one proposed by ourselves, we prove that four sensing coverage fails to always yield the feasible spatio-temporal information of a target object. We further prove that five sensing coverage always gives the feasible position of a target object under certain conditions that are discussed in this paper. We propose three extensions to the four coverage axis line method, namely, the five coverage extent point method, the five coverage extended axis lines method, and the five coverage redundant axis lines method. The computation and time complexities of all four proposed methods are equal in the worst case as well as on average, each being Θ(1). The proposed methods and the facts proved in this paper about the capabilities of sensing coverage degree can be used in all other acoustic target tracking methods, such as Bayesian filtering methods. PMID:22423198
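The "simple time test" for discarding the infeasible dual answer can be sketched in a few lines. Everything below (sensor layout, sound speed, function names) is an illustrative assumption, not the authors' implementation; it only shows how a candidate (position, emission time) pair is checked against measured detection times.

```python
import math

C_SOUND = 343.0  # assumed speed of sound, m/s

def arrival_times(source, t0, sensors, c=C_SOUND):
    """Predicted detection time at each sensor for an event at `source` emitted at t0."""
    return [t0 + math.dist(source, s) / c for s in sensors]

def time_test(candidate, sensors, detections, c=C_SOUND, tol=1e-6):
    """A candidate (position, emission time) passes only if the emission does not
    postdate any detection and the predicted times match the measured ones."""
    pos, t0 = candidate
    if any(t0 > t for t in detections):
        return False
    return all(abs(p - t) < tol
               for p, t in zip(arrival_times(pos, t0, sensors, c), detections))

# Four sensors ("four sensing coverage", the minimum); the first three are
# coplanar, so with them alone a mirror-image candidate would be indistinguishable.
sensors = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0), (0.0, 0.0, 10.0)]
true_pos, true_t0 = (3.0, 4.0, 2.0), 0.25
detections = arrival_times(true_pos, true_t0, sensors)

print(time_test((true_pos, true_t0), sensors, detections))          # → True
print(time_test(((3.0, 4.0, -2.0), true_t0), sensors, detections))  # → False
```

The mirrored candidate (z negated) matches the three coplanar sensors exactly but fails at the fourth, which is the geometric intuition behind the two dual answers.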
Application of 3-Dimensional Printing Technology to Construct an Eye Model for Fundus Viewing Study
Li, Xinhua; Gao, Zhishan; Yuan, Dongqing; Liu, Qinghuai
2014-01-01
Objective To construct a life-sized eye model using three-dimensional (3D) printing technology for fundus viewing study of the viewing system. Methods We devised our schematic model eye based on Navarro's eye and redesigned some parameters because of the change of the corneal material and the implantation of intraocular lenses (IOLs). Optical performance of our schematic model eye was compared with Navarro's schematic eye and two other reported physical model eyes using the ZEMAX optical design software. With computer-aided design (CAD) software, we designed the 3D digital model of the main structure of the physical model eye, which was used for 3D printing. Together with the main printed structure, a polymethyl methacrylate (PMMA) aspherical cornea, a variable iris, and IOLs were assembled into a physical eye model. Angle scale bars were glued from the posterior to the periphery of the retina. We then fabricated three other physical models with different states of ametropia. Optical parameters of these physical eye models were measured to verify the 3D printing accuracy. Results In on-axis calculations, our schematic model eye possessed a spot-diagram size similar to that of Navarro's and Bakaraju's model eyes, and much smaller than that of Arianpour's model eye. Moreover, the spherical aberration of our schematic eye was much less than that of the other three model eyes. In off-axis simulation, it possessed slightly higher coma and similar astigmatism, field curvature, and distortion. The MTF curves showed that all the model eyes diminished in resolution with increasing field of view, and the diminishing tendency of resolution of our physical eye model was similar to that of the Navarro eye. The measured parameters of our eye models with different states of ametropia were in line with the theoretical values. Conclusions The schematic eye model we designed can well simulate the optical performance of the human eye, and the fabricated physical one can be used as a tool in fundus
An approximate single fluid 3-dimensional magnetohydrodynamic equilibrium model with toroidal flow
NASA Astrophysics Data System (ADS)
Cooper, W. A.; Hirshman, S. P.; Chapman, I. T.; Brunetti, D.; Faustin, J. M.; Graves, J. P.; Pfefferlé, D.; Raghunathan, M.; Sauter, O.; Tran, T. M.; Aiba, N.
2014-09-01
An approximate model for a single fluid three-dimensional (3D) magnetohydrodynamic (MHD) equilibrium with pure isothermal toroidal flow with imposed nested magnetic flux surfaces is proposed. It recovers the rigorous toroidal rotation equilibrium description in the axisymmetric limit. The approximation is valid under conditions of nearly rigid or vanishing toroidal rotation in regions with significant 3D deformation of the equilibrium flux surfaces. Bifurcated helical core equilibrium simulations of long-lived modes in the MAST device demonstrate that the magnetic structure is only weakly affected by the flow but that the 3D pressure distortion is important. The pressure is displaced away from the major axis and therefore is not as noticeably helically deformed as the toroidal magnetic flux under the subsonic flow conditions measured in the experiment. The model invoked fails to predict any significant screening by toroidal plasma rotation of resonant magnetic perturbations in MAST free boundary computations.
[Rapid 3-Dimensional Models of Cerebral Aneurysm for Emergency Surgical Clipping].
Konno, Takehiko; Mashiko, Toshihiro; Oguma, Hirofumi; Kaneko, Naoki; Otani, Keisuke; Watanabe, Eiju
2016-08-01
We developed a method for manufacturing solid models of cerebral aneurysms, with a shorter printing time than that of conventional methods, using a compact 3D printer with acrylonitrile-butadiene-styrene (ABS) resin. We further investigated the application and utility of this printing system in emergency clipping surgery. A total of 16 patients diagnosed with acute subarachnoid hemorrhage resulting from cerebral aneurysm rupture were enrolled in the present study. Emergency clipping was performed on the day of hospitalization. Digital Imaging and Communications in Medicine (DICOM) data obtained from computed tomography angiography (CTA) scans were edited and converted to stereolithography (STL) file format, followed by production of 3D models of the cerebral aneurysm using the 3D printer. The mean time from hospitalization to the commencement of surgery was 242 min, whereas the mean time required for manufacturing the 3D model was 67 min. The average cost of each 3D model was 194 Japanese yen. The time required for manufacturing the 3D models was shortened to approximately 1 hour as experience in producing them increased. Almost all the neurosurgeons included in this study reported favorable impressions of the use of the 3D models in clipping. Although 3D printing is often considered to involve huge costs and long manufacturing times, the method used in the present study requires shorter time and lower costs than conventional methods for manufacturing 3D cerebral aneurysm models, thus making it suitable for use in emergency clipping. PMID:27506842
A 3-Dimensional Model of Water-Bearing Sequences in the Dominguez Gap Region, Long Beach, California
Ponti, Daniel J.; Ehman, Kenneth D.; Edwards, Brian D.; Tinsley, John C., III; Hildenbrand, Thomas; Hillhouse, John W.; Hanson, Randall T.; McDougall, Kristen; Powell, Charles L.; Wan, Elmira; Land, Michael; Mahan, Shannon; Sarna-Wojcicki, Andrei M.
2007-01-01
A 3-dimensional computer model of the Quaternary sequence stratigraphy in the Dominguez gap region of Long Beach, California, has been developed to provide a robust chronostratigraphic framework for hydrologic and tectonic studies. The model consists of 13 layers within a 16.5 by 16.1 km (10.25 by 10 mile) area and extends downward to an altitude of -900 meters (-2952.76 feet). Ten sequences of late Pliocene to Holocene age are identified and correlated within the model. Primary data to build the model come from five reference core holes, extensive high-resolution seismic data obtained in San Pedro Bay, and logs from several hundred water and oil wells drilled in the region. The model is best constrained in the vicinity of the Dominguez gap seawater intrusion barrier, where a dense network of subsurface data exists. The resultant stratigraphic framework and geologic structure differ significantly from what has been proposed in earlier studies. An important new discovery from this approach is the recognition of ongoing tectonic deformation throughout nearly all of Quaternary time that has affected the geometry and character of the sequences. Anticlinal folding along a NW-SE trend, probably associated with Quaternary reactivation of the Wilmington anticline, has uplifted and thinned deposits along the fold crest, which intersects the Dominguez gap seawater barrier near Pacific Coast Highway. A W-NW-trending fault system that approximately parallels the fold crest has also been identified. This fault progressively displaces all but the youngest sequences down to the north and serves as the southern termination of the classic Silverado aquifer. Uplift and erosion of fining-upward paralic sequences along the crest of the young fold have removed or thinned many of the fine-grained beds that serve to protect the underlying Silverado aquifer from seawater-contaminated shallow groundwater. As a result of this process, the potential exists for vertical migration of
Yoon, Kyunghwan; Ko, Young Bae; Suh, Dae Chul
2013-01-01
We investigate the potential and limitations of computational fluid dynamics (CFD) analysis of patient-specific models from 3D angiographies. There are many technical problems in the acquisition of proper vascular models, in pre-processing for making 2D surface and 3D volume meshes, and also in post-processing steps for displaying the CFD analysis. We hope that our study can serve as a technical reference for validating other tools and CFD results. PMID:24024073
High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators
NASA Astrophysics Data System (ADS)
Feiz Zarrin Ghalam, Ali
Electron cloud is a low-density electron profile created inside the vacuum chamber of circular machines with positively charged beams. The electron cloud limits the peak current of the beam and degrades beam quality through luminosity degradation, emittance growth, and head-tail or bunch-to-bunch instabilities. The adverse effects of the electron cloud on long-term beam dynamics become more and more important as beams go to higher and higher energies. This problem has become a major concern in the design of many future circular machines, such as the Large Hadron Collider (LHC) under construction at the European Center for Nuclear Research (CERN). Due to the importance of the problem, several simulation models have been developed to model the long-term beam-electron cloud interaction. These models are based on the "single kick approximation," in which the electron cloud is assumed to be concentrated in one thin slab around the ring. While this model is efficient in terms of computational cost, it does not reflect the real physical situation, as the forces from the electron cloud on the beam are non-linear, contrary to the model's assumption. To address this limitation of existing codes, a new model is developed in this thesis to continuously model the beam-electron cloud interaction. The code is derived from a 3-D parallel Particle-In-Cell (PIC) model (QuickPIC) originally used for plasma wakefield acceleration research. To make the original model fit the circular-machine environment, betatron and synchrotron equations of motion have been added to the code, and the effects of chromaticity and lattice structure have also been included. QuickPIC is then benchmarked against one of the codes developed based on the single kick approximation (HEAD-TAIL) for the transverse spot size of the beam in the CERN LHC. The growth predicted by QuickPIC is less than that predicted by HEAD-TAIL. The code is then used to investigate the effect of electron cloud image charges on the long-term beam dynamics, particularly on the
In vitro 3-dimensional tumor model for radiosensitivity of HPV positive OSCC cell lines
Zhang, Mei; Rose, Barbara; Lee, C Soon; Hong, Angela M
2015-01-01
The incidence of oropharyngeal squamous cell carcinoma (OSCC) is increasing due to the rising prevalence of human papillomavirus (HPV) positive OSCC. HPV positive OSCC is associated with better outcomes than HPV negative OSCC. Our aim was to explore the possibility that this favorable prognosis is due to the enhanced radiosensitivity of HPV positive OSCC. HPV positive OSCC cell lines were generated from the primary OSCCs of 2 patients, and corresponding HPV positive cell lines were generated from nodal metastases following xenografting in nude mice. Monolayer and 3-dimensional (3D) culture techniques were used to compare the radiosensitivity of the HPV positive lines with that of 2 HPV negative OSCC lines. Clonogenic and protein assays were used to measure survival after radiation. Radiation-induced cell cycle changes were studied using flow cytometry. In both monolayer and 3D culture, HPV positive cells exhibited a heterogeneous appearance whereas HPV negative cells tended to be homogeneous. After irradiation, HPV positive cells had lower survival in clonogenic assays and lower total protein levels in 3D cultures than HPV negative cells. Irradiated HPV positive cells showed a high proportion of cells in G1/S phase, increased apoptosis, an increased proliferation rate, and an inability to form 3D tumor clumps. In conclusion, HPV positive OSCC cells are more radiosensitive than HPV negative OSCC cells in vitro, supporting a more radiosensitive nature of HPV positive OSCC. PMID:26046692
The 3-dimensional, 4-channel model of human visual sensitivity to grayscale scrambles.
Silva, Andrew E; Chubb, Charles
2014-08-01
Previous research supports the claim that human vision has three dimensions of sensitivity to grayscale scrambles (textures composed of randomly scrambled mixtures of different grayscales). However, the preattentive mechanisms (called here "field-capture channels") that confer this sensitivity remain obscure. The current experiments sought to characterize the specific field-capture channels that confer this sensitivity using a task in which the participant is required to detect the location of a small patch of one type of grayscale scramble in an extended background of another type. Analysis of the results supports the existence of four field-capture channels: (1) the (previously characterized) "blackshot" channel, sharply tuned to the blackest grayscales; (2) a (previously unknown) "gray-tuned" field-capture channel whose sensitivity is zero for black rising sharply to maximum sensitivity for grayscales slightly darker than mid-gray then decreasing to half-height for brighter grayscales; (3) an "up-ramped" channel whose sensitivity is zero for black, increases linearly with increasing grayscale reaching a maximum near white; (4) a (complementary) "down-ramped" channel whose sensitivity is maximal for black, decreases linearly reaching a minimum near white. The sensitivity functions of field-capture channels (3) and (4) are linearly dependent; thus, these four field-capture channels collectively confer sensitivity to a 3-dimensional space of histogram variations. PMID:24932891
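The four channel sensitivity profiles described above, and the reason they span only three dimensions, can be sketched numerically. The exact curve shapes below are guesses matched to the qualitative descriptions in the abstract (the paper estimates the actual tuning functions psychophysically), so treat them as illustrative only.

```python
import math

# Grayscale g runs from 0.0 (black) to 1.0 (white). These curve shapes are
# illustrative assumptions, not the paper's fitted sensitivity functions.

def blackshot(g):
    """Sharply tuned to the blackest grayscales."""
    return math.exp(-20.0 * g)

def gray_tuned(g):
    """Zero at black, rising sharply to a peak slightly darker than mid-gray,
    then easing down to half-height for the brightest grayscales."""
    peak = 0.45
    if g <= peak:
        return g / peak
    return 1.0 - 0.5 * (g - peak) / (1.0 - peak)

def up_ramped(g):
    """Zero at black, increasing linearly to a maximum near white."""
    return g

def down_ramped(g):
    """Maximal at black, decreasing linearly to a minimum near white."""
    return 1.0 - g

# The up- and down-ramped channels are linearly dependent (their sum is the
# constant 1), so the four channels together span only a 3-dimensional space
# of sensitivity to grayscale-histogram variations.
for g in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert up_ramped(g) + down_ramped(g) == 1.0
```

The final loop is the whole point: any stimulus response built from the ramped pair collapses onto one dimension plus a constant, which is why four channels yield only three dimensions of sensitivity.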
Schmidt, Marianne; Scholz, Claus-Juergen; Polednik, Christine; Roller, Jeanette
2016-04-01
In the present study, a panel of 12 head and neck squamous cell carcinoma (HNSCC) cell lines was tested for spheroid formation. Since the size and morphology of spheroids depend on both cell adhesion and proliferation in the 3-dimensional (3D) context, the morphology of HNSCC spheroids was related to expression of E-cadherin and the proliferation marker Ki67. In HNSCC cell lines, the formation of tight regular spheroids depended on distinct E-cadherin expression levels in monolayer cultures, usually resulting in upregulation following aggregation into 3D structures. Cell lines expressing only low levels of E-cadherin in monolayers produced only loose cell clusters, frequently decreasing E-cadherin expression further upon aggregation. In these cell lines no epidermal growth factor receptor (EGFR) upregulation occurred, and proliferation generally decreased in spheroids/aggregates independent of E-cadherin expression. In a second approach, a global gene expression analysis of the larynx carcinoma cell line HLaC78 monolayer and the corresponding spheroids was performed. A global upregulation of gene expression in HLaC78 spheroids was related to genes involved in cell adhesion, cell junctions, and cytochrome P450-mediated metabolism of xenobiotics. Downregulation was associated with genes controlling cell cycle, DNA replication, and DNA mismatch repair. Analyzing the expression of selected genes of each functional group in monolayer and spheroid cultures of all 12 cell lines revealed evidence for common gene expression shifts in genes controlling cell junctions, cell adhesion, cell cycle, and DNA replication, as well as genes involved in cytochrome P450-mediated metabolism of xenobiotics. PMID:26797047
Chrysostomou, P P; Lodish, M B; Turkbey, E B; Papadakis, G Z; Stratakis, C A
2016-04-01
Primary pigmented nodular adrenocortical disease (PPNAD) is a rare type of bilateral adrenal hyperplasia leading to hypercortisolemia. Adrenal nodularity is often appreciable with computed tomography (CT); however, accurate radiologic characterization of adrenal size in PPNAD has not been studied well. We used 3-dimensional (3D) volumetric analysis to characterize and compare adrenal size in PPNAD patients with and without Cushing's syndrome (CS). Patients diagnosed with PPNAD and their family members with known mutations in PRKAR1A were screened. CT scans were used to create 3D models of each adrenal. Criteria for biochemical diagnosis of CS included loss of diurnal variation and/or elevated midnight cortisol levels, and a paradoxical increase in urinary free cortisol and/or urinary 17-hydroxysteroids after dexamethasone administration. Forty-five patients with PPNAD (24 females, 27.8±17.6 years) and 8 controls (19±3 years) were evaluated. 3D volumetric modeling of the adrenal glands was performed in all. Thirty-eight of the 45 patients (84.4%) had CS. Their mean adrenal volume was 8.1±4.1 cc, versus 7.2±4.5 cc for non-CS patients (p=0.643) and 8.0±1.6 cc for controls. Mean values were corrected for body surface area: 4.7±2.2 cc/kg/m(2) for CS and 3.9±1.3 cc/kg/m(2) for non-CS (p=0.189). Adrenal volume and midnight cortisol in both groups were positively correlated (r=0.35, p=0.03). We conclude that adrenal volume measured by 3D CT in patients with PPNAD and CS was similar to that in patients without CS, confirming empirical CT imaging-based observations. However, the association between adrenal volume and midnight cortisol levels may be used as a marker of who among patients with PPNAD may develop CS, something that routine CT cannot do. PMID:27065461
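The two quantitative steps reported here, body-surface-area correction of the 3D volumes and the volume-cortisol correlation, can be sketched as follows. The numbers are made up for illustration and are not the study's data; `pearson_r` is a plain stdlib implementation of the correlation coefficient behind the reported r=0.35.

```python
import math

def bsa_corrected(volume_cc, bsa_m2):
    """Adrenal volume normalized by body surface area."""
    return volume_cc / bsa_m2

def pearson_r(xs, ys):
    """Pearson correlation coefficient, as used for the volume-cortisol association."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical patients: (adrenal volume in cc, BSA in m^2, midnight cortisol).
patients = [(5.1, 1.6, 4.0), (6.8, 1.7, 6.5), (8.2, 1.8, 6.1),
            (9.5, 1.9, 9.8), (11.0, 2.0, 10.5)]
volumes = [bsa_corrected(v, bsa) for v, bsa, _ in patients]
cortisol = [c for _, _, c in patients]
print(round(pearson_r(volumes, cortisol), 2))
```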
Goodall, Nicola; Kisiswa, Lilian; Prashar, Ankush; Faulkner, Stuart; Tokarczuk, Paweł; Singh, Krish; Erichsen, Jonathan T; Guggenheim, Jez; Halfter, Willi; Wride, Michael A
2009-10-01
Magnetic resonance imaging (MRI) is a powerful tool for generating 3-dimensional structural and functional image data. MRI has already proven valuable in creating atlases of mouse and quail development. Here, we have exploited high resolution MRI to determine the parameters necessary to acquire images of the chick embryo eye. Using a 9.4 Tesla (400 MHz) high field ultra-shielded and refrigerated magnet (Bruker), MRI was carried out on paraformaldehyde-fixed chick embryos or heads at E4, E6, E8, and E10. Image data were processed using established and custom packages (MRICro, ImageJ, ParaVision, Bruker and mri3dX). Voxel dimensions ranged from 62.5 μm to 117.2 μm. We subsequently used the images obtained from the MRI data in order to make precise measurements of chick embryo eye surface area, volume and axial length from E4 to E10. MRI was validated for accurate sizing of ocular tissue features by direct comparison with previously published literature. Furthermore, we demonstrate the utility of high resolution MRI for making accurate measurements of morphological changes due to experimental manipulation of chick eye development, thereby facilitating a better understanding of the effects on chick embryo eye development and growth of such manipulations. Chondroitin sulphate or heparin were microinjected into the vitreous cavity of the right eyes of each of 3 embryos at E5. At E10, embryos were fixed and various eye parameters (volume, surface area, axial length and equatorial diameter) were determined using MRI and normalised with respect to the un-injected left eyes. Statistically significant alterations in eye volume (p < 0.05; increases with chondroitin sulphate and decreases with heparin) and changes in vitreous homogeneity were observed in embryos following microinjection of glycosaminoglycans. Furthermore, in the heparin-injected eyes, significant disturbances at the vitreo-retinal boundary were observed as well as retinal folding and detachment
Hydroelectric structures studies using 3-dimensional methods
Harrell, T.R.; Jones, G.V.; Toner, C.K.
1989-01-01
Deterioration and degradation of aged hydroelectric project structures can significantly affect the operation and safety of a project. Hydroelectric headworks in particular often have complicated geometrical configurations, loading patterns, and hence stress conditions. An accurate study of such structures can be performed using 3-dimensional computer models. 3-D computer models can be used both for stability evaluation and for finite element stress analysis. Computer-aided engineering processes facilitate the use of 3-D methods in both pre-processing and post-processing of data. Two actual project examples are used to emphasize the authors' points.
Numerical model of electromagnetic scattering off a subterranean 3-dimensional dielectric
Dease, C.G.; Didwall, E.M.
1983-08-01
As part of the effort to develop On-Site Inspection (OSI) techniques for verification of compliance to a Comprehensive Test Ban Treaty (CTBT), a computer code was developed to predict the interaction of an electromagnetic (EM) wave with an underground cavity. Results from the code were used to evaluate the use of surface electromagnetic exploration techniques for detection of underground cavities or rubble-filled regions characteristic of underground nuclear explosions.
Development of a liquid jet model for implementation in a 3-dimensional Eulerian analysis tool
NASA Astrophysics Data System (ADS)
Buschman, Francis X., III
The ability to model the thermal behavior of a nuclear reactor is of utmost importance to the reactor designer. Condensation is an important phenomenon when modeling a reactor system's response to a Loss Of Coolant Accident (LOCA). Condensation is even more important with the use of passive safety systems, which rely on condensation heat transfer for long-term cooling. The increasing use of condensation heat transfer, including condensation on jets of water, in safety systems puts added pressure on correctly modeling this phenomenon with thermal-hydraulic system and sub-channel analysis codes. In this work, a stand-alone module with which to simulate condensation on a liquid jet was developed and then implemented within a reactor vessel analysis code to improve that code's handling of jet condensation. It is shown that the developed liquid jet model vastly improves the ability of COBRA-TF to model condensation on turbulent liquid jets. The stand-alone jet model and the coupled liquid jet COBRA-TF have been compared to experimental data. Jet condensation heat transfer experiments by Celata et al. with a variety of jet diameters, velocities, and subcoolings were utilized to evaluate the models. A sensitivity study on the effects of noncondensables on jet condensation was also carried out using the stand-alone jet model.
Visualization of the 3-dimensional flow around a model with the aid of a laser knife
NASA Technical Reports Server (NTRS)
Borovoy, V. Y.; Ivanov, V. V.; Orlov, A. A.; Kharchenko, V. N.
1984-01-01
A method for visualizing the three-dimensional flow around models of various shapes in a wind tunnel at a Mach number of 5 is described. A laser provides a planar light flux such that any plane through the model can be selectively illuminated. The shape of shock waves and separation regions is then determined by the intensity of light scattered by soot particles in the flow.
Remanent magnetization and 3-dimensional density model of the Kentucky anomaly region
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Estes, R. H.; Myers, D. M.
1984-01-01
A three-dimensional model of the Kentucky body was developed to fit surface gravity and long-wavelength aeromagnetic data. Magnetization and density parameters for the model are much like those of Mayhew et al. (1982). The magnetic anomaly due to the model at satellite altitude is shown to be much too small by itself to account for the anomaly measured by Magsat. It is demonstrated that the source region for the satellite anomaly is considerably more extensive than the Kentucky body sensu stricto. The extended source region is modeled first using prismatic model sources and then using dipole array sources. Magnetization directions for the source region, found by inversion of various combinations of scalar and vector data, are close to the main field direction, implying the lack of a strong remanent component. It is shown by simulation that in a case (such as this) where the geometry of the source is known, if a strong remanent component is present, its direction is readily detectable, by scalar data as readily as by vector data.
NASA Astrophysics Data System (ADS)
Griffioen, J.; van der Grift, B.; Maas, D.; van den Brink, C.; Zaadnoordijk, J. W.
2003-04-01
Groundwater is contaminated at the regional scale by agricultural activities and atmospheric deposition. A 3-D transport model was set up for a phreatic drinking water winning (abstraction site), where the groundwater composition was monitored accurately. The winning is situated in an area with unconsolidated Pleistocene deposits. The land use is nature and agriculture. Annual mass balances were determined using a wide range of historical data. The modelling approach for the unsaturated zone was either simple box models (Cl, NO3 and SO4) or 1-D transport modelling using HYDRUS (Cd). The modelling approach for the saturated zone used a multiple-solute version of MT3D, in which denitrification associated with pyrite oxidation and sorption of Cd were included. The solute transport calculations were performed for the period 1950-2030. The results obtained for the year 2000 were used as input concentrations for the period 2000-2030. A comparison between the calculated and the measured concentrations of the abstracted groundwater for Cl, NO3 and SO4 yields the following. First, the input at the surface is rather well estimated. Second, the redox reactivity of the first two aquifers is negligible around the winning, which is confirmed by respiration experiments using anaerobically sampled aquifer sediments. The reactivity of the third aquifer, which is a marine deposit and lies at least 30 meters below the surface, is considerable. The discrepancies between modelled and measured output are explained by lack of knowledge about the subsurface reactivity and/or wrong estimates of surface loading and leaching from the unsaturated zone. The patterns for other hydrogeochemical variables such as Ca and HCO3 may further constrain this lack of knowledge. The results for Cd indicate that Cd becomes strongly retarded, despite the low reactivity of the sandy sediments. The winning is rather insensitive to Cd contamination (but the surface water drainage network is not). Two major uncertainties for input of Cd
Sugimoto, Yoshihisa; Tanaka, Masato; Nakahara, Ryuichi; Misawa, Haruo; Kunisada, Toshiyuki; Ozaki, Toshifumi
2012-01-01
An 11-year-old girl had 66 degrees of kyphosis at the thoracolumbar junction. To plan the kyphosis correction, we created a 3-D, full-scale model of the spine and used spinal navigation. Three-dimensional models are generally used as tactile guides to verify the surgical approach and portray the anatomic relations specific to a given patient. We performed posterior fusion from Th10 to L3, and vertebral column resection of Th12 and L1. Screw entry points, directions, lengths and diameters were determined by reference to navigation. Both tools were useful in the bone resection. We could easily detect the posterior elements to be resected using the 3D model. During the anterior bony resection, navigation helped us to check the disc level and the anterior wall of the vertebrae, which were otherwise difficult to detect due to their depth in the surgical field. Thus, the combination of navigation and 3D models helped us to safely perform surgery for a patient with complex spinal deformity. PMID:23254585
A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.
Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H
2016-06-01
Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real-world data, and the experiments require a pulsatile flow pump system that can provide the physiologic flow conditions characteristic of the aorta. We designed a new, highly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high-viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost. PMID:26983961
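A piston pump meets a commanded flow waveform by moving the piston at velocity Q(t)/A, where A is the piston bore area. The sketch below uses an idealized half-sine systolic pulse and assumed dimensions (the waveform shape, cardiac period, and bore area are illustrative guesses, not the paper's design values) to show the mapping from a physiologic flow target to piston motion; only the 850 mL/s peak comes from the abstract.

```python
import math

PISTON_AREA_CM2 = 80.0  # assumed piston bore area
PERIOD_S = 0.8          # assumed cardiac period (75 bpm)

def aortic_flow(t, period=PERIOD_S, peak=850.0):
    """Idealized aortic flow target (mL/s): a half-sine systolic pulse in the
    first third of the cycle, zero flow in diastole."""
    phase = t % period
    systole = period / 3.0
    return peak * math.sin(math.pi * phase / systole) if phase < systole else 0.0

def piston_velocity(t):
    """Piston speed (cm/s) that delivers the commanded flow: v = Q / A
    (1 mL = 1 cm^3, so mL/s divided by cm^2 gives cm/s)."""
    return aortic_flow(t) / PISTON_AREA_CM2

print(aortic_flow(PERIOD_S / 6))      # → 850.0 (mid-systole peak)
print(piston_velocity(PERIOD_S / 6))  # → 10.625
print(aortic_flow(0.5))               # → 0.0 (diastole)
```

In a real controller the commanded waveform would be measured clinical data rather than a half-sine, but the flow-to-velocity conversion is the same.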
NASA Astrophysics Data System (ADS)
Garay, M. J.; Diner, D. J.; Martonchik, J. V.; Davis, A. B.
2011-12-01
Knowledge of the detailed 3-dimensional structure of clouds and atmospheric aerosols is vital for correctly modeling their radiative effects and interpreting optical remote sensing measurements of scattered sunlight. We will describe a set of new observations made by the Multiangle SpectroPolarimetric Imager (MSPI) from the ground and from the NASA ER-2 aircraft. MSPI is being developed and tested at JPL as a payload for the preliminary Aerosol-Cloud-Ecosystems (PACE) satellite mission, which is expected to fly near the end of the decade. MSPI builds upon experience gained from the Multi-angle Imaging SpectroRadiometer (MISR) currently orbiting on NASA's Terra satellite. Ground-MSPI and Air-MSPI are two prototype cameras operating in the ultraviolet (UV) to visible/near-infrared (VNIR) range, mounted on gimbals, that acquire imagery in a pushbroom fashion, including polarization in selected spectral bands with demonstrated high polarimetric accuracy (0.5% uncertainty in degree of linear polarization). The spatial resolution of Ground-MSPI is 1 m for objects at a distance of 3 km. From the operational altitude of the ER-2, Air-MSPI has a ground resolution of approximately 10 m at nadir. This resolution, coupled with good calibration and high polarimetric performance, means that MSPI can be used to derive radiatively important parameters of aerosols and clouds using intensity and polarization information together. As part of the effort to develop retrieval algorithms for the instrument, we have employed an extremely flexible 3-dimensional vector radiative transfer code. We will show example imagery from both MSPI cameras and describe how these scenes are modeled using this code. We will also discuss some of the important unknowns and limitations of this observational approach.
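The 0.5% polarimetric accuracy quoted above refers to the degree of linear polarization (DoLP), which follows directly from the Stokes parameters; a minimal sketch (the function name is ours):

```python
import math

def degree_of_linear_polarization(I, Q, U):
    """DoLP from the Stokes parameters I, Q, U (the circular component V is
    ignored, as is usual for scattered-sunlight polarimetry)."""
    return math.sqrt(Q * Q + U * U) / I

# Example: a 5% polarized scene; MSPI's quoted uncertainty on this quantity is 0.005.
print(round(degree_of_linear_polarization(1.0, 0.03, 0.04), 6))  # → 0.05
```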
3-DIMENSIONAL Geological Mapping and Modeling Activities at the Geological Survey of Norway
NASA Astrophysics Data System (ADS)
Jarna, A.; Bang-Kittilsen, A.; Haase, C.; Henderson, I. H. C.; Høgaas, F.; Iversen, S.; Seither, A.
2015-10-01
Geology and all geological structures are three-dimensional in space, and geology becomes four-dimensional when time is considered. GIS, databases, and 3D visualization software are therefore common tools used by geoscientists to view, analyse, model, interpret and communicate geological data. The NGU (Geological Survey of Norway) is the national institution for the study of bedrock, mineral resources, surficial deposits, groundwater and marine geology. The growing interest in 3D mapping and modelling is reflected in the increasing number of groups and researchers working with 3D geology within NGU. This paper highlights 3D geological modelling techniques and the use of these tools in bedrock, geophysics, urban and groundwater studies at NGU, as well as online 3D visualisation. The examples show the use of a wide range of data, methods and software, and an increased focus on interpretation and communication of geology in 3D. The goal is to gradually expand the geospatial data infrastructure to include 3D data at the same level as 2D data.
Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn
2015-01-01
Objectives To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. Methods Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. Results There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D<0.17 mm), as expected, followed by the AC and BZ superimpositions, which presented a similar level of accuracy (D<0.5 mm). 3P and 1Z were the least accurate superimpositions (0.79
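Landmark-based superimposition, such as the 3P technique above, is typically a least-squares rigid registration. The sketch below uses the standard Kabsch algorithm as a generic illustration; the paper's exact implementation is not specified, so treat this as an assumed method, not theirs.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping the
    landmark set src onto dst; both are (N, 3) arrays. Generic sketch of
    point-based registration, not the paper's implementation."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Toy check: recover a known 30-degree rotation about z plus a shift.
theta = np.deg2rad(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0]])
R, t = rigid_register(pts, pts @ Rz.T + np.array([1.0, 2.0, 3.0]))
print(np.allclose(R, Rz), np.allclose(t, [1.0, 2.0, 3.0]))
```

With only three landmarks the solution is exact when the points are not collinear, which is why accuracy in the study is instead judged on independent form-stable regions.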
A 3-dimensional in vitro model of epithelioid granulomas induced by high aspect ratio nanomaterials
2011-01-01
Background The most common causes of granulomatous inflammation are persistent pathogens and poorly-degradable irritating materials. A characteristic pathological reaction to intratracheal instillation, pharyngeal aspiration, or inhalation of carbon nanotubes is formation of epithelioid granulomas accompanied by interstitial fibrosis in the lungs. In the mesothelium, a similar response is induced by high aspect ratio nanomaterials, including asbestos fibers, following intraperitoneal injection. This asbestos-like behaviour of some engineered nanomaterials is a concern for their potential adverse health effects in the lungs and mesothelium. We hypothesize that high aspect ratio nanomaterials will induce epithelioid granulomas in nonadherent macrophages in 3D cultures. Results Carbon black particles (Printex 90) and crocidolite asbestos fibers were used as well-characterized reference materials and compared with three commercial samples of multiwalled carbon nanotubes (MWCNTs). Doses were identified in 2D and 3D cultures in order to minimize acute toxicity and to reflect realistic occupational exposures in humans and in previous inhalation studies in rodents. Under serum-free conditions, exposure of nonadherent primary murine bone marrow-derived macrophages to 0.5 μg/ml (0.38 μg/cm2) of crocidolite asbestos fibers or MWCNTs, but not carbon black, induced macrophage differentiation into epithelioid cells and formation of stable aggregates with the characteristic morphology of granulomas. Formation of multinucleated giant cells was also induced by asbestos fibers or MWCNTs in this 3D in vitro model. After 7-14 days, macrophages exposed to high aspect ratio nanomaterials co-expressed proinflammatory (M1) as well as profibrotic (M2) phenotypic markers. Conclusions Induction of epithelioid granulomas appears to correlate with high aspect ratio and complex 3D structure of carbon nanotubes, not with their iron content or surface area. This model offers a time- and cost
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.
1982-01-01
The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free-surface model also provides surface height variations with time.
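The models above march time-dependent temperature fields forward from initial conditions under time-varying boundary conditions. As a much-reduced analogue, the sketch below takes explicit upwind finite-difference steps of a 1-D advection-diffusion equation for a thermal discharge plume; all parameter values are illustrative, not from the report.

```python
import numpy as np

def step_temperature(T, u, dx, dt, kappa, T_in):
    """One explicit time step of 1-D advection-diffusion, a toy analogue
    of the time-marching in the thermal discharge models. Upwind
    advection keeps the scheme stable for u > 0."""
    Tn = T.copy()
    adv = -u * (T[1:-1] - T[:-2]) / dx                    # upwind in x
    dif = kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    Tn[1:-1] = T[1:-1] + dt * (adv + dif)
    Tn[0] = T_in                                          # inflow (discharge) boundary
    Tn[-1] = Tn[-2]                                       # outflow: zero gradient
    return Tn

T = np.full(50, 20.0)            # ambient water at 20 degC
for _ in range(200):
    T = step_temperature(T, u=0.5, dx=1.0, dt=0.5, kappa=0.1, T_in=30.0)
print(T[0], T[1] > 20.0)         # warm plume advects downstream of the discharge
```

The chosen dt satisfies the CFL and diffusion stability limits, so the temperature stays bounded between the ambient and discharge values.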
NASA Astrophysics Data System (ADS)
Kobayashi, H.; Yang, W.; Ichii, K.
2015-12-01
Global simulation of canopy scale sun-induced chlorophyll fluorescence with a 3 dimensional radiative transfer modelHideki Kobayashi, Wei Yang, and Kazuhito IchiiDepartment of Environmental Geochemical Cycle Research, Japan Agency for Marine-Earth Science and Technology3173-25, Showa-machi, Kanazawa-ku, Yokohama, Japan.Plant canopy scale sun-induced chlorophyll fluorescence (SIF) can be observed from satellites, such as Greenhouse gases Observation Satellite (GOSAT), Orbiting Carbon Observatory-2 (OCO-2), and Global Ozone Monitoring Experiment-2 (GOME-2), using Fraunhofer lines in the near infrared spectral domain [1]. SIF is used to infer photosynthetic capacity of plant canopy [2]. However, it is not well understoond how the leaf-level SIF emission contributes to the top of canopy directional SIF because SIFs observed by the satellites use the near infrared spectral domain where the multiple scatterings among leaves are not negligible. It is necessary to quantify the fraction of emission for each satellite observation angle. Absorbed photosynthetically active radiation of sunlit leaves are 100 times higher than that of shaded leaves. Thus, contribution of sunlit and shaded leaves to canopy scale directional SIF emission should also be quantified. Here, we show the results of global simulation of SIF using a 3 dimensional radiative transfer simulation with MODIS atmospheric (aerosol optical thickness) and land (land cover and leaf area index) products and a forest landscape data sets prepared for each land cover category. The results are compared with satellite-based SIF (e.g. GOME-2) and the gross primary production empirically estimated by FLUXNET and remote sensing data.
NASA Technical Reports Server (NTRS)
Fujii, K.
1983-01-01
A method for generating three-dimensional finite-difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Most of the results presented have been used as grids for flow-field computations, which indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.
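The core idea of elliptic grid generation is that interior node coordinates satisfy a Poisson (here, for simplicity, Laplace) equation with the body and outer boundaries as boundary conditions. The 2-D sketch below relaxes the homogeneous case with plain Jacobi sweeps rather than the paper's ADI scheme; it is a minimal illustration, not the paper's solver.

```python
import numpy as np

def relax(X, iters=3000):
    """Jacobi relaxation of Laplace's equation for one coordinate array;
    a stand-in for the ADI solver used in the paper."""
    for _ in range(iters):
        X[1:-1, 1:-1] = 0.25 * (X[2:, 1:-1] + X[:-2, 1:-1] +
                                X[1:-1, 2:] + X[1:-1, :-2])
    return X

n = 11
xi = np.linspace(0.0, 1.0, n)
X = np.zeros((n, n))             # x-coordinate of each grid node
X[0, :], X[-1, :] = 0.0, 1.0     # left/right boundaries of a unit square
X[:, 0] = X[:, -1] = xi          # bottom/top boundaries spaced uniformly
X = relax(X)
print(np.allclose(X[:, n // 2], xi, atol=1e-4))  # interior relaxes to uniform spacing
```

Adding the inhomogeneous (source) terms is what lets the full method control spacing and orthogonality at the body surface instead of producing this uniform limit.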
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.
1982-01-01
The six-volume report: describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
Bhave, Madhura Satish; Hassanbhai, Ammar Mansoor; Anand, Padmaja; Luo, Kathy Qian; Teoh, Swee Hin
2015-01-01
Traditional cancer treatments, such as chemotherapy and radiation therapy, continue to have limited efficacy due to tumor hypoxia. While bacterial cancer therapy has the potential to overcome this problem, it comes with the risk of toxicity and infection. To circumvent these issues, this paper investigates the anti-tumor effects of non-viable bacterial derivatives of Clostridium sporogenes. These non-viable derivatives are heat-inactivated C. sporogenes bacteria (IB) and the secreted bacterial proteins in culture media, known as conditioned media (CM). In this project, the effects of IB and CM on CT26 and HCT116 colorectal cancer cells were examined on 2-Dimensional (2D) and 3-Dimensional (3D) platforms. IB significantly inhibited cell proliferation of CT26 to 6.3% of the control in 72 hours in the 2D monolayer culture. In the 3D spheroid culture, cell proliferation of HCT116 spheroids notably dropped to 26.2%. Similarly, the CM also markedly reduced the cell proliferation of the CT26 cells, to 2.4% and 20% in the 2D and 3D models, respectively. Interestingly, the effect of boiled conditioned media (BCM) on the cells in the 3D model was less inhibitory than that of CM. Thus, the inhibitory effect of inactivated C. sporogenes and its conditioned media on colorectal cancer cells is established. PMID:26507312
NASA Astrophysics Data System (ADS)
Lu, Junqing; Keiter, Eric R.; Kushner, Mark J.
1998-10-01
Inductively Coupled Plasmas (ICPs) are being used for a variety of deposition processes for microelectronics fabrication. Of particular concern in scaling these devices to large areas is maintaining azimuthal symmetry of the reactant fluxes. Sources of nonuniformity may be physical (e.g., gas injection and side pumping) or electromagnetic (e.g., transmission line effects in the antennas). In this paper, a 3-dimensional plasma equipment model, HPEM-3D,(M. J. Kushner, J. Appl. Phys. v.82, 5312 (1997).) is used to investigate physical and electromagnetic sources of azimuthal nonuniformities in deposition tools. An ionized metal physical vapor deposition (IMPVD) system is investigated in which transmission line effects in the coils produce an asymmetric plasma density. Long mean free path transport for sputtered neutrals and tensor conductivities have been added to HPEM-3D to address this system. Since the coil-generated ion flux drifts back to the target to sputter low ionization potential metal atoms, the asymmetry is reinforced by rapid ionization of the metal atoms.
Prideaux, Andrew R.; Song, Hong; Hobbs, Robert F.; He, Bin; Frey, Eric C.; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George
2010-01-01
Phantom-based and patient-specific imaging-based dosimetry methodologies have traditionally yielded mean organ-absorbed doses or spatial dose distributions over tumors and normal organs. In this work, radiobiologic modeling is introduced to convert the spatial distribution of absorbed dose into biologically effective dose and equivalent uniform dose parameters. The methodology is illustrated using data from a thyroid cancer patient treated with radioiodine. Methods Three registered SPECT/CT scans were used to generate 3-dimensional images of radionuclide kinetics (clearance rate) and cumulated activity. The cumulated activity image and corresponding CT scan were provided as input into an EGSnrc-based Monte Carlo calculation: The cumulated activity image was used to define the distribution of decays, and an attenuation image derived from CT was used to define the corresponding spatial tissue density and composition distribution. The rate images were used to convert the spatial absorbed dose distribution to a biologically effective dose distribution, which was then used to estimate a single equivalent uniform dose for segmented volumes of interest. Equivalent uniform dose was also calculated from the absorbed dose distribution directly. Results We validate the method using simple models; compare the dose-volume histogram with a previously analyzed clinical case; and give the mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for an illustrative case of a pediatric thyroid cancer patient with diffuse lung metastases. The mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for the tumor were 57.7, 58.5, and 25.0 Gy, respectively. Corresponding values for normal lung tissue were 9.5, 9.8, and 8.3 Gy, respectively. Conclusion The analysis demonstrates the impact of radiobiologic modeling on response prediction. The 57% reduction in the equivalent dose value for the tumor reflects a high level of dose
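The large gap between the tumor's mean absorbed dose (57.7 Gy) and its equivalent uniform dose (25.0 Gy) comes from the strong penalty EUD assigns to cold spots. The sketch below computes a Niemierko-style EUD from a voxel dose distribution; the radiosensitivity alpha is an assumed illustrative value, not one taken from the paper, and the biologically-effective-dose step is omitted.

```python
import numpy as np

def equivalent_uniform_dose(dose, alpha=0.4):
    """Niemierko-style EUD: the uniform dose yielding the same mean cell
    survival as the nonuniform dose distribution `dose` (Gy).
    alpha (1/Gy) is an assumed radiosensitivity."""
    survival = np.exp(-alpha * np.asarray(dose, dtype=float))
    return -np.log(survival.mean()) / alpha

uniform = np.full(1000, 50.0)
cold_spot = np.where(np.arange(1000) < 50, 5.0, 50.0)  # 5% of voxels underdosed
print(equivalent_uniform_dose(uniform))    # a uniform dose is its own EUD (about 50)
print(equivalent_uniform_dose(cold_spot))  # collapses far below the ~47.8 Gy mean
```

Even a 5% underdosed subvolume drags the EUD well below the mean dose, which mirrors the 57% reduction reported for the tumor.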
Ferrario, Virgilio F.; Sforza, Chiarella; Schmitz, Johannes H.; Ciusa, Veronica; Colombo, Anna
2000-01-01
A 3-dimensional computerised system with landmark representation of the soft-tissue facial surface allows noninvasive and fast quantitative study of facial growth. The aims of the present investigation were (1) to provide reference data for selected dimensions of lips (linear distances and ratios, vermilion area, volume); (2) to quantify the relevant growth changes; and (3) to evaluate sex differences in growth patterns. The 3-dimensional coordinates of 6 soft-tissue landmarks on the lips were obtained by an optoelectronic instrument in a mixed longitudinal and cross-sectional study (2023 examinations in 1348 healthy subjects between 6 y of age and young adulthood). From the landmarks, several linear distances (mouth width, total vermilion height, total lip height, upper lip height), the vermilion height-to-mouth width ratio, some areas (vermilion of the upper lip, vermilion of the lower lip, total vermilion) and volumes (upper lip volume, lower lip volume, total lip volume) were calculated and averaged for age and sex. Male values were compared with female values by means of Student's t test. Within each age group all lip dimensions (distances, areas, volumes) were significantly larger in boys than in girls (P < 0.05), with some exceptions in the first age groups and coinciding with the earlier female growth spurt, whereas the vermilion height-to-mouth width ratio did not show a corresponding sexual dimorphism. Linear distances in girls had almost reached adult dimensions in the 13–14 y age group, while in boys a large increase was still to occur. The attainment of adult dimensions was faster in the upper than in the lower lip, especially in girls. The method used in the present investigation allowed the noninvasive evaluation of a large sample of nonpatient subjects, leading to the definition of 3-dimensional normative data. Data collected in the present study could represent a data base for the quantitative description of human lip morphology from childhood to
The use of TOUGH2 for the LBL/USGS 3-dimensional site-scale model of Yucca Mountain, Nevada
Bodvarsson, G.; Chen, G.; Haukwa, C.; Kwicklis, E.
1995-12-31
The three-dimensional site-scale numerical model of the unsaturated zone at Yucca Mountain is under continuous development and calibration through a collaborative effort between Lawrence Berkeley Laboratory (LBL) and the United States Geological Survey (USGS). The site-scale model covers an area of about 30 km² and is bounded by major fault zones to the west (Solitario Canyon Fault), east (Bow Ridge Fault) and perhaps to the north by an unconfirmed fault (Yucca Wash Fault). The model consists of about 5,000 grid blocks (elements) with nearly 20,000 connections between them; the grid was designed to represent the most prevalent geological and hydro-geological features of the site, including major faults, and the layering and bedding of the hydro-geological units. Submodels are used to investigate specific hypotheses and their importance before incorporation into the three-dimensional site-scale model. The primary objectives of the three-dimensional site-scale model are to: (1) quantify moisture, gas and heat flows under ambient conditions at Yucca Mountain, (2) help guide the site-characterization effort (primarily by the USGS) in terms of additional data needs and identify regions of the mountain where sufficient data have been collected, and (3) provide a reliable model of Yucca Mountain that is validated by repeated predictions of conditions in new boreholes and the ESF, and therefore has the confidence of the public and scientific community. The computer code TOUGH2, developed by K. Pruess at LBL, was used along with the three-dimensional site-scale model to generate these results. In this paper, we also describe the three-dimensional site-scale model, emphasizing the numerical grid development, and then show some results in terms of moisture, gas and heat flow.
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
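The memory saving of the data-space approach mentioned above comes from a matrix identity: the Gauss-Newton update can be computed from an N_data × N_data system instead of an N_model × N_model one. The sketch below shows that identity for a single damped step with an assumed model covariance; it is a generic illustration, not the HexMT implementation.

```python
import numpy as np

def data_space_update(J, r, Cm, lam=1.0):
    """Data-space Gauss-Newton step: solve an N_data x N_data system,
    then map back to model space. Generic sketch with an assumed model
    covariance Cm, not the HexMT code."""
    N = J.shape[0]
    A = J @ Cm @ J.T + lam * np.eye(N)   # small: N_data x N_data
    beta = np.linalg.solve(A, r)
    return Cm @ J.T @ beta               # model update, length N_model

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 500))       # 20 data, 500 model cells
Cm = np.eye(500)
r = rng.standard_normal(20)
dm = data_space_update(J, r, Cm)

# Equivalent model-space solve: (J^T J + lam I)^-1 J^T r
dm_model = np.linalg.solve(J.T @ J + np.eye(500), J.T @ r)
print(np.allclose(dm, dm_model))
```

With far more model cells than data, factoring the 20×20 system is dramatically cheaper than the 500×500 one, which is the advantage exploited for the model update in the paper.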
A 3-dimensional ray-trace model for predicting the performance of flashlamp-pumped laser amplifiers
Jancaitis, K.S.; Haney, S.W.; Munro, D.H.; Le Touze, G.; Cabourdin, O.
1997-02-13
We have developed a fully three-dimensional model of the performance of flashlamp-pumped laser amplifiers. The model uses a reverse ray-trace technique to calculate the pumping of the laser glass by the flashlamp radiation. We have discovered several different methods by which we can speed up the calculation of the gain profile in an amplifier. The model predicts the energy-storage performance of the Beamlet amplifiers to better than 5%. This model will be used in the optimization of the National Ignition Facility (NIF) amplifier design.
Lambros, Maria P.; DeSalvo, Michael K.; Moreno, Jonathan; Mulamalla, Hari Chandana; Kondapalli, Lavanya
2015-01-01
Cancer patients who receive radiation are often afflicted by oral mucositis, a debilitating disease characterized by mouth sores and difficulty in swallowing. Oftentimes, cancer patients afflicted with mucositis must stop life-saving therapies; thus it is very important to prevent mucositis before it develops. Using a validated organotypic model of human oral mucosa, a 3-dimensional cell culture model of human oral keratinocytes, it has been shown that a mixture (NAC-QYD) of N-acetyl cysteine (NAC) and a traditional Chinese medicine, Qingre Liyan decoction (QYD), prevented radiation damage (Lambros et al., 2014). Here we provide detailed methods and analysis of microarray data for non-irradiated and irradiated human oral mucosal tissue with and without pretreatment with NAC, QYD and NAC-QYD. The microarray data have been deposited in the Gene Expression Omnibus (GEO): GSE62397. These data can be used to further elucidate the mechanisms of irradiation damage in oral mucosa and its prevention. PMID:26697327
This report presents a three-dimensional finite-element numerical model designed to simulate chemical transport in subsurface systems with temperature effect taken into account. The three-dimensional model is developed to provide (1) a tool of application, with which one is able...
NASA Astrophysics Data System (ADS)
Roch, Julien; Clevede, Eric; Roult, Genevieve
2010-05-01
The 26 December 2004 Sumatra-Andaman event is the third largest earthquake ever recorded, and the first of its size recorded with high-quality broad-band seismometers. Such an earthquake offered a good opportunity for studying the normal modes of the Earth, particularly the gravest ones (frequencies lower than 1 mHz), which provide important information on the deep Earth. The splitting of some modes has been carefully analyzed. The eigenfrequencies and Q quality factors of particular singlets have been retrieved with unprecedented precision. In some cases, the eigenfrequencies of some singlets exhibit a clear shift when compared with the theoretical eigenfrequencies. Some core modes, such as the 3S2 mode, present an anomalous splitting, that is, a splitting width much larger than expected. Such anomalous splitting is currently attributed to lateral heterogeneities in the inner core. We need an accurate model of the whole Earth and a method to compute synthetic seismograms in order to compare synthetic and observed data and to explain the behavior of such modes. Synthetic seismograms are computed by normal-mode summation using a perturbative method developed up to second order in amplitude and up to third order in frequency (the HOPT method). The last step consists in inverting both the eigenfrequency and Q quality factor datasets in order to better constrain the deep Earth structure, especially the inner core. To find models of acceptable data fit in a multidimensional parameter space, we use the neighborhood algorithm, a derivative-free search method. It is particularly well suited to our case (a nonlinear problem) and is easy to tune, with only two parameters. Our purpose is to find an ensemble of models that fit the data rather than a unique model.
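At its simplest, normal-mode summation builds a synthetic record as a sum of decaying cosines, one per singlet, with the Q factor setting the decay rate. The sketch below uses that zeroth-order picture with placeholder (f, Q, amplitude) triples, not the observed Sumatra singlet values and not the higher-order HOPT perturbative terms.

```python
import numpy as np

def mode_sum(t, modes):
    """Toy normal-mode summation: each singlet contributes
    A * exp(-pi f t / Q) * cos(2 pi f t). The (f_mhz, Q, A) triples
    are illustrative placeholders only."""
    s = np.zeros_like(t)
    for f_mhz, Q, A in modes:
        f = f_mhz * 1.0e-3                     # mHz -> Hz
        s += A * np.exp(-np.pi * f * t / Q) * np.cos(2.0 * np.pi * f * t)
    return s

t = np.arange(0.0, 100 * 3600.0, 60.0)         # 100 h record, 1-min sampling
s = mode_sum(t, [(0.309, 500.0, 1.0),          # low-frequency singlet (placeholder)
                 (0.938, 350.0, 0.5)])         # second placeholder singlet
print(s[0])                                    # all cosines start in phase
```

Measuring how slowly each spectral peak decays over such a long record is what allows the Q of individual singlets to be retrieved.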
Mirsch, Johanna; Tommasino, Francesco; Frohns, Antonia; Conrad, Sandro; Durante, Marco; Scholz, Michael; Friedrich, Thomas; Löbrich, Markus
2015-10-01
Charged particles are increasingly used in cancer radiotherapy and contribute significantly to the natural radiation risk. The difference in the biological effects of high-energy charged particles compared with X-rays or γ-rays is determined largely by the spatial distribution of their energy deposition events. Part of the energy is deposited in a densely ionizing manner in the inner part of the track, with the remainder spread out more sparsely over the outer track region. Our knowledge about the dose distribution is derived solely from modeling approaches and physical measurements in inorganic material. Here we exploited the exceptional sensitivity of γH2AX foci technology and quantified the spatial distribution of DNA lesions induced by charged particles in a mouse model tissue. We observed that charged particles damage tissue nonhomogeneously, with single cells receiving high doses and many other cells exposed to isolated damage resulting from high-energy secondary electrons. Using calibration experiments, we transformed the 3D lesion distribution into a dose distribution and compared it with predictions from modeling approaches. We obtained a radial dose distribution with sub-micrometer resolution that decreased with increasing distance to the particle path following a 1/r² dependency. The analysis further revealed the existence of a background dose at larger distances from the particle path, arising from overlapping dose deposition events from independent particles. Our study provides, to our knowledge, the first quantification of the spatial dose distribution of charged particles in biologically relevant material, and will serve as a benchmark for biophysical models that predict the biological effects of these particles. PMID:26392532
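The radial dose model described above, a 1/r² falloff plus a constant background from overlapping tracks, can be written in a few lines. The coefficient and background level below are illustrative placeholders, not fitted values from the study.

```python
import numpy as np

def radial_dose(r, k=1.0, background=0.0):
    """Radial dose around a particle track: k / r^2 falloff plus a
    constant background from overlapping independent tracks.
    k and background are illustrative, not fitted study values."""
    r = np.asarray(r, dtype=float)
    return k / r**2 + background

r = np.array([0.1, 0.2, 0.4])               # radial distance, micrometres
d = radial_dose(r)
print(d[0] / d[1])                           # ~4: doubling r quarters the dose
print(radial_dose(10.0, background=0.02))    # far from the track, background dominates
```

This is the functional form the γH2AX-derived dose profile is compared against; the background term is what distinguishes the in-tissue measurement from single-track models.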
NASA Astrophysics Data System (ADS)
Bahlake, Ahmad; Farivar, Foad; Dabir, Bahram
2016-07-01
In this paper, a 3-dimensional model of the simultaneous stripping of carbon dioxide (CO2) and hydrogen sulfide (H2S) from water using a hollow fiber membrane made of polyvinylidene fluoride is developed. The water containing CO2 and H2S enters the membrane as feed, while pure nitrogen flows in the shell side of a shell-and-tube hollow fiber module as the solvent. Previous approaches to modeling hollow fiber membranes simulated just one of the fibers and extrapolated the results to the whole shell-and-tube system; in this research, the whole hollow fiber shell-and-tube module is modeled to reduce the errors. Simulation results showed that increasing the velocity of the solvent flow and decreasing the velocity of the feed lead to an increase in the system yield, although the effect of the feed velocity on the process is likely greater than the influence of changing the velocity of the gaseous solvent. In addition, H2S stripping has a higher yield than CO2 stripping. The model is compared with the previous modeling methods and is shown to be more accurate. Finally, the effect of feed temperature is studied using the response surface method, and the operating conditions of feed temperature, feed velocity, and solvent velocity are optimized according to synergistic effects. Simulation results show that, under the optimum operating conditions, the removal percentages of H2S and CO2 are 27% and 21%, respectively.
Pediatric Computational Models
NASA Astrophysics Data System (ADS)
Soni, Bharat K.; Kim, Jong-Eun; Ito, Yasushi; Wagner, Christina D.; Yang, King-Hay
A computational model is a computer program that attempts to simulate the behavior of a complex system by solving mathematical equations associated with principles and laws of physics. Computational models can be used to predict the body's response to injury-producing conditions that cannot be simulated experimentally or measured in surrogate/animal experiments. Computational modeling also provides a means by which valid experimental animal and cadaveric data can be extrapolated to a living person. Widely used computational models for injury biomechanics include multibody dynamics and finite element (FE) models. Both multibody and FE methods have been used extensively to study adult impact biomechanics in the past couple of decades.
NASA Astrophysics Data System (ADS)
Király, E.; Székely, B.; Bata, T.; Lócsi, L.; Karátson, D.
2009-04-01
The almost global availability of medium- and high-resolution Digital Terrain Models (DTMs) paved the way for new approaches in volcanic geomorphology. The increasing importance of understanding the surface processes that act during the degradation of volcanic edifices also creates a demand for geometric modeling of their surfaces, in order to derive parameters from the topography that are suitable for further analysis. Our study area, the San Francisco Volcanic Field (SFVF), is a ca. 4500 km² volcanic region situated around the San Francisco stratovolcano at Flagstaff, Arizona (USA) that hosts some 600 scoria cones and lava domes, and numerous lava flows with extensive volcanic ash deposits. Because of the wide range in size and age, as well as the contrasting degradation of these volcanic features, several authors have analysed them in recent decades to derive general rules of their lowering. Morphometric parameters were determined that were expected to be suitable to fulfill this requirement. In his pioneering work, Wood (1980a,b) considered 40 scoria cones, while almost two decades later Hooper and Sheridan (1998) included 237 features in their study. Their manual morphometric analyses were based on topographic maps and are time consuming; their limited scope can now be extended with the availability of digital data. In the initial phase of our project more than 300 cones were analysed using the classic approach (height of the cone, width of the cone and crater, etc.). Additionally, the slope histograms were analysed in order to classify the cones into different evolutionary categories. These analyses led to the selection of a few volcanoes that entered the next processing phase. First, the derivation of parameters in a two-dimensional approach was carried out. Horizontal and vertical cross sections were extracted from the DTM, and the resulting planar curves were analysed via parameter estimation. The horizontal planar outlines were approached with circles
Tam, Tze-wai; Ang, Put O
2009-07-21
A 3-dimensional individual-based model, the ReefModel, was developed to simulate the dynamical structure of a coral reef community using object-oriented techniques. Interactions among functional groups of reef organisms were simulated in the model. The behaviours of these organisms were described with simple mechanistic rules derived from their general behaviours (e.g. growth habits, competitive mechanisms, response to physical disturbance) observed in natural coral reef communities. The model was implemented to explore the effects of physical disturbance on the dynamical structure of a 3-coral community characterized by three functional coral groups: tabular coral, foliaceous coral and massive coral. Simulation results suggest that (i) the integration of physical disturbance and the differential responses (disturbance sensitivity and growth habit) of corals plays an important role in structuring coral communities; (ii) the diversity of coral communities can be maximal under an intermediate level of acute physical disturbance; (iii) multimodality exists in the final states and dynamic regimes of individual coral groups as well as coral community structure, which results from the influence of small random spatial events occurring during the interactions among the corals in the community under acute and repeated physical disturbances. These results suggest that alternative stable states and catastrophic regime shifts may exist in a coral community under an unstable physical environment. PMID:19306887
Computational Modeling of Tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)
1995-01-01
This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.
Fuller, Sam M; Butz, Daniel R; Vevang, Curt B; Makhlouf, Mansour V
2014-09-01
Three-dimensional printing is being rapidly incorporated in the medical field to produce external prosthetics for improved cosmesis and fabricated molds to aid in presurgical planning. Biomedically engineered products from 3-dimensional printers are also utilized as implantable devices for knee arthroplasty, airway orthoses, and other surgical procedures. Although at first expensive and conceptually difficult to construct, 3-dimensional printing is now becoming more affordable and widely accessible. In hand surgery, like many other specialties, new or customized instruments would be desirable; however, the overall production cost restricts their development. We are presenting our step-by-step experience in creating a bone reduction clamp for finger fractures using 3-dimensional printing technology. Using free, downloadable software, a 3-dimensional model of a bone reduction clamp for hand fractures was created based on the senior author's (M.V.M.) specific design, previous experience, and preferences for fracture fixation. Once deemed satisfactory, the computer files were sent to a 3-dimensional printing company for the production of the prototypes. Multiple plastic prototypes were made and adjusted, affording a fast, low-cost working model of the proposed clamp. Once a workable design was obtained, a printing company produced the surgical clamp prototype directly from the 3-dimensional model represented in the computer files. This prototype was used in the operating room, meeting the expectations of the surgeon. Three-dimensional printing is affordable and offers the benefits of reducing production time and nurturing innovations in hand surgery. This article presents a step-by-step description of our design process using online software programs and 3-dimensional printing services. As medical technology advances, it is important that hand surgeons remain aware of available resources, are knowledgeable about how the process works, and are able to take advantage of
NASA Technical Reports Server (NTRS)
2000-01-01
Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.
Cardiothoracic Applications of 3-dimensional Printing.
Giannopoulos, Andreas A; Steigner, Michael L; George, Elizabeth; Barile, Maria; Hunsaker, Andetta R; Rybicki, Frank J; Mitsouras, Dimitris
2016-09-01
Medical 3-dimensional (3D) printing is emerging as a clinically relevant imaging tool in directing preoperative and intraoperative planning in many surgical specialties and will therefore likely lead to interdisciplinary collaboration between engineers, radiologists, and surgeons. Data from standard imaging modalities such as computed tomography, magnetic resonance imaging, echocardiography, and rotational angiography can be used to fabricate life-sized models of human anatomy and pathology, as well as patient-specific implants and surgical guides. Cardiovascular 3D-printed models can improve diagnosis and allow for advanced preoperative planning. The majority of applications reported involve congenital heart diseases and valvular and great vessels pathologies. Printed models are suitable for planning both surgical and minimally invasive procedures. Added value has been reported toward improving outcomes, minimizing perioperative risk, and developing new procedures such as transcatheter mitral valve replacements. Similarly, thoracic surgeons are using 3D printing to assess invasion of vital structures by tumors and to assist in diagnosis and treatment of upper and lower airway diseases. Anatomic models enable surgeons to assimilate information more quickly than image review, choose the optimal surgical approach, and achieve surgery in a shorter time. Patient-specific 3D-printed implants are beginning to appear and may have significant impact on cosmetic and life-saving procedures in the future. In summary, cardiothoracic 3D printing is rapidly evolving and may be a potential game-changer for surgeons. The imager who is equipped with the tools to apply this new imaging science to cardiothoracic care is thus ideally positioned to innovate in this new emerging imaging modality. PMID:27149367
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1985-01-01
The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.
2010-01-01
Background Animal models of focal cerebral ischemia are widely used in stroke research. The purpose of our study was to evaluate and compare the cerebral macro- and microvascular architecture of rats in two different models of permanent middle cerebral artery occlusion using an innovative quantitative micro- and nano-CT imaging technique. Methods 4 h of middle cerebral artery occlusion was performed in rats using the macrosphere method or the suture technique. After contrast perfusion, brains were isolated and scanned en bloc using micro-CT at (8 μm)³ or nano-CT at 500 nm³ voxel size to generate 3D images of the cerebral vasculature. The arterial vascular volume fraction and gray scale attenuation was determined and the significance of differences in measurements was tested with analysis of variance (ANOVA). Results Micro-CT provided quantitative information on vascular morphology. Micro- and nano-CT proved to visualize and differentiate vascular occlusion territories performed in both models of cerebral ischemia. The suture technique leads to a remarkable decrease in the intravascular volume fraction of the middle cerebral artery perfusion territory. Blocking the medial cerebral artery with macrospheres, the vascular volume fraction of the involved hemisphere decreased significantly (p < 0.001), independently of the number of macrospheres, and was comparable to the suture method. We established gray scale measurements by which focal cerebral ischemia could be radiographically categorized (p < 0.001). Nano-CT imaging demonstrates collateral perfusion related to different occluded vessel territories after macrosphere perfusion. Conclusion Micro- and nano-CT imaging is feasible for analysis and differentiation of different models of focal cerebral ischemia in rats. PMID:20509884
Computer Modeling and Simulation
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed when the model fails. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes
3-dimensional Oil Drift Simulations
NASA Astrophysics Data System (ADS)
Wettre, C.; Reistad, M.; Hjøllo, B.Å.
Simulation of oil drift has been an ongoing activity at the Norwegian Meteorological Institute since the 1970s. The Marine Forecasting Centre provides a 24-hour service for the Norwegian Pollution Control Authority and the oil companies operating in the Norwegian sector, with a response time of 30 minutes. From 2002 the service is extended to simulation of oil drift from oil spills in deep water, using the DeepBlow model developed by SINTEF Applied Chemistry. The oil drift model can be applied to both instantaneous and continuous releases. The changes in the mass of oil and emulsion as a result of evaporation and emulsification are computed. For oil spills in deep water, hydrate formation and gas dissolution are taken into account. The properties of the oil depend on the oil type, and in the present version 64 different types of oil can be simulated. For accurate oil drift simulations it is important to have the best possible data on the atmospheric and oceanic conditions. The oil drift simulations at the Norwegian Meteorological Institute are therefore always based on the most updated data from numerical models of the atmosphere and the ocean. The drift of the surface oil is computed from the vectorial sum of the surface current from the ocean model and the wave-induced Stokes drift computed from wave energy spectra from the wave prediction model. In the new model the current distribution with depth is taken into account when calculating the drift of the dispersed oil droplets. Salinity and temperature profiles from the ocean model are needed in the DeepBlow model. The results of the oil drift simulations can be plotted on sea charts used for navigation, either as trajectory plots or particle plots showing the situation at a given time. The results can also be sent as data files to be included in the user's own GIS system.
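The surface-drift rule described above, the vectorial sum of the ocean-model surface current and the wave-induced Stokes drift, can be sketched as follows. This is an illustrative Python fragment, not the operational code; all velocity values are hypothetical.

```python
# Minimal sketch (not the operational METNO code) of the surface-oil
# drift rule: drift velocity is the vector sum of the surface current
# from the ocean model and the wave-induced Stokes drift from the wave
# model. All input numbers are hypothetical.

def surface_drift(current, stokes):
    """Vector sum of surface current and Stokes drift, in m/s."""
    return (current[0] + stokes[0], current[1] + stokes[1])

def advect(pos, velocity, dt):
    """Advance a surface-oil particle position (m) by one time step (s)."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)

v = surface_drift((0.20, 0.05), (0.03, 0.01))  # hypothetical inputs, m/s
pos = advect((0.0, 0.0), v, 3600.0)            # one hour of drift
```

In the operational service this step would be repeated with updated current and wave fields at every time step, for either a single instantaneous release or a continuous one.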
NASA Astrophysics Data System (ADS)
Grandi, C.; Bonacorsi, D.; Colling, D.; Fisk, I.; Girone, M.
2014-06-01
The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.
Computationally modeling interpersonal trust
Lee, Jin Joo; Knox, W. Bradley; Wormwood, Jolie B.; Breazeal, Cynthia; DeSteno, David
2013-01-01
We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust. PMID:24363649
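The hidden Markov modeling step can be illustrated with the standard forward algorithm for scoring a sequence of observed cues. The states, cue alphabet, and all probabilities below are invented for illustration; the study's actual models and features differ.

```python
# Hypothetical sketch of the HMM idea in the abstract: score a sequence
# of trust-related nonverbal cues under an HMM using the forward
# algorithm. States, cues, and probabilities are invented, not the
# authors' learned parameters.

def forward(obs, pi, A, B):
    """Forward algorithm: total probability of an observation sequence
    under an HMM with initial distribution pi, transition matrix A, and
    emission matrix B (indexed [state][symbol])."""
    n = len(pi)
    # Initialize with the first observation.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Recurse over the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)

# Two hidden states ("engaged", "guarded") and two observable cues
# (0 = lean forward, 1 = face touch); all numbers hypothetical.
pi = [0.6, 0.4]
A = [[0.8, 0.2], [0.3, 0.7]]
B = [[0.9, 0.1], [0.2, 0.8]]
p = forward([0, 0, 1], pi, A, B)
```

Models trained on different trust levels would assign different likelihoods to the same cue sequence, which is the comparison the learned-structure analysis in the abstract builds on.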
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Misawa, Takeharu; Takase, Kazuyuki
The two-fluid model can simulate two-phase flow at a lower computational cost than detailed two-phase flow simulation methods such as the interface tracking method or particle interaction methods. The two-fluid model is therefore useful for thermal hydraulic analysis in a large-scale domain such as a rod bundle. The Japan Atomic Energy Agency (JAEA) has developed the three-dimensional two-fluid model analysis code ACE-3D, which adopts a boundary-fitted coordinate system in order to simulate flow channels of complex shape. In this paper, a boiling two-phase flow analysis in a tight-lattice rod bundle was performed with the ACE-3D code. Parallel computation using 126 CPUs was applied to this analysis. In the results, the void fraction in the outermost region of the rod bundle is lower than that in the center region. The tendency of the void fraction distribution agreed qualitatively with measurement results obtained by neutron radiography. To evaluate the effects of the two-phase flow models used in the ACE-3D code, a numerical simulation of boiling two-phase flow in the tight-lattice rod bundle without the lift force model was also performed. From the comparison of calculated results, it was concluded that the effect of the lift force model was not large for the overall void fraction distribution of the tight-lattice rod bundle. However, the lift force model is important for the local void fraction distribution of fuel bundles.
Computational models of planning.
Geffner, Hector
2013-07-01
The selection of the action to do next is one of the central problems faced by autonomous agents. Natural and artificial systems address this problem in various ways: action responses can be hardwired, they can be learned, or they can be computed from a model of the situation, the actions, and the goals. Planning is the model-based approach to action selection and a fundamental ingredient of intelligent behavior in both humans and machines. Planning, however, is computationally hard as the consideration of all possible courses of action is not computationally feasible. The problem has been addressed by research in Artificial Intelligence that in recent years has uncovered simple but powerful computational principles that make planning feasible. The principles take the form of domain-independent methods for computing heuristics or appraisals that enable the effective generation of goal-directed behavior even over huge spaces. In this paper, we look at several planning models, at methods that have been shown to scale up to large problems, and at what these methods may suggest about the human mind. WIREs Cogn Sci 2013, 4:341-356. doi: 10.1002/wcs.1233 The authors have declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304223
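The idea of heuristics making search feasible over huge spaces can be sketched with A* on a toy grid, where Manhattan distance plays the role of a domain-independent heuristic. The grid domain is an invented example, not one from the article.

```python
# Illustrative sketch of heuristic search in planning: A* guided by an
# admissible heuristic (Manhattan distance) finds a shortest path
# without exploring the whole state space. The grid world is a toy
# example invented for this sketch.
import heapq

def astar(start, goal, passable):
    """A* on a 4-connected grid; returns shortest path length or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    frontier = [(h(start), 0, start)]       # entries are (f, g, position)
    best_g = {start: 0}
    while frontier:
        f, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if passable(nxt) and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable

# On an open 5x5 grid, the corner-to-corner shortest path has length 8.
inside = lambda p: 0 <= p[0] < 5 and 0 <= p[1] < 5
steps = astar((0, 0), (4, 4), inside)
```

Domain-independent planning heuristics are derived automatically from the problem description rather than hand-coded as here, but the way an appraisal focuses the search toward the goal is the same.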
Chung, S.; McGill, M.; Preece, D.S.
1994-12-31
Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements that interact with the flow of explosive gases, and uses explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties and both loading geometry and explosives equation-of-state parameters. It enables the user to have a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, in-flight collisions of blocks are not considered and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, advances in computer technology have increased the computing power of small workstations as well as PCs (personal computers) to permit a much shorter turn-around time for complex computations. DMC can perform a blast simulation in 0.5 hours on the SUN SPARCstation 10-41, while the new SABREX 3.5 produces results for a cast blast in ten seconds on a 486 PC. Predicted cast percentages and face velocities from both computer codes compare well with measured results from a full-scale cast blast.
Computer Modeling Of Atomization
NASA Technical Reports Server (NTRS)
Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.
1994-01-01
Improved mathematical models based on fundamental principles of conservation of mass, energy, and momentum developed for use in computer simulation of atomization of jets of liquid fuel in rocket engines. Models also used to study atomization in terrestrial applications; prove especially useful in designing improved industrial sprays - humidifier water sprays, chemical process sprays, and sprays of molten metal. Because present improved mathematical models based on first principles, they are minimally dependent on empirical correlations and better able to represent hot-flow conditions that prevail in rocket engines and are too severe to be accessible for detailed experimentation.
Computer modelling of minerals
NASA Astrophysics Data System (ADS)
Catlow, C. R. A.; Parker, S. C.
We review briefly the methodology and achievements of computer simulation techniques in modelling structural and defect properties of inorganic solids. Special attention is paid to the role of interatomic potentials in such studies. We discuss the extension of the techniques to the modelling of minerals, and describe recent results on the study of structural properties of silicates. In a paper of this length, it is not possible to give a comprehensive survey of this field. We shall concentrate on the recent work of our own group. The reader should consult Tossell (1977), Gibbs (1982), and Busing (1970) for examples of other computational studies of inorganic solids. The techniques we discuss are all based on the principle of energy minimization. Simpler, "bridge-building" procedures, based on known bond-lengths, of which distance least squares (DLS) techniques are the best known, are discussed, for example, in Dempsey and Strens (1974).
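The energy-minimization principle underlying these techniques can be sketched in the simplest possible case: relaxing a single interatomic separation to the minimum of a pair potential. A Lennard-Jones potential is used below purely as a stand-in for the interatomic potentials discussed; the parameters and the naive minimizer are hypothetical choices for illustration.

```python
# Sketch of lattice-energy minimization reduced to one degree of
# freedom: relax an interatomic separation to the minimum of a pair
# potential. Lennard-Jones stands in for the potentials used for ionic
# solids; eps, sigma, and the step size are hypothetical.

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy at separation r."""
    return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

def minimize(r0, step=1e-4, iters=50_000):
    """Naive gradient descent on the pair energy, with the gradient
    estimated by a central finite difference."""
    r = r0
    for _ in range(iters):
        grad = (lj(r + 1e-6) - lj(r - 1e-6)) / 2e-6
        r -= step * grad
    return r

# The analytic minimum of the LJ potential is at r = 2**(1/6) * sigma.
r_min = minimize(1.5)
```

Real codes minimize over all structural degrees of freedom with far more robust algorithms, but the principle, descending the energy surface defined by interatomic potentials, is the one sketched here.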
Understanding student computational thinking with computational modeling
NASA Astrophysics Data System (ADS)
Aiken, John M.; Caballero, Marcos D.; Douglas, Scott S.; Burk, John B.; Scanlon, Erin M.; Thoms, Brian D.; Schatz, Michael F.
2013-01-01
Recently, the National Research Council's framework for next generation science standards highlighted "computational thinking" as one of its "fundamental practices". 9th Grade students taking a physics course that employed the Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, written essay, and a series of think-aloud interviews, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than observational) terms tended to have more success in the programming exercise.
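The kind of computational model the students constructed can be sketched as Euler-step integration of projectile motion. The study used VPython; the plain-Python sketch below, with hypothetical launch conditions, shows the same force-motion update without graphics.

```python
# Minimal sketch of the students' modeling task: a baseball in
# projectile motion, advanced by Euler steps. The study used VPython;
# this is plain Python, and the launch conditions are hypothetical.

def simulate_flight(v0x, v0y, dt=0.001, g=9.8):
    """Integrate position and velocity until the ball returns to the
    ground; returns (horizontal range in m, flight time in s)."""
    x = y = t = 0.0
    vx, vy = v0x, v0y
    while True:
        x += vx * dt      # update position from current velocity
        y += vy * dt
        vy -= g * dt      # gravity updates the vertical velocity
        t += dt
        if y <= 0.0:      # ball has come back down
            return x, t

rng, t = simulate_flight(30.0, 30.0)  # roughly a 45-degree launch
```

The causal force-then-motion ordering of the update loop is exactly the relationship the essay and interview assessments probed.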
NASA Technical Reports Server (NTRS)
Green, Terry J.
1988-01-01
A Polymer Molecular Analysis Display System (p-MADS) was developed for computer modeling of polymers. This method of modeling allows for the theoretical calculation of molecular properties such as equilibrium geometries, conformational energies, heats of formation, crystal packing arrangements, and other properties. Furthermore, p-MADS has the following capabilities: constructing molecules from internal coordinates (bond lengths, angles, and dihedral angles), Cartesian coordinates (such as X-ray structures), or from stick drawings; manipulating molecules using graphics and making hard-copy representations of the molecules on a graphics printer; and performing geometry optimization calculations on molecules using the methods of molecular mechanics or molecular orbital theory.
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2010-01-01
A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (the coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).
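The accelerated Monte Carlo idea, estimating a local average intensity by random sampling rather than exact angular integration, can be sketched as follows. The intensity function, sample count, and seed are invented for illustration and are not taken from the MIRO model.

```python
# Illustrative sketch of the Monte Carlo sampling idea: estimate the
# local mean intensity of a radiation field by drawing random
# directions uniformly on the sphere. The toy intensity function and
# sample count are hypothetical.
import math
import random

def mean_intensity(intensity, n=200_000, seed=1):
    """Estimate the angle-averaged intensity (1/4pi) * integral of
    I(mu, phi) over solid angle by uniform sampling of directions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        mu = rng.uniform(-1.0, 1.0)        # cos(theta): uniform on sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        total += intensity(mu, phi)
    return total / n

# Toy anisotropic field I = 1 + mu: its exact mean over the sphere is 1.
J = mean_intensity(lambda mu, phi: 1.0 + mu)
```

The full model couples estimates like this to the level-population equations and adds spatial and frequency sampling plus variance-reduction acceleration, but the statistical estimator is the same in spirit.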
Computer modeling of photodegradation
NASA Technical Reports Server (NTRS)
Guillet, J.
1986-01-01
A computer program to simulate the photodegradation of materials exposed to terrestrial weathering environments is being developed. Input parameters would include the solar spectrum, the daily levels and variations of temperature and relative humidity, and materials such as EVA. The program, its operating principles, and how it works were described first. The presentation then focuses on the recent work of simulating aging in a normal terrestrial day-night cycle. This is significant, as almost all accelerated aging schemes maintain constant light illumination without a dark cycle, and this may be a critical factor not included in accelerated aging schemes. For outdoor aging, the computer model indicates that the night dark cycle has a dramatic influence on the chemistry of photothermal degradation, and hints that a dark cycle may be needed in an accelerated aging scheme.
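The influence of a light-dark cycle can be sketched with a toy first-order photodegradation model that reacts only during lit hours. The rate constant and cycle length below are hypothetical and are not taken from the program described; the sketch only shows why constant illumination and a day-night cycle diverge.

```python
# Toy sketch (hypothetical rate constant and cycle): first-order
# photodegradation that proceeds only while the sample is illuminated,
# stepped hour by hour through a 24-hour day-night cycle.

def degrade(hours, k_per_hour=0.01, day_length=12):
    """Fraction of material remaining after `hours`, with `day_length`
    lit hours per 24 h; degradation occurs only during lit hours."""
    remaining = 1.0
    for h in range(hours):
        if h % 24 < day_length:          # daylight portion of the cycle
            remaining *= (1.0 - k_per_hour)
    return remaining

cycled = degrade(240)                      # ten day-night cycles
continuous = degrade(240, day_length=24)   # constant illumination
```

Even this purely dose-based toy shows continuous illumination outrunning the cycled exposure; the abstract's point is stronger, namely that dark-period chemistry itself changes the degradation pathway, which a dose argument alone cannot capture.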
NASA Astrophysics Data System (ADS)
Kopper, Claudio; Antares Collaboration
2013-10-01
Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.
Computational modelling of atherosclerosis.
Parton, Andrew; McGilligan, Victoria; O'Kane, Maurice; Baldrick, Francina R; Watterson, Steven
2016-07-01
Atherosclerosis is one of the principal pathologies of cardiovascular disease, with blood cholesterol a significant risk factor. The World Health Organization estimates that approximately 2.5 million deaths occur annually because of the risk from elevated cholesterol, with 39% of adults worldwide at future risk. Atherosclerosis emerges from the combination of many dynamical factors, including haemodynamics, endothelial damage, innate immunity and sterol biochemistry. Despite its significance to public health, the dynamics that drive atherosclerosis remain poorly understood. As a disease that depends on multiple factors operating on different length scales, the natural framework to apply to atherosclerosis is mathematical and computational modelling. A computational model provides an integrated description of the disease and serves as an in silico experimental system from which we can learn about the disease and develop therapeutic hypotheses. Although the work completed in this area to date has been limited, there are clear signs that interest is growing and that a nascent field is establishing itself. This article discusses the current state of modelling in this area, bringing together many recent results for the first time. We review the work that has been done, discuss its scope and highlight the gaps in our understanding that could yield future opportunities. PMID:26438419
Chung, S.; McGill, M.; Preece, D.S.
1994-07-01
Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. The more overburden removed by explosives, the less blasted material is left to be transported with mechanical equipment such as draglines and trucks. In order to optimize the percentage of rock that is cast, a higher-than-normal powder factor is required, plus an initiation technique designed to produce a much greater degree of horizontal muck movement. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements that interact with the flow of explosive gases, using explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties, loading geometry, and explosive equation-of-state parameters. It gives the user a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, the in-flight collisions of blocks are not considered, and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, DMC can perform a blast simulation in 0.5 hours on the SUN SPARCstation 10--41, while the new SABREX 3.5 produces results for a cast blast in ten seconds on a 486-PC computer. Predicted cast percentages and face velocities from both computer codes compare well with measured results from a full-scale cast blast.
Computational modelling of polymers
NASA Technical Reports Server (NTRS)
Celarier, Edward A.
1991-01-01
Polymeric materials and polymer/graphite composites show a very diverse range of material properties, many of which make them attractive candidates for a variety of high performance engineering applications. Their properties are ultimately determined largely by their chemical structure and the conditions under which they are processed. It is the aim of computational chemistry to be able to simulate candidate polymers on a computer and determine what their likely material properties will be. A number of commercially available software packages purport to predict the material properties of samples, given the chemical structures of their constituent molecules. One such system, Cerius, has been in use at LaRC. It comprises a number of modules, each of which performs a different kind of calculation on a molecule in the program's workspace. Of particular interest is evaluating the suitability of this program to aid in the study of microcrystalline polymeric materials. One of the first model systems examined was benzophenone. The results of this investigation are discussed.
3-dimensional bioprinting for tissue engineering applications.
Gu, Bon Kang; Choi, Dong Jin; Park, Sang Jun; Kim, Min Sup; Kang, Chang Mo; Kim, Chun-Ho
2016-01-01
The 3-dimensional (3D) printing technologies, referred to as additive manufacturing (AM) or rapid prototyping (RP), have gained prominence over the past few years in art, architectural modeling, lightweight machines, and tissue engineering applications. Among these applications, tissue engineering using 3D printing has attracted attention from many researchers. 3D bioprinting offers advantages in the manufacture of scaffolds for tissue engineering applications because of its rapid fabrication, high precision, and customized production. In this review, we introduce the principles and the current state of 3D bioprinting methods, focusing on studies that currently apply printed 3D scaffolds in the biomedical and tissue engineering fields. PMID:27114828
Teleportation of a 3-dimensional GHZ State
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Wang, Huai-Sheng; Li, Peng-Fei; Song, He-Shan
2012-05-01
The process of teleportation of a completely unknown 3-dimensional GHZ state is considered. Three maximally entangled 3-dimensional Bell states serve as the quantum channel in the scheme. This teleportation scheme can be directly generalized to teleport an unknown d-dimensional GHZ state.
Workshop on Computational Turbulence Modeling
Not Available
1993-01-01
This document contains presentations given at Workshop on Computational Turbulence Modeling held 15-16 Sep. 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow. Separate abstracts have been prepared for articles from this report.
Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...
NASA Technical Reports Server (NTRS)
Kubrynski, Krzysztof
1991-01-01
A subcritical panel method applied to flow analysis and aerodynamic design of complex aircraft configurations is presented. The analysis method is based on linearized, compressible, subsonic flow equations and indirect Dirichlet boundary conditions. Quadratic dipole and linear source distributions on flat panels are applied. In the case of aerodynamic design, the geometry that minimizes the differences between the design and actual pressure distributions is found iteratively, using a numerical optimization technique. Geometry modifications are modeled by the surface transpiration concept. Constraints on the resulting geometry can be specified. A number of complex 3-dimensional design examples are presented. The software has been adapted to personal computers, yielding unexpectedly low computational cost.
Improving Perceptual Skills with 3-Dimensional Animations.
ERIC Educational Resources Information Center
Johns, Janet Faye; Brander, Julianne Marie
1998-01-01
Describes three-dimensional computer-aided design (CAD) models for every component in a representative mechanical system; the CAD models made it easy to generate 3D animations that are ideal for teaching perceptual skills in multimedia computer-based technical training. Fifteen illustrations are provided. (AEF)
This report presents a three-dimensional finite-element numerical model designed to simulate chemical transport in subsurface systems with temperature effect taken into account. The three-dimensional model is developed to provide (1) a tool of application, with which one is able ...
Kohyama, Hiroaki
2008-07-01
We construct the phase diagram of the quark-antiquark and diquark condensates at finite temperature and density in the 2+1 dimensional (3D) two-flavor massless Gross-Neveu (GN) model with 4-component quarks. In contrast to the case of 2-component quarks, a coexisting phase of the quark-antiquark and diquark condensates appears. This is the crucial difference between the 2-component and 4-component quark cases in the 3D GN model. The coexisting phase is also seen in the 4D Nambu-Jona-Lasinio model. We thus see that the 3D GN model with 4-component quarks bears a closer resemblance to the 4D Nambu-Jona-Lasinio model.
Kobayashi, Kazuyoshi; Imagama, Shiro; Muramoto, Akio; Ito, Zenya; Ando, Kei; Yagi, Hideki; Hida, Tetsuro; Ito, Kenyu; Ishikawa, Yoshimoto; Tsushima, Mikito; Ishiguro, Naoki
2015-01-01
In severe spinal deformity, pain and neurological disorders may be caused by spinal cord compression. Surgery for spinal reconstruction is desirable but may be difficult in a case with severe deformity. Here, we show the utility of a 3D NaCl (salt) model in preoperative planning of anterior reconstruction using a rib strut in a 49-year-old male patient with cervicothoracic degenerative spondylosis. We performed surgery in two stages: a posterior approach with decompression and posterior instrumentation with a pedicle screw, followed by a second operation using an anterior approach, for which we created a 3D NaCl model including the cervicothoracic lesion, spinal deformity, and ribs for anterior reconstruction. The 3D NaCl model was easier to carve than a conventional plaster model and was useful for planning the resection and identifying a suitable rib for grafting in a preoperative simulation. Surgery was performed successfully with reference to the 3D NaCl model. We conclude that preoperative simulation with a 3D NaCl model contributes to the performance of anterior reconstruction using a rib strut in a case of cervicothoracic deformity. PMID:26412901
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Computer-Aided Geometry Modeling
NASA Technical Reports Server (NTRS)
Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)
1984-01-01
Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.
Hwang, Minki; Song, Jun-Seop; Lee, Young-Seon; Joung, Boyoung; Pak, Hui-Nam
2016-01-01
Background We previously reported that stable rotors were observed in in-silico human atrial fibrillation (AF) models and were well represented by dominant frequency (DF). We explored the spatiotemporal stability of DF sites in 3D AF models imported from patient CT images of the left atrium (LA). Methods We integrated 3D CT images of the LA obtained from ten patients with persistent AF (male 80%, 61.8 ± 13.5 years old) into an in-silico AF model. After induction, we obtained 6 seconds of AF simulation data for DF analyses at 30-second intervals (T1–T9). The LA was divided into ten sections. Spatiotemporal changes and variations in the temporal consistency of DF were evaluated at each section of the LA. The high-DF area was defined as the area with the highest 10% of DF. Results 1. There was no spatial consistency in the high-DF distribution at each LA section during T1–T9 except in one patient (p = 0.027). 2. Coefficients of variation for the high-DF area differed significantly among the ten LA sections (p < 0.001), and they were significantly higher in the four pulmonary vein (PV) areas, the LA appendage, and the peri-mitral area than in the other LA sections (p < 0.001). 3. When we conducted virtual ablation of 10%, 15%, and 20% of the highest-DF areas (n = 270 cases), AF changed to atrial tachycardia (AT) or terminated at rates of 40%, 57%, and 76%, respectively. Conclusions Spatiotemporal consistency of the DF area was observed in 10% of AF patients, and high-DF areas were temporally variable. Virtual ablation of DF areas is moderately effective in terminating AF or converting it to AT. PMID:27459377
Dutta, Debargh K; Potnis, Pushya A; Rhodes, Kelly; Wood, Steven C
2015-01-01
Multinucleate giant cells (MGCs) are formed by the fusion of 5 to 15 monocytes or macrophages. MGCs can be generated by hip implants at the site where the metal surface of the device is in close contact with tissue. MGCs play a critical role in the inflammatory processes associated with adverse events such as aseptic loosening of the prosthetic joints and the bone degeneration process called osteolysis. Upon interaction with metal wear particles, endothelial cells upregulate pro-inflammatory cytokines and other factors that enhance a localized immune response. However, the role of endothelial cells in the generation of MGCs has not been completely investigated. We developed a three-dimensional peripheral tissue-equivalent model (PTE) consisting of collagen gel supporting a monolayer of endothelial cells and human peripheral blood mononuclear cells (PBMCs) on top, which mimics peripheral tissue under normal physiological conditions. The cultures were incubated for 14 days with cobalt chromium alloy (CoCr ASTM F75, 1-5 micron) wear particles. PBMCs were allowed to transit the endothelium, and harvested cells were analyzed for MGC generation via flow cytometry. An increase in forward scatter (cell size) and in propidium iodide (PI) uptake (a DNA intercalating dye) was used to identify MGCs. Our results show that endothelial cells induce the generation of MGCs to a level 4-fold higher in the 3-dimensional PTE system than in traditional 2-dimensional culture plates. Further characterization of MGCs showed upregulated expression of tartrate-resistant alkaline phosphatase (TRAP) and dendritic cell-specific transmembrane protein (DC-STAMP), which are markers of bone-degrading cells called osteoclasts. In sum, we have established a robust and relevant model to examine MGC and osteoclast formation in a tissue-like environment using flow cytometry and RT-PCR. With the help of endothelial cells, we observed a consistent generation of metal wear particle-induced MGCs, which heralds
Sierra Toolkit computational mesh conceptual model.
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-03-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
NASA Technical Reports Server (NTRS)
Patel, Umesh D.; DellaTorre, Edward; Day, John H. (Technical Monitor)
2001-01-01
A fast differential equation approach for the DOK model has been extended to the CMH model, and a cobweb technique for calculating the CMH model is also presented. The two techniques are contrasted from the points of view of flexibility and computation time.
COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS
Mathur, M.P.; Freeman, Mark; Gera, Dinesh
2001-11-06
In the current fiscal year FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NO{sub x} inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors, and to explore the possibility of using biomass as a reburning fuel to reduce NO{sub x}. Some additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O{sub 2}/CO{sub 2} environments. Most of the CFD models available in the literature treat particles as point masses with uniform internal temperature. This isothermal assumption may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict carbon burnout for larger biomass particles. Anisotropy in heat transfer will be explored using different conductivities in the radial and axial directions. The above models will be validated/tested on various full-scale industrial boilers. The current NO{sub x} modules will be modified to account for local CH, CH{sub 2}, and CH{sub 3} radical chemistry; currently they are based on global chemistry. It may also be worth exploring the effect of an enriched O{sub 2}/CO{sub 2} environment on carbon burnout and NO{sub x} concentration. The research objective of this study is to develop a 3-Dimensional Combustor Model for Biomass Co-firing and reburning applications using the Fluent Computational Fluid Dynamics Code.
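The intra-particle conduction idea mentioned above, resolving the temperature gradient from the particle surface to its center rather than treating the particle as isothermal, can be sketched with an explicit finite-difference solver for radial heat conduction in a sphere. This is a hypothetical stand-in, not the module described in the report; the particle radius, diffusivity `alpha`, and temperatures are illustrative.

```python
import numpy as np

def particle_heating(radius=1e-3, alpha=1e-7, T0=300.0, T_surf=1200.0,
                     nr=50, t_end=1.0):
    """Explicit finite-difference solution of radial heat conduction
    dT/dt = alpha * (1/r^2) d/dr (r^2 dT/dr) inside a spherical particle
    with fixed surface temperature T_surf and uniform initial T0.
    Returns the radial grid and the temperature profile at t_end."""
    r = np.linspace(0.0, radius, nr)
    dr = r[1] - r[0]
    dt = 0.2 * dr**2 / alpha          # below the stability limit dr^2/(2*alpha)
    T = np.full(nr, T0)
    T[-1] = T_surf                    # fixed surface temperature
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        # interior nodes: spherical Laplacian (1/r^2) d/dr (r^2 dT/dr)
        lap = (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2]) / dr**2 \
            + (2.0 / r[1:-1]) * (Tn[2:] - Tn[:-2]) / (2.0 * dr)
        T[1:-1] = Tn[1:-1] + alpha * dt * lap
        T[0] = T[1]                   # symmetry at the centre (dT/dr = 0)
    return r, T
```

With these illustrative numbers the diffusion time scale radius^2/alpha is about 10 s, so at t = 1 s the center still lags well behind the surface, which is exactly the non-isothermal effect a point-mass particle model would miss.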
NASA Technical Reports Server (NTRS)
Franz, J. R.
1993-01-01
Numerical calculations of the electronic properties of liquid II-VI semiconductors, particularly CdTe and ZnTe, were performed. The measured conductivity of these liquid alloys was modeled by assuming that the dominant temperature effect is the increase in the number of dangling bonds with increasing temperature. For low to moderate values of electron correlation, the calculated conductivity as a function of dangling bond concentration closely follows the measured conductivity as a function of temperature. Both the temperature dependence of the chemical potential and the thermal smearing in the region of the Fermi surface have a large effect on the calculated values of conductivity.
Computational models of epileptiform activity.
Wendling, Fabrice; Benquet, Pascal; Bartolomei, Fabrice; Jirsa, Viktor
2016-02-15
We reviewed computer models that have been developed to reproduce and explain epileptiform activity. Unlike other already-published reviews on computer models of epilepsy, the proposed overview starts from the various types of epileptiform activity encountered during both interictal and ictal periods. Computational models proposed so far in the context of partial and generalized epilepsies are classified according to the following taxonomy: neural mass, neural field, detailed network and formal mathematical models. Insights gained about interictal epileptic spikes and high-frequency oscillations, about fast oscillations at seizure onset, about seizure initiation and propagation, about spike-wave discharges and about status epilepticus are described. This review shows the richness and complementarity of the various modeling approaches as well as the fruitful contribution of the computational neuroscience community in the field of epilepsy research. It shows that models have progressively gained acceptance and are now considered as an efficient way of integrating structural, functional and pathophysiological data about neural systems into "coherent and interpretable views". The advantages, limitations and future of modeling approaches are discussed. Perspectives in epilepsy research and clinical epileptology indicate that very promising directions are foreseen, like model-guided experiments or model-guided therapeutic strategy, among others. PMID:25843066
Computational modeling of properties
NASA Technical Reports Server (NTRS)
Franz, Judy R.
1994-01-01
A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strong-scattering regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as a function of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.
3-dimensional imaging at nanometer resolutions
Werner, James H.; Goodwin, Peter M.; Shreve, Andrew P.
2010-03-09
An apparatus and method for enabling precise, 3-dimensional, photoactivation localization microscopy (PALM) using selective, two-photon activation of fluorophores in a single z-slice of a sample in cooperation with time-gated imaging for reducing the background radiation from other image planes to levels suitable for single-molecule detection and spatial location, are described.
Efficient Computational Model of Hysteresis
NASA Technical Reports Server (NTRS)
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
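The parallel backlash structure described above can be written down directly: each element is a play operator with its own deadband half-width and output gain, and the zeroth element, with zero width, supplies the linear part of the input/output relationship. The sketch below is a generic Prandtl-Ishlinskii-style implementation under those assumptions, not the flight code; in practice the widths and gains would be fitted to the measured displacement-versus-voltage data.

```python
import numpy as np

class BacklashHysteresis:
    """Parallel bank of backlash (play) operators. Element i tracks the
    input u only once u leaves the band [state_i - w_i, state_i + w_i];
    the total output is the gain-weighted sum of the element states."""

    def __init__(self, widths, gains):
        self.widths = np.asarray(widths, dtype=float)  # deadband half-widths; widths[0] == 0
        self.gains = np.asarray(gains, dtype=float)    # output gain per element
        self.state = np.zeros_like(self.widths)        # element memories

    def step(self, u):
        # play operator: clip each memory into the band centred on u,
        # so it follows u only outside its own deadband
        self.state = np.clip(self.state, u - self.widths, u + self.widths)
        return float(self.gains @ self.state)
```

Driving the model up and back down along the same input values produces two different output branches, i.e. a hysteresis loop; the element with zero width always equals u, so it contributes the piecewise-linear backbone.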
Ch. 33 Modeling: Computational Thermodynamics
Besmann, Theodore M
2012-01-01
This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.
Computational Modeling of Multiphase Reactors.
Joshi, J B; Nandakumar, K
2015-01-01
Multiphase reactors are very common in the chemical industry, and numerous review articles exist that are focused on types of reactors, such as bubble columns, trickle beds, fluid catalytic beds, etc. Currently, there is a high degree of empiricism in the design process of such reactors owing to the complexity of coupled flow and reaction mechanisms. Hence, we focus on synthesizing recent advances in computational and experimental techniques that will enable future designs of such reactors in a more rational manner by exploring a large design space with high-fidelity models (computational fluid dynamics and computational chemistry models) that are validated with high-fidelity measurements (tomography and other detailed spatial measurements) to provide a high degree of rigor. Understanding the spatial distributions of dispersed phases and their interaction during scale-up are key challenges that were traditionally addressed through pilot-scale experiments but can now be addressed through advanced modeling. PMID:26134737
Computational models of adult neurogenesis
NASA Astrophysics Data System (ADS)
Cecchi, Guillermo A.; Magnasco, Marcelo O.
2005-10-01
Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models in which new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas such as the olfactory bulb and the dentate gyrus.
Models for computing combat risk
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2002-07-01
Combat always involves uncertainty, and uncertainty entails risk. To ensure that a combat task is prosecuted with the desired probability of success, the task commander has to devise an appropriate task force and then adjust it continuously in the course of battle. In order to do so, he has to evaluate how the probability of task success is related to the structure, capabilities, and numerical strengths of the combatants. For this purpose, predictive models of combat dynamics, for combats in which the combatants fire asynchronously at random instants, are developed from first principles. Combats involving forces with both unlimited and limited ammunition supply are studied and modeled by stochastic Markov processes. In addition to the Markov models, another class of models, first proposed by Brown, was explored. These models compute directly the probability of win, in which we are primarily interested, without integrating the state probability equations. Experiments confirm that they produce exactly the same results at much lower computational cost.
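The direct computation of the probability of win can be illustrated with a minimal Markov race: with m and n surviving combatants firing at per-combatant kill rates a and b, exponential inter-fire times imply the next casualty falls on side B with probability ma/(ma + nb), which yields a simple recursion over surviving strengths. This sketch is an assumption-laden illustration of the idea (every shot a kill, unlimited ammunition), not the paper's actual models.

```python
from functools import lru_cache

def p_win(m, n, a=1.0, b=1.0):
    """Probability that side A (m shooters, kill rate a each) eliminates
    side B (n shooters, kill rate b each) in a Markov 'race' model,
    computed directly by recursion rather than by integrating the
    state probability equations."""
    @lru_cache(maxsize=None)
    def P(m, n):
        if n == 0:                 # side B eliminated: A wins
            return 1.0
        if m == 0:                 # side A eliminated: A loses
            return 0.0
        ra, rb = m * a, n * b      # total fire rate of each side
        # next casualty falls on side B w.p. ra/(ra+rb), on A otherwise
        return (ra * P(m, n - 1) + rb * P(m - 1, n)) / (ra + rb)
    return P(m, n)
```

By symmetry, equal forces with equal rates win with probability 1/2, which is a convenient sanity check on the recursion.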
The 3-dimensional cellular automata for HIV infection
NASA Astrophysics Data System (ADS)
Mo, Youbin; Ren, Bin; Yang, Wencao; Shuai, Jianwei
2014-04-01
The HIV infection dynamics is discussed in detail with a 3-dimensional cellular automata model in this paper. The model can reproduce the three-phase development observed clinically in HIV-infected patients, i.e., the acute period, the asymptomatic period, and the AIDS period. We show that the 3D HIV model is more robust with respect to the model parameters than the 2D cellular automata. Furthermore, we reveal that the occurrence of a perpetual source that successively generates infectious waves spreading through the whole system drives the model from the asymptomatic state to the AIDS state.
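A simplified update rule for such a model can be sketched as follows. This loosely follows the widely used cellular-automaton HIV scheme of Zorzenon dos Santos and Coutinho (healthy cells, two infected stages A1/A2, dead cells, replenishment), not necessarily the exact rules of this paper; the incubation period `tau` and the probabilities are illustrative, and the grid is periodic for simplicity.

```python
import numpy as np

HEALTHY, INFECT_A1, INFECT_A2, DEAD = 0, 1, 2, 3

def step(grid, age, tau=4, p_repl=0.99, p_infec=1e-5, rng=None):
    """One synchronous update of a simplified 3D cellular-automaton HIV
    model on a periodic lattice (np.roll wraps the boundaries)."""
    if rng is None:
        rng = np.random.default_rng()
    # count infectious (A1) neighbours over the 6-connected stencil
    infectious = grid == INFECT_A1
    nbr = sum(np.roll(infectious, s, axis=a).astype(int)
              for a in range(3) for s in (-1, 1))
    new, new_age = grid.copy(), age.copy()
    # rule 1: a healthy cell with an infectious neighbour becomes A1
    catch = (grid == HEALTHY) & (nbr > 0)
    new[catch] = INFECT_A1
    new_age[catch] = 0
    # rule 2: A1 cells age; after tau steps the immune response turns them A2
    aging = grid == INFECT_A1
    new_age[aging] += 1
    new[aging & (new_age > tau)] = INFECT_A2
    # rule 3: A2 cells die
    new[grid == INFECT_A2] = DEAD
    # rule 4: dead cells are replenished; a tiny fraction arrive infected
    reborn = (grid == DEAD) & (rng.random(grid.shape) < p_repl)
    new[reborn] = HEALTHY
    new[reborn & (rng.random(grid.shape) < p_infec)] = INFECT_A1
    return new, new_age
```

Seeding a single A1 cell produces an expanding infection wave; the rare infected replenishment in rule 4 is what lets such models sustain the long asymptomatic phase before the AIDS-like collapse.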
Computational Modeling Method for Superalloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John
1997-01-01
Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations of the methods or the computing resources required by such calculations, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys. Together with experimental results from previous and current research that validate its use for large-scale simulations, it provides the ideal foundation for a computationally economical and physically sound procedure for supplementing experimental work, with great savings in cost and time.
Climate Modeling Computing Needs Assessment
NASA Astrophysics Data System (ADS)
Petraska, K. E.; McCabe, J. D.
2011-12-01
This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.
Biochemical Applications Of 3-Dimensional Fluorescence Spectrometry
NASA Astrophysics Data System (ADS)
Leiner, Marc J.; Wolfbeis, Otto S.
1988-06-01
We investigated the 3-dimensional fluorescence of complex mixtures of bioliquids such as human serum, serum ultrafiltrate, human urine, and human plasma low-density lipoproteins. The total fluorescence of human serum can be divided into a few peaks. When comparing fluorescence topograms of sera from normal and cancerous subjects, we found significant differences in tryptophan fluorescence. Although the total fluorescence of human urine can be resolved into 3-5 distinct peaks, some of them do not result from single fluorescent urinary metabolites, but rather from several species having similar spectral properties. Human plasma low-density lipoproteins possess a native fluorescence that changes when submitted to in-vitro autoxidation. The 3-dimensional fluorescence demonstrated the presence of 7 fluorophores in the lipid domain, and 6 fluorophores in the protein domain. The above results demonstrate that 3-dimensional fluorescence can resolve the spectral properties of complex mixtures much better than other methods. Moreover, parameters other than excitation and emission wavelength and intensity (for instance fluorescence lifetime, polarization, or quenchability) may be exploited to give a multidimensional matrix that is unique for each sample. Consequently, 3-dimensional fluorescence as such, or in combination with separation techniques, is considered to have the potential of becoming a useful new method in clinical chemistry and analytical biochemistry.
Hydronic distribution system computer model
Andrews, J.W.; Strasser, J.J.
1994-10-01
A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.
Computational Modeling for Bedside Application
Kerckhoffs, Roy C.P.; Narayan, Sanjiv M.; Omens, Jeffrey H.; Mulligan, Lawrence J.; McCulloch, Andrew D.
2008-01-01
With growing computer power, novel diagnostic and therapeutic medical technologies, coupled with an increasing knowledge of pathophysiology from gene to organ systems, it is increasingly feasible to apply multi-scale patient-specific modeling based on proven disease mechanisms to guide and predict the response to therapy in many aspects of medicine. This is an exciting and relatively new approach, for which efficient methods and computational tools are of the utmost importance. Already, investigators have designed patient-specific models in almost all areas of human physiology. Not only will these models be useful on a large scale in the clinic to predict and optimize the outcome from surgery and non-interventional therapy, but they will also provide pathophysiologic insights from cell to tissue to organ system, and therefore help to understand why specific interventions succeed or fail. PMID:18598988
3-dimensional strain fields from tomographic measurements
NASA Astrophysics Data System (ADS)
Haldrup, K.; Nielsen, S. F.; Mishnaevsky, L., Jr.; Beckmann, F.; Wert, J. A.
2006-08-01
Understanding the distributions of strain within solid bodies undergoing plastic deformations has been of interest for many years in a wide range of disciplines, ranging from basic materials science to biology. However, the desire to investigate these strain fields has been frustrated by the inaccessibility of the interior of most samples to detailed investigation without destroying the sample in the process. To some extent, this has been remedied by the development of advanced surface measurement techniques as well as computer models based on Finite Element methods. Over the last decade, this situation has changed by the introduction of a range of tomographic methods based both on advances in computer technology and in instrumentation, advances which have opened up the interior of optically opaque samples for detailed investigations. We present a general method for assessing the strain in the interior of marker-containing specimens undergoing various types of deformation. The results are compared with Finite Element modelling.
Visualizing ultrasound through computational modeling
NASA Technical Reports Server (NTRS)
Guo, Theresa W.
2004-01-01
The Doppler Ultrasound Hematocrit Project (DHP) hopes to find non-invasive methods of determining a person's blood characteristics. Because of the limits of microgravity and the space travel environment, it is important to find non-invasive methods of evaluating the health of persons in space. Presently, there is no well-developed method of determining blood composition non-invasively. This project hopes to use ultrasound and Doppler signals to evaluate the characteristic of hematocrit, the percentage by volume of red blood cells within whole blood. These non-invasive techniques may also be developed for use on earth with trauma patients, where invasive measures might be detrimental. Computational modeling is a useful tool for collecting preliminary information and predictions for the laboratory research. We hope to find and develop a computer program that will be able to simulate the ultrasound signals the project will work with. Simulated models of test conditions will more easily show what might be expected from laboratory results, thus helping the research group make informed decisions before and during experimentation. There are several existing Matlab-based computer programs available, designed to interpret and simulate ultrasound signals. These programs will be evaluated to find which is best suited for the project needs. The criteria of evaluation that will be used are: 1) the program must be able to specify transducer properties and specify transmitting and receiving signals, 2) the program must be able to simulate ultrasound signals through different attenuating mediums, 3) the program must be able to process moving targets in order to simulate the Doppler effects that are associated with blood flow, 4) the program should be user friendly and adaptable to various models. After a computer program is chosen, two simulation models will be constructed. These models will simulate and interpret an RF data signal and a Doppler signal.
Parallel computing in enterprise modeling.
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Cosmic logic: a computational model
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take a CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Workshop on Computational Turbulence Modeling
NASA Technical Reports Server (NTRS)
Shabbir, A. (Compiler); Shih, T.-H. (Compiler); Povinelli, L. A. (Compiler)
1994-01-01
The purpose of this meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Various turbulence models have been developed and applied to different turbulent flows over the past several decades and it is becoming more and more urgent to assess their performance in various complex situations. In order to help users in selecting and implementing appropriate models in their engineering calculations, it is important to identify the capabilities as well as the deficiencies of these models. This also benefits turbulence modelers by permitting them to further improve upon the existing models. This workshop was designed for exchanging ideas and enhancing collaboration between different groups in the Lewis community who are using turbulence models in propulsion related CFD. In this respect this workshop will help the Lewis goal of excelling in propulsion related research. This meeting had seven sessions for presentations and one panel discussion over a period of two days. Each presentation session was assigned to one or two branches (or groups) to present their turbulence related research work. Each group was asked to address at least the following points: current status of turbulence model applications and developments in the research; progress and existing problems; and requests about turbulence modeling. The panel discussion session was designed for organizing committee members to answer management and technical questions from the audience and to make concluding remarks.
Application of three-dimensional computer modeling for reservoir and ore-body analysis
Hamilton, D.E.; Marie, J.L.; Moon, G.M.; Moretti, F.J.; Ryman, W.P.; Didur, R.S.
1985-02-01
Three-dimensional computer modeling of reservoirs and ore bodies aids in understanding and exploiting these resources. This modeling tool enables the geologist and engineer to correlate in 3 dimensions, experiment with various geologic interpretations, combine variables to enhance certain geologic attributes, test for reservoir heterogeneities and continuity, select drill sites or perforation zones, determine volumes, plan production, generate geologic parameters for input to flow simulators, calculate tonnages and ore-waste ratios, and test sensitivity of reserves to various ore-grade cutoffs and economic parameters. All applications benefit from the ability to update rapidly the 3-dimensional computer models when new data are collected. Two 3-dimensional computer modeling projects demonstrate these capabilities. The first project involves modeling porosity, permeability, and water saturation in a Malaysian reservoir. The models were used to analyze the relationship between water saturation and porosity and to generate geologic parameters for input to a flow simulator. The second project involves modeling copper, zinc, silver, gold, and specific gravity in a massive sulfide ore body in British Columbia. The 4 metal models were combined into one copper-equivalence model and evaluated for tonnage, stripping ratio, and sensitivity to variations of ore-grade cutoff.
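The copper-equivalence combination described in the second project can be illustrated with a price-weighted sum over the metal-grade models. The weighting formula, function name, and example prices below are assumptions for illustration, since the abstract does not state the exact method used:

```python
import numpy as np

def copper_equivalent(grades, prices):
    """Combine several 3-D metal-grade models into a single
    copper-equivalence model by weighting each metal's grade with its
    price relative to copper. `grades` maps metal name -> 3-D grade
    array; `prices` maps metal name -> unit price."""
    cu_price = prices['copper']
    total = np.zeros_like(grades['copper'], dtype=float)
    for metal, grade in grades.items():
        total += grade * (prices[metal] / cu_price)
    return total

# Hypothetical two-metal example on a small 3-D block model.
grades = {'copper': np.full((2, 2, 2), 1.0),   # % Cu per block
          'zinc':   np.full((2, 2, 2), 2.0)}   # % Zn per block
prices = {'copper': 4.0, 'zinc': 1.0}
cu_eq = copper_equivalent(grades, prices)
```

A single equivalence model like this is what makes tonnage, stripping-ratio, and cutoff-sensitivity calculations tractable on one grid rather than four.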
MODEL IDENTIFICATION AND COMPUTER ALGEBRA.
Bollen, Kenneth A; Bauldry, Shawn
2010-10-01
Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods. PMID:21769158
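The CAS workflow the authors describe can be illustrated on a one-factor, three-indicator measurement model: solve the implied covariance equations explicitly for the loadings (global identification) and check the rank of the Jacobian of implied moments (local identification). This small example and the SymPy calls are an illustration of the approach, not the authors' own code:

```python
import sympy as sp

l1, l2, l3 = sp.symbols('lambda1 lambda2 lambda3', positive=True)
s12, s13, s23 = sp.symbols('sigma12 sigma13 sigma23', positive=True)

# Implied covariances of a one-factor model with three indicators
# (factor variance fixed at 1): sigma_ij = lambda_i * lambda_j.
eqs = [sp.Eq(s12, l1 * l2), sp.Eq(s13, l1 * l3), sp.Eq(s23, l2 * l3)]

# Global identification: explicit algebraic solutions for each loading
# in terms of observable moments, e.g. lambda1 = sqrt(s12*s13/s23).
sol = sp.solve(eqs, [l1, l2, l3], dict=True)

# Local identification: the Jacobian of the implied moments with respect
# to the parameters must have full column rank (3 parameters -> rank 3).
J = sp.Matrix([l1 * l2, l1 * l3, l2 * l3]).jacobian([l1, l2, l3])
assert J.rank() == 3
```

Getting a closed-form solution for every parameter, rather than a numerical rank check at one set of estimates, is exactly the advantage over empirical information-matrix checks that the abstract emphasizes.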
Los Alamos Center for Computer Security formal computer security model
Dreicer, J.S.; Hunteman, W.J.; Markin, J.T.
1989-01-01
This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The need to test and verify DOE computer security policy implementation first motivated this effort. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present formal mathematical models for computer security. The fundamental objective of computer security is to prevent the unauthorized and unaccountable access to a system. The inherent vulnerabilities of computer systems result in various threats from unauthorized access. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The model is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell and LaPadula abstract sets of objects and subjects. 6 refs.
Computer modeling of piezoresistive gauges
Nutt, G. L.; Hallquist, J. O.
1981-08-07
A computer model of a piezoresistive gauge subject to shock loading is developed. The time-dependent two-dimensional response of the gauge is calculated. The stress and strain components of the gauge are determined assuming elastic-plastic material properties. The model is compared with experiment for four cases: an ytterbium foil gauge in a PMMA medium subjected to a 0.5 GPa plane shock wave, where the gauge is presented to the shock with its flat surface both parallel and perpendicular to the front; and a similar comparison for a manganin foil subjected to a 2.7 GPa shock. The signals are compared also with a calibration equation derived with the gauge and medium properties accounted for but with the assumption that the gauge is in stress equilibrium with the shocked medium.
A Computationally Efficient Bedrock Model
NASA Astrophysics Data System (ADS)
Fastook, J. L.
2002-05-01
Full treatments of the Earth's crust, mantle, and core for ice sheet modeling are often computationally overwhelming, in that the requirements to calculate a full self-gravitating spherical Earth model for the time-varying load history of an ice sheet are considerably greater than the computational requirements for the ice dynamics and thermodynamics combined. For this reason, we adopt a ``reasonable'' approximation for the behavior of the deforming bedrock beneath the ice sheet. This simpler model of the Earth treats the crust as an elastic plate supported from below by a hydrostatic fluid. Conservation of linear and angular momentum for an elastic plate leads to the classical Poisson-Kirchhoff fourth order differential equation in the crustal displacement. By adding a time-dependent term this treatment allows for an exponentially-decaying response of the bed to loading and unloading events. This component of the ice sheet model (along with the ice dynamics and thermodynamics) is solved using the Finite Element Method (FEM). C1 FEMs are difficult to implement in more than one dimension, and as such the engineering community has turned away from classical Poisson-Kirchhoff plate theory to treatments such as Reissner-Mindlin plate theory, which are able to accommodate transverse shear and hence require only C0 continuity of basis functions (only the function, and not the derivative, is required to be continuous at the element boundary) (Hughes 1987). This method reduces the complexity of the C1 formulation by adding additional degrees of freedom (the transverse shear in x and y) at each node. This ``reasonable'' solution is compared with two self-gravitating spherical Earth models (1. Ivins et al. (1997) and James and Ivins (1998), and 2. Tushingham and Peltier (1991) ICE3G, run by Jim Davis and Glenn Milne), as well as with preliminary results of residual rebound rates measured with GPS by the BIFROST project. Modeled responses of a simulated ice sheet experiencing a
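The plate treatment described above can be written compactly. The notation in this sketch is assumed, not taken from the paper:

```latex
% Elastic plate supported by a hydrostatic fluid (Poisson--Kirchhoff):
%   D         flexural rigidity of the crust
%   w         vertical crustal displacement
%   rho_m g   restoring buoyancy of the underlying fluid
%   q         applied ice load
\[
  D\,\nabla^{4} w + \rho_{m} g\, w = q(x, y)
\]
% Adding a time-dependent term gives an exponentially decaying response
% toward the equilibrium displacement w_eq set by the current load,
% with relaxation time tau:
\[
  \frac{\partial w}{\partial t}
    = -\frac{1}{\tau}\left( w - w_{\mathrm{eq}} \right)
\]
```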
A Model Computer Literacy Course.
ERIC Educational Resources Information Center
Orndorff, Joseph
Designed to address the varied computer skill levels of college students, this proposed computer literacy course would be modular in format, with modules tailored to address various levels of expertise and permit individualized instruction. An introductory module would present both the history and future of computers and computing, followed by an…
A Computational Theory of Modelling
NASA Astrophysics Data System (ADS)
Rossberg, Axel G.
2003-04-01
A metatheory is developed which characterizes the relationship between a modelled system, which complies with some ``basic theory'', and a model, which does not, and yet reproduces important aspects of the modelled system. A model is represented by an (in a certain sense, see below) optimal algorithm which generates data that describe the model's state or evolution complying with a ``reduced theory''. Theories are represented by classes of (in a similar sense, see below) optimal algorithms that test if their input data comply with the theory. The metatheory does not prescribe the formalisms (data structure, language) to be used for the description of states or evolutions. Transitions to other formalisms and loss of accuracy, common to theory reduction, are explicitly accounted for. The basic assumption of the theory is that resources such as the code length (~ programming time) and the computation time for modelling and testing are costly, but the relative cost of each resource is unknown. Thus, if there is an algorithm a for which there is no other algorithm b solving the same problem but using less of each resource, then a is considered optimal. For tests (theories), the set X of wrongly admitted inputs is treated as another resource. It is assumed that X1 is cheaper than X2 when X1 ⊂ X2 (X1 ≠ X2). Depending on the problem, the algorithmic complexity of a reduced theory can be smaller or larger than that of the basic theory. The theory might help to distinguish actual properties of complex systems from mere mental constructs. An application to complex spatio-temporal patterns is discussed.
Computational model for chromosomal instability
NASA Astrophysics Data System (ADS)
Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina
2015-03-01
Faithful segregation of genetic material during cell division requires alignment of the chromosomes between the spindle poles and attachment of their kinetochores to each of the poles. Failure of these complex dynamical processes leads to chromosomal instability (CIN), a characteristic feature of several diseases including cancer. While a multitude of biological factors regulating chromosome congression and bi-orientation have been identified, it is still unclear how they are integrated into a coherent picture. Here we address this issue by a three dimensional computational model of motor-driven chromosome congression and bi-orientation. Our model reveals that successful cell division requires control of the total number of microtubules: if this number is too small bi-orientation fails, while if it is too large not all the chromosomes are able to congress. The optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles. Our results shed new light on the origin of several pathological conditions related to chromosomal instability.
Computational modeling of membrane proteins
Leman, Julia Koehler; Ulmschneider, Martin B.; Gray, Jeffrey J.
2014-01-01
The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the protein databank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, only highlighting how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefitted from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade. PMID:25355688
Cupola Furnace Computer Process Model
Seymour Katz
2004-12-31
The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to enable easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).
Predictive models and computational toxicology.
Knudsen, Thomas; Martin, Matthew; Chandler, Kelly; Kleinstreuer, Nicole; Judson, Richard; Sipes, Nisha
2013-01-01
Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was launched by EPA in 2007 and is part of the federal Tox21 consortium to develop a cost-effective approach for efficiently prioritizing the toxicity testing of thousands of chemicals and the application of this information to assessing human toxicology. ToxCast addresses this problem through an integrated workflow using high-throughput screening (HTS) of chemical libraries across more than 650 in vitro assays including biochemical assays, human cells and cell lines, and alternative models such as mouse embryonic stem cells and zebrafish embryo development. The initial phase of ToxCast profiled a library of 309 environmental chemicals, mostly pesticidal actives having rich in vivo data from guideline studies that include chronic/cancer bioassays in mice and rats, multigenerational reproductive studies in rats, and prenatal developmental toxicity endpoints in rats and rabbits. The first phase of ToxCast was used to build models that aim to determine how well in vivo animal effects can be predicted solely from the in vitro data. Phase I is now complete and both the in vitro data (ToxCast) and anchoring in vivo database (ToxRefDB) have been made available to the public (http://actor.epa.gov/). As Phase II of ToxCast is now underway, the purpose of this chapter is to review progress to date with ToxCast predictive modeling, using specific examples on developmental and reproductive effects in rats and rabbits with lessons learned during Phase I. PMID:23138916
Disciplines, models, and computers: the path to computational quantum chemistry.
Lenhard, Johannes
2014-12-01
Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990. PMID:25571750
NASA Technical Reports Server (NTRS)
Zhang, Ming
2005-01-01
The primary goal of this project was to perform theoretical calculations of propagation of cosmic rays and energetic particles in 3-dimensional heliospheric magnetic fields. We used Markov stochastic process simulation to achieve to this goal. We developed computation software that can be used to study particle propagation in, as two examples of heliospheric magnetic fields that have to be treated in 3 dimensions, a heliospheric magnetic field suggested by Fisk (1996) and a global heliosphere including the region beyond the termination shock. The results from our model calculations were compared with particle measurements from Ulysses, Earth-based spacecraft such as IMP-8, WIND and ACE, Voyagers and Pioneers in outer heliosphere for tests of the magnetic field models. We particularly looked for features of particle variations that can allow us to significantly distinguish the Fisk magnetic field from the conventional Parker spiral field. The computer code will eventually lead to a new generation of integrated software for solving complicated problems of particle acceleration, propagation and modulation in realistic 3-dimensional heliosphere of realistic magnetic fields and the solar wind with a single computation approach.
The Fermilab Central Computing Facility architectural model
Nicholls, J.
1989-05-01
The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.
Fast Computation of the Inverse CMH Model
NASA Technical Reports Server (NTRS)
Patel, Umesh D.; Torre, Edward Della; Day, John H. (Technical Monitor)
2001-01-01
A fast computational method, based on a differential equation approach for the inverse DOK model, has been extended to the inverse CMH model. A cobweb technique for calculating the inverse CMH model is also presented. The two techniques differ in flexibility and computation time.
Lower bounds on the computational efficiency of optical computing systems
NASA Astrophysics Data System (ADS)
Barakat, Richard; Reif, John
1987-03-01
A general model for determining the computational efficiency of optical computing systems, termed the VLSIO model, is described. It is a 3-dimensional generalization of the wire model of a 2-dimensional VLSI with optical beams (via Gabor's theorem) replacing the wires as communication channels. Lower bounds (in terms of simultaneous volume and time) on the computational resources of the VLSIO are obtained for computing various problems such as matrix multiplication.
Predictive Models and Computational Toxicology
Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was l...
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
Computational modeling of vascular anastomoses.
Migliavacca, Francesco; Dubini, Gabriele
2005-06-01
Recent development of computational technology allows a level of knowledge of biomechanical factors in the healthy or pathological cardiovascular system that was unthinkable a few years ago. In particular, computational fluid dynamics (CFD) and computational structural (CS) analyses have been used to evaluate specific quantities, such as fluid and wall stresses and strains, which are very difficult to measure in vivo. Indeed, CFD and CS offer much more variability and resolution than in vitro and in vivo methods, yet computations must be validated by careful comparison with experimental and clinical data. The enormous parallel development of clinical imaging such as magnetic resonance or computed tomography opens a new way toward a detailed patient-specific description of the actual hemodynamics and structural behavior of living tissues. Coupling of CFD/CS and clinical images is becoming a standard evaluation that is expected to become part of the clinical practice in the diagnosis and in the surgical planning in advanced medical centers. This review focuses on computational studies of fluid and structural dynamics of a number of vascular anastomoses: the coronary bypass graft anastomoses, the arterial peripheral anastomoses, the arterio-venous graft anastomoses and the vascular anastomoses performed in the correction of congenital heart diseases. PMID:15772842
Ellis, D; Allaire, J C
1999-09-01
We proposed a mediation model to examine the effects of age, education, computer knowledge, and computer anxiety on computer interest in older adults. We hypothesized that computer knowledge and computer anxiety would fully mediate the effects of age and education on computer interest. A sample of 330 older adults from local senior-citizen apartment buildings completed a survey that included an assessment of the constructs included in the model. Using structural equation modeling, we found that the results supported the hypothesized mediation model. In particular, the effect of computer knowledge operated on computer interest through computer anxiety. The effect of age was not fully mitigated by the other model variables, indicating the need for future research that identifies and models other correlates of age and computer interest. The most immediate application of this research is the finding that a simple 3-item instrument can be used to assess computer interest in older populations. This will help professionals plan and implement computer services in public-access settings for older adults. An additional application of this research is the information it provides for training program designers. PMID:10665203
Climate Modeling using High-Performance Computing
Mirin, A A
2007-02-05
The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.
"Computational Modeling of Actinide Complexes"
Balasubramanian, K
2007-03-07
We will present our recent studies on the computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to the environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley and Dr. David Clark at Los Alamos, and by Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/m) in solution for the first time agrees with the experiment (7.1 kcal
COLD-SAT Dynamic Model Computer Code
NASA Technical Reports Server (NTRS)
Bollenbacher, G.; Adams, N. S.
1995-01-01
COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
Applications of computer modeling to fusion research
Dawson, J.M.
1989-01-01
Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling.
Leverage points in a computer model
NASA Astrophysics Data System (ADS)
Janošek, Michal
2016-06-01
This article focuses on the analysis of leverage points (a concept developed by D. Meadows) in a computer model. The goal is to find out whether these leverage points can be identified in a computer model (using a predator-prey model as an example) and to determine how the model's parameters, their ranges, and its monitored variables are associated with the concept of leverage points.
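Probing leverage points in a predator-prey model can be sketched as follows: integrate the textbook Lotka-Volterra equations and sweep one parameter (here the prey growth rate) while monitoring one output variable (the peak prey population). The equations are the standard form and every parameter value is invented for illustration, not taken from the article.

```python
def lotka_volterra(alpha, beta, delta, gamma, prey0=10.0, pred0=5.0,
                   dt=0.001, steps=20000):
    """Forward-Euler integration of the classic predator-prey equations:

        dx/dt = alpha*x - beta*x*y
        dy/dt = delta*x*y - gamma*y

    Returns (final prey, final predators, peak prey over the run).
    Parameter names and values are illustrative only.
    """
    x, y = prey0, pred0
    peak_prey = x
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        peak_prey = max(peak_prey, x)
    return x, y, peak_prey

# Sweep one candidate leverage point (prey growth rate alpha) and observe
# how the monitored variable (peak prey population) responds.
peaks = {a: lotka_volterra(a, 0.1, 0.075, 1.5)[2] for a in (0.8, 1.0, 1.2)}
```

Comparing how strongly `peaks` shifts per unit change in each parameter is one simple, quantitative way to rank parameters as potential leverage points.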
Protalign: a 3-dimensional protein alignment assessment tool.
Meads, D; Hansen, M D; Pang, A
1999-01-01
Protein fold recognition (sometimes called threading) is the prediction of a protein's 3-dimensional shape based on its similarity to a protein of known structure. Fold predictions are low resolution; that is, no effort is made to rotate the protein's component amino acid side chains into their correct spatial orientations. The goal is simply to recognize the protein family member that most closely resembles the target sequence of unknown structure and to create a sensible alignment of the target to the known structure (i.e., a structure-sequence alignment). To facilitate this type of structure prediction, we have designed a low resolution molecular graphics tool. ProtAlign introduces the ability to interact with and edit alignments directly in the 3-dimensional structure as well as in the usual 2-dimensional layout. It also contains several functions and features to help the user assess areas within the alignment. ProtAlign implements an open pipe architecture to allow other programs to access its molecular graphics capabilities. In addition, it is capable of "driving" other programs. Because amino acid side chain orientation is not relevant in fold recognition, we represent amino acid residues as abstract shapes or glyphs much like Lego (tm) blocks and we borrow techniques from comparative flow visualization using streamlines to provide clean depictions of the entire protein model. By creating a low resolution representation of protein structure, we are able to at least double the amount of information on the screen. At the same time, we create a view that is not as busy as the corresponding representations using traditional high resolution visualization methods which show detailed atomic structure. This eliminates distracting and possibly misleading visual clutter resulting from the mapping of protein alignment information onto a high resolution display of the known structure. This molecular graphics program is implemented in Open GL to facilitate porting to
Model Railroading and Computer Fundamentals
ERIC Educational Resources Information Center
McCormick, John W.
2007-01-01
Less than one half of one percent of all processors manufactured today end up in computers. The rest are embedded in other devices such as automobiles, airplanes, trains, satellites, and nearly every modern electronic device. Developing software for embedded systems requires a greater knowledge of hardware than developing for a typical desktop…
Computational modeling of peripheral pain: a commentary.
Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S
2015-01-01
This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications. PMID:26062616
Joosten, A; Bochud, F; Moeckli, R
2014-08-21
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and all solid organs. Then, different general non-linear risk models were applied to the whole body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%) underlying the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially considering the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which then translates into inconsistent results on the potential higher risk of one technique compared to another. We found however that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated even though the magnitude of this reduction varied substantially with the different approaches investigated. Based on the epidemiological data available, a reasonable
3DIVS: 3-Dimensional Immersive Virtual Sculpting
Kuester, F; Duchaineau, M A; Hamann, B; Joy, K I; Uva, A E
2001-10-03
Virtual Environments (VEs) have the potential to revolutionize traditional product design by enabling the transition from conventional CAD to fully digital product development. The presented prototype system targets closing the "digital gap" introduced by the need for physical models such as clay models or mockups in the traditional product design and evaluation cycle. We describe a design environment that provides an intuitive human-machine interface for the creation and manipulation of three-dimensional (3D) models in a semi-immersive design space, focusing on ease of use and increased productivity for both designers and CAD engineers.
Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...
Introducing Seismic Tomography with Computational Modeling
NASA Astrophysics Data System (ADS)
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students the possibility to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students the possibility to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes taking advantage of basic scientific computation methods and tools.
Ranked retrieval of Computational Biology models
2010-01-01
Background The study of biological systems demands computational support. When targeting a biological problem, reusing existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging as the number of available computational models increases, and even more so considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Results Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. Conclusions The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models. PMID:20701772
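The core retrieval idea can be sketched with a plain TF-IDF ranking over model annotations. The repository entries and keywords below are invented for illustration; they are not actual BioModels Database records, and this is not the engine's real scoring function.

```python
import math
from collections import Counter

# Toy "model repository": identifiers and annotation keywords are invented.
models = {
    "M1": "glycolysis kinetics yeast pathway",
    "M2": "calcium signaling oscillation neuron",
    "M3": "glycolysis oscillation yeast",
}

def tfidf_rank(query, docs):
    """Rank documents by a plain TF-IDF dot-product score against the query."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for text in docs.values():
        df.update(set(text.split()))

    def score(text):
        tf = Counter(text.split())      # term frequency within one document
        return sum(tf[w] * math.log(n / df[w]) for w in query.split() if w in tf)

    return sorted(docs, key=lambda d: score(docs[d]), reverse=True)

ranking = tfidf_rank("glycolysis oscillation", models)
# M3 matches both query terms and should rank first
```

Layering curated meta-information (e.g., MIRIAM-style annotations) on top of such term statistics is what lets a model search engine rank by relevance rather than return an unordered hit list.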
Computational Medicine: Translating Models to Clinical Care
Winslow, Raimond L.; Trayanova, Natalia; Geman, Donald; Miller, Michael I.
2013-01-01
Because of the inherent complexity of coupled nonlinear biological systems, the development of computational models is necessary for achieving a quantitative understanding of their structure and function in health and disease. Statistical learning is applied to high-dimensional biomolecular data to create models that describe relationships between molecules and networks. Multiscale modeling links networks to cells, organs, and organ systems. Computational approaches are used to characterize anatomic shape and its variations in health and disease. In each case, the purposes of modeling are to capture all that we know about disease and to develop improved therapies tailored to the needs of individuals. We discuss advances in computational medicine, with specific examples in the fields of cancer, diabetes, cardiology, and neurology. Advances in translating these computational methods to the clinic are described, as well as challenges in applying models for improving patient health. PMID:23115356
Comprehensive computational model for thermal plasma processing
NASA Astrophysics Data System (ADS)
Chang, C. H.
A new numerical model is described for simulating thermal plasmas containing entrained particles, with emphasis on plasma spraying applications. The plasma is represented as a continuum multicomponent chemically reacting ideal gas, while the particles are tracked as discrete Lagrangian entities coupled to the plasma. The overall computational model is embodied in a new computer code called LAVA. Computational results are presented from a transient simulation of alumina spraying in a turbulent argon-helium plasma jet in air environment, including torch geometry, substrate, and multiple species with chemical reactions. Plasma-particle interactions including turbulent dispersion have been modeled in a fully self-consistent manner.
Modeling communication in cluster computing
Stoica, I.; Sultan, F.; Keyes, D.
1995-12-01
We introduce a model for communication costs in parallel processing environments, called the "hyperbolic model," that generalizes two-parameter dedicated-link models in an analytically simple way. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes and internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. A CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining a good approximation for the service time. In [4] we demonstrate a tight fit between cost-of-communication estimates based on our model and actual measurements of the communication and synchronization time between end processes. We also show the compatibility of our model (to within a factor of 3/4) with the recently proposed LogP model.
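As a hedged sketch of the idea (the paper's exact functional form and reduction rules are not reproduced here), the snippet below uses one plausible two-parameter hyperbola that interpolates between a latency-dominated and a bandwidth-dominated regime, plus a naive serial composition of two communication blocks. Both the formula and the reduction rule are illustrative assumptions.

```python
import math

def hyperbolic_time(m, alpha, beta):
    """Service time for a message of size m bytes under a two-parameter hyperbola.

    t(m) -> alpha as m -> 0 (latency-dominated) and t(m) -> beta*m for large m
    (bandwidth-dominated). This specific form is an illustrative assumption,
    not the exact function used by Stoica, Sultan, and Keyes.
    """
    return math.sqrt(alpha ** 2 + (beta * m) ** 2)

def serial_reduce(params1, params2):
    """Naive reduction of two serial communication blocks to one equivalent
    block: add latencies, add per-byte costs (an upper-bound-style rule)."""
    return (params1[0] + params2[0], params1[1] + params2[1])

# Two stacked layers (e.g., transport + link), illustrative parameters:
a, b = serial_reduce((1e-4, 1e-8), (2e-4, 3e-8))
small = hyperbolic_time(1, a, b)        # latency-dominated: ~3e-4 s
large = hyperbolic_time(10 ** 9, a, b)  # bandwidth-dominated: ~40 s
```

The appeal of such a closed form is that composing many blocks still yields a two-parameter description, so an entire protocol stack can be summarized by one latency-like and one bandwidth-like number.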
Teaching Environmental Systems Modelling Using Computer Simulation.
ERIC Educational Resources Information Center
Moffatt, Ian
1986-01-01
A computer modeling course in environmental systems and dynamics is presented. The course teaches senior undergraduates to analyze a system of interest, construct a system flow chart, and write computer programs to simulate real world environmental processes. An example is presented along with a course evaluation, figures, tables, and references.…
Computer modeling of human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included as well as models which include motivation. Both models which have associated computer programs, and those that do not, are considered. Since flow diagrams, that assist in constructing computer simulation of such models, were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in construction of more realistic future simulations of human decision making.
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Computer Model Locates Environmental Hazards
NASA Technical Reports Server (NTRS)
2008-01-01
Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
Enhanced absorption cycle computer model
NASA Astrophysics Data System (ADS)
Grossman, G.; Wilk, M.
1993-09-01
Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
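The balance between scale growth and spalling can be sketched as a simple per-cycle iteration. This is an illustrative reconstruction of the uniform-spall idea, not the COSP code itself; `kp`, `spall_frac`, and `dt` are invented values, not alloy data.

```python
import math

def cyclic_oxidation(n_cycles=200, kp=1.0e-2, spall_frac=0.1, dt=1.0):
    """Uniform-spall sketch of a cyclic oxidation balance.

    Per cycle: the retained scale grows parabolically during the hot dwell
    (W -> sqrt(W^2 + kp*dt)), then a fixed fraction of it spalls on cooldown.
    Returns (retained scale per cycle, cumulative spalled scale).
    """
    w = 0.0
    spalled_total = 0.0
    retained = []
    for _ in range(n_cycles):
        w = math.sqrt(w * w + kp * dt)   # parabolic growth from current scale
        loss = spall_frac * w            # uniform spalling on cooling
        spalled_total += loss
        w -= loss
        retained.append(w)
    return retained, spalled_total

retained, spalled_total = cyclic_oxidation()
# The retained scale approaches a steady state while spalled mass keeps
# accruing, which is why net specimen weight eventually turns downward in
# models of this kind.
```

A random-spall-site variant would replace the fixed `spall_frac` with a stochastic draw per cycle; as the abstract notes, the simpler uniform rule often predicts nearly as well.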
A new epidemic model of computer viruses
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Yang, Xiaofan
2014-06-01
This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincare-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to prohibit the virus prevalence.
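The structure of such a model can be sketched as a single ODE in the number of infected computers I. The theta term below stands in for infection through removable storage media, which is what removes the virus-free equilibrium (I = 0 is not a fixed point when theta > 0). The specific equation and all parameter values are illustrative assumptions, not the authors' exact model.

```python
def simulate_epidemic(i0, n=1000.0, beta=0.3, theta=0.05, gamma=0.2,
                      dt=0.01, steps=50000):
    """Forward-Euler integration of a simple virus-spread model:

        dI/dt = beta*(N - I)*I/N + theta*(N - I) - gamma*I

    beta: contact infection rate, theta: removable-media infection rate,
    gamma: cure rate. Returns the infected count after steps*dt time units.
    """
    i = i0
    for _ in range(steps):
        di = beta * (n - i) * i / n + theta * (n - i) - gamma * i
        i += di * dt
    return i

# Very different starting points converge to the same endemic level,
# mirroring the globally asymptotically stable endemic equilibrium.
low = simulate_epidemic(1.0)
high = simulate_epidemic(900.0)
```

With these parameters the endemic equilibrium sits at I = N/2 = 500, and both trajectories settle there, so policy analysis reduces to studying how that fixed point moves as beta, theta, and gamma change.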
Computer Model Buildings Contaminated with Radioactive Material
Energy Science and Technology Software Center (ESTSC)
1998-05-19
The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.
Applications of Computational Modeling in Cardiac Surgery
Lee, Lik Chuan; Genet, Martin; Dang, Alan B.; Ge, Liang; Guccione, Julius M.; Ratcliffe, Mark B.
2014-01-01
Although computational modeling is common in many areas of science and engineering, only recently have advances in experimental techniques and medical imaging allowed this tool to be applied in cardiac surgery. Despite its infancy in cardiac surgery, computational modeling has been useful in calculating the effects of clinical devices and surgical procedures. In this review, we present several examples that demonstrate the capabilities of computational cardiac modeling in cardiac surgery. Specifically, we demonstrate its ability to simulate surgery, predict myofiber stress and pump function, and quantify changes to regional myocardial material properties. In addition, issues that would need to be resolved in order for computational modeling to play a greater role in cardiac surgery are discussed. PMID:24708036
Statistical mechanical modeling: Computer simulations, analysis and applications
NASA Astrophysics Data System (ADS)
Subramanian, Balakrishna
This thesis describes the applications of statistical mechanical models and tools, especially computational techniques, to the study of several problems in science. In chapter 2, we study various properties of a non-equilibrium cellular automaton model, the Toom model. We obtain numerically the exponents describing the fluctuations of the interface between the two stable phases of the model. In chapter 3, we introduce a binary alloy model with three-body potentials. Unlike the usual Ising-type models with two-body interactions, this model is not symmetric in its components. We calculate the exact low temperature phase diagram using Pirogov-Sinai theory and also find the mean-field equilibrium properties of this model. We then study the kinetics of phase segregation following a quenching in this model. We find that the results are very similar to those obtained for Ising-type models with pair interactions, indicating universality. In chapter 4, we discuss the statistical properties of "Contact Maps". These maps are used to represent three-dimensional structures of proteins in modeling problems. We find that this representation space has particular properties that make it a convenient choice. The maps representing native folds of proteins correspond to compact structures which in turn correspond to maps with low degeneracy, making it easier to translate the map into the detailed 3-dimensional structure. The early stage of formation of a river network is described in Chapter 5 using quasi-random spanning trees on a square lattice. We observe that the statistical properties generated by these models are quite similar (better than some of the earlier models) to the empirical laws and results presented by geologists for real river networks. Finally, in chapter 6 we present a brief note on our study of the problem of progression of heterogeneous breast tumors. We investigate some of the possible pathways of progression based on the traditional notions of DCIS (Ductal
A rotational stereoscopic 3-dimensional movement aftereffect.
Webster, W R; Panthradil, J T; Conway, D M
1998-06-01
A stereoscopic rotational movement aftereffect (MAE) and a stereoscopic bi-directional MAE were generated by rotation of a cyclopean random dot cylinder in depth and by movement of two cyclopean random dot planes in opposite directions, respectively. Cross-adaptational MAEs were also generated on each other, but not with stimuli lacking any disparity. Cross-adaptation MAEs were generated between stereoscopic and non-stereoscopic random dot stimuli moving in the one X/Y plane. Spontaneous reversals in direction of movement were observed with bistable stimuli lacking disparity. Two models of the middle temporal area were considered which might explain both the stereoscopic MAEs and the spontaneous reversals. PMID:9797953
Computational modeling and multilevel cancer control interventions.
Morrissey, Joseph P; Lich, Kristen Hassmiller; Price, Rebecca Anhang; Mandelblatt, Jeanne
2012-05-01
This chapter presents an overview of computational modeling as a tool for multilevel cancer care and intervention research. Model-based analyses have been conducted at various "beneath the skin" or biological scales as well as at various "above the skin" or socioecological levels of cancer care delivery. We review the basic elements of computational modeling and illustrate its applications in four cancer control intervention areas: tobacco use, colorectal cancer screening, cervical cancer screening, and racial disparities in access to breast cancer care. Most of these models have examined cancer processes and outcomes at only one or two levels. We suggest ways these models can be expanded to consider interactions involving three or more levels. Looking forward, a number of methodological, structural, and communication barriers must be overcome to create useful computational models of multilevel cancer interventions and population health. PMID:22623597
Economic Analysis. Computer Simulation Models.
ERIC Educational Resources Information Center
Sterling Inst., Washington, DC. Educational Technology Center.
A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This volume of the text discusses the simulation of behavioral relationships among variable elements in an economy and presents…
Computational study of lattice models
NASA Astrophysics Data System (ADS)
Zujev, Aleksander
This dissertation is composed of the descriptions of a few projects undertaken to complete my doctorate at the University of California, Davis. Different as they are, the common feature of them is that they all deal with simulations of lattice models, and physics which results from interparticle interactions. As an example, both the Feynman-Kikuchi model (Chapter 3) and Bose-Fermi mixture (Chapter 4) deal with the conditions under which superfluid transitions occur. The dissertation is divided into two parts. Part I (Chapters 1-2) is theoretical. It describes the systems we study - superfluidity and particularly superfluid helium, and optical lattices. The numerical methods of working with them are described. The use of Monte Carlo methods is another unifying theme of the different projects in this thesis. Part II (Chapters 3-6) deals with applications. It consists of 4 chapters describing different projects. Two of them, the Feynman-Kikuchi model and the Bose-Fermi mixture, are finished and published. The work done on the t - J model, described in Chapter 5, is more preliminary, and the project is far from complete. A preliminary report on it was given at the 2009 APS March Meeting. The Isentropic project, described in the last chapter, is finished. A report on it was given at the 2010 APS March Meeting, and a paper is in preparation. The quantum simulation program used for the Bose-Fermi mixture project was written by our collaborators Valery Rousseau and Peter Denteneer. I had written my own code for the other projects.
Computational Methods to Model Persistence.
Vandervelde, Alexandra; Loris, Remy; Danckaert, Jan; Gelens, Lendert
2016-01-01
Bacterial persister cells are dormant cells, tolerant to multiple antibiotics, that are involved in several chronic infections. Toxin-antitoxin modules play a significant role in the generation of such persister cells. Toxin-antitoxin modules are small genetic elements, omnipresent in the genomes of bacteria, which code for an intracellular toxin and its neutralizing antitoxin. In the past decade, mathematical modeling has become an important tool to study the regulation of toxin-antitoxin modules and their relation to the emergence of persister cells. Here, we provide an overview of several numerical methods to simulate toxin-antitoxin modules. We cover both deterministic modeling using ordinary differential equations and stochastic modeling using stochastic differential equations and the Gillespie method. Several characteristics of toxin-antitoxin modules such as protein production and degradation, negative autoregulation through DNA binding, toxin-antitoxin complex formation and conditional cooperativity are gradually integrated in these models. Finally, by including growth rate modulation, we link toxin-antitoxin module expression to the generation of persister cells. PMID:26468111
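Of the methods named above, the Gillespie algorithm is the easiest to pin down with a small example. The sketch below simulates the simplest ingredient of such models, a single protein with constant production and first-order degradation; the rates are illustrative and the full toxin-antitoxin network is deliberately not modelled here.

```python
import random

def gillespie_copies(k_prod=10.0, k_deg=0.1, t_end=200.0, seed=1):
    """Gillespie simulation of  0 -> P  (rate k_prod)  and  P -> 0
    (rate k_deg per molecule). Returns the copy number at time t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        a_prod = k_prod                  # propensity of production
        a_deg = k_deg * n                # propensity of degradation
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)    # exponential time to next reaction
        if t >= t_end:
            return n
        if rng.random() * a_total < a_prod:
            n += 1                       # production fired
        else:
            n -= 1                       # degradation fired
```

The stationary distribution here is Poisson with mean k_prod / k_deg = 100, so a handful of independent runs reproduces that mean to within sampling error.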
Parallel computing in atmospheric chemistry models
Rotman, D.
1996-02-01
Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.
A Computational Framework for Realistic Retina Modeling.
Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco
2016-11-01
Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas. PMID:27354192
Computer Modeling of Direct Metal Laser Sintering
NASA Technical Reports Server (NTRS)
Cross, Matthew
2014-01-01
A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
Computing a Comprehensible Model for Spam Filtering
NASA Astrophysics Data System (ADS)
Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael
In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high dimensional feature space. So, it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by a DTB is smaller and more comprehensible than the hypothesis computed by Adaboost and Naïve Bayes.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone
A computational model of the cerebellum
Travis, B.J.
1990-01-01
The need for realistic computational models of neural microarchitecture is growing increasingly apparent. While traditional neural networks have made inroads on understanding cognitive functions, more realism (in the form of structural and connectivity constraints) is required to explain processes such as vision or motor control. A highly detailed computational model of mammalian cerebellum has been developed. It is being compared to physiological recordings for validation purposes. The model is also being used to study the relative contributions of each component to cerebellar processing. 28 refs., 4 figs.
Mechanistic models in computational social science
NASA Astrophysics Data System (ADS)
Holme, Petter; Liljeros, Fredrik
2015-09-01
Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.
Teaching Forest Planning with Computer Models
ERIC Educational Resources Information Center
Howard, Richard A.; Magid, David
1977-01-01
This paper describes a series of FORTRAN IV computer models which are the content-oriented subject matter for a college course in forest planning. The course objectives, the planning problem, and the ten planning aid models are listed. Student comments and evaluation of the program are included. (BT)
Parallel computing for automated model calibration
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.; Vail, Lance W.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data and models, freeing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing, cross-platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
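The "no inter-process communication, small input, small statistical output" pattern described above is exactly what a process pool handles well. The sketch below, with an invented toy model and a made-up observation, shows the shape of such a parallel calibration sweep; it is not the authors' tool, and every name and value in it is illustrative.

```python
from multiprocessing import Pool

# Toy "model": predicted stream flow peak as a power law of rainfall.
# OBSERVED, RAIN and the parameter grid are invented for illustration.
OBSERVED = 42.0
RAIN = 10.0

def run_model(params):
    a, b = params
    predicted = a * RAIN ** b
    return (abs(predicted - OBSERVED), params)   # (objective, parameters)

def calibrate(grid):
    # Each run is independent, so the sweep is embarrassingly parallel.
    with Pool() as pool:
        error, best = min(pool.map(run_model, grid))
    return best

if __name__ == "__main__":
    grid = [(a / 10.0, b / 10.0) for a in range(1, 50) for b in range(1, 30)]
    print(calibrate(grid))
```

Swapping in a real model only changes `run_model`; the pool, the grid sweep, and the objective comparison stay the same, which is the point the authors make about their generic routines.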
Computer Modeling and Visualization in Design Technology: An Instructional Model.
ERIC Educational Resources Information Center
Guidera, Stan
2002-01-01
Design visualization can increase awareness of issues related to perceptual and psychological aspects of design that computer-assisted design and computer modeling may not allow. A pilot university course developed core skills in modeling and simulation using visualization. Students were consistently able to meet course objectives. (Contains 16…
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high-dimensional, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, the CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and a foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high-dimensional, and nonlinear conceptual model of the complex dynamics of human systems.
Concepts to accelerate water balance model computation
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Casper, Markus; Gemmar, Peter
2010-05-01
Computation time of water balance models has decreased with the increasing performance of CPUs within the last decades. Often, these advantages have been used to enhance the models, e. g. by enlarging spatial resolution or by using smaller simulation time steps. During the last few years, CPU development tended to focus on strong multi-core concepts rather than 'simply being generally faster'. Additionally, computer clusters or even computer clouds have become much more commonly available. All these facts again extend our degrees of freedom in simulating water balance models - if the models are able to efficiently use the computer infrastructure. In the following, we present concepts to optimize especially repeated runs and we generally discuss concepts of parallel computing opportunities.

Surveyed model: In our examinations, we focused on the water balance model LARSIM. In this model, the catchment is subdivided into elements, each representing a certain section of a river and its contributory area. Each element is again subdivided into single compartments of homogeneous land use. During the simulation, the relevant hydrological processes are simulated individually for each compartment. The simulated runoff of all compartments leads into the river channel of the corresponding element. Finally, channel routing is simulated for all elements.

Optimizing repeated runs: During a typical simulation, several input files have to be read before simulation starts: the model structure, the initial model state and meteorological input files. Furthermore, some calculations have to be solved, like interpolating meteorological values. Thus, e. g. the application of Monte Carlo methods will typically use the following algorithm: 1) choose parameters, 2) set parameters in control files, 3) run model, 4) save result, 5) repeat from step 1. Obviously, the third step always includes the previously mentioned steps of reading and preprocessing. Consequently, the model can be
CDF computing and event data models
Snider, F.D.; /Fermilab
2005-12-01
The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.
Light reflection models for computer graphics.
Greenberg, D P
1989-04-14
During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future. PMID:17835348
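As a concrete instance of the "direct lighting only" early algorithms the article starts from, here is a sketch of the Lambertian diffuse term; the function names and the albedo value are illustrative.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def lambert_diffuse(normal, light_dir, albedo=0.8, intensity=1.0):
    """Lambert's cosine law: reflected radiance scales with the cosine of the
    angle between the surface normal and the light direction, clamped to zero
    for back-facing light."""
    return albedo * intensity * max(0.0, dot(normalize(normal), normalize(light_dir)))
```

Ray tracing, radiosity, and Monte Carlo methods build on such local terms by transporting the reflected light onward to other surfaces, which is what produces the interreflections and shadows mentioned above.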
Computational modelling of schizophrenic symptoms: basic issues.
Tretter, F; an der Heiden, U; Rujescu, D; Pogarell, O
2012-05-01
Emerging "(computational) systems medicine" challenges neuropsychiatry regarding the development of heuristic computational brain models which help to explore symptoms and syndromes of mental disorders. This methodology of exploratory modelling of mental functions and processes and of their pathology requires a clear and operational definition of the target variable (explanandum). In the case of schizophrenia, a complex and heterogeneous disorder, single psychopathological key symptoms such as working memory deficiency, hallucination or delusion need to be defined first. Thereafter, measures of brain structures can be used in a multilevel view as biological correlates of these symptoms. Then, in order to formally "explain" the symptoms, a qualitative model can be constructed. In another step, numerical values have to be integrated into the model and exploratory computer simulations can be performed. Normal and pathological functioning is to be tested in computer experiments allowing the formulation of new hypotheses and questions for empirical research. However, the crucial challenge is to point out the appropriate degree of complexity (or simplicity) of these models, which is required in order to achieve an epistemic value that might lead to new hypothetical explanatory models and could stimulate new empirical and theoretical research. Some outlines of these methodological issues are discussed here, regarding the fact that measurements alone are not sufficient to build models. PMID:22565230
Do's and Don'ts of Computer Models for Planning
ERIC Educational Resources Information Center
Hammond, John S., III
1974-01-01
Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)
Solving stochastic epidemiological models using computer algebra
NASA Astrophysics Data System (ADS)
Hincapie, Doracelly; Ospina, Juan
2011-06-01
Mathematical modeling in epidemiology is an important tool for understanding the ways in which diseases are transmitted and controlled. Mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, whereas stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous and automatic analysis of stability, both local and global, from which it is possible to derive algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat, and the analysis of their properties requires complicated considerations in statistical mathematics. In this work we propose to use computer algebra software to solve epidemic stochastic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in updated books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, and this shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results, obtaining estimates of basic parameters such as the basic reproductive rate and illustrating the stochastic threshold theorem. We claim that our algorithms and results are important tools for controlling diseases in a globalized world.
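One hallmark of the stochastic theory, which the deterministic SIR equations cannot produce, is a nonzero extinction probability even when the basic reproductive number exceeds one. The sketch below checks this by Monte Carlo using the standard linear birth-death approximation to the early epidemic; it is a numerical companion to the ideas above, not the computer-algebra route taken in the paper.

```python
import random

def dies_out(beta, gamma, cap=200):
    """One run of the birth-death approximation to early stochastic SIR:
    each infective transmits at rate beta and recovers at rate gamma.
    True if the infection goes extinct before reaching `cap` infectives."""
    i = 1
    while 0 < i < cap:
        # The next event is a transmission with probability beta/(beta+gamma).
        if random.random() * (beta + gamma) < beta:
            i += 1
        else:
            i -= 1
    return i == 0

random.seed(0)
runs = 4000
p_ext = sum(dies_out(beta=2.0, gamma=1.0) for _ in range(runs)) / runs
# Branching-process theory: the extinction probability of a single
# introduction is gamma/beta = 1/R0, here 0.5.
```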
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by the addition of an l(sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l(Sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
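The zeroing mechanism behind LASSO structure detection can be seen in a toy coordinate-descent implementation. This is an illustrative sketch on synthetic data, not the identification code used in the study; the penalty weight and the data are invented.

```python
def soft_threshold(z, t):
    """The l1 penalty enters through this operator, which maps small
    correlations exactly to zero, the source of parsimonious structure."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso(X, y, lam, n_iter=500):
    """Coordinate-descent LASSO on plain lists, minimising
    (1/2)||y - X beta||^2 + lam * ||beta||_1 one coefficient at a time."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the residual excluding feature j.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / norm
    return beta

# Synthetic data: y depends on the first regressor only, so the second
# coefficient should be driven exactly to zero.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.3], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
b = lasso(X, y, lam=0.5)
```

The exact zero in `b[1]` is the structure-detection effect: the surviving nonzero coefficients name the terms retained in the parsimonious system description.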
Computational modeling of laser-tissue interaction
London, R.A.; Amendt, P.; Bailey, D.S.; Eder, D.C.; Maitland, D.J.; Glinsky, M.E.; Strauss, M.; Zimmerman, G.B.
1996-05-01
Computational modeling can play an important role both in designing laser-tissue interaction experiments and in understanding the underlying mechanisms. This can lead to more rapid and less expensive development of new procedures and instruments, and a better understanding of their operation. We have recently directed computer programs and associated expertise, developed over many years to model high-intensity laser-matter interactions for fusion research, towards laser-tissue interaction problems. A program called LATIS is being developed to specifically treat laser-tissue interaction phenomena, such as highly scattering light transport, thermal coagulation, and hydrodynamic motion.
Computational algebraic geometry of epidemic models
NASA Astrophysics Data System (ADS)
Rodríguez Vega, Martín.
2014-06-01
Computational algebraic geometry is applied to the analysis of various epidemic models for schistosomiasis and dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension, and Hilbert polynomials; these computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the Groebner basis, the Hilbert dimension, and the Hilbert polynomials. It is hoped that the results obtained in this paper will prove useful for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
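The kind of computation described, forming equilibrium polynomials from the ODEs, computing a Groebner basis, and reading off R0, can be sketched outside Maple as well. The following Python/SymPy example uses a generic SIR model with demography and illustrative numeric rates; it is not one of the paper's schistosomiasis or dengue models.

```python
from sympy import Rational, groebner, symbols

S, I = symbols('S I')

# Illustrative numeric rates (assumed, not from the paper):
# birth/death rate mu, transmission rate beta, recovery rate gamma.
mu, beta, gamma = Rational(1, 50), Rational(3, 10), Rational(1, 10)

# Equilibrium conditions dS/dt = 0 and dI/dt = 0 for SIR with demography,
# written as polynomials in S and I.
f1 = mu - beta*S*I - mu*S
f2 = beta*S*I - (gamma + mu)*I

# A lexicographic Groebner basis eliminates S and exposes the equilibria.
G = groebner([f1, f2], S, I, order='lex')
print(list(G))

# For this model, the basic reproductive number is R0 = beta / (gamma + mu).
R0 = beta / (gamma + mu)
print("R0 =", R0)
```

Changing a control parameter (say, increasing gamma by treatment) changes both R0 and the Groebner basis, which is the algebraic signature of control the abstract refers to.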
Computer modeling of commercial refrigerated warehouse facilities
Nicoulin, C.V.; Jacobs, P.C.; Tory, S.
1997-07-01
The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools represent equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented.
Video Based Sensor for Tracking 3-Dimensional Targets
NASA Technical Reports Server (NTRS)
Howard, R. T.; Book, Michael L.; Bryan, Thomas C.
2000-01-01
The National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has been developing and testing video-based sensors for automated spacecraft guidance for several years, and the next generation of video sensor will have tracking rates up to 100 Hz and will be able to track multiple reflectors and targets. The Video Guidance Sensor (VGS) developed over the past several years has performed well in testing and met the objective of being used as the terminal guidance sensor for an automated rendezvous and capture system. The first VGS was successfully tested in closed-loop 3-degree-of-freedom (3-DOF) tests in 1989 and then in 6-DOF open-loop tests in 1992 and closed-loop tests in 1993-94. Development and testing continued, and in 1995 approval was given to test the VGS in an experiment on the Space Shuttle. The VGS flew in 1997 and in 1998, performing well for both flights. During the development and testing before, during, and after the flight experiments, numerous areas for improvement were found. The VGS was developed with a sensor head and an electronics box, connected by cables. The VGS was used in conjunction with a target that had wavelength-filtered retro-reflectors in a specific pattern. The sensor head contained the laser diodes, video camera, and heaters and coolers. The electronics box contained a frame grabber, image processor, the electronics to control the components in the sensor head, the communications electronics, and the power supply. The system works by sequentially firing two different wavelengths of laser diodes at the target and processing the two images. Since the target only reflects one wavelength, it shows up well in one image and not at all in the other. Because the target's dimensions are known, the relative positions and attitudes of the target and the sensor can be computed from the spots reflected from the target. The system was designed to work from I
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational process and material modeling of powder-bed additive manufacturing of IN 718. Goals: optimize material build parameters with reduced time and cost through modeling; increase understanding of build properties; increase reliability of builds; decrease time to adoption of the process for critical hardware; potentially decrease post-build heat treatments. Approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using the data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; validate the modeling with an additional build. Findings: photodiode intensity measurements are highly linear with power input; melt-pool intensity is highly correlated with melt-pool size; melt-pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.
Computational Spectrum of Agent Model Simulation
Perumalla, Kalyan S
2010-01-01
The study of human social behavioral systems is finding renewed interest in military, homeland security and other applications. Simulation is the most generally applied approach to studying complex scenarios in such systems. Here, we outline some of the important considerations that underlie the computational aspects of simulation-based study of human social systems. The fundamental imprecision underlying questions and answers in social science makes it necessary to carefully distinguish among different simulation problem classes and to identify the most pertinent set of computational dimensions associated with those classes. We identify a few such classes and present their computational implications. The focus is then shifted to the most challenging combinations in the computational spectrum, namely, large-scale entity counts at moderate to high levels of fidelity. Recent developments in furthering the state-of-the-art in these challenging cases are outlined. A case study of large-scale agent simulation is provided in simulating large numbers (millions) of social entities at real-time speeds on inexpensive hardware. Recent computational results are identified that highlight the potential of modern high-end computing platforms to push the envelope with respect to speed, scale and fidelity of social system simulations. Finally, the problem of shielding the modeler or domain expert from the complex computational aspects is discussed and a few potential solution approaches are identified.
Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan
2016-08-01
Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures. PMID:27137437
Global detailed geoid computation and model analysis
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Vincent, S.
1974-01-01
Comparisons and analyses were carried out through the use of detailed gravimetric geoids which we have computed by combining models with a set of 26,000 1 deg x 1 deg mean free air gravity anomalies. The accuracy of the detailed gravimetric geoid computed using the most recent Goddard earth model (GEM-6) in conjunction with the set of 1 deg x 1 deg mean free air gravity anomalies is assessed at + or - 2 meters on the continents of North America, Europe, and Australia, 2 to 5 meters in the Northeast Pacific and North Atlantic areas, and 5 to 10 meters in other areas where surface gravity data are sparse. The R.M.S. differences between this detailed geoid and the detailed geoids computed using the other satellite gravity fields in conjunction with the same set of surface data range from 3 to 7 meters.
Integrating interactive computational modeling in biology curricula.
Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A
2015-03-01
While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology. PMID:25790483
Computational modeling of dilute biomass slurries
NASA Astrophysics Data System (ADS)
Sprague, Michael; Stickel, Jonathan; Fischer, Paul; Lischeske, James
2012-11-01
The biochemical conversion of lignocellulosic biomass to liquid transportation fuels involves a multitude of physical and chemical transformations that occur in several distinct processing steps (e.g., pretreatment, enzymatic hydrolysis, and fermentation). In this work we focus on development of a computational fluid dynamics model of a dilute biomass slurry, which is a highly viscous particle-laden fluid that can exhibit yield-stress behavior. Here, we model the biomass slurry as a generalized Newtonian fluid that accommodates biomass transport due to settling and biomass-concentration-dependent viscosity. Within a typical mixing vessel, viscosity can vary over several orders of magnitude. We solve the model with the Nek5000 spectral-finite-element solver in a simple vane mixer, and validate against experimental results. This work is directed towards our goal of a fully coupled computational model of fluid dynamics and reaction kinetics for the enzymatic hydrolysis of lignocellulosic biomass.
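A generalized Newtonian closure of the sort described can be sketched as a scalar effective-viscosity function. The yield stress, consistency index, and shear-thinning exponent below are illustrative assumptions, not the paper's fitted values for biomass slurries; the Papanastasiou regularization used here is one common way to keep the viscosity finite at vanishing shear rate.

```python
import math

def effective_viscosity(shear_rate, conc, m=1000.0):
    """Regularized Herschel-Bulkley: mu_eff = tau_y/gamma + K*gamma^(n-1).

    All constitutive parameters below are illustrative assumptions, chosen
    only so that viscosity rises steeply with solids concentration and
    falls with shear rate, as described for biomass slurries.
    """
    tau_y = 2.0 * conc**3                 # yield stress vs. solids fraction (assumed)
    K = 0.05 * math.exp(5.0 * conc)       # consistency index (assumed)
    n = 0.6                               # shear-thinning exponent (assumed)
    g = shear_rate
    # Papanastasiou regularization: bounded viscosity as g -> 0.
    return tau_y * (1.0 - math.exp(-m * g)) / max(g, 1e-12) + K * g**(n - 1.0)

# Viscosity spans orders of magnitude across concentration and shear rate:
for conc in (0.05, 0.10, 0.20):
    print(conc, effective_viscosity(10.0, conc))
```

In a CFD solver such as Nek5000, a function of this form would be evaluated pointwise from the local shear rate and transported biomass concentration.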
Computing Linear Mathematical Models Of Aircraft
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1991-01-01
The Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides the user with a powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of the aerodynamics of aircraft. It is intended for use as a software tool to drive linear analysis of stability and design of control laws for aircraft. LINEAR is capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in the linear model of the system. It is designed to provide easy selection of the state, control, and observation variables used in a particular model, and also provides the flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
Computer Modelling of Photochemical Smog Formation
ERIC Educational Resources Information Center
Huebert, Barry J.
1974-01-01
Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)
Applications of computational modeling in ballistics
NASA Technical Reports Server (NTRS)
Sturek, Walter B.
1987-01-01
The development of the technology of ballistics as applied to gun launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas was traditionally highly dependent on experimental testing. A considerable emphasis was placed on the development of computational modeling to augment the experimental testing in the development cycle; however, the impact of the computational modeling to date has been modest. With the availability of supercomputer computational resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved. An attempt is made to give some information as to the degree of success achieved and to indicate the areas of greatest need.
Images as a basis for computer modelling
NASA Astrophysics Data System (ADS)
Beaufils, D.; LeTouzé, J.-C.; Blondel, F.-M.
1994-03-01
New computer technologies, such as the graphics data tablet, video digitization, and numerical methods, can be used for measurement and mathematical modelling in physics. Two programs dealing with Newtonian mechanics, and some related scientific activities for A-level students, are described.
Informing Mechanistic Toxicology with Computational Molecular Models
Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular-level information that augments, enriches, and complements in vitro and in vivo effo...
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
The 3-dimensional construction of the Rae craton, central Canada
NASA Astrophysics Data System (ADS)
Snyder, David B.; Craven, James A.; Pilkington, Mark; Hillier, Michael J.
2015-10-01
Reconstruction of the 3-dimensional tectonic assembly of early continents, first as Archean cratons and then Proterozoic shields, remains poorly understood. In this paper, all readily available geophysical and geochemical data are assembled in a 3-D model with the most accurate bedrock geology in order to understand better the geometry of major structures within the Rae craton of central Canada. Analysis of geophysical observations of gravity and seismic wave speed variations revealed several lithospheric-scale discontinuities in physical properties. Where these discontinuities project upward to correlate with mapped upper crustal geological structures, the discontinuities can be interpreted as shear zones. Radiometric dating of xenoliths provides estimates of rock types and ages at depth beneath sparse kimberlite occurrences. These ages can also be correlated to surface rocks. The 3.6-2.6 Ga Rae craton comprises at least three smaller continental terranes, which "cratonized" during a granitic bloom. Cratonization probably represents final differentiation of early crust into a relatively homogeneous, uniformly thin (35-42 km), tonalite-trondhjemite-granodiorite crust with pyroxenite layers near the Moho. The peak thermotectonic event at 1.86-1.7 Ga was associated with the Hudsonian orogeny that assembled several cratons and lesser continental blocks into the Canadian Shield using a number of southeast-dipping megathrusts. This orogeny metasomatized, mineralized, and recrystallized mantle and lower crustal rocks, apparently making them more conductive by introducing or concentrating sulfides or graphite. Little evidence exists of thin slabs similar to modern oceanic lithosphere in this Precambrian construction history, whereas underthrusting and wedging of continental lithosphere are inferred from multiple dipping discontinuities.
A 3-Dimensional Anatomic Study of the Distal Biceps Tendon
Walton, Christine; Li, Zhi; Pennings, Amanda; Agur, Anne; Elmaraghy, Amr
2015-01-01
Background Complete rupture of the distal biceps tendon from its osseous attachment is most often treated with operative intervention. Knowledge of the overall tendon morphology as well as the orientation of the collagenous fibers throughout the musculotendinous junction are key to intraoperative decision making and surgical technique in both the acute and chronic setting. Unfortunately, there is little information available in the literature. Purpose To comprehensively describe the morphology of the distal biceps tendon. Study Design Descriptive laboratory study. Methods The distal biceps terminal musculature, musculotendinous junction, and tendon were digitized in 10 cadaveric specimens and data reconstructed using 3-dimensional modeling. Results The average length, width, and thickness of the external distal biceps tendon were found to be 63.0, 6.0, and 3.0 mm, respectively. A unique expansion of the tendon fibers within the distal muscle was characterized, creating a thick collagenous network along the central component between the long and short heads. Conclusion This study documents the morphologic parameters of the native distal biceps tendon. Reconstruction may be necessary, especially in chronic distal biceps tendon ruptures, if the remaining tendon morphology is significantly compromised compared with the native distal biceps tendon. Knowledge of normal anatomical distal biceps tendon parameters may also guide the selection of a substitute graft with similar morphological characteristics. Clinical Relevance A thorough description of distal biceps tendon morphology is important to guide intraoperative decision making between primary repair and reconstruction and to better select the most appropriate graft. The detailed description of the tendinous expansion into the muscle may provide insight into better graft-weaving and suture-grasping techniques to maximize proximal graft incorporation. PMID:26665092
Automating sensitivity analysis of computer models using computer calculus
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.
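The "computer calculus" idea, having the compiler propagate derivatives alongside values, can be illustrated with forward-mode automatic differentiation via dual numbers. This Python sketch is ours and is far simpler than GRESS's FORTRAN source transformation, but it shows the same principle: the derivative needed for a sensitivity equation is generated mechanically rather than coded by hand.

```python
import math

class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Sensitivity of f(k) = k*sin(k) + 3k with respect to k, at k = 2.0:
k = Dual(2.0, 1.0)          # seed dk/dk = 1
f = k * sin(k) + 3 * k
print(f.val, f.der)         # function value and df/dk, no hand-coded derivative
```

The GRESS approach applies this idea at the compiler level, so existing FORTRAN models gain derivatives without manual rewriting.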
Computational Modeling: From Remote Sensing to Quantum Computing
NASA Astrophysics Data System (ADS)
Healy, Dennis
2001-03-01
Recent DARPA investments have contributed to significant advances in numerically sound and computationally efficient physics-based modeling, enabling a wide variety of applications of critical interest to the DoD and Industry. Specific examples may be found in a wide variety of applications ranging from the design and operation of advanced synthetic aperture radar systems to the virtual integrated prototyping of reactors and control loops for the manufacture of thin-film functional material systems. This talk will survey the development and application of well-conditioned fast operators for particular physical problems and their critical contributions to various real world problems. We'll conclude with an indication of how these methods may contribute to exploring the revolutionary potential of quantum information theory.
Differential Cross Section Kinematics for 3-dimensional Transport Codes
NASA Technical Reports Server (NTRS)
Norbury, John W.; Dick, Frank
2008-01-01
In support of the development of 3-dimensional transport codes, this paper derives the relevant relativistic particle kinematic theory. Formulas are given for invariant, spectral, and angular distributions in both the lab (spacecraft) and center-of-momentum frames, for collisions involving 2-, 3-, and n-body final states.
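Two standard relations from this kinematic theory, the invariant (center-of-momentum) energy of a fixed-target collision and the Lorentz factor of the CM frame, can be sketched as follows. This is textbook relativistic kinematics in natural units (c = 1), consistent with the paper's setting but not code taken from the transport codes themselves.

```python
import math

def sqrt_s(m1, m2, e1_lab):
    """Center-of-momentum energy for projectile (mass m1, lab total energy
    e1_lab) on a target of mass m2 at rest: s = m1^2 + m2^2 + 2*m2*E1."""
    return math.sqrt(m1**2 + m2**2 + 2.0 * m2 * e1_lab)

def cm_gamma(m1, m2, e1_lab):
    """Lorentz factor of the CM frame relative to the lab frame:
    gamma_cm = (E1 + m2) / sqrt(s)."""
    return (e1_lab + m2) / sqrt_s(m1, m2, e1_lab)

# Example: a 10 GeV proton (mass 0.938 GeV) striking a proton at rest.
print("sqrt(s) =", sqrt_s(0.938, 0.938, 10.0), "GeV")
print("gamma_cm =", cm_gamma(0.938, 0.938, 10.0))
```

Spectral and angular distributions in the two frames are then related by boosting with this gamma_cm, which is the transformation the paper works out in detail.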
Controlled teleportation of a 3-dimensional bipartite quantum state
NASA Astrophysics Data System (ADS)
Cao, Hai-Jing; Chen, Zhong-Hua; Song, He-Shan
2008-07-01
A controlled teleportation scheme of an unknown 3-dimensional (3D) two-particle quantum state is proposed, where a 3D Bell state and 3D GHZ state function as the quantum channel. This teleportation scheme can be directly generalized to teleport an unknown d-dimensional bipartite quantum state.
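The 3D Bell state serving as one leg of the quantum channel is the qutrit state |Phi> = (1/sqrt(3))(|00> + |11> + |22>). A small NumPy check (ours, not the paper's calculation) confirms it is maximally entangled: tracing out either particle leaves the maximally mixed state I/3.

```python
import numpy as np

d = 3                                   # qutrit dimension
phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0                # |00> + |11> + |22>
phi /= np.sqrt(d)                       # normalize: (1/sqrt 3) sum_i |ii>

rho = np.outer(phi, phi)                # density matrix |Phi><Phi|
# Partial trace over the second qutrit: sum_j rho[(i,j),(k,j)].
rho_a = rho.reshape(d, d, d, d).trace(axis1=1, axis2=3)

print(np.allclose(rho_a, np.eye(d) / d))  # maximally mixed reduced state
```

Maximal entanglement of the channel states is what allows the unknown 3D bipartite state to be teleported faithfully once the controller cooperates.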
Computer Model Of Fragmentation Of Atomic Nuclei
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.
1995-01-01
High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.
Computer Model Predicts the Movement of Dust
NASA Technical Reports Server (NTRS)
2002-01-01
A new computer model of the atmosphere can now actually pinpoint where global dust events come from, and can project where they're going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short-term day-to-day variations and long-term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by the computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center.
Computational models of natural language processing
Bara, B.G.; Guida, G.
1984-01-01
The main concern in this work is the illustration of models for natural language processing, and the discussion of their role in the development of computational studies of language. Topics covered include the following: competence and performance in the design of natural language systems; planning and understanding speech acts by interpersonal games; a framework for integrating syntax and semantics; knowledge representation and natural language: extending the expressive power of proposition nodes; viewing parsing as word sense discrimination: a connectionist approach; a propositional language for text representation; from topic and focus of a sentence to linking in a text; language generation by computer; understanding the Chinese language; semantic primitives or meaning postulates: mental models of propositional representations; narrative complexity based on summarization algorithms; using focus to constrain language generation; and towards an integral model of language competence.
Queuing theory models for computer networks
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because fine detail about network traffic rates, traffic patterns, and the hardware used to implement the networks is generally lacking, these simple models are well suited to assessing the impact of variations in traffic patterns and intensities, channel capacities, and message protocols. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
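For the simplest member of such a model set, the M/M/1 queue, the spreadsheet formulas reduce to closed forms. The sketch below states the standard results; these are textbook queuing-theory formulas, and the paper's actual spreadsheet models and parameter values are not reproduced here.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean time in system T = 1 / (mu - lambda); requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be < service rate")
    return 1.0 / (service_rate - arrival_rate)

def mm1_mean_queue_length(arrival_rate, service_rate):
    """Mean number in system L = rho / (1 - rho), where rho = lambda/mu."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization must be < 1")
    return rho / (1.0 - rho)

# A channel serving 80 messages/s under a 60 messages/s offered load:
print(mm1_response_time(60.0, 80.0))       # average response time in seconds
print(mm1_mean_queue_length(60.0, 80.0))   # average messages in the system
```

Little's law (L = lambda * T) ties the two quantities together, which makes a convenient consistency check when such formulas are laid out in a spreadsheet.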