Superimposition of 3-dimensional cone-beam computed tomography models of growing patients
Cevidanes, Lucia H. C.; Heymann, Gavin; Cornelis, Marie A.; DeClerck, Hugo J.; Tulloch, J. F. Camilla
2009-01-01
Introduction The objective of this study was to evaluate a new method for superimposition of 3-dimensional (3D) models of growing subjects. Methods Cone-beam computed tomography scans were taken before and after Class III malocclusion orthopedic treatment with miniplates. Three observers independently constructed 18 3D virtual surface models from cone-beam computed tomography scans of 3 patients. Separate 3D models were constructed for the soft-tissue, cranial base, maxillary, and mandibular surfaces. The anterior cranial fossa was used to register the before- and after-treatment 3D models (about 1 year of follow-up). Results Three-dimensional overlays of superimposed models and 3D color-coded displacement maps allowed visual and quantitative assessment of growth and treatment changes. The range of interobserver errors for each anatomic region was 0.4 mm for the zygomatic process of the maxilla, chin, condyles, posterior border of the rami, and lower border of the mandible, and 0.5 mm for the anterior maxilla and the soft-tissue upper lip. Conclusions Our results suggest that this method provides a valid and reproducible assessment of treatment outcomes for growing subjects. This technique can be used to identify maxillary and mandibular positional changes and bone remodeling relative to the anterior cranial fossa. PMID:19577154
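The color-coded displacement maps described above reduce, at each surface point, to a closest-point distance between the registered before- and after-treatment models. A minimal sketch of that computation, using brute-force nearest neighbours on invented toy point clouds (real meshes would use a spatial index and surface normals):

```python
import numpy as np

def displacement_map(before, after):
    """Distance from each 'before' vertex to its nearest 'after' vertex --
    the per-vertex quantity rendered as a color-coded map."""
    # Brute-force all-pairs distances; adequate for small toy clouds.
    d = np.linalg.norm(before[:, None, :] - after[None, :, :], axis=2)
    return d.min(axis=1)

# Toy example: a registered surface displaced 0.4 mm along z.
rng = np.random.default_rng(0)
before = rng.uniform(0.0, 10.0, size=(200, 3))
after = before + np.array([0.0, 0.0, 0.4])

dists = displacement_map(before, after)
# No vertex can be farther than 0.4 mm from its own displaced copy.
print(bool(dists.max() <= 0.4 + 1e-9))  # prints True
```

Thresholding or binning these distances gives the color scale; interobserver error can be summarized as the range of such distances over repeated segmentations of the same region.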
The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education
Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki
2012-01-01
Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759
Estimation of Nasal Tip Support Using Computer-Aided Design and 3-Dimensional Printed Models
Gray, Eric; Maducdoc, Marlon; Manuel, Cyrus; Wong, Brian J. F.
2016-01-01
IMPORTANCE Palpation of the nasal tip is an essential component of the preoperative rhinoplasty examination. Measuring tip support is challenging, and the forces that correspond to ideal tip support are unknown. OBJECTIVE To identify the integrated reaction force and the minimum and ideal mechanical properties associated with nasal tip support. DESIGN, SETTING, AND PARTICIPANTS Three-dimensional (3-D) printed anatomic silicone nasal models were created using a computed tomographic scan and computer-aided design software. From this model, 3-D printing and casting methods were used to create 5 anatomically correct nasal models of varying constitutive Young moduli (0.042, 0.086, 0.098, 0.252, and 0.302 MPa) from silicone. Thirty rhinoplasty surgeons who attended a regional rhinoplasty course evaluated the reaction force (nasal tip recoil) of each model by palpation and selected the model that satisfied their requirements for minimum and ideal tip support. Data were collected from May 3 to 4, 2014. RESULTS Of the 30 respondents, 4 surgeons had been in practice for 1 to 5 years; 9 surgeons, 6 to 15 years; 7 surgeons, 16 to 25 years; and 10 surgeons, 26 or more years. Seventeen surgeons considered themselves in the advanced to expert skill competency levels. Logistic regression estimated the minimum threshold for the Young moduli for adequate and ideal tip support to be 0.096 and 0.154 MPa, respectively. Logistic regression estimated the thresholds for the reaction force associated with the absolute minimum and ideal requirements for good tip recoil to be 0.26 to 4.74 N and 0.37 to 7.19 N during 1- to 8-mm displacement, respectively. CONCLUSIONS AND RELEVANCE This study presents a method to estimate clinically relevant nasal tip reaction forces, which serve as a proxy for nasal tip support. This information will become increasingly important in computational modeling of nasal tip mechanics and ultimately will enhance surgical planning for rhinoplasty. LEVEL OF EVIDENCE
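The thresholds above come from logistic regression: the modulus at which the fitted probability of an "adequate support" rating crosses 0.5 is -intercept/slope. A sketch on synthetic ratings (the five moduli are taken from the abstract, but the per-surgeon cutoffs below are invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
moduli = np.array([0.042, 0.086, 0.098, 0.252, 0.302])  # MPa, from the study

# Hypothetical votes: each of 30 raters accepts any model whose modulus
# exceeds a personal cutoff drawn near 0.1 MPa (invented data).
cutoffs = rng.normal(0.10, 0.05, size=30).clip(0.02, 0.28)
x = np.repeat(moduli, 30)
y = (x >= np.tile(cutoffs, 5)).astype(float)

# One-variable logistic regression fitted by plain gradient descent.
b0, b1 = 0.0, 0.0
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    b0 -= 5.0 * np.mean(p - y)
    b1 -= 5.0 * np.mean((p - y) * x)

threshold = -b0 / b1  # modulus where P(accept) = 0.5
print(round(float(threshold), 3))
```

With real ballots in place of the synthetic cutoffs, the same -b0/b1 computation yields estimates like the 0.096 and 0.154 MPa thresholds reported above.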
Image analysis and superimposition of 3-dimensional cone-beam computed tomography models
Cevidanes, Lucia H. S.; Styner, Martin A.; Proffit, William R.
2013-01-01
Three-dimensional (3D) imaging techniques can provide valuable information to clinicians and researchers. But as we move from traditional 2-dimensional (2D) cephalometric analysis to new 3D techniques, it is often necessary to compare 2D with 3D data. Cone-beam computed tomography (CBCT) provides simulation tools that can help bridge the gap between image types. CBCT acquisitions can be made to simulate panoramic, lateral, and posteroanterior cephalometric radiographs so that they can be compared with preexisting cephalometric databases. Applications of 3D imaging in orthodontics include initial diagnosis and superimpositions for assessing growth, treatment changes, and stability. Three-dimensional CBCT images show dental root inclination and torque, impacted and supernumerary tooth positions, thickness and morphology of bone at sites of mini-implants for anchorage, and osteotomy sites in surgical planning. Findings such as resorption, hyperplastic growth, displacement, shape anomalies of mandibular condyles, and morphological differences between the right and left sides emphasize the diagnostic value of computed tomography acquisitions. Furthermore, relationships of soft tissues and the airway can be assessed in 3 dimensions. PMID:16679201
3-Dimensional Topographic Models for the Classroom
NASA Technical Reports Server (NTRS)
Keller, J. W.; Roark, J. H.; Sakimoto, S. E. H.; Stockman, S.; Frey, H. V.
2003-01-01
We have recently undertaken a program to develop educational tools using 3-dimensional solid models of digital elevation data acquired by the Mars Orbiter Laser Altimeter (MOLA) for Mars, as well as a variety of sources of elevation data for the Earth. This work is made possible by the use of rapid prototyping technology to construct solid 3-dimensional models of science data. We recently acquired a rapid prototyping machine that builds 3-dimensional models in extruded plastic. While the machine was acquired to assist in the design and development of scientific instruments and hardware, it is also fully capable of producing models of spacecraft remote sensing data. We have demonstrated this by using MOLA topographic data and Earth-based topographic data to produce extruded plastic topographic models which are visually appealing and instantly engage those who handle them.
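The key data step in producing such models is turning a gridded digital elevation model into a triangle mesh the prototyping machine can accept. A minimal sketch that triangulates the top surface of a height grid into ASCII STL text (a printable solid would also need side walls and a base, omitted here):

```python
import io

def heightmap_to_stl(z, name="topo"):
    """Triangulate a 2-D elevation grid (list of rows) into ASCII STL text.
    Normals are written as zero vectors; slicers recompute them."""
    rows, cols = len(z), len(z[0])
    out = io.StringIO()
    out.write(f"solid {name}\n")
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = (j, i, z[i][j])
            b = (j + 1, i, z[i][j + 1])
            c = (j, i + 1, z[i + 1][j])
            d = (j + 1, i + 1, z[i + 1][j + 1])
            for tri in ((a, b, c), (b, d, c)):  # two triangles per grid cell
                out.write("  facet normal 0 0 0\n    outer loop\n")
                for vx, vy, vz in tri:
                    out.write(f"      vertex {vx} {vy} {vz}\n")
                out.write("    endloop\n  endfacet\n")
    out.write(f"endsolid {name}\n")
    return out.getvalue()

stl = heightmap_to_stl([[0, 1], [2, 3]])
print(stl.count("facet normal"))  # prints 2: two triangles for one grid cell
```

Vertical exaggeration, common for planetary topography, is a simple scale factor on the z values before triangulation.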
NASA Technical Reports Server (NTRS)
Gibson, S. G.
1983-01-01
A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.
Incorporating 3-dimensional models in online articles
Cevidanes, Lucia H. S.; Ruellas, Antonio C. O.; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz
2015-01-01
Introduction The aims of this article were to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Methods Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. Results All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article’s online version for viewing and downloading using the reader’s software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader’s software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. Conclusions When submitting manuscripts, authors can
Mandibular reconstruction using stereolithographic 3-dimensional printing modeling technology.
Cohen, Adir; Laviv, Amir; Berman, Phillip; Nashef, Rizan; Abu-Tair, Jawad
2009-11-01
Mandibular reconstruction can be challenging for the surgeon wishing to restore its unique geometry. Reconstruction can be achieved with titanium bone plates followed by autogenous bone grafting. Incorporation of the bone graft into the mandible provides the continuity and strength required for proper esthetics and function, and permits dental implant rehabilitation at a later stage. Precious time in the operating room is invested in plate contouring to reconstruct the mandible. Rapid prototyping technologies can construct physical models from computer-aided design data via 3-dimensional (3D) printers. A prefabricated 3D model is thus obtained, which assists in accurate contouring of plates and/or planning of bone graft harvest geometry before surgery. The 2 most commonly used rapid prototyping technologies are stereolithography and 3D printing (3DP). Three-dimensional printing offers better accuracy, quicker printing time, and lower cost than stereolithography. We present 3 clinical cases based on 3DP modeling technology. Models were fabricated before the resection of mandibular ameloblastoma and were used to prepare bridging plates before the first stage of reconstruction. In 1 case, another model was fabricated and used as a template for the iliac crest bone graft in the second stage of reconstruction. The 3DP technology provided a precise, fast, and inexpensive aid to mandibular reconstruction, shortening operation time (and therefore decreasing exposure to general anesthesia, blood loss, and wound exposure time) and simplifying the surgical procedure.
Development and Validation of a 3-Dimensional CFB Furnace Model
NASA Astrophysics Data System (ADS)
Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti
At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation, and heat transfer. Results of laboratory- and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of 3-dimensional process modeling, providing a chain of knowledge that is fed back into phenomenon research. Knowledge gathered in model validation studies and up-to-date parameter databases is utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps in the modeling of combustion and of char and volatile formation for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat, and biofuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower furnace; together with lateral temperature profiles at the bed and in the upper furnace, they are used to determine solid-mixing and combustion model parameters. Modeling of char- and volatile-based NO formation is followed by analysis of the oxidizing and reducing regions formed by the lower-furnace design and by the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents
NASA Astrophysics Data System (ADS)
Mattern, Jann Paul; Edwards, Christopher A.
2017-01-01
Parameter estimation is an important part of numerical modeling and is often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive, and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy-to-implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search, but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
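The workflow above (define a misfit cost over multiple data types, then search parameter space within a fixed simulation budget) can be sketched on a toy problem. The two-parameter "model", observations, and bounds below are invented stand-ins for an expensive 3-D biogeochemical simulation, and random search stands in for the paper's estimation techniques:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(params, t):
    """Toy stand-in for an expensive simulation: growth * exp(-loss * t)."""
    growth, loss = params
    return growth * np.exp(-loss * t)

t = np.linspace(0.0, 5.0, 20)
true_params = np.array([2.0, 0.7])
obs_a = model(true_params, t)               # "data type" 1
obs_b = 0.5 * model(true_params, t) + 1.0   # "data type" 2, different scaling

def cost(params):
    """Least-squares misfit combining both data types."""
    sim = model(params, t)
    misfit_a = np.mean((sim - obs_a) ** 2)
    misfit_b = np.mean((0.5 * sim + 1.0 - obs_b) ** 2)
    return misfit_a + misfit_b

# Uninformed random search within bounds, ~100 'simulations'.
lo, hi = np.array([0.1, 0.1]), np.array([5.0, 2.0])
best_p, best_c = None, np.inf
for _ in range(100):
    p = lo + (hi - lo) * rng.random(2)
    c = cost(p)
    if c < best_c:
        best_p, best_c = p, c

print(round(float(best_c), 4))       # best misfit found within the budget
print(cost(true_params) == 0.0)      # prints True: misfit vanishes at the truth
```

In practice each data type would be weighted by its observational uncertainty, and the informed techniques in the paper would replace the uniform random draws.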
MAPAG: a computer program to construct 2- and 3-dimensional antigenic maps.
Aguilar, R C; Retegui, L A; Roguin, L P
1994-01-01
The contact area between an antibody (Ab) and the antigen (Ag) is called the antigenic determinant or epitope. The first step in characterizing an Ag with monoclonal antibodies (MAbs) is to map the relative distribution of the corresponding epitopes on the Ag surface. The computer program MAPAG has been devised to construct antigenic maps automatically. MAPAG is fed a binary matrix of experimental data indicating whether pairs of MAbs can bind simultaneously to the Ag. The program is interactive and menu-driven, allowing easy data handling. MAPAG uses iterative processes to construct and adjust the final map, which is shown graphically as a 2- or 3-dimensional model. Additionally, the antigenic map obtained can optionally be modified by the user or readjusted by the program. The suitability of MAPAG was illustrated by running experimental data from the literature and comparing antigenic maps constructed by the program with those elaborated by the investigators without the assistance of a computer. Furthermore, since some MAbs can exhibit negative allosteric effects leading to misinterpretation of data, MAPAG has been provided with an approximate-reasoning module to resolve such anomalous situations. Results indicated that the program can be successfully employed as a simple, fast, and reliable antigenic model-builder.
Particle trajectory computation on a 3-dimensional engine inlet. Final Report Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kim, J. J.
1986-01-01
A 3-dimensional particle trajectory computer code was developed to compute the distribution of water droplet impingement efficiency on a 3-dimensional engine inlet. The computed results provide the essential droplet impingement data required for the engine inlet anti-icing system design and analysis. The droplet trajectories are obtained by solving the trajectory equation using the fourth order Runge-Kutta and Adams predictor-corrector schemes. A compressible 3-D full potential flow code is employed to obtain a cylindrical grid definition of the flowfield on and about the engine inlet. The inlet surface is defined mathematically through a system of bi-cubic parametric patches in order to compute the droplet impingement points accurately. Analysis results of the 3-D trajectory code obtained for an axisymmetric droplet impingement problem are in good agreement with NACA experimental data. Experimental data are not yet available for the engine inlet impingement problem analyzed. Applicability of the method to solid particle impingement problems, such as engine sand ingestion, is also demonstrated.
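In its simplest form, the droplet trajectory equation reduces to drag relaxation toward the local air velocity: dv/dt = K(u_air − v), dx/dt = v. A fourth-order Runge-Kutta integration of that 1-D system is sketched below; the uniform airflow and constant drag parameter K are simplifying assumptions, whereas the code in the report couples a full 3-D flowfield and an Adams predictor-corrector:

```python
import math

def rk4_step(state, t, dt, deriv):
    """One classical fourth-order Runge-Kutta step for dy/dt = deriv(t, y)."""
    k1 = deriv(t, state)
    k2 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = deriv(t + dt, [s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

K = 2.0        # drag parameter, 1/s (illustrative value)
U_AIR = 50.0   # uniform air speed, m/s (illustrative value)

def deriv(t, state):
    x, v = state
    return [v, K * (U_AIR - v)]   # dx/dt = v, dv/dt = K(u_air - v)

state, t, dt = [0.0, 0.0], 0.0, 0.01
for _ in range(500):               # integrate a droplet released at rest to t = 5 s
    state = rk4_step(state, t, dt, deriv)
    t += dt

# Analytic check for this linear case: v(t) = U_AIR * (1 - exp(-K t))
v_exact = U_AIR * (1.0 - math.exp(-K * t))
print(abs(state[1] - v_exact) < 1e-6)  # prints True
```

Impingement efficiency then follows from tracking many such trajectories through the 3-D flowfield and recording which ones strike the inlet surface.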
3-dimensional modeling of transcranial magnetic stimulation: Design and application
NASA Astrophysics Data System (ADS)
Salinas, Felipe Santiago
Over the past three decades, transcranial magnetic stimulation (TMS) has emerged as an effective tool for many research, diagnostic and therapeutic applications in humans. TMS delivers highly localized brain stimulation via non-invasive, externally applied magnetic fields. This non-invasive, painless technique provides researchers and clinicians a unique tool capable of stimulating both the central and peripheral nervous systems. However, a complete analysis of the macroscopic electric fields produced by TMS has not yet been performed. In this dissertation, we present a thorough examination of the total electric field induced by TMS in air and in a realistic head model with clinically relevant coil poses. In the first chapter, accounting for the detailed TMS coil wiring geometry was shown to provide significant improvements in the accuracy of primary E-field calculations. Three-dimensional models which accounted for the TMS coil's wire width, height, shape, and number of turns clearly improved the fit of calculated to measured E-fields near the coil body. Detailed primary E-field models were accurate up to the surface of the coil body (within 0.5% of measured values), whereas simple models were often inadequate (up to 32% different from measured). In the second chapter, we addressed the importance of the secondary E-field created by surface charge accumulation during TMS using the boundary element method (BEM). 3-D models were developed using simple head geometries in order to test the model and compare it with measured values. The effects of tissue geometry, size, and conductivity were also investigated. Finally, a realistic head model was used to assess the effect of multiple surfaces on the total E-field. We found that secondary E-fields have the greatest impact at areas in close proximity to each tissue layer. Throughout the head, the secondary E-field magnitudes were predominantly between 25% and 45% of the primary E-field magnitude. The direction of the secondary E
Gálvez, Jorge A; Gralewski, Kevin; McAndrew, Christine; Rehman, Mohamed A; Chang, Benjamin; Levin, L Scott
2016-03-01
Children are not typically considered for hand transplantation for various reasons, including the difficulty of finding an appropriate donor. Matching donor and recipient hands and forearms by size is critically important. If the donor's hands are too large, the recipient may not be able to move the fingers effectively. Conversely, if the donor's hands are too small, the appearance may not be appropriate. We present an 8-year-old child evaluated for bilateral hand transplantation following bilateral amputation. The recipient's forearms and hands were modeled from computed tomography imaging studies and replicated as anatomic models with a 3-dimensional printer. We modified the scale of the printed hand to produce 3 proportions: 80%, 100%, and 120%. The transplant team used the anatomic models when evaluating a donor for an appropriate size match. The donor's hand size matched the 100%-scale anatomic model hand, and the transplant team was activated. In addition to assisting the transplant team in appropriate donor selection, the 100%-scale anatomic model hand was used to create molds for prosthetic hands for the donor.
3-dimensional orthodontics visualization system with dental study models and orthopantomograms
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ong, S. H.; Foong, K. W. C.; Dhar, T.
2005-04-01
The aim of this study is to develop a system that provides 3-dimensional visualization of orthodontic treatments. Dental plaster models and the corresponding orthopantomogram (dental panoramic tomogram) are first digitized and fed into the system. A semi-automatic segmentation technique is applied to the plaster models to detect the dental arches, tooth interstices, and gum margins, which are used to extract individual crown models. A 3-dimensional representation of the roots, generated by deforming generic tooth models to the orthopantomogram using radial basis functions, is attached to the corresponding crowns to enable visualization of complete teeth. An optional algorithm to close the gaps between deformed roots and actual crowns using multiquadric radial basis functions is also presented, which is capable of generating a smooth mesh representation of complete 3-dimensional teeth. The user interface is carefully designed to achieve a flexible system that is as user-friendly as possible. Manual calibration and correction are possible throughout the data processing steps to compensate for occasional failures of the automatic procedures. By allowing users to move and re-arrange individual teeth (with their roots) on a full dentition, this orthodontic visualization system provides an easy and accurate way to simulate and plan orthodontic treatment. Its capability of presenting 3-dimensional root information with only study models and an orthopantomogram is especially useful for patients who do not undergo CT scanning, which is not a routine procedure in most orthodontic cases.
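The root-deformation step rests on radial basis function interpolation: displacements known at a few corresponding landmarks are interpolated smoothly to every vertex of the generic model. A sketch using the multiquadric basis mentioned above (the landmark coordinates are invented; the actual pipeline fits full tooth geometry):

```python
import numpy as np

def rbf_warp(src, dst, c=0.5):
    """Fit a multiquadric RBF displacement field mapping src landmarks onto
    dst landmarks; returns a function applicable to arbitrary points."""
    def phi(r):
        return np.sqrt(r ** 2 + c ** 2)  # multiquadric basis
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    # One weight column per coordinate; the system is square and solvable
    # for distinct landmarks.
    weights = np.linalg.solve(phi(r), dst - src)
    def warp(points):
        rp = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=2)
        return points + phi(rp) @ weights
    return warp

# Invented landmark pairs on a 'generic' and a 'patient' tooth.
generic = np.array([[0.0, 0.0, 0.0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
patient = np.array([[0.1, 0.0, 0.0], [1.2, 0, 0], [0, 0.9, 0], [0, 0, 1.3]])

warp = rbf_warp(generic, patient)
print(np.allclose(warp(generic), patient))  # prints True: exact at landmarks
```

Applying `warp` to all vertices of the generic root mesh carries the whole surface along with the landmarks, which is what attaches a plausible root to each segmented crown.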
Szałaj, Przemysław; Tang, Zhonghui; Michalski, Paul; Pietal, Michal J; Luo, Oscar J; Sadowski, Michał; Li, Xingwang; Radew, Kamen; Ruan, Yijun; Plewczynski, Dariusz
2016-12-01
ChIA-PET is a high-throughput mapping technology that reveals long-range chromatin interactions and provides insights into the basic principles of spatial genome organization and gene regulation mediated by specific protein factors. Recently, we showed that a single ChIA-PET experiment provides information at all genomic scales of interest, from the high-resolution locations of binding sites and enriched chromatin interactions mediated by specific protein factors, to the low resolution of nonenriched interactions that reflect topological neighborhoods of higher-order chromosome folding. This multilevel nature of ChIA-PET data offers an opportunity to use multiscale 3D models to study structural-functional relationships at multiple length scales, but doing so requires a structural modeling platform. Here, we report the development of 3D-GNOME (3-Dimensional Genome Modeling Engine), a complete computational pipeline for 3D simulation using ChIA-PET data. 3D-GNOME consists of three integrated components: a graph-distance-based heat map normalization tool, a 3D modeling platform, and an interactive 3D visualization tool. Using ChIA-PET and Hi-C data derived from human B-lymphocytes, we demonstrate the effectiveness of 3D-GNOME in building 3D genome models at multiple levels, including the entire genome, individual chromosomes, and specific segments at megabase (Mb) and kilobase (kb) resolutions of single average and ensemble structures. Further incorporation of CTCF-motif orientation and high-resolution looping patterns in 3D simulation provided additional reliability of potential biologically plausible topological structures.
Tho, Nguyen Van; Trang, Le Thi Huyen; Murakami, Yoshitaka; Ogawa, Emiko; Ryujin, Yasushi; Kanda, Rie; Nakagawa, Hiroaki; Goto, Kenichi; Fukunaga, Kentaro; Higami, Yuichi; Seto, Ruriko; Nagao, Taishi; Oguma, Tetsuya; Yamaguchi, Masafumi; Lan, Le Thi Tuyet; Nakano, Yasutaka
2014-01-01
Background It is time-consuming to obtain the square root of airway wall area of the hypothetical airway with an internal perimeter of 10 mm (√Aaw at Pi10), a comparable index of airway dimensions in chronic obstructive pulmonary disease (COPD), from all airways of the whole lungs using 3-dimensional computed tomography (CT) analysis. We hypothesized that √Aaw at Pi10 differs among the five lung lobes and √Aaw at Pi10 derived from one certain lung lobe has a high level of agreement with that derived from the whole lungs in smokers. Methods Pulmonary function tests and chest volumetric CTs were performed in 157 male smokers (102 COPD, 55 non-COPD). All visible bronchial segments from the 3rd to 5th generations were segmented and measured using commercially available 3-dimensional CT analysis software. √Aaw at Pi10 of each lung lobe was estimated from all measurable bronchial segments of that lobe. Results Using a mixed-effects model, √Aaw at Pi10 differed significantly among the five lung lobes (R2 = 0.78, P<0.0001). The Bland-Altman plots show that √Aaw at Pi10 derived from the right or left upper lobe had a high level of agreement with that derived from the whole lungs, while √Aaw at Pi10 derived from the right or left lower lobe did not. Conclusion In male smokers, CT-derived airway wall area differs among the five lung lobes, and airway wall area derived from the right or left upper lobe is representative of the whole lungs. PMID:24865661
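The lobe-versus-whole-lung agreement above is what a Bland-Altman analysis quantifies: the mean difference (bias) between paired measurements and the bias ± 1.96 SD limits of agreement. A sketch on invented paired values, not the study's measurements:

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented sqrt(Aaw at Pi10) values (mm): upper lobe vs whole lungs.
upper = [3.61, 3.70, 3.55, 3.82, 3.66, 3.74]
whole = [3.60, 3.72, 3.52, 3.80, 3.69, 3.73]

bias, lo, hi = bland_altman(upper, whole)
print(round(bias, 3))  # prints 0.003 (mean difference, mm)
```

Narrow limits of agreement around a near-zero bias, as found here for the upper lobes, are what justify using one lobe as a proxy for the whole lungs.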
3-Dimensional Marine CSEM Modeling by Employing TDFEM with Parallel Solvers
NASA Astrophysics Data System (ADS)
Wu, X.; Yang, T.
2013-12-01
In this paper, a parallel implementation is developed for forward modeling of 3-dimensional controlled source electromagnetic (CSEM) surveys using the time-domain finite element method (TDFEM). Recently, increasing attention has been paid to mechanisms for detecting hydrocarbon (HC) reservoirs beneath the seabed. Since China has vast ocean resources, the search for hydrocarbon reservoirs is significant for the national economy. However, traditional seismic exploration methods face serious obstacles in detecting hydrocarbon reservoirs beneath a seabed with complex structure, owing to relatively high acquisition costs and high-risk exploration. In addition, the development of EM simulations typically requires both deep knowledge of computational electromagnetics (CEM) and proper use of sophisticated techniques and tools from computer science. The complexity of large-scale EM simulations often demands large memory, because of the large amount of data, or long solution times for matrix solvers, function transforms, optimization, etc. The objective of this paper is to present a parallelized implementation of the time-domain finite element method for the analysis of three-dimensional (3D) marine controlled source electromagnetic problems. First, we established a three-dimensional background model from seismic data; electromagnetic simulation of the marine CSEM survey was then carried out using the time-domain finite element method on an MPI (Message Passing Interface) platform, allowing fast detection of hydrocarbon targets in the ocean environment. To speed up the calculation, the MPI version of SuperLU, SuperLU_DIST, is employed in this approach. To represent the three-dimensional seabed terrain realistically, the region is discretized into an unstructured mesh rather than a uniform one in order to reduce the number of unknowns. Moreover, high-order Whitney
Tsukiyama, Atsushi; Tagami, Takashi; Kim, Shiei; Yokota, Hiroyuki
2014-01-01
Computed tomography (CT) is useful for evaluating esophageal foreign bodies and detecting perforation. However, when evaluation is difficult owing to the previous use of barium as a contrast medium, 3-dimensional CT may facilitate accurate diagnosis. A 49-year-old man was transferred to our hospital with a diagnosis of esophageal perforation. Because barium had been used as the contrast medium for an esophagram performed at a previous hospital, axial CT images and esophageal endoscopy could not identify the foreign body or characterize the lesion. However, 3-dimensional CT clearly revealed an L-shaped foreign body and its anatomical relationships in the mediastinum. Accordingly, we removed the foreign body using an upper gastrointestinal endoscope. The foreign body was the premaxillary bone of a sea bream. The patient was discharged without complications.
Comparison of nonnavigated and 3-dimensional image-based computer navigated balloon kyphoplasty.
Sembrano, Jonathan N; Yson, Sharon C; Polly, David W; Ledonio, Charles Gerald T; Nuckley, David J; Santos, Edward R G
2015-01-01
Balloon kyphoplasty is a common treatment for osteoporotic and pathologic compression fractures. Advantages include minimal tissue disruption, quick recovery, pain relief, and in some cases prevention of progressive sagittal deformity. The benefit of image-based navigation in kyphoplasty has not been established. The goal of this study was to determine whether there is a difference between fluoroscopy-guided balloon kyphoplasty and 3-dimensional image-based navigation in terms of needle malposition rate, cement leakage rate, and radiation exposure time. The authors compared navigated and nonnavigated needle placement in 30 balloon kyphoplasty procedures (47 levels). Intraoperative 3-dimensional image-based navigation was used for needle placement in 21 cases (36 levels); conventional 2-dimensional fluoroscopy was used in the other 9 cases (11 levels). The 2 groups were compared for rates of needle malposition and cement leakage as well as radiation exposure time. Three of 11 (27%) nonnavigated levels were complicated by a malpositioned needle, and 2 of these had to be repositioned. The navigated group had a significantly lower malposition rate (1 of 36; 3%; P=.04). The overall rate of cement leakage was similar in both groups (P=.29). Radiation exposure time was also similar in both groups (navigated, 98 s/level; nonnavigated, 125 s/level; P=.10). Navigated kyphoplasty procedures did not differ significantly from nonnavigated procedures except in needle malposition rate, where navigation may have decreased the need for needle repositioning.
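The malposition comparison (3 of 11 nonnavigated vs 1 of 36 navigated; P=.04) is consistent with a Fisher exact test on the 2x2 table, which can be recomputed from the reported counts alone. The abstract does not state which test the authors used, so this is a plausibility check rather than a reproduction of their analysis:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the table [[a, b], [c, d]]:
    sum the probabilities of all tables as or less probable than observed."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def p(k):  # hypergeometric probability of k successes in row 1
        return comb(row1, k) * comb(n - row1, col1 - k) / comb(n, col1)
    p_obs = p(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs + 1e-12)

# Malpositioned / well-positioned needles: nonnavigated 3/8, navigated 1/35.
print(round(fisher_exact_two_sided(3, 8, 1, 35), 3))  # prints 0.035, i.e. P ≈ .04
```

The same function applied to the leakage and exposure comparisons would need the underlying counts, which the abstract reports only as P values.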
A 3-dimensional model for teaching local flaps using porcine skin.
Hassan, Zahid; Hogg, Fiona; Graham, Ken
2014-10-01
The European Working Time Directive and streamlined training have led to reduced training time. Surgery, as an experience-dependent craft specialty, is affected more than other medical specialties. Trainees want to maximize all training opportunities in the clinical setting, and having predeveloped basic skills acquired on a simulated model can facilitate this. Here we describe the use of a novel model to design and raise local flaps in the face and scalp regions. The model consists of mannequin heads draped with porcine skin which is skewered with pins at strategic points to give a 3-dimensional model which closely resembles a cadaveric head. The advantages of this model are that it is life size and incorporates all the relevant anatomical features, which can be drawn on if required. This model was used on a recent course, Intermediate Skills in Plastic Surgery: Flaps Around the Face, at the Royal College of Surgeons England. The trainees found that practicing on the porcine skin gave them an opportunity to master the basics of flap design and implementation. In summary, this innovative 3-dimensional training model has received high levels of satisfaction and is currently as close as we can get to cadaveric dissection without the constraints and cost of using human tissue.
TP Clement
1999-06-24
RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code, MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in groundwater head distribution. The RT3D code was originally developed to support the contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants including benzene-toluene-xylene mixtures (BTEX), and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other types of user-specified reactive transport systems. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
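The advection-dispersion-reaction equation that RT3D solves in 3-D can be illustrated with a minimal 1-D, single-species sketch using explicit finite differences and first-order decay. This is an illustrative analogue only; the function name and parameter values are assumptions and are not part of RT3D, which couples MODFLOW heads and pluggable multi-species reaction modules.

```python
import numpy as np

def advect_disperse_react(c0, v=0.5, D=0.05, k=0.01, dx=1.0, dt=0.5, steps=100):
    """Explicit upwind scheme for dC/dt = D*d2C/dx2 - v*dC/dx - k*C in 1-D."""
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(steps):
        dcdx = (c - np.roll(c, 1)) / dx                       # upwind advection (v > 0)
        d2c = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (D * d2c - v * dcdx - k * c)
        c[0] = c0[0]      # fixed inlet concentration
        c[-1] = c[-2]     # zero-gradient outlet
    return c
```

With a constant-concentration inlet, the decaying plume advances downstream at roughly the pore velocity, the same qualitative behavior RT3D's natural-attenuation modules reproduce in three dimensions.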
Wang, Shuming; Leng, Xu; Zheng, Yaqi; Zhang, Dapeng; Wu, Guofeng
2015-02-01
The concept of prosthesis-guided implantation has been widely accepted for intraoral implant placement, although clinicians do not fully appreciate its use for facial defect restoration. In this clinical report, multiple digital technologies were used to restore a facial defect with prosthesis-guided implantation. A simulation surgery was performed to remove the residual auricular tissue and to ensure the correct position of the mirrored contralateral ear model. The combined application of computed tomography and 3-dimensional photography preserved the position of the mirrored model and facilitated the definitive implant-retained auricular prosthesis.
Maschio, Federico; Pandya, Mirali; Olszewski, Raphael
2016-01-01
Background The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modeling printer. Material/Methods Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (Up plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. Results The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. Conclusions Plastic models generated using the low-cost 3D printer UPplus2® provide dimensional accuracies comparable to other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward the better accessibility of rapid prototyping technologies in the medical field. PMID:27003456
Swanson, Jordan W.; Mitchell, Brianne T.; Wink, Jason A.; Taylor, Jesse A.
2016-01-01
Background: Grading systems of the mandibular deformity in craniofacial microsomia (CFM) based on conventional radiographs have shown low interrater reproducibility among craniofacial surgeons. We sought to design and validate a classification based on 3-dimensional CT (3dCT) that correlates features of the deformity with surgical treatment. Methods: CFM mandibular deformities were classified as normal (T0), mild (hypoplastic, likely treated with orthodontics or orthognathic surgery; T1), moderate (vertically deficient ramus, likely treated with distraction osteogenesis; T2), or severe (ramus rudimentary or absent, with either adequate or inadequate mandibular body bone stock; T3 and T4, likely treated with costochondral graft or free fibular flap, respectively). The 3dCT face scans of CFM patients were randomized and then classified by craniofacial surgeons. Pairwise agreement and Fleiss' κ were used to assess interrater reliability. Results: The 3dCT images of 43 patients with CFM (aged 0.1–15.8 years) were reviewed by 15 craniofacial surgeons, representing an average 15.2 years of experience. Reviewers demonstrated fair interrater reliability with average pairwise agreement of 50.4 ± 9.9% (Fleiss' κ = 0.34). This represents significant improvement over the Pruzansky–Kaban classification (pairwise agreement, 39.2%; P = 0.0033). Reviewers demonstrated substantial interrater reliability with average pairwise agreement of 83.0 ± 7.6% (κ = 0.64) distinguishing deformities requiring graft or flap reconstruction (T3 and T4) from others. Conclusion: The proposed classification, designed for the era of 3dCT, shows improved consensus with respect to stratifying the severity of mandibular deformity and type of operative management. PMID:27104097
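The interrater statistic reported above, Fleiss' κ, can be computed with a short generic routine (the standard formula, not the authors' analysis code; the example table below is illustrative).

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) table of rating counts;
    every row must sum to the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    n = counts[0].sum()                                   # raters per subject
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                                    # observed agreement
    p_j = counts.sum(axis=0) / counts.sum()               # category prevalences
    P_e = np.sum(p_j**2)                                  # chance agreement
    return (P_bar - P_e) / (1.0 - P_e)

# 3 subjects, 14 raters, 2 categories (made-up data)
ratings = np.array([[0, 14], [7, 7], [14, 0]])
kappa = fleiss_kappa(ratings)
```

A value of 1 indicates perfect agreement beyond chance; values near 0.34 and 0.64, as in the study, fall in the conventional "fair" and "substantial" bands.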
Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies
NASA Technical Reports Server (NTRS)
Reyhner, T. A.
1982-01-01
An analysis was developed and a computer code, P465 Version A, was written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.
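The line-relaxation iteration used in such codes can be illustrated on Laplace's equation, a linear stand-in for the full nonlinear potential equation: each sweep solves one grid column implicitly with the tridiagonal (Thomas) algorithm. Function names and grid choices here are illustrative assumptions, not taken from P465.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def line_relax(phi, sweeps=200):
    """Sweep interior columns left to right; solve each column implicitly."""
    ny, nx = phi.shape
    for _ in range(sweeps):
        for j in range(1, nx - 1):
            n = ny - 2
            a = np.ones(n); b = np.full(n, -4.0); c = np.ones(n)
            a[0] = 0.0; c[-1] = 0.0
            d = -(phi[1:-1, j - 1] + phi[1:-1, j + 1])
            d[0] -= phi[0, j]       # top boundary value
            d[-1] -= phi[-1, j]     # bottom boundary value
            phi[1:-1, j] = thomas(a, b, c, d)
    return phi
```

Solving a whole line at once propagates boundary information across the grid much faster than pointwise relaxation, which is why line relaxation was a standard choice for transonic potential solvers.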
Siler, Drew L; Faulds, James E; Mayhew, Brett
2013-04-16
Geothermal systems in the Great Basin, USA, are controlled by a variety of fault intersection and fault interaction areas. Understanding the specific geometry of the structures most conducive to broad-scale geothermal circulation is crucial to both the mitigation of the costs of geothermal exploration (especially drilling) and to the identification of geothermal systems that have no surface expression (blind systems). 3-dimensional geologic modeling is a tool that can elucidate the specific stratigraphic intervals and structural geometries that host geothermal reservoirs. Astor Pass, NV USA lies just beyond the northern extent of the dextral Pyramid Lake fault zone near the boundary between two distinct structural domains, the Walker Lane and the Basin and Range, and exhibits characteristics of each setting. Both northwest-striking, left-stepping dextral faults of the Walker Lane and kinematically linked northerly striking normal faults associated with the Basin and Range are present. Previous studies at Astor Pass identified a blind geothermal system controlled by the intersection of west-northwest and north-northwest striking dextral-normal faults. Wells drilled into the southwestern quadrant of the fault intersection yielded 94°C fluids, with geothermometers suggesting a maximum reservoir temperature of 130°C. A 3-dimensional model was constructed based on detailed geologic maps and cross-sections, 2-dimensional seismic data, and petrologic analysis of the cuttings from three wells in order to further constrain the structural setting. The model reveals the specific geometry of the fault interaction area at a level of detail beyond what geologic maps and cross-sections can provide.
Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code
NASA Technical Reports Server (NTRS)
Weinberg, B. C.; Mcdonald, H.
1980-01-01
There is considerable interest in developing a numerical scheme for solving the time dependent viscous compressible three dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.
Photoprotection by pistachio bioactives in a 3-dimensional human skin equivalent tissue model.
Chen, C-Y Oliver; Smith, Avi; Liu, Yuntao; Du, Peng; Blumberg, Jeffrey B; Garlick, Jonathan
2017-01-25
Reactive oxygen species (ROS) generated during ultraviolet (UV) light exposure can induce skin damage and aging. Antioxidants can provide protection against oxidative injury to skin via "quenching" ROS. Using a validated 3-dimensional (3D) human skin equivalent (HSE) tissue model that closely mimics human skin, we examined whether pistachio antioxidants could protect HSE against UVA-induced damage. Lutein and γ-tocopherol are the predominant lipophilic antioxidants in pistachios; treatment with these compounds prior to UVA exposure protected against morphological changes to the epithelial and connective tissue compartments of HSE. Pistachio antioxidants preserved overall skin thickness and organization, as well as fibroblast morphology, in HSE exposed to UVA irradiation. However, this protection was not substantiated by the analysis of the proliferation of keratinocytes and apoptosis of fibroblasts. Additional studies are warranted to elucidate the basis of these discordant results and extend research into the potential role of pistachio bioactives promoting skin health.
NASA Astrophysics Data System (ADS)
Zamora, A.; Gutierrez, A. E.; Velasco, A. A.
2014-12-01
2- and 3-dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point method constrained optimization technique, we improve the 2-D and 3-D models of Earth structures representing known density contrasts mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible), thereby reducing the solution space.
Solares, Santiago D.
2015-11-26
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
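The 1-dimensional SLS (Zener) element that the Q3D model arrays in 2-D can be simulated in a few lines: a spring k_inf in parallel with a Maxwell arm (spring k1 plus dashpot eta). Under a step strain the stress relaxes exponentially from (k_inf + k1)·ε toward k_inf·ε with time constant τ = η/k1. The parameter values below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def sls_stress_relaxation(eps0=1.0, k_inf=1.0, k1=2.0, eta=0.5,
                          dt=1e-3, steps=5000):
    """Step-strain stress relaxation of a standard linear solid element."""
    tau = eta / k1
    sigma_arm = k1 * eps0                 # Maxwell-arm stress just after the step
    trace = []
    for _ in range(steps):
        sigma_arm += dt * (-sigma_arm / tau)   # dashpot relaxes the arm stress
        trace.append(k_inf * eps0 + sigma_arm)
    return np.array(trace)
```

This single-element relaxation is the fundamental behavior the abstract refers to; the Q3D model's contribution is coupling many such elements across a 2-D grid so the simulated force curve acquires the correct repulsive curvature.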
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Miller, S. A.
2001-12-01
We present preliminary results from a 3-dimensional fault interaction model, with the fault system specified by the geometry and tectonics of the San Andreas Fault (SAF) system. We use the forward model for earthquake generation on interacting faults of Fitzenz and Miller [2001] that incorporates the analytical solutions of Okada [85,92], GPS-constrained tectonic loading, creep compaction and frictional dilatancy [Sleep and Blanpied, 1994, Sleep, 1995], and undrained poro-elasticity. The model fault system is centered at the Big Bend, and includes three large strike-slip faults (each discretized into multiple subfaults): 1) a 300 km, right-lateral segment of the SAF to the North, 2) a 200 km-long left-lateral segment of the Garlock fault to the East, and 3) a 100 km-long right-lateral segment of the SAF to the South. In the initial configuration, three shallow-dipping faults are also included that correspond to the thrust belt sub-parallel to the SAF. Tectonic loading is decomposed into basal shear drag parallel to the plate boundary with a 35 mm yr-1 plate velocity, and East-West compression approximated by a vertical dislocation surface applied at the far-field boundary, resulting in fault-normal compression rates in the model space of about 4 mm yr-1. Our aim is to study the long-term seismicity characteristics, tectonic evolution, and fault interaction of this system. We find that faults overpressured through creep compaction are a necessary consequence of the tectonic loading, specifically where high normal stress acts on long straight fault segments. The optimal orientation of thrust faults is a function of the strike-slip behavior, and therefore results in a complex stress state in the elastic body. This stress state is then used to generate new fault surfaces, and preliminary results of dynamically generated faults will also be presented. Our long-term aim is to target measurable properties in or around fault zones (e.g. pore pressures, hydrofractures, seismicity
Pashazadeh, Saeid; Sharifi, Mohsen
2009-01-01
Existing 3-dimensional acoustic target tracking methods that use wired/wireless networked sensor nodes to track targets based on four sensing coverage do not always compute the feasible spatio-temporal information of target objects. To investigate this discrepancy in a formal setting, we propose a geometric model of the target tracking problem alongside its equivalent geometric dual model that is easier to solve. We then study and prove some properties of the dual model by exploiting its relationship with algebra. Based on these properties, we propose a four coverage axis line method based on four sensing coverage and prove that four sensing coverage always yields two dual correct answers; usually one of them is infeasible. By showing that the feasible answer can only sometimes be identified by using a simple time test method such as the one we propose, we prove that four sensing coverage fails to always yield the feasible spatio-temporal information of a target object. We further prove that five sensing coverage always gives the feasible position of a target object under certain conditions that are discussed in this paper. We propose three extensions to the four coverage axis line method, namely, five coverage extent point method, five coverage extended axis lines method, and five coverage redundant axis lines method. The computation and time complexities of all four proposed methods are equal, each being Θ(1) in both the worst case and on average. The proposed methods and the proven facts about the capabilities of each sensing coverage degree can be used in other acoustic target tracking methods, such as Bayesian filtering. PMID:22423198
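The two-answer ambiguity of four sensing coverage can be demonstrated numerically: a Bancroft-style algebraic solution of the four time-of-arrival equations (unknown source position and emission time) reduces to a quadratic, so four sensors generically yield two candidate positions. This is an illustrative reconstruction under assumed sensor geometry and sound speed, not the authors' four coverage axis line method.

```python
import numpy as np

def locate_two_candidates(sensors, toas, c=343.0):
    """Solve 4 TOA equations r_i = c*t0 + |x - s_i| for x and b = c*t0.
    Returns the two candidate source positions (roots of a quadratic)."""
    r = c * np.asarray(toas)                       # pseudoranges
    B = np.hstack([2.0 * sensors, -2.0 * r[:, None]])
    a = np.sum(sensors**2, axis=1) - r**2
    u = np.linalg.solve(B, a)                      # particular part
    v = np.linalg.solve(B, np.ones(4))             # homogeneous part
    L = np.diag([1.0, 1.0, 1.0, -1.0])             # Lorentz-type metric
    qa = v @ L @ v
    qb = 2.0 * (u @ L @ v) - 1.0
    qc = u @ L @ u
    lams = np.roots([qa, qb, qc])                  # quadratic: two roots
    return [(u + lam.real * v)[:3] for lam in lams]

sensors = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
true_src = np.array([3.0, 4.0, 5.0])
t_emit = 0.02                                      # unknown to the solver
toas = t_emit + np.linalg.norm(sensors - true_src, axis=1) / 343.0
candidates = locate_two_candidates(sensors, toas)
```

One root reproduces the true position; the other often implies an inconsistent emission time, echoing the paper's observation that one of the two dual answers is usually infeasible.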
A 3-Dimensional Model of Water-Bearing Sequences in the Dominguez Gap Region, Long Beach, California
Ponti, Daniel J.; Ehman, Kenneth D.; Edwards, Brian D.; Tinsley, John C.; Hildenbrand, Thomas; Hillhouse, John W.; Hanson, Randall T.; McDougall, Kristen; Powell, Charles L.; Wan, Elmira; Land, Michael; Mahan, Shannon; Sarna-Wojcicki, Andrei M.
2007-01-01
A 3-dimensional computer model of the Quaternary sequence stratigraphy in the Dominguez gap region of Long Beach, California has been developed to provide a robust chronostratigraphic framework for hydrologic and tectonic studies. The model consists of 13 layers within a 16.5 by 16.1 km (10.25 by 10 mile) square area and extends downward to an altitude of -900 meters (-2952.76 feet). Ten sequences of late Pliocene to Holocene age are identified and correlated within the model. Primary data to build the model comes from five reference core holes, extensive high-resolution seismic data obtained in San Pedro Bay, and logs from several hundred water and oil wells drilled in the region. The model is best constrained in the vicinity of the Dominguez gap seawater intrusion barrier where a dense network of subsurface data exists. The resultant stratigraphic framework and geologic structure differs significantly from what has been proposed in earlier studies. An important new discovery from this approach is the recognition of ongoing tectonic deformation throughout nearly all of Quaternary time that has impacted the geometry and character of the sequences. Anticlinal folding along a NW-SE trend, probably associated with Quaternary reactivation of the Wilmington anticline, has uplifted and thinned deposits along the fold crest, which intersects the Dominguez gap seawater barrier near Pacific Coast Highway. A W-NW trending fault system that approximately parallels the fold crest has also been identified. This fault progressively displaces all but the youngest sequences down to the north and serves as the southern termination of the classic Silverado aquifer. Uplift and erosion of fining-upward paralic sequences along the crest of the young fold has removed or thinned many of the fine-grained beds that serve to protect the underlying Silverado aquifer from seawater contaminated shallow groundwater. As a result of this process, the potential exists for vertical migration of
Tezera, Liku B; Bielecka, Magdalena K; Chancellor, Andrew; Reichmann, Michaela T; Shammari, Basim Al; Brace, Patience; Batty, Alex; Tocheva, Annie; Jogai, Sanjay; Marshall, Ben G; Tebruegge, Marc; Jayasinghe, Suwan N; Mansour, Salah; Elkington, Paul T
2017-01-01
Cell biology differs between traditional cell culture and 3-dimensional (3-D) systems, and is modulated by the extracellular matrix. Experimentation in 3-D presents challenges, especially with virulent pathogens. Mycobacterium tuberculosis (Mtb) kills more humans than any other infection and is characterised by a spatially organised immune response and extracellular matrix remodelling. We developed a 3-D system incorporating virulent mycobacteria, primary human blood mononuclear cells and collagen–alginate matrix to dissect the host-pathogen interaction. Infection in 3-D led to greater cellular survival and permitted longitudinal analysis over 21 days. Key features of human tuberculosis develop, and extracellular matrix integrity favours the host over the pathogen. We optimised multiparameter readouts to study emerging therapeutic interventions: cytokine supplementation, host-directed therapy and immunoaugmentation. Each intervention modulates the host-pathogen interaction, but has both beneficial and harmful effects. This methodology has wide applicability to investigate infectious, inflammatory and neoplastic diseases and develop novel drug regimes and vaccination approaches. DOI: http://dx.doi.org/10.7554/eLife.21283.001 PMID:28063256
In vitro 3-dimensional tumor model for radiosensitivity of HPV positive OSCC cell lines.
Zhang, Mei; Rose, Barbara; Lee, C Soon; Hong, Angela M
2015-01-01
The incidence of oropharyngeal squamous cell carcinoma (OSCC) is increasing due to the rising prevalence of human papillomavirus (HPV) positive OSCC. HPV positive OSCC is associated with better outcomes than HPV negative OSCC. Our aim was to explore the possibility that this favorable prognosis is due to the enhanced radiosensitivity of HPV positive OSCC. HPV positive OSCC cell lines were generated from the primary OSCCs of 2 patients, and corresponding HPV positive cell lines generated from nodal metastases following xenografting in nude mice. Monolayer and 3-dimensional (3D) culture techniques were used to compare the radiosensitivity of HPV positive lines with that of 2 HPV negative OSCC lines. Clonogenic and protein assays were used to measure survival post radiation. Radiation induced cell cycle changes were studied using flow cytometry. In both monolayer and 3D culture, HPV positive cells exhibited a heterogeneous appearance whereas HPV negative cells tended to be homogeneous. After irradiation, HPV positive cells had a lower survival in clonogenic assays and lower total protein levels in 3D cultures than HPV negative cells. Irradiated HPV positive cells showed a high proportion of cells in G1/S phase, increased apoptosis, an increased proliferation rate, and an inability to form 3D tumor clumps. In conclusion, HPV positive OSCC cells are more radiosensitive than HPV negative OSCC cells in vitro, supporting a more radiosensitive nature of HPV positive OSCC.
Nakajima, Kan; Yamaguchi, Tetsutaro; Maki, Koutaro
2009-09-01
We report here the case of a woman with Class III malocclusion and advanced periodontal disease who was treated with surgical orthodontic correction. Functional recovery after orthodontic treatment is often monitored by serial electromyography of the masticatory muscles, whereas 3-dimensional cone-beam computed tomography can provide detailed structural information about, for example, periodontal bone defects. However, it is unclear whether the information obtained via these methods is sufficient to determine the treatment goal. It might be useful to address this issue for patients with advanced periodontal disease because of much variability between patients in the determination of treatment goals. We used detailed information obtained by 3-dimensional cone-beam computed tomography to identify periodontal bone defects and set appropriate treatment goals for inclination of the incisors and mandibular surgery. Results for this patient included stable occlusion and improved facial esthetics. This case report illustrates the benefits of establishing treatment goals acceptable to the patient, based on precise 3-dimensional assessment of dentoalveolar bone, and by using masticatory muscle activity to monitor the stability of occlusion.
Chrysostomou, P P; Lodish, M B; Turkbey, E B; Papadakis, G Z; Stratakis, C A
2016-04-01
Primary pigmented nodular adrenocortical disease (PPNAD) is a rare type of bilateral adrenal hyperplasia leading to hypercortisolemia. Adrenal nodularity is often appreciable with computed tomography (CT); however, accurate radiologic characterization of adrenal size in PPNAD has not been studied well. We used 3-dimensional (3D) volumetric analysis to characterize and compare adrenal size in PPNAD patients, with and without Cushing's syndrome (CS). Patients diagnosed with PPNAD and their family members with known mutations in PRKAR1A were screened. CT scans were used to create 3D models of each adrenal. Criteria for biochemical diagnosis of CS included loss of diurnal variation and/or elevated midnight cortisol levels, and paradoxical increase in urinary free cortisol and/or urinary 17-hydroxysteroids after dexamethasone administration. Forty-five patients with PPNAD (24 females, 27.8±17.6 years) and 8 controls (19±3 years) were evaluated. 3D volumetric modeling of adrenal glands was performed in all. Thirty-eight patients out of 45 (84.4%) had CS. Their mean adrenal volume was 8.1 cc±4.1, vs 7.2 cc±4.5 for non-CS (p=0.643) and 8.0 cc±1.6 for controls. Mean values were corrected for body surface area: 4.7 cc/kg/m²±2.2 for CS, and 3.9 cc/kg/m²±1.3 for non-CS (p=0.189). Adrenal volume and midnight cortisol in both groups were positively correlated (r=0.35, p=0.03). We conclude that adrenal volume measured by 3D CT in patients with PPNAD and CS was similar to that in those without CS, confirming empirical CT imaging-based observations. However, the association between adrenal volume and midnight cortisol levels may be used as a marker of who among patients with PPNAD may develop CS, something that routine CT cannot do.
Hydroelectric structures studies using 3-dimensional methods
Harrell, T.R.; Jones, G.V.; Toner, C.K.
1989-01-01
Deterioration and degradation of aged hydroelectric project structures can significantly affect the operation and safety of a project. In many cases, hydroelectric headworks (in particular) have complicated geometrical configurations, loading patterns and hence, stress conditions. An accurate study of such structures can be performed using 3-dimensional computer models. 3-D computer models can be used for both stability evaluation and for finite element stress analysis. Computer aided engineering processes facilitate the use of 3-D methods in both pre-processing and post-processing of data. Two actual project examples are used to emphasize the authors' points.
Joubert, Pierre
2008-10-22
High-resolution infrared and Raman spectroscopies require refined spectral line shape models to account for all observed features. For instance, for gaseous mixtures of light molecules with heavy perturbers, drastic changes arise particularly in the collision regime, resulting from the inhomogeneous effects due to the radiator speed dependence of the collisional line broadening and line shifting parameters. Following our previous work concerning the collision regime, we have extended a new line shape model, the Keilson-Storer 3-dimensional line shape model, to lower densities, where the Doppler contribution and the collisional confinement narrowing can no longer be neglected. The consequences for optical diagnostics, particularly for H2-N2 mixtures at high pressure and high temperature, are presented. The effects of collisional relaxation on the spectral line shapes are discussed.
Numerical model of electromagnetic scattering off a subterranean 3-dimensional dielectric
Dease, C.G.; Didwall, E.M.
1983-08-01
As part of the effort to develop On-Site Inspection (OSI) techniques for verification of compliance to a Comprehensive Test Ban Treaty (CTBT), a computer code was developed to predict the interaction of an electromagnetic (EM) wave with an underground cavity. Results from the code were used to evaluate the use of surface electromagnetic exploration techniques for detection of underground cavities or rubble-filled regions characteristic of underground nuclear explosions.
Fast time variations of supernova neutrino signals from 3-dimensional models
Lund, Tina; Wongwathanarat, Annop; Janka, Hans-Thomas; ...
2012-11-19
Here, we study supernova neutrino flux variations in the IceCube detector, using 3D models based on a simplified neutrino transport scheme. The hemispherically integrated neutrino emission shows significantly smaller variations compared with our previous study of 2D models, largely because of the reduced activity of the standing accretion shock instability in this set of 3D models which we interpret as a pessimistic extreme. For the studied cases, intrinsic flux variations up to about 100 Hz frequencies could still be detected in a supernova closer than about 2 kpc.
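Flux variations at ~100 Hz would show up as a spectral peak in the binned detector rate; a hedged sketch using a synthetic signal (the sampling rate, amplitudes, and noise level are assumptions, not the authors' pipeline):

```python
import numpy as np

# Synthetic binned event rate: steady mean plus a small 100 Hz modulation and noise
fs = 2000.0                       # sampling rate of the binned counts, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of data
rng = np.random.default_rng(0)
rate = 1000.0 + 30.0 * np.sin(2 * np.pi * 100.0 * t) + rng.normal(0, 20.0, t.size)

# Periodogram of the mean-subtracted rate: the modulation appears as a peak at 100 Hz
spec = np.abs(np.fft.rfft(rate - rate.mean()))**2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_peak = freqs[np.argmax(spec)]
```

With 2 s of data the frequency resolution is 0.5 Hz, so the injected 100 Hz line falls exactly on a Fourier bin and dominates the noise floor.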
Visualization of the 3-dimensional flow around a model with the aid of a laser knife
NASA Technical Reports Server (NTRS)
Borovoy, V. Y.; Ivanov, V. V.; Orlov, A. A.; Kharchenko, V. N.
1984-01-01
A method for visualizing the three-dimensional flow around models of various shapes in a wind tunnel at a Mach number of 5 is described. A laser provides a planar light flux such that any plane through the model can be selectively illuminated. The shape of shock waves and separation regions is then determined by the intensity of light scattered by soot particles in the flow.
Remanent magnetization and 3-dimensional density model of the Kentucky anomaly region
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Estes, R. H.; Myers, D. M.
1984-01-01
A three-dimensional model of the Kentucky body was developed to fit surface gravity and long-wavelength aeromagnetic data. Magnetization and density parameters for the model are much like those of Mayhew et al. (1982). The magnetic anomaly due to the model at satellite altitude is shown to be much too small by itself to account for the anomaly measured by Magsat. It is demonstrated that the source region for the satellite anomaly is considerably more extensive than the Kentucky body sensu stricto. The extended source region is modeled first using prismatic model sources and then using dipole array sources. Magnetization directions for the source region, found by inversion of various combinations of scalar and vector data, are close to the main field direction, implying the lack of a strong remanent component. It is shown by simulation that in a case (such as this) where the geometry of the source is known, if a strong remanent component is present, its direction is as readily detectable by scalar data as by vector data.
A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.
Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H
2016-06-01
Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly emergent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real-world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a new, highly capable piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high-viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types and is operable in magnetic resonance imaging environments. Performance of the system was validated using image-processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at relatively low cost.
Accretion Onto Supermassive Black Holes: Observational Signals from 3-Dimensional Disk Models
NASA Technical Reports Server (NTRS)
Bromley, Benjamin C.; Miller, Warner A.
2003-01-01
Our project was to model accretion flows onto supermassive black holes which reside in the centers of many galaxies. In this report we summarize the results which we obtained with the support of our NASA ATP grant. The scientific results associated with the grant are given in approximately chronological order. We also provide a list of references which acknowledge funding from this grant.
A simple, analytic 3-dimensional downburst model based on boundary layer stagnation flow
NASA Technical Reports Server (NTRS)
Oseguera, Rosa M.; Bowles, Roland L.
1988-01-01
A simple downburst model is developed for use in batch and real-time piloted simulation studies of guidance strategies for terminal-area transport aircraft operations in wind shear conditions. The model represents an axisymmetric stagnation-point flow, based on velocity profiles from the Terminal Area Simulation System (TASS) model developed by Proctor, and satisfies the mass continuity equation in cylindrical coordinates. Altitude dependence, including boundary-layer effects near the ground, closely matches real-world measurements, as do the increase, peak, and decay of outflow and downflow with increasing distance from the downburst center. The equations derived for the horizontal and vertical winds are infinitely differentiable, with no singular points in the flow field. In addition, a simple relationship exists among the ratio of maximum horizontal to vertical velocities, the downdraft radius, the depth of outflow, and the altitude of maximum outflow. In use, a microburst is modeled by specifying four characteristic parameters; the velocity components in the x, y, and z directions and the corresponding nine partial derivatives are then obtained easily from the velocity equations.
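The backbone of such a model, axisymmetric stagnation-point flow, satisfies mass continuity by construction. A minimal check of that property for the textbook flow u_r = a*r, w = -2*a*z (a bare simplification without the paper's TASS-based altitude shaping):

```python
a = 0.01  # downdraft strength parameter, 1/s (hypothetical)

def u_r(r, z):
    # radial outflow, linear in distance from the downburst axis
    return a * r

def w(r, z):
    # downdraft, linear in altitude, directed toward the ground
    return -2.0 * a * z

# Continuity in cylindrical coordinates: (1/r) d(r*u_r)/dr + dw/dz
# For this flow the first term is 2a and the second is -2a, so the sum is 0.
r0, z0, h = 500.0, 300.0, 1e-3
d_rur_dr = ((r0 + h) * u_r(r0 + h, z0) - (r0 - h) * u_r(r0 - h, z0)) / (2 * h)
dw_dz = (w(r0, z0 + h) - w(r0, z0 - h)) / (2 * h)
div = d_rur_dr / r0 + dw_dz
```

The finite-difference divergence vanishes to rounding error at any (r, z), mirroring the paper's claim that the model satisfies continuity exactly.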
3-dimensional spatially organized PEG-based hydrogels for an aortic valve co-culture model
Puperi, Daniel S.; Balaoing, Liezl R.; O’Connell, Ronan W.; West, Jennifer L.; Grande-Allen, K. Jane
2015-01-01
Physiologically relevant in vitro models are needed to study disease progression and to develop and screen potential therapeutic interventions for disease. Heart valve disease, in particular, has no early intervention or non-invasive treatment because there is a lack of understanding the cellular mechanisms which lead to disease. Here, we establish a novel, customizable synthetic hydrogel platform that can be used to study cell-cell interactions and the factors which contribute to valve disease. Spatially localized cell adhesive ligands bound in the scaffold promote cell growth and organization of valve interstitial cells and valve endothelial cells in 3D co-culture. Both cell types maintained phenotypes, homeostatic functions, and produced zonally localized extracellular matrix. This model extends the capabilities of in vitro research by providing a platform to perform direct contact co-culture with cells in their physiologically relevant spatial arrangement. PMID:26241755
Chan, T V Chow Ting; Tang, J; Younce, F
2004-01-01
This paper presents a new, yet simple and effective, approach to modeling industrial radio-frequency heating systems, using the wave equation applied in three dimensions instead of the conventional electrostatic method. The central idea is that the tank oscillatory circuit is excited by an external source; this in turn excites the applicator circuit, which heats or dries the processed load. Good agreement was obtained between the experimental and numerical data, namely the S11-parameter, phase, and heating patterns, for different load sizes and positions.
3-Dimensional Geometric Survey and Structural Modelling of the Dome of Pisa Cathedral
NASA Astrophysics Data System (ADS)
Aita, D.; Barsotti, R.; Bennati, S.; Caroti, G.; Piemonte, A.
2017-02-01
This paper aims to illustrate the preliminary results of a research project on the dome of Pisa Cathedral (Italy). The final objective of the present research is to achieve a deep understanding of the structural behaviour of the dome, through a detailed knowledge of its geometry and constituent materials, and by taking into account historical and architectural aspects as well. A reliable survey of the dome is the essential starting point for any further investigation and adequate structural modelling. Examination of the status quo on the surveys of the Cathedral dome shows that a detailed survey suitable for structural analysis is in fact lacking. For this reason, high-density and high-precision surveys have been planned, by considering that a different survey output is needed, according both to the type of structural model chosen and purposes to be achieved. Thus, both range-based (laser scanning) and image-based (3D Photogrammetry) survey methodologies have been used. This contribution introduces the first results concerning the shape of the dome derived from surveys. Furthermore, a comparison is made between such survey outputs and those available in the literature.
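Comparing range-based and image-based survey outputs typically reduces to cloud-to-cloud nearest-neighbor distances; a brute-force sketch on synthetic point clouds (hypothetical coordinates, not the Pisa survey data):

```python
import numpy as np

def cloud_to_cloud(a, b):
    # For each point in cloud `a`, distance to its nearest neighbor in cloud `b`
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1)

rng = np.random.default_rng(1)
scan = rng.uniform(0, 1, (200, 3))                # stand-in for the laser-scan cloud
photo = scan + rng.normal(0, 0.002, scan.shape)   # photogrammetric cloud, small noise
dist = cloud_to_cloud(scan, photo)                # per-point discrepancy, same units
```

For real surveys with millions of points, a k-d tree replaces the O(n^2) distance matrix, but the per-point discrepancy statistic is the same.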
NASA Technical Reports Server (NTRS)
Fujii, K.
1983-01-01
A method for generating three-dimensional, finite-difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are chosen automatically so that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, the method appears to be a useful tool for generating three-dimensional grids about complicated geometries.
Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn
2015-01-01
Objectives To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. Methods Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. Results There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D<0.17 mm), as expected, followed by the AC and BZ superimpositions, which presented similar levels of accuracy (D<0.5 mm). 3P and 1Z were the least accurate superimpositions (0.79
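Surface superimposition of this kind rests on rigid least-squares registration. A generic sketch of the Kabsch (SVD-based) algorithm, one standard way to compute the best-fit rotation and translation (not necessarily the software used in the study):

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q (least squares)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Check: recover a known rotation about z plus a translation from noiseless points
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
P = np.random.default_rng(2).uniform(-1, 1, (50, 3))
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(P, Q)
residual = float(np.linalg.norm(P @ R.T + t - Q))
```

On noiseless correspondences the recovery is exact; with real surface data the same machinery runs inside an iterative closest-point loop.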
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.
1982-01-01
The six-volume report: describes the theory of a three dimensional (3-D) mathematical thermal discharge model and a related one dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free-surface model also provides surface height variations with time.
Global simulation of canopy-scale sun-induced chlorophyll fluorescence with a 3-dimensional radiative transfer model
NASA Astrophysics Data System (ADS)
Kobayashi, H.; Yang, W.; Ichii, K.
2015-12-01
Plant canopy-scale sun-induced chlorophyll fluorescence (SIF) can be observed from satellites, such as the Greenhouse gases Observing Satellite (GOSAT), the Orbiting Carbon Observatory-2 (OCO-2), and the Global Ozone Monitoring Experiment-2 (GOME-2), using Fraunhofer lines in the near-infrared spectral domain [1]. SIF is used to infer the photosynthetic capacity of the plant canopy [2]. However, it is not well understood how the leaf-level SIF emission contributes to the top-of-canopy directional SIF, because the satellites observe SIF in the near-infrared spectral domain, where multiple scattering among leaves is not negligible. It is necessary to quantify the fraction of emission for each satellite observation angle. The absorbed photosynthetically active radiation of sunlit leaves is 100 times higher than that of shaded leaves; thus, the contributions of sunlit and shaded leaves to the canopy-scale directional SIF emission should also be quantified. Here, we show the results of a global simulation of SIF using a 3-dimensional radiative transfer simulation with MODIS atmospheric (aerosol optical thickness) and land (land cover and leaf area index) products and forest landscape data sets prepared for each land cover category. The results are compared with satellite-based SIF (e.g., GOME-2) and the gross primary production empirically estimated from FLUXNET and remote sensing data.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.
1982-01-01
The six-volume report: describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, includes model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.
NASA Astrophysics Data System (ADS)
Sergienko, O.; Macayeal, D. R.
2007-12-01
With growing observational awareness of numerous ice-stream processes occurring on short time and spatial scales, e.g., sub-ice-stream lake volume changes and grounding-line sediment wedge build-up, the question of how well models based on "reduced-order" dynamics can simulate ice-stream behavior becomes paramount. Reduced-order models of ice-streams are typically 2-dimensional, and capture only the largest-magnitude terms in the stress tensor (with other terms being constrained by various assumptions). In predicting the overall magnitude and large-scale pattern of ice-stream flow, the reduced-order models appear to be adequate. Efforts underway in the Glaciological Community to create 3-dimensional models of the "full" ice-stream stress balance, which relax the assumptions associated with reduced-order models, suggest that a cost/benefit analysis should be done to determine how likely these efforts will be fruitful. To assess the overall benefits of full 3-dimensional models in relation to the simpler 2-dimensional counterparts, we present model solutions of the full Stokes equations for ice-stream flow over a variety of basal perturbations (e.g., a sticky spot, a subglacial lake, a grounding line). We also present the solutions derived from reduced 2-dimensional models, and compare the two solutions to estimate effects of simplifications and neglected terms, as well as to advise on what circumstances 3-dimensional models are preferable to 2-dimensional models.
Hoelting, Lisa; Scheinhardt, Benjamin; Bondarenko, Olesja; Schildknecht, Stefan; Kapitza, Marion; Tanavde, Vivek; Tan, Betty; Lee, Qian Yi; Mecking, Stefan; Leist, Marcel; Kadereit, Suzanne
2013-04-01
Nanoparticles (NPs) have been shown to accumulate in organs, cross the blood-brain barrier and placenta, and have the potential to elicit developmental neurotoxicity (DNT). Here, we developed a human embryonic stem cell (hESC)-derived 3-dimensional (3-D) in vitro model that allows for testing of potential developmental neurotoxicants. Early central nervous system PAX6(+) precursor cells were generated from hESCs and differentiated further within 3-D structures. The 3-D model was characterized for neural marker expression revealing robust differentiation toward neuronal precursor cells, and gene expression profiling suggested a predominantly forebrain-like development. Altered neural gene expression due to exposure to non-cytotoxic concentrations of the known developmental neurotoxicant, methylmercury, indicated that the 3-D model could detect DNT. To test for specific toxicity of NPs, chemically inert polyethylene NPs (PE-NPs) were chosen. They penetrated deep into the 3-D structures and impacted gene expression at non-cytotoxic concentrations. NOTCH pathway genes such as HES5 and NOTCH1 were reduced in expression, as well as downstream neuronal precursor genes such as NEUROD1 and ASCL1. FOXG1, a patterning marker, was also reduced. As loss of function of these genes results in severe nervous system impairments in mice, our data suggest that the 3-D hESC-derived model could be used to test for Nano-DNT.
Viel, Guido; Cecchetto, Giovanni; Manara, Renzo; Cecchetto, Attilio; Montisci, Massimo
2011-06-01
Patients affected by cranial trauma with depressed skull fractures and increased intracranial pressure generally undergo neurosurgical intervention. Because craniotomy and craniectomy remove skull fragments and generate new fracture lines, they complicate forensic examination and sometimes prevent a clear identification of skull fracture etiology. A 3-dimensional reconstruction based on preoperative computed tomography (CT) scans, giving a picture of the injuries before surgical intervention, can help the forensic examiner identify the origin of a skull fracture and the means of its production. We report the case of a 41-year-old man presenting at the emergency department with a depressed skull fracture at the vertex and bilateral subdural hemorrhage. The patient underwent 2 neurosurgical interventions (craniotomy and craniectomy) but died after 40 days of hospitalization in an intensive care unit. At autopsy, the absence of various bone fragments did not allow us to establish whether the skull had been struck by a blunt object or had hit the ground with high kinetic energy. To analyze the bone injuries before craniectomy, a 3-dimensional CT reconstruction based on preoperative scans was performed. A comparative analysis of the autopsy and radiological data allowed us to differentiate surgical from traumatic injuries. Moreover, based on the shape and size of the depressed skull fracture (measured from the CT reformations), we inferred that the man had been struck by a cylindrical blunt object with a diameter of about 3 cm.
Aljitawi, Omar S.; Li, Dandan; Xiao, Yinghua; Zhang, Da; Ramachandran, Karthik; Stehno-Bittel, Lisa; Van Veldhuizen, Peter; Lin, Tara L.; Kambhampati, Suman; Garimella, Rama
2014-01-01
The disparate responses of leukemia cells to chemotherapy in vivo, compared to in vitro, is partly related to the interactions of leukemic cells and the 3 dimensional (3D) bone marrow stromal microenvironment. We investigated the effects of chemotherapy agents on leukemic cell lines co-cultured with human bone marrow mesenchymal stem cell (hu-BM-MSC) in 3D. Comparison was made to leukemic cells treated in suspension, or grown on a hu-BM-MSC monolayer (2D conditions). We demonstrated that leukemic cells cultured in 3D were more resistant to drug-induced apoptosis compared to cells cultured in 2D or in suspension. We also demonstrated significant differences in leukemic cell response to chemotherapy using different leukemic cell lines cultured in 3D. We suggest that the differential responses to chemotherapy in 3D may be related to the expression of N-cadherin in the co-culture system. This unique model provides an opportunity to study leukemic cell responses to chemotherapy in 3D. PMID:23566162
Stephenson, Robert S; Boyett, Mark R; Hart, George; Nikolaidou, Theodora; Cai, Xue; Corno, Antonio F; Alphonso, Nelson; Jeffery, Nathan; Jarvis, Jonathan C
2012-01-01
The general anatomy of the cardiac conduction system (CCS) has been known for 100 years, but its complex and irregular three-dimensional (3D) geometry is not so well understood, largely because the conducting tissue cannot be distinguished from the surrounding tissue by dissection. The best descriptions of its anatomy come from studies based on serial sectioning of samples taken from the appropriate areas of the heart. Low X-ray attenuation has formerly ruled out micro-computed tomography (micro-CT) as a modality to resolve internal structures of soft tissue, but incorporation of iodine, which has a high atomic number, into those tissues enhances the differential attenuation of X-rays and allows visualisation of fine detail in embryos and skeletal muscle. Here, with the use of an iodine-based contrast agent (I2KI), we present contrast-enhanced micro-CT images of cardiac tissue from rat and rabbit in which the three major subdivisions of the CCS can be differentiated from the surrounding contractile myocardium and visualised in 3D. Structures identified include the sinoatrial node (SAN) and the atrioventricular conduction axis: the penetrating bundle, His bundle, the bundle branches, and the Purkinje network. Although the current findings are consistent with existing anatomical representations, the representations shown here offer superior resolution and are the first 3D representations of the CCS within a single intact mammalian heart.
NASA Astrophysics Data System (ADS)
Mansoor, K.; Maley, M. P.; Demir, Z.; Noyes, C.
2001-12-01
Lawrence Livermore National Laboratory (LLNL), which is on the Superfund National Priorities List, is implementing an extensive ground water remediation program. The environmental investigation covers an area of about 2 square miles underlain by a thick sequence of heterogeneous alluvial sediments. These sediments have been subdivided into hydrostratigraphic units (HSUs) bounded by thin confining layers that were identified using a deterministic approach. LLNL currently operates a large ground water extraction system that includes 80 ground water extraction wells connected to 25 separate treatment facilities. These combined facilities have treated about 308 million gallons of ground water at an average combined flow rate of 600 gpm and removed about 270 kg of volatile organic compounds (VOCs). To better manage this large, complex remediation system, a 3-dimensional finite-element numerical model was developed using FEFLOW. The model simulates a 7-square-mile portion of the large Livermore Valley ground water basin. The quality of the input data varied from highly detailed, in the environmental investigation areas, to sparse, near some of the model domain boundaries. These different data sets had to be integrated to obtain the necessary boundary conditions and input parameters for the model. Hydraulic conductivities were averaged from measured lithologic descriptions and hydraulic test data. Boundary conditions were based on a local and regional assessment of groundwater elevation data representative of observed inflow/outflow boundaries. The model was initially calibrated to a set of 8 distinct hydrologic stress periods over 12 years. Initial flow calibration was achieved using the parameter estimation tool PEST. Through successive data analysis and calibration, optimal parameters were established for each HSU and expanded to 35 hydrologic stress periods covering the entire recorded hydrologic history. VOC transport was calibrated to 9 years of
Ravikiran, S.R.; Kumar, Ashvini; Chavadi, Channabasappa; Pulastya, Sanyal
2015-01-01
Purpose To evaluate the thickness, location, and orientation of the optic strut and anterior clinoid process and variations in the paraclinoid region, based solely on multidetector computed tomography (MDCT) images with multiplanar (MPR) and 3-dimensional (3D) reconstructions, in an Indian population. Materials and Methods Ninety-five CT scans of head and paranasal sinus patients were retrospectively evaluated with MPR and 3D reconstructions to assess optic strut thickness, angle, and location, and variations such as pneumatisation, carotico-clinoid foramen, and inter-clinoid osseous ridge. Results Mean optic strut thickness was 3.64 mm (±0.64), and the optic strut angle was 42.67 (±6.16) degrees. Mean width and length of the anterior clinoid process were 10.65 mm (±0.79) and 11.20 mm (±0.95), respectively. Optic strut attachment to the sphenoid body was predominantly sulcal, as in 52 cases (54.74%), and was most frequently attached to the anterior 2/5th of the anterior clinoid process, seen in 93 sides (48.95%). Pneumatisation of the optic strut occurred in 23 sides. Carotico-clinoid foramen was observed in 42 cases (22.11%): complete foramen in 10 cases (5.26%), incomplete foramen in 24 cases (12.63%), and contact type in 8 cases (4.21%). An inter-clinoid osseous bridge was seen unilaterally in 4 cases. Conclusion The study assesses morphometric features and anatomical variations of the paraclinoid region using MDCT 3D and multiplanar reconstructions in an Indian population. PMID:26557589
FERRARIO, VIRGILIO F.; SFORZA, CHIARELLA; SCHMITZ, JOHANNES H.; CIUSA, VERONICA; COLOMBO, ANNA
2000-01-01
A 3-dimensional computerised system with landmark representation of the soft-tissue facial surface allows noninvasive and fast quantitative study of facial growth. The aims of the present investigation were (1) to provide reference data for selected dimensions of lips (linear distances and ratios, vermilion area, volume); (2) to quantify the relevant growth changes; and (3) to evaluate sex differences in growth patterns. The 3-dimensional coordinates of 6 soft-tissue landmarks on the lips were obtained by an optoelectronic instrument in a mixed longitudinal and cross-sectional study (2023 examinations in 1348 healthy subjects between 6 y of age and young adulthood). From the landmarks, several linear distances (mouth width, total vermilion height, total lip height, upper lip height), the vermilion height-to-mouth width ratio, some areas (vermilion of the upper lip, vermilion of the lower lip, total vermilion) and volumes (upper lip volume, lower lip volume, total lip volume) were calculated and averaged for age and sex. Male values were compared with female values by means of Student's t test. Within each age group all lip dimensions (distances, areas, volumes) were significantly larger in boys than in girls (P < 0.05), with some exceptions in the first age groups and coinciding with the earlier female growth spurt, whereas the vermilion height-to-mouth width ratio did not show a corresponding sexual dimorphism. Linear distances in girls had almost reached adult dimensions in the 13–14 y age group, while in boys a large increase was still to occur. The attainment of adult dimensions was faster in the upper than in the lower lip, especially in girls. The method used in the present investigation allowed the noninvasive evaluation of a large sample of nonpatient subjects, leading to the definition of 3-dimensional normative data. Data collected in the present study could represent a data base for the quantitative description of human lip morphology from childhood to
ABSTRACTION OF INFORMATION FROM 2- AND 3-DIMENSIONAL PORFLOW MODELS INTO A 1-D GOLDSIM MODEL - 11404
Taylor, G.; Hiergesell, R.
2010-11-16
The Savannah River National Laboratory has developed a 'hybrid' approach to Performance Assessment modeling which has been used for a number of Performance Assessments. This hybrid approach uses a multi-dimensional modeling platform (PorFlow) to develop deterministic flow fields and perform contaminant transport. The GoldSim modeling platform is used to develop the Sensitivity and Uncertainty analyses. Because these codes perform complementary tasks, they must produce very similar results for the deterministic cases. This paper discusses two very different waste forms, one with no engineered barriers and one with engineered barriers, each of which presents different challenges to the abstraction of data. The hybrid approach to Performance Assessment modeling used at the SRNL uses a 2-D unsaturated zone (UZ) and a 3-D saturated zone (SZ) model in the PorFlow modeling platform. The UZ model consists of the waste zone and the unsaturated zone between the waste zone and the water table. The SZ model consists of source cells beneath the waste form to the points of interest. Both models contain 'buffer' cells so that modeling domain boundaries do not adversely affect the calculation. The information pipeline between the two models is the contaminant flux. The domain contaminant flux, typically in units of moles (or Curies) per year, from the UZ model is used as a boundary condition for the source cells in the SZ. The GoldSim modeling component of the hybrid approach is an integrated UZ-SZ model. The model is a 1-D representation of the SZ, and typically 1-D in the UZ, but as discussed below, depending on the waste form being analyzed, may contain pseudo-2-D elements. A waste form at the Savannah River Site (SRS) which has no engineered barriers is commonly referred to as a slit trench. A slit trench, as its name implies, is an unlined trench, typically 6 m deep, 6 m wide, and 200 m long. Low level waste consisting of soil, debris, rubble, wood
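The UZ-to-SZ 'information pipeline' described above is a flux time series applied as a source term. A toy sketch of such a coupling with illustrative compartments (hypothetical numbers and transfer rule, not the PorFlow/GoldSim implementation):

```python
# Toy UZ -> SZ coupling: the annual contaminant flux computed by the
# unsaturated-zone model becomes the source term of a 1-D chain of
# saturated-zone cells with a simple advective transfer rule.
uz_flux = [0.0, 0.5, 1.2, 0.8, 0.3, 0.1]  # mol/yr leaving the UZ (hypothetical)

n_cells = 4      # 1-D saturated-zone discretization (assumed)
transfer = 0.5   # fraction advected to the next cell each year (assumed)
sz = [0.0] * n_cells
exited = 0.0     # mass that has left the domain past the last cell

for flux in uz_flux:
    sz[0] += flux                        # UZ flux enters the first source cell
    moved = [transfer * m for m in sz]   # advective transfer out of every cell
    for i in range(n_cells):
        sz[i] -= moved[i]
        if i + 1 < n_cells:
            sz[i + 1] += moved[i]        # into the next cell downstream
        else:
            exited += moved[i]           # or out of the model domain
```

Whatever the transfer rule, the coupling must conserve mass: everything the UZ delivers is either still in the SZ cells or has exited the domain.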
3-Dimensional Computational Fluid Dynamics Modeling of Solid Oxide Fuel Cell Using Different Fuels
2011-01-01
The major types of fuel cells in practice are listed below: Polymer Electrolyte Membrane Fuel Cell (PEMFC), Alkaline Fuel Cell (AFC), Phosphoric Acid Fuel Cell (PAFC), and Molten Carbonate Fuel Cell (MCFC).

Type  | Fuel                      | Electrolyte Material        | Operating Temperature (°C) | Efficiency (%)
PEMFC | H2, Methanol, Formic Acid | Hydrated Organic Polymer    | < 90                       | 40-50
AFC   | Pure H2                   | Aqueous potassium hydroxide | 60-250                     | 50
PAFC  | Pure H2                   | Phosphoric Acid             | 180-210                    | 40
MCFC  | H2, CH4, CH3OH            | Molten Alkali Carbonate     | 600-700                    | 45-55
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library, which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices, tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
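The data-space approach pays off because the Gauss-Newton update can be computed from an N x N system (N = number of data) instead of an M x M one (M = number of model parameters, with M >> N in 3-D inversion). A numpy sketch of the underlying matrix identity for a simple damped least-squares update (schematic, not HexMT itself):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, lam = 500, 20, 0.1       # many model cells, few data, damping parameter
J = rng.normal(size=(N, M))    # Jacobian (data sensitivities)
d = rng.normal(size=N)         # data residual vector

# Model-space update: solve an M x M system
m_model = np.linalg.solve(J.T @ J + lam * np.eye(M), J.T @ d)

# Data-space update: the identical result from an N x N system,
# via the identity (J^T J + lam I)^-1 J^T = J^T (J J^T + lam I)^-1
m_data = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(N), d)
```

The same identity generalizes to nontrivial data and model covariances, which is what makes the factored-regularization approach described in the abstract attractive.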
This report presents a three-dimensional finite-element numerical model designed to simulate chemical transport in subsurface systems with temperature effect taken into account. The three-dimensional model is developed to provide (1) a tool of application, with which one is able...
2012-04-20
Indian tectonic plates. Without knowing the true lateral changes in anisotropy and including large continental provinces within the model it is...also significantly increase anomaly strength while sharpening the anomaly edges to create stronger and more pronounced tectonic structures. The
Mirsch, Johanna; Tommasino, Francesco; Frohns, Antonia; Conrad, Sandro; Durante, Marco; Scholz, Michael; Friedrich, Thomas; Löbrich, Markus
2015-01-01
Charged particles are increasingly used in cancer radiotherapy and contribute significantly to the natural radiation risk. The difference in the biological effects of high-energy charged particles compared with X-rays or γ-rays is determined largely by the spatial distribution of their energy deposition events. Part of the energy is deposited in a densely ionizing manner in the inner part of the track, with the remainder spread out more sparsely over the outer track region. Our knowledge about the dose distribution is derived solely from modeling approaches and physical measurements in inorganic material. Here we exploited the exceptional sensitivity of γH2AX foci technology and quantified the spatial distribution of DNA lesions induced by charged particles in a mouse model tissue. We observed that charged particles damage tissue nonhomogenously, with single cells receiving high doses and many other cells exposed to isolated damage resulting from high-energy secondary electrons. Using calibration experiments, we transformed the 3D lesion distribution into a dose distribution and compared it with predictions from modeling approaches. We obtained a radial dose distribution with sub-micrometer resolution that decreased with increasing distance to the particle path following a 1/r² dependency. The analysis further revealed the existence of a background dose at larger distances from the particle path arising from overlapping dose deposition events from independent particles. Our study provides, to our knowledge, the first quantification of the spatial dose distribution of charged particles in biologically relevant material, and will serve as a benchmark for biophysical models that predict the biological effects of these particles. PMID:26392532
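The reported dose profile, a 1/r² falloff plus a constant background from overlapping independent tracks, can be written as a two-term model. The parameter values below are placeholders for illustration, not fitted values from the study.

```python
def radial_dose(r, k=1.0, background=1e-4):
    """Dose at radial distance r from a particle track: a 1/r^2 core
    falloff plus a constant background contributed by overlapping,
    independent particle tracks.  k and background are hypothetical."""
    return k / r**2 + background

# Close to the track the 1/r^2 term dominates; far away, the background
near = radial_dose(0.01)
far = radial_dose(1000.0)
```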
NASA Astrophysics Data System (ADS)
Bahlake, Ahmad; Farivar, Foad; Dabir, Bahram
2016-07-01
In this paper a 3-dimensional model of the simultaneous stripping of carbon dioxide (CO2) and hydrogen sulfide (H2S) from water using a hollow fiber membrane made of polyvinylidene fluoride is developed. The water containing CO2 and H2S enters the membrane as feed. At the same time, pure nitrogen flows in the shell side of a shell-and-tube hollow fiber module as the solvent. In previous approaches to modeling hollow fiber membranes, just one of the fibers was modeled and the results were extrapolated to the whole shell-and-tube system. In this research the whole hollow fiber shell-and-tube module is modeled to reduce these errors. Simulation results showed that increasing the velocity of the solvent flow and decreasing the velocity of the feed lead to an increase in the system yield; however, the effect of the feed velocity on the process is greater than the influence of changing the velocity of the gaseous solvent. In addition, H2S stripping has a higher yield than CO2 stripping. The model is compared to the previous modeling methods, and the comparison shows that the new model is more accurate. Finally, the effect of feed temperature is studied using the response surface method, and the operating conditions of feed temperature, feed velocity, and solvent velocity are optimized according to synergistic effects. Simulation results show that, in the optimum operating conditions, the removal percentages of H2S and CO2 are 27% and 21%, respectively.
Lin, Hsiu-Hsia; Chuang, Ya-Fang; Weng, Jing-Ling; Lo, Lun-Jou
2015-01-01
Background Three-dimensional computed tomographic imaging has become popular in clinical evaluation, treatment planning, surgical simulation, and outcome assessment for maxillofacial intervention. The purposes of this study were to investigate whether there is any correlation among landmark-based horizontal reference planes and to validate the reproducibility and reliability of landmark identification. Materials and Methods Preoperative and postoperative cone-beam computed tomographic images of patients who had undergone orthognathic surgery were collected. Landmark-oriented reference planes including the Frankfort horizontal plane (FHP) and the lateral semicircular canal plane (LSP) were established. Four FHPs were defined by selecting 3 points from the orbitale, porion, or midpoint of paired points. The LSP passed through both the lateral semicircular canal points and nasion. The distances between the maxillary or mandibular teeth and the reference planes were measured, and the differences between the 2 sides were calculated and compared. The precision in locating the landmarks was evaluated by performing repeated tests, and the intraobserver reproducibility and interobserver reliability were assessed. Results A total of 30 patients with facial deformity and malocclusion—10 patients with facial symmetry, 10 patients with facial asymmetry, and 10 patients with cleft lip and palate—were recruited. Comparing the differences among the 5 reference planes showed no statistically significant difference among all patient groups. Regarding intraobserver reproducibility, the mean differences in the 3 coordinates varied from 0 to 0.35 mm, with correlation coefficients between 0.96 and 1.0, showing high correlation between repeated tests. Regarding interobserver reliability, the mean differences among the 3 coordinates varied from 0 to 0.47 mm, with correlation coefficients between 0.88 and 1.0, exhibiting high correlation between the different examiners. Conclusions The
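As an illustration of the geometry involved, a landmark-based reference plane can be defined from three points and a tooth-to-plane measurement taken as a signed point-to-plane distance. The coordinates and function names below are invented for the sketch, not taken from the study's data.

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Unit normal and offset of the plane through three landmarks
    (e.g. orbitale and both poria for a Frankfort horizontal plane)."""
    p1 = np.asarray(p1, dtype=float)
    n = np.cross(np.asarray(p2, dtype=float) - p1,
                 np.asarray(p3, dtype=float) - p1)
    n = n / np.linalg.norm(n)
    return n, float(np.dot(n, p1))

def point_plane_distance(p, normal, offset):
    """Signed distance of a point (e.g. a tooth landmark) to the plane."""
    return float(np.dot(normal, p) - offset)

# Toy landmarks spanning the z = 0 plane; a point 2 mm above it
n, d = plane_from_points([0, 0, 0], [1, 0, 0], [0, 1, 0])
dist = point_plane_distance([0.5, 0.5, 2.0], n, d)
```

Swapping in orbitale/porion coordinates for an FHP, or the two lateral semicircular canal points plus nasion for the LSP, would give distances of the kind compared in the study.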
Computational Modeling of Tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)
1995-01-01
This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.
Crestanello, Juan A; Daniels, Curt; Franco, Veronica; Raman, Subha V
2010-01-01
The pre- and postoperative evaluation of anomalous pulmonary venous return usually requires multiple invasive and noninvasive tests in order to obtain complete anatomic and functional data. Conversely, in a single setting, either cardiovascular magnetic resonance imaging or cardiovascular computed tomography can sufficiently reveal this information in adult patients. Herein, we present the cases of 2 patients with partial anomalous pulmonary venous return who underwent preoperative and postoperative evaluation by either method alone, and we discuss the benefits and limitations of each technique.
NASA Technical Reports Server (NTRS)
2000-01-01
Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.
Cardiothoracic Applications of 3-dimensional Printing.
Giannopoulos, Andreas A; Steigner, Michael L; George, Elizabeth; Barile, Maria; Hunsaker, Andetta R; Rybicki, Frank J; Mitsouras, Dimitris
2016-09-01
Medical 3-dimensional (3D) printing is emerging as a clinically relevant imaging tool in directing preoperative and intraoperative planning in many surgical specialties and will therefore likely lead to interdisciplinary collaboration between engineers, radiologists, and surgeons. Data from standard imaging modalities such as computed tomography, magnetic resonance imaging, echocardiography, and rotational angiography can be used to fabricate life-sized models of human anatomy and pathology, as well as patient-specific implants and surgical guides. Cardiovascular 3D-printed models can improve diagnosis and allow for advanced preoperative planning. The majority of applications reported involve congenital heart diseases and valvular and great vessels pathologies. Printed models are suitable for planning both surgical and minimally invasive procedures. Added value has been reported toward improving outcomes, minimizing perioperative risk, and developing new procedures such as transcatheter mitral valve replacements. Similarly, thoracic surgeons are using 3D printing to assess invasion of vital structures by tumors and to assist in diagnosis and treatment of upper and lower airway diseases. Anatomic models enable surgeons to assimilate information more quickly than image review, choose the optimal surgical approach, and achieve surgery in a shorter time. Patient-specific 3D-printed implants are beginning to appear and may have significant impact on cosmetic and life-saving procedures in the future. In summary, cardiothoracic 3D printing is rapidly evolving and may be a potential game-changer for surgeons. The imager who is equipped with the tools to apply this new imaging science to cardiothoracic care is thus ideally positioned to innovate in this new emerging imaging modality.
Hui, Catherine; Pi, Yeli; Swami, Vimarsha; Mabee, Myles; Jaremko, Jacob L.
2016-01-01
Background: Anatomic single bundle anterior cruciate ligament (ACL) reconstruction is the current gold standard in ACL reconstructive surgery. However, placement of femoral and tibial tunnels at the anatomic center of the ACL insertion sites can be difficult intraoperatively. We developed a “virtual arthroscopy” program that allows users to identify ACL insertions on preoperative knee magnetic resonance images (MRIs) and generates a 3-dimensional (3D) bone model that matches the arthroscopic view to help guide intraoperative tunnel placement. Purpose: To test the validity of the ACL insertion sites identified using our 3D modeling program and to determine the accuracy of arthroscopic ACL reconstruction guided by our “virtual arthroscopic” model. Study Design: Descriptive laboratory study. Methods: Sixteen cadaveric knees were prescanned using routine MRI sequences. A trained, blinded observer then identified the center of the ACL insertions using our program. Eight knees were dissected, and the centers of the ACL footprints were marked with a screw. In the remaining 8 knees, arthroscopic ACL tunnels were drilled into the center of the ACL footprints based on landmarks identified using our virtual arthroscopic model. Postprocedural MRI was performed on all 16 knees. The 3D distance between pre- and postoperative 3D centers of the ACL was calculated by 2 trained, blinded observers and a musculoskeletal radiologist. Results: With 2 outliers removed, the postoperative femoral and tibial tunnel placements in the open specimens differed by 2.5 ± 0.9 mm and 2.9 ± 0.7 mm from preoperative centers identified on MRI. Postoperative femoral and tibial tunnel centers in the arthroscopic specimens differed by 3.2 ± 0.9 mm and 2.9 ± 0.7 mm, respectively. Conclusion: Our results show that MRI-based 3D localization of the ACL with our virtual arthroscopic modeling program is feasible and does not show a statistically significant difference from an open arthrotomy approach
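The accuracy metric used above (3D Euclidean distance between pre- and postoperative tunnel centers, summarized as mean ± SD) is straightforward to compute. The coordinates below are invented for illustration, not data from the study.

```python
import numpy as np

def center_offsets(pre, post):
    """Euclidean distances (mm) between paired pre- and postoperative
    tunnel centers, with their mean and sample SD.  The coordinate
    values used in the example are hypothetical."""
    d = np.linalg.norm(np.asarray(post, dtype=float)
                       - np.asarray(pre, dtype=float), axis=1)
    return d, d.mean(), d.std(ddof=1)

pre = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
post = np.array([[3.0, 0.0, 0.0], [10.0, 4.0, 0.0]])
dists, mean, sd = center_offsets(pre, post)
```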
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1985-01-01
The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.
2010-01-01
Background Animal models of focal cerebral ischemia are widely used in stroke research. The purpose of our study was to evaluate and compare the cerebral macro- and microvascular architecture of rats in two different models of permanent middle cerebral artery occlusion using an innovative quantitative micro- and nano-CT imaging technique. Methods Four hours of middle cerebral artery occlusion was performed in rats using the macrosphere method or the suture technique. After contrast perfusion, brains were isolated and scanned en bloc using micro-CT at an (8 μm)³ or nano-CT at a (500 nm)³ voxel size to generate 3D images of the cerebral vasculature. The arterial vascular volume fraction and gray scale attenuation were determined, and the significance of differences in measurements was tested with analysis of variance (ANOVA). Results Micro-CT provided quantitative information on vascular morphology. Micro- and nano-CT proved able to visualize and differentiate the vascular occlusion territories produced in both models of cerebral ischemia. The suture technique leads to a remarkable decrease in the intravascular volume fraction of the middle cerebral artery perfusion territory. Blocking the middle cerebral artery with macrospheres, the vascular volume fraction of the involved hemisphere decreased significantly (p < 0.001), independently of the number of macrospheres, and was comparable to the suture method. We established gray scale measurements by which focal cerebral ischemia could be radiographically categorized (p < 0.001). Nano-CT imaging demonstrates collateral perfusion related to different occluded vessel territories after macrosphere perfusion. Conclusion Micro- and nano-CT imaging is feasible for analysis and differentiation of different models of focal cerebral ischemia in rats. PMID:20509884
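The arterial vascular volume fraction reported above is, in essence, the fraction of voxels whose attenuation exceeds a contrast threshold. A toy version on a synthetic volume follows; the threshold and attenuation values are hypothetical, not the study's calibration.

```python
import numpy as np

def vascular_volume_fraction(volume, threshold):
    """Fraction of voxels above a contrast-attenuation threshold,
    a simple proxy for the intravascular volume fraction measured
    from contrast-perfused micro-CT scans."""
    mask = volume > threshold
    return mask.sum() / volume.size

# Synthetic 10x10x10 volume: 200 of 1000 voxels are "perfused"
vol = np.zeros((10, 10, 10))
vol[:2] = 100.0
frac = vascular_volume_fraction(vol, threshold=50.0)
```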
Computer Modeling and Simulation
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
3-dimensional Oil Drift Simulations
NASA Astrophysics Data System (ADS)
Wettre, C.; Reistad, M.; Hjøllo, B.Å.
Simulation of oil drift has been an ongoing activity at the Norwegian Meteorological Institute since the 1970s. The Marine Forecasting Centre provides a 24-hour service for the Norwegian Pollution Control Authority and the oil companies operating in the Norwegian sector; the response time is 30 minutes. From 2002 the service is extended to simulation of oil drift from spills in deep water, using the DeepBlow model developed by SINTEF Applied Chemistry. The oil drift model can be applied to both instantaneous and continuous releases. The changes in the mass of oil and emulsion as a result of evaporation and emulsification are computed. For oil spills in deep water, hydrate formation and gas dissolution are taken into account. The properties of the oil depend on the oil type, and in the present version 64 different types of oil can be simulated. For accurate oil drift simulations it is important to have the best possible data on the atmospheric and oceanic conditions. The oil drift simulations at the Norwegian Meteorological Institute are always based on the most updated data from numerical models of the atmosphere and the ocean. The drift of the surface oil is computed from the vectorial sum of the surface current from the ocean model and the wave-induced Stokes drift computed from wave energy spectra from the wave prediction model. In the new model the current distribution with depth is taken into account when calculating the drift of the dispersed oil droplets. Salinity and temperature profiles from the ocean model are needed in the DeepBlow model. The results of the oil drift simulations can be plotted on sea charts used for navigation, either as trajectory plots or particle plots showing the situation at a given time. The results can also be sent as data files to be included in the user's own GIS system.
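The surface-drift rule described above, a vectorial sum of the model surface current and the wave-induced Stokes drift, reduces to simple vector addition per time step. A minimal sketch; the optional wind-drag term is a common extension in oil-drift models and is an assumption here, not something stated in the text.

```python
def surface_drift(current_uv, stokes_uv, wind_uv=(0.0, 0.0), wind_factor=0.0):
    """Surface oil drift velocity (u, v) as the vector sum of the ocean-model
    surface current and the wave-induced Stokes drift.  The wind-drag term
    (off by default) is a common extension, not part of the description above."""
    u = current_uv[0] + stokes_uv[0] + wind_factor * wind_uv[0]
    v = current_uv[1] + stokes_uv[1] + wind_factor * wind_uv[1]
    return u, v

# Example: 0.2 m/s eastward current plus a small Stokes drift
u, v = surface_drift((0.2, 0.0), (0.05, 0.05))
```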
Computer Model Documentation Guide.
ERIC Educational Resources Information Center
National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.
These guidelines for communicating effectively the details of computer model design and operation to persons with varying interests in a model recommend the development of four different types of manuals to meet the needs of managers, users, analysts and programmers. The guidelines for preparing a management summary manual suggest a broad spectrum…
NASA Astrophysics Data System (ADS)
Hofmeister, Anne M.; Criss, Robert E.
2012-03-01
The fundamental and shared rotational characteristics of the Solar System (nearly circular, co-planar orbits and mostly upright axial spins of the planets) record conditions of origin, yet are not explained by prevailing 2-dimensional disk models. Current planetary spin and orbital rotational energies (R.E.) each nearly equal and linearly depend on the gravitational self-potential of formation (Ug), revealing mechanical energy conservation. We derive -ΔUg ≅ ΔR.E. and stability criteria from thermodynamic principles, and parlay these relationships into a detailed model of simultaneous accretion of the protoSun and planets from the dust-bearing 3-d pre-solar nebula (PSN). Gravitational heating is insignificant because Ug is negative, the 2nd law of thermodynamics must be fulfilled, and ideal gas conditions pertain to the rarified PSN until the objects were nearly fully formed. Combined conservation of angular momentum and mechanical energy during 3-dimensional collapse of spheroidal dust shells in a contracting nebula provides ΔR.E. ≅ R.E. for the central body, whereas for formation of orbiting bodies, ΔR.E. ≅ R.E.f(1 - If/Ii), where I is the moment of inertia and the subscripts i and f denote initial and final states. Orbital data for the inner planets follow 0.04×R.E.f ≅ -Ug, which confirms conservation of angular momentum. Significant loss of spin, attributed to viscous dissipation during differential rotation, masks the initial spin of the un-ignited protoSun predicted by R.E. = -Ug. Heat production occurs after nearly final sizes are reached via mechanisms such as shear during differential rotation and radioactivity. We focus on the dilute stage, showing that the PSN was compositionally graded due to light molecules diffusing preferentially, providing the observed planetary chemistry, and set limits on PSN mass, density, and temperature. From measured planetary masses and orbital characteristics, accounting for dissipation of spin, we deduce mechanisms and the sequence of converting a 3-d dusty cloud to the present 2-d
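The rotational-energy relations in this abstract can be written out in standard notation (a transcription for readability; symbols follow the abstract, with i and f denoting initial and final states):

```latex
% Mechanical energy conservation during nebular collapse
-\Delta U_g \simeq \Delta(\mathrm{R.E.})
% Central body (protoSun)
\Delta(\mathrm{R.E.}) \simeq \mathrm{R.E.}
% Orbiting bodies, I = moment of inertia
\Delta(\mathrm{R.E.}) \simeq \mathrm{R.E.}_f\left(1 - \frac{I_f}{I_i}\right)
% Empirical trend reported for the inner planets
0.04\,\mathrm{R.E.}_f \simeq -U_g
```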
Understanding student computational thinking with computational modeling
NASA Astrophysics Data System (ADS)
Aiken, John M.; Caballero, Marcos D.; Douglas, Scott S.; Burk, John B.; Scanlon, Erin M.; Thoms, Brian D.; Schatz, Michael F.
2013-01-01
Recently, the National Research Council's framework for next generation science standards highlighted "computational thinking" as one of its "fundamental practices". 9th Grade students taking a physics course that employed the Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, written essay, and a series of think-aloud interviews, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than observational) terms tended to have more success in the programming exercise.
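A model of "a baseball in motion" of the kind the students wrote in VPython can be reduced to a few lines of Euler-Cromer integration. The graphics are omitted here so the sketch runs anywhere, and the initial velocities and drag-free physics are illustrative choices, not the course's exact assignment.

```python
def ball_trajectory(v0x, v0y, dt=0.01, g=9.8):
    """Euler-Cromer integration of a drag-free ball launched from the
    origin: update velocity from gravity first, then position.
    Returns the (x, y) points while the ball is at or above the ground."""
    x, y, vx, vy = 0.0, 0.0, v0x, v0y
    points = []
    while y >= 0.0:
        points.append((x, y))
        vy -= g * dt        # gravity acts on the velocity...
        x += vx * dt        # ...then the updated velocity moves the ball
        y += vy * dt
    return points

pts = ball_trajectory(10.0, 10.0)
rng = pts[-1][0]                   # horizontal range at landing
peak = max(y for _, y in pts)      # maximum height reached
```

The analytic range for these launch speeds is 2·v0x·v0y/g ≈ 20.4 m, so the discrete trajectory should land close to that.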
NASA Technical Reports Server (NTRS)
Green, Terry J.
1988-01-01
A Polymer Molecular Analysis Display System (p-MADS) was developed for computer modeling of polymers. This method of modeling allows for the theoretical calculation of molecular properties such as equilibrium geometries, conformational energies, heats of formation, crystal packing arrangements, and other properties. Furthermore, p-MADS has the following capabilities: constructing molecules from internal coordinates (bond lengths, angles, and dihedral angles), Cartesian coordinates (such as X-ray structures), or from stick drawings; manipulating molecules using graphics and making hard-copy representations of the molecules on a graphics printer; and performing geometry optimization calculations on molecules using the methods of molecular mechanics or molecular orbital theory.
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2010-01-01
A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for the Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).
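The core of any Monte Carlo radiative-transfer step, estimating the local mean intensity by random angular sampling, can be sketched as below. This is the plain (not accelerated) method, applied to a hypothetical intensity that depends only on mu = cos(theta); the isotropic part should survive the angular average while the dipole term cancels.

```python
import random

def mean_intensity(intensity, n_samples=20000, seed=1):
    """Monte Carlo estimate of the angle-averaged intensity by uniform
    sampling of directions on the sphere.  Sampling mu = cos(theta)
    uniformly on [-1, 1] gives directions uniform in solid angle.
    Illustrative only; the accelerated scheme above is far more elaborate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        mu = rng.uniform(-1.0, 1.0)
        total += intensity(mu)
    return total / n_samples

# For I(mu) = 1 + mu the dipole term averages out, so the mean is ~1
j_bar = mean_intensity(lambda mu: 1.0 + mu)
```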
Computer modeling of photodegradation
NASA Technical Reports Server (NTRS)
Guillet, J.
1986-01-01
A computer program to simulate the photodegradation of materials exposed to terrestrial weathering environments is being developed. Input parameters would include the solar spectrum, the daily levels and variations of temperature and relative humidity, and materials such as EVA. A brief description of the program, its operating principles, and how it works was initially described. After that, the presentation focuses on the recent work of simulating aging in a normal, terrestrial day-night cycle. This is significant, as almost all accelerated aging schemes maintain a constant light illumination without a dark cycle, and this may be a critical factor not included in acceleration aging schemes. For outdoor aging, the computer model is indicating that the night dark cycle has a dramatic influence on the chemistry of photothermal degradation, and hints that a dark cycle may be needed in an accelerated aging scheme.
Chung, S.; McGill, M.; Preece, D.S.
1994-07-01
Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. The more overburden removed by explosives, the less blasted material there is left to be transported with mechanical equipment such as draglines and trucks. In order to optimize the percentage of rock that is cast, a higher powder factor than normal is required, plus an initiation technique designed to produce a much greater degree of horizontal muck movement. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements, interacting with the flow of explosive gases, and explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties and both loading geometry and explosives equation-of-state parameters. It enables the user to have a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, the in-flight collisions of blocks are not considered, and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, DMC can perform a blast simulation in 0.5 hours on the SUN SPARCstation 10-41, while the new SABREX 3.5 produces results for a cast blast in ten seconds on a 486 PC computer. Predicted percentages of cast and face velocities from both computer codes compare well with the measured results from a full-scale cast blast.
A Computational Model of Cerebral Cortex Folding
Nie, Jingxin; Guo, Lei; Li, Gang; Faraco, Carlos; Miller, L Stephen; Liu, Tianming
2010-01-01
The geometric complexity and variability of the human cerebral cortex has long intrigued the scientific community. As a result, quantitative description of cortical folding patterns and the understanding of underlying folding mechanisms have emerged as important research goals. This paper presents a computational 3-dimensional geometric model of cerebral cortex folding initialized by MRI data of a human fetal brain and deformed under the governance of a partial differential equation modeling cortical growth. By applying different simulation parameters, our model is able to generate folding convolutions and shape dynamics of the cerebral cortex. The simulations of this 3D geometric model provide computational experimental support to the following hypotheses: 1) Mechanical constraints of the skull regulate the cortical folding process. 2) The cortical folding pattern is dependent on the global cell growth rate of the whole cortex. 3) The cortical folding pattern is dependent on relative rates of cell growth in different cortical areas. 4) The cortical folding pattern is dependent on the initial geometry of the cortex. PMID:20167224
Lytton, William W.
2009-01-01
Preface Epilepsy is a complex set of disorders that can involve many areas of cortex as well as underlying deep brain systems. The myriad manifestations of seizures, as varied as déjà vu and olfactory hallucination, can thereby give researchers insights into regional functions and relations. Epilepsy is also complex genetically and pathophysiologically, involving microscopic (ion channels, synaptic proteins), macroscopic (brain trauma and rewiring) and intermediate changes in a complex interplay of causality. It has long been recognized that computer modeling will be required to disentangle causality, to better understand seizure spread and to understand and eventually predict treatment efficacy. Over the past few years, substantial progress has been made modeling epilepsy at levels ranging from the molecular to the socioeconomic. We review these efforts and connect them to the medical goals of understanding and treating this disorder. PMID:18594562
Computational Modeling Program
NASA Technical Reports Server (NTRS)
Govindan, T. R.; Davis, Robert J.
1998-01-01
An Integrated Product Team (IPT) has been formed at NASA Ames Research Center which has set objectives to investigate devices and processes suitable for meeting NASA requirements on ultrahigh performance computers, fast and low power devices, and high temperature wide bandgap materials. These devices may ultimately be sub-100nm feature-size. Processes and equipment must meet the stringent demands posed by the fabrication of such small devices. Until now, the reactors for Chemical Vapor Deposition (CVD) and plasma processes have been designed by trial and error procedures. Further, once the reactor is in place, optimum processing parameters are found through expensive and time-consuming experimentation. If reliable models are available that describe processes and the operation of the reactors, that chore would be reduced to a routine task while being a cost-effective option. The goal is to develop such a design tool, validate that tool using available data from current generation processes and reactors, and then use that tool to explore avenues for meeting NASA needs for ultrasmall device fabrication. Under the present grant, ARL/Penn State along with other IPT members has been developing models and computer code to meet IPT goals. Some of the accomplishments achieved during the first year of the grant are described in this report.
Computational modelling of polymers
NASA Technical Reports Server (NTRS)
Celarier, Edward A.
1991-01-01
Polymeric materials and polymer/graphite composites show a very diverse range of material properties, many of which make them attractive candidates for a variety of high performance engineering applications. Their properties are ultimately determined largely by their chemical structure and the conditions under which they are processed. It is the aim of computational chemistry to be able to simulate candidate polymers on a computer and determine what their likely material properties will be. A number of commercially available software packages purport to predict the material properties of samples, given the chemical structures of their constituent molecules. One such system, Cerius, has been in use at LaRC. It comprises a number of modules, each of which performs a different kind of calculation on a molecule in the program's workspace. Of particular interest is evaluating the suitability of this program to aid in the study of microcrystalline polymeric materials. One of the first model systems examined was benzophenone. The results of this investigation are discussed.
Workshop on Computational Turbulence Modeling
NASA Technical Reports Server (NTRS)
1993-01-01
This document contains presentations given at the Workshop on Computational Turbulence Modeling held 15-16 September 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.
Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...
This report presents a three-dimensional finite-element numerical model designed to simulate chemical transport in subsurface systems with temperature effect taken into account. The three-dimensional model is developed to provide (1) a tool of application, with which one is able ...
Kohyama, Hiroaki
2008-07-01
We construct the phase diagram of the quark-antiquark and diquark condensates at finite temperature and density in the 2+1 dimensional (3D) two-flavor massless Gross-Neveu (GN) model with 4-component quarks. In contrast to the case of 2-component quarks, a coexisting phase of the quark-antiquark and diquark condensates appears. This is the crucial difference between the 2-component and 4-component quark cases in the 3D GN model. The coexisting phase is also seen in the 4D Nambu-Jona-Lasinio model. We therefore conclude that the 3D GN model with 4-component quarks bears closer resemblance to the 4D Nambu-Jona-Lasinio model.
Computational models of discourse
Brady, M.; Berwick, R.C.
1983-01-01
This book presents papers on artificial intelligence and natural language. Topics considered include recognizing intentions from natural language utterances, cooperative responses from a portable natural language database query system, natural language generation as a computational problem, focusing in the comprehension of definite anaphora, and factors in forming discourse-dependent descriptions.
Computational Models for Neuromuscular Function
Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.
2011-01-01
Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779
Kobayashi, Kazuyoshi; Imagama, Shiro; Muramoto, Akio; Ito, Zenya; Ando, Kei; Yagi, Hideki; Hida, Tetsuro; Ito, Kenyu; Ishikawa, Yoshimoto; Tsushima, Mikito; Ishiguro, Naoki
2015-01-01
ABSTRACT In severe spinal deformity, pain and neurological disorder may be caused by spinal cord compression. Surgery for spinal reconstruction is desirable, but may be difficult in a case with severe deformity. Here, we show the utility of a 3D NaCl (salt) model in preoperative planning of anterior reconstruction using a rib strut in a 49-year-old male patient with cervicothoracic degenerative spondylosis. We performed surgery in two stages: a posterior approach with decompression and posterior instrumentation with a pedicle screw; followed by a second operation using an anterior approach, for which we created a 3D NaCl model including the cervicothoracic lesion, spinal deformity, and ribs for anterior reconstruction. The 3D NaCl model was easily scraped compared with a conventional plaster model and was useful for planning of resection and identification of a suitable rib for grafting in a preoperative simulation. Surgery was performed successfully with reference to the 3D NaCl model. We conclude that preoperative simulation with a 3D NaCl model contributes to performance of anterior reconstruction using a rib strut in a case of cervicothoracic deformity. PMID:26412901
Feher, Victoria A; Randall, Arlo; Baldi, Pierre; Bush, Robin M; de la Maza, Luis M; Amaro, Rommie E
2013-01-01
Chlamydia trachomatis is the most prevalent cause of bacterial sexually transmitted diseases and the leading cause of preventable blindness worldwide. Global control of Chlamydia will best be achieved with a vaccine, a primary target for which is the major outer membrane protein, MOMP, which comprises ~60% of the outer membrane protein mass of this bacterium. In the absence of experimental structural information on MOMP, three previously published topology models presumed a 16-stranded barrel architecture. Here, we use the latest β-barrel prediction algorithms, previous 2D topology modeling results, and comparative modeling methodology to build a 3D model based on the 16-stranded, trimeric assumption. We find that while a 3D MOMP model captures many structural hallmarks of a trimeric 16-stranded β-barrel porin, and is consistent with most of the experimental evidence for MOMP, MOMP residues 320-334 cannot be modeled as β-strands that span the entire membrane, as is consistently observed in published 16-stranded β-barrel crystal structures. Given the ambiguous results for β-strand delineation found in this study, recent publications of membrane β-barrel structures breaking with the canonical rule for an even number of β-strands, findings of β-barrels with strand-exchanged oligomeric conformations, and alternate folds dependent upon the lifecycle of the bacterium, we suggest that although the MOMP porin structure incorporates canonical 16-stranded conformations, it may have novel oligomeric or dynamic structural changes accounting for the discrepancies observed.
Computer-Aided Geometry Modeling
NASA Technical Reports Server (NTRS)
Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)
1984-01-01
Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.
NASA Astrophysics Data System (ADS)
Bianco, Carlo; Tosco, Tiziana; Sethi, Rajandrea
2016-10-01
Engineered nanoparticles (NPs) in the environment can act both as contaminants, when they are unintentionally released, and as remediation agents when injected on purpose at contaminated sites. In this work two carbon-based NPs are considered, namely CARBO-IRON®, a new material developed for contaminated site remediation, and single layer graphene oxide (SLGO), a potential contaminant of the near future. Understanding and modeling the transport and deposition of such NPs in aquifer systems is a key aspect in both cases, and numerical models capable of simulating NP transport in groundwater in complex 3D scenarios are necessary. To this aim, this work proposes a modeling approach based on modified advection-dispersion-deposition equations accounting for the coupled influence of flow velocity and ionic strength on particle transport. A new modeling tool (MNM3D - Micro and Nanoparticle transport Model in 3D geometries) is presented for the simulation of NP injection and transport in 3D scenarios. MNM3D is the result of integrating the numerical code MNMs (Micro and Nanoparticle transport, filtration and clogging Model - Suite) into the well-known transport model RT3D (Clement et al., 1998). The injection in field-like conditions of CARBO-IRON® (20 g/l) amended by CMC (4 g/l) in a 2D vertical tank (0.7 × 1.0 × 0.12 m) was simulated using MNM3D and compared to experimental results under the same conditions. Column transport tests of SLGO at a concentration (10 mg/l) representative of a possible spill of SLGO-containing waste water were performed at different values of ionic strength (0.1 to 35 mM), showing a strong dependence of SLGO transport on IS and a reversible blocking deposition. The experimental data were fitted using the numerical code MNMs, and the ionic strength-dependent transport was up-scaled for a full-scale 3D simulation of SLGO release and long-term transport in a heterogeneous aquifer. MNM3D showed potential as a valid tool for
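The advection-dispersion-deposition equations underlying this approach can be illustrated with a minimal 1-D explicit finite-difference sketch. This is an illustrative discretization only, not the MNM3D implementation; the first-order attachment coefficient `ka` and the boundary handling are assumptions:

```python
import numpy as np

def ade_step(c, D, v, ka, dx, dt):
    """One explicit step of dC/dt = D*C_xx - v*C_x - ka*C (upwind, v > 0).

    c is the concentration profile; c[0] (the inlet) is held fixed and the
    outlet uses a zero-gradient condition. Illustrative sketch, not MNM3D.
    """
    diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # central diffusion
    adv = -v * (c[1:-1] - c[:-2]) / dx                  # upwind advection
    new = c.copy()
    new[1:-1] += dt * (diff + adv - ka * c[1:-1])       # deposition sink
    new[-1] = new[-2]                                   # zero-gradient outlet
    return new
```

Marching such a step in time with a fixed inlet concentration reproduces the familiar retarded, attenuating breakthrough front; stability of the explicit scheme requires dt ≤ dx²/(2D) and dt ≤ dx/v.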
Drijkoningen, Tessa; Knoter, Robert; Coerkamp, Emile G.; Koning, Anton H.J.; Rhemrev, Steven J.; Beeres, Frank J.
2016-01-01
Background: The I-Space is a radiological imaging system in which Computed Tomography (CT) scans can be evaluated as a three-dimensional hologram. The aim of this study is to analyze the value of virtual reality (I-Space) in diagnosing acute occult scaphoid fractures. Methods: A convenience cohort of 24 patients with a CT scan from prior studies, without a scaphoid fracture on radiographs yet with high clinical suspicion of a fracture, was included in this study. CT scans were evaluated in the I-Space by 7 observers, of whom 3 assessed the scans in the I-Space twice. The observers assessed in the I-Space whether the patient had a scaphoid fracture. The kappa value was calculated for inter- and intra-observer agreement. Results: The kappa value varied from 0.11 to 0.33 for the first assessment. For the three observers who assessed the CT scans twice, observer 1 improved from a kappa of 0.33 to 0.50 (95% CI 0.26-0.74, P=0.01), observer 2 from 0.17 to 0.78 (95% CI 0.36-1.0, P<0.001), and observer 3 from 0.11 to 0.24 (95% CI 0.0-0.77, P=0.24). Conclusion: Based on our findings, the I-Space has a fast learning curve and a potential place among the diagnostic modalities for suspected scaphoid fractures. PMID:27847847
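The agreement values above use the kappa statistic, which corrects raw agreement between raters for the agreement expected by chance. A minimal sketch for two raters over the same judgements (illustrative only; the study's exact computation and confidence intervals are not reproduced here):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (observed - chance) / (1 - chance)."""
    n = len(rater_a)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    labels = set(rater_a) | set(rater_b)
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why values such as 0.11-0.33 indicate poor first-assessment reliability.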
Computational models of syntactic acquisition.
Yang, Charles
2012-03-01
The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible, and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154 For further resources related to this article, please visit the WIREs website.
Hwang, Minki; Song, Jun-Seop; Lee, Young-Seon; Joung, Boyoung; Pak, Hui-Nam
2016-01-01
Background We previously reported that stable rotors were observed in in-silico human atrial fibrillation (AF) models, and were well represented by dominant frequency (DF). We explored the spatiotemporal stability of DF sites in 3D-AF models imported from patient CT images of the left atrium (LA). Methods We integrated 3-D CT images of the LA obtained from ten patients with persistent AF (male 80%, 61.8 ± 13.5 years old) into an in-silico AF model. After induction, we obtained 6 seconds of AF simulation data for DF analyses in 30 second intervals (T1–T9). The LA was divided into ten sections. Spatiotemporal changes and variations in the temporal consistency of DF were evaluated at each section of the LA. The high DF area was defined as the area with the highest 10% DF. Results 1. There was no spatial consistency in the high DF distribution at each LA section during T1–T9 except in one patient (p = 0.027). 2. Coefficients of variation for the high DF area were highly different among the ten LA sections (p < 0.001), and they were significantly higher in the four pulmonary vein (PV) areas, the LA appendage, and the peri-mitral area than in the other LA sections (p < 0.001). 3. When we conducted virtual ablation of 10%, 15%, and 20% of the highest DF areas (n = 270 cases), AF was changed to atrial tachycardia (AT) or terminated at a rate of 40%, 57%, and 76%, respectively. Conclusions Spatiotemporal consistency of the DF area was observed in 10% of AF patients, and high DF areas were temporally variable. Virtual ablation of DF is moderately effective in AF termination and AF changing into AT. PMID:27459377
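The dominant frequency (DF) used in the analysis above is simply the frequency bin carrying the largest spectral power within each analysis window. A minimal single-channel sketch (the signal source and sampling rate are assumptions; this is not the study's full pipeline):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest spectral peak; mean removed to suppress DC."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec)]
```

Applied to a 6-second window, as in the simulation data above, the frequency resolution is 1/6 Hz; tracking the highest-DF sites across windows (T1-T9) then gives the spatiotemporal consistency measure.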
Sierra Toolkit computational mesh conceptual model.
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-03-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Optimization of 3-dimensional imaging of the breast region with 3-dimensional laser scanners.
Kovacs, Laszlo; Yassouridis, Alexander; Zimmermann, Alexander; Brockmann, Gernot; Wöhnl, Antonia; Blaschke, Matthias; Eder, Maximilian; Schwenzer-Zimmerer, Katja; Rosenberg, Robert; Papadopulos, Nikolaos A; Biemer, Edgar
2006-03-01
The anatomic conditions of the female breast require imaging the breast region 3-dimensionally in a normal standing position for quality assurance and for surgery planning or surgery simulation. The goal of this work was to optimize the imaging technology for the mammary region with a 3-dimensional (3D) laser scanner, to evaluate the precision and accuracy of the method, and to allow optimum data reproducibility. Avoiding the influence of biotic factors, such as mobility, we tested the most favorable imaging technology on dummy models for scanner-related factors such as the scanner position in comparison with the torso and the number of scanners and single shots. The influence of different factors of the breast region, such as different breast shapes or premarking of anatomic landmarks, was also first investigated on dummies. The findings from the dummy models were then compared with investigations on test persons, and the accuracy of measurements on the virtual models was compared with a coincidence analysis of the manually measured values. The best precision and accuracy of breast region measurements were achieved when landmarks were marked before taking the shots and when shots at 30 degrees left and 30 degrees right, relative to the sagittal line, were taken with 2 connected scanners mounted with a +10-degree upward angle. However, the precision of the measurements on test persons was significantly lower than those measured on dummies. Our findings show that the correct settings for 3D imaging of the breast region with a laser scanner can achieve an acceptable degree of accuracy and reproducibility.
3-dimensional imaging at nanometer resolutions
Werner, James H.; Goodwin, Peter M.; Shreve, Andrew P.
2010-03-09
An apparatus and method for enabling precise, 3-dimensional, photoactivation localization microscopy (PALM) using selective, two-photon activation of fluorophores in a single z-slice of a sample in cooperation with time-gated imaging for reducing the background radiation from other image planes to levels suitable for single-molecule detection and spatial location, are described.
Component Breakout Computer Model
1987-04-29
Computational modeling of properties
NASA Technical Reports Server (NTRS)
Franz, Judy R.
1994-01-01
A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strongly scattered regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as a function of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors, in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.
Efficient Computational Model of Hysteresis
NASA Technical Reports Server (NTRS)
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
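The parallel backlash construction described above can be sketched directly: each element is a play operator with its own deadband half-width and output gain, and the zeroth element (zero deadband) carries the linear part of the input/output relationship. A minimal sketch with illustrative, not fitted, parameter values, and an assumed fully relaxed initial state:

```python
def backlash(x, y_prev, r):
    """Play (backlash) operator: output follows x outside a deadband of half-width r."""
    return max(x - r, min(x + r, y_prev))

def hysteresis(voltages, radii, gains):
    """Superpose parallel backlash elements; radii[0] == 0 gives the linear term."""
    state = [0.0] * len(radii)          # assumed relaxed initial state
    out = []
    for v in voltages:
        state = [backlash(v, s, r) for s, r in zip(state, radii)]
        out.append(sum(g * s for g, s in zip(gains, state)))
    return out
```

Sweeping the voltage up and back down returns different displacements at the same input, tracing the hysteresis loop; fitting the deadband widths and gains to measured displacement-versus-voltage data, as the abstract describes, calibrates the model.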
Ch. 33 Modeling: Computational Thermodynamics
Besmann, Theodore M
2012-01-01
This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.
Computational Modeling of Multiphase Reactors.
Joshi, J B; Nandakumar, K
2015-01-01
Multiphase reactors are very common in the chemical industry, and numerous review articles exist that focus on types of reactors such as bubble columns, trickle beds, fluid catalytic beds, etc. Currently, there is a high degree of empiricism in the design process of such reactors owing to the complexity of coupled flow and reaction mechanisms. Hence, we focus on synthesizing recent advances in computational and experimental techniques that will enable future designs of such reactors in a more rational manner, by exploring a large design space with high-fidelity models (computational fluid dynamics and computational chemistry models) that are validated with high-fidelity measurements (tomography and other detailed spatial measurements) to provide a high degree of rigor. Understanding the spatial distributions of dispersed phases and their interactions during scale-up is a key challenge that was traditionally addressed through pilot-scale experiments but can now be addressed through advanced modeling.
Computational models of adult neurogenesis
NASA Astrophysics Data System (ADS)
Cecchi, Guillermo A.; Magnasco, Marcelo O.
2005-10-01
Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models in which new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas such as the olfactory bulb and the dentate gyrus.
Computational Modeling Method for Superalloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John
1997-01-01
Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations of the methods or the computational demands of such calculations, perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys, which, together with experimental results from previous and current research that validate its use for large-scale simulations, provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing experimental work at great savings in cost and time.
The 3-dimensional cellular automata for HIV infection
NASA Astrophysics Data System (ADS)
Mo, Youbin; Ren, Bin; Yang, Wencao; Shuai, Jianwei
2014-04-01
The HIV infection dynamics is discussed in detail with a 3-dimensional cellular automata model in this paper. The model can reproduce the three-phase development, i.e., the acute period, the asymptomatic period, and the AIDS period, observed in HIV-infected patients in the clinic. We show that the 3D HIV model is more robust to the model parameters than the 2D cellular automata. Furthermore, we reveal that the occurrence of a perpetual source, successively generating infectious waves that spread to the whole system, drives the model from the asymptomatic state to the AIDS state.
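The kind of rules such lattice models use can be sketched in a few lines. This is a simplified four-state update in the spirit of Zorzenon dos Santos-type HIV automata, on a periodic 3-D lattice with a 6-cell neighbourhood; the paper's exact rules, infection delays, and parameter values are not reproduced:

```python
import numpy as np

HEALTHY, A1, A2, DEAD = 0, 1, 2, 3  # healthy, newly infected, late infected, dead

def step(grid, rng, p_repl=0.99):
    """One synchronous update of a simplified 3D HIV cellular automaton."""
    infected = (grid == A1) | (grid == A2)
    # count infectious cells in the 6-connected neighbourhood (periodic borders)
    neigh = np.zeros(grid.shape, dtype=int)
    for axis in range(3):
        for shift in (1, -1):
            neigh += np.roll(infected, shift, axis=axis).astype(int)
    new = grid.copy()
    new[(grid == HEALTHY) & (neigh > 0)] = A1      # infection by contact
    new[grid == A1] = A2                           # infected cells age
    new[grid == A2] = DEAD                         # immune response kills A2
    replaced = (grid == DEAD) & (rng.random(grid.shape) < p_repl)
    new[replaced] = HEALTHY                        # dead cells are replaced
    return new
```

Seeding a single infected cell and iterating `step` produces the outward-propagating infectious waves whose persistence, as the abstract notes, drives the transition to the AIDS state.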
3-dimensional fabrication of soft energy harvesters
NASA Astrophysics Data System (ADS)
McKay, Thomas; Walters, Peter; Rossiter, Jonathan; O'Brien, Benjamin; Anderson, Iain
2013-04-01
Dielectric elastomer generators (DEG) provide an opportunity to harvest energy from low frequency and aperiodic sources. Because DEG are soft, deformable, high energy density generators, they can be coupled to complex structures such as the human body to harvest excess mechanical energy. However, DEG are typically constrained by a rigid frame and manufactured in a simple planar structure. This planar arrangement is unlikely to be optimal for harvesting from compliant and/or complex structures. In this paper we present a soft generator which is fabricated in a 3-dimensional geometry. This capability will enable the 3-dimensional structure of a dielectric elastomer to be customised to the energy source, allowing efficient and/or non-invasive coupling. This paper demonstrates our first 3-dimensional generator, which includes a diaphragm with a soft elastomer frame. When the generator was connected to a self-priming circuit and cyclically inflated, energy was accumulated in the system, demonstrated by an increased voltage. Our 3D generator promises a bright future for dielectric elastomers that will be customised for integration with complex and soft structures. In addition to customisable geometries, the 3D printing process may lend itself to fabricating large arrays of small generator units and to fabricating truly soft generators with excellent impedance matching to biological tissue. Thus comfortable, wearable energy harvesters are one step closer to reality.
Wetting characteristics of 3-dimensional nanostructured fractal surfaces
NASA Astrophysics Data System (ADS)
Davis, Ethan; Liu, Ying; Jiang, Lijia; Lu, Yongfeng; Ndao, Sidy
2017-01-01
This article reports the fabrication and wetting characteristics of 3-dimensional nanostructured fractal surfaces (3DNFS). Three distinct 3DNFS surfaces, namely cubic, Romanesco broccoli, and sphereflake were fabricated using two-photon direct laser writing. Contact angle measurements were performed on the multiscale fractal surfaces to characterize their wetting properties. Average contact angles ranged from 66.8° for the smooth control surface to 0° for one of the fractal surfaces. The change in wetting behavior was attributed to modification of the interfacial surface properties due to the inclusion of 3-dimensional hierarchical fractal nanostructures. However, this behavior does not exactly obey existing surface wetting models in the literature. Potential applications for these types of surfaces in physical and biological sciences are also discussed.
Computational modelling approaches to vaccinology.
Pappalardo, Francesco; Flower, Darren; Russo, Giulia; Pennisi, Marzio; Motta, Santo
2015-02-01
Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology; these include techniques that operate at different scales, from the molecular to the organism and even the population level.
Neurometric Modeling: Computational Modeling of Individual Brains
2011-05-16
Army Research Office, Research Triangle Park, NC. Subject terms: neural networks, computational neuroscience, fMRI. Algorithmic processing of measurements obtained using functional MRI can exploit a variety of statistical machine learning methods to synthesize a new kind of neuro-cognitive model, which we call neurometric models. These executable models could be
Computational Modeling for Bedside Application
Kerckhoffs, Roy C.P.; Narayan, Sanjiv M.; Omens, Jeffrey H.; Mulligan, Lawrence J.; McCulloch, Andrew D.
2008-01-01
With growing computer power, novel diagnostic and therapeutic medical technologies, coupled with an increasing knowledge of pathophysiology from gene to organ systems, it is increasingly feasible to apply multi-scale patient-specific modeling based on proven disease mechanisms to guide and predict the response to therapy in many aspects of medicine. This is an exciting and relatively new approach, for which efficient methods and computational tools are of the utmost importance. Already, investigators have designed patient-specific models in almost all areas of human physiology. Not only will these models be useful on a large scale in the clinic to predict and optimize the outcome from surgery and non-interventional therapy, but they will also provide pathophysiologic insights from cell to tissue to organ system, and therefore help to understand why specific interventions succeed or fail. PMID:18598988
Computational Model for Cell Morphodynamics
NASA Astrophysics Data System (ADS)
Shao, Danying; Rappel, Wouter-Jan; Levine, Herbert
2010-09-01
We develop a computational model, based on the phase-field method, for cell morphodynamics and apply it to fish keratocytes. Our model incorporates the membrane bending force and the surface tension and enforces a constant area. Furthermore, it implements a cross-linked actin filament field and an actin bundle field that are responsible for the protrusion and retraction forces, respectively. We show that our model predicts steady state cell shapes with a wide range of aspect ratios, depending on system parameters. Furthermore, we find that the dependence of the cell speed on this aspect ratio matches experimentally observed data.
Visualizing ultrasound through computational modeling
NASA Technical Reports Server (NTRS)
Guo, Theresa W.
2004-01-01
The Doppler Ultrasound Hematocrit Project (DHP) hopes to find non-invasive methods of determining a person's blood characteristics. Because of the limits of microgravity and the space travel environment, it is important to find non-invasive methods of evaluating the health of persons in space. Presently, there is no well-developed method of determining blood composition non-invasively. This project hopes to use ultrasound and Doppler signals to evaluate the characteristic of hematocrit, the percentage by volume of red blood cells within whole blood. These non-invasive techniques may also be developed for use on earth for trauma patients where invasive measures might be detrimental. Computational modeling is a useful tool for collecting preliminary information and predictions for the laboratory research. We hope to find and develop a computer program that will be able to simulate the ultrasound signals the project will work with. Simulated models of test conditions will more easily show what might be expected from laboratory results and thus help the research group make informed decisions before and during experimentation. There are several existing Matlab-based computer programs available, designed to interpret and simulate ultrasound signals. These programs will be evaluated to find which is best suited for the project needs. The criteria of evaluation that will be used are: 1) the program must be able to specify transducer properties and specify transmitting and receiving signals; 2) the program must be able to simulate ultrasound signals through different attenuating mediums; 3) the program must be able to process moving targets in order to simulate the Doppler effects that are associated with blood flow; 4) the program should be user friendly and adaptable to various models. After a computer program is chosen, two simulation models will be constructed. These models will simulate and interpret an RF data signal and a Doppler signal.
Parallel computing in enterprise modeling.
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what distinguishes them from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistics, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that would greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Cosmic logic: a computational model
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Minimal Models of Multidimensional Computations
Fitzgerald, Jeffrey D.; Sincich, Lawrence C.; Sharpee, Tatyana O.
2011-01-01
The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs. PMID:21455284
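The second-order maximum noise entropy model described above has a simple functional form: a logistic function whose argument is quadratic in the stimulus. The sketch below is an illustration with invented parameter values, not the fitted models of the study.

```python
import numpy as np

def logistic_response(s, a, h, J):
    """Second-order maximum-noise-entropy spiking probability:
    P(spike | s) = sigmoid(a + h.s + s^T J s), where the bias a,
    linear kernel h, and symmetric quadratic kernel J are set by the
    constraints on the input/output moments."""
    arg = a + h @ s + s @ J @ s
    return 1.0 / (1.0 + np.exp(-arg))

rng = np.random.default_rng(0)
s = rng.standard_normal(2)               # stimulus projected onto 2 relevant dimensions
a = -1.0                                 # bias (sets the baseline firing probability)
h = np.array([1.5, -0.5])                # first-order (linear) kernel
J = np.array([[0.3, 0.1], [0.1, -0.2]])  # second-order kernel

p = logistic_response(s, a, h, J)
print(0.0 < p < 1.0)  # True: output is always a valid probability
```

Dropping J recovers the first-order (minimum mutual information) model; comparing the information captured with and without J is what establishes that second order is both necessary and sufficient here.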
Workshop on Computational Turbulence Modeling
NASA Technical Reports Server (NTRS)
Shabbir, A. (Compiler); Shih, T.-H. (Compiler); Povinelli, L. A. (Compiler)
1994-01-01
The purpose of this meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Various turbulence models have been developed and applied to different turbulent flows over the past several decades and it is becoming more and more urgent to assess their performance in various complex situations. In order to help users in selecting and implementing appropriate models in their engineering calculations, it is important to identify the capabilities as well as the deficiencies of these models. This also benefits turbulence modelers by permitting them to further improve upon the existing models. This workshop was designed for exchanging ideas and enhancing collaboration between different groups in the Lewis community who are using turbulence models in propulsion related CFD. In this respect this workshop will help the Lewis goal of excelling in propulsion related research. This meeting had seven sessions for presentations and one panel discussion over a period of two days. Each presentation session was assigned to one or two branches (or groups) to present their turbulence related research work. Each group was asked to address at least the following points: current status of turbulence model applications and developments in the research; progress and existing problems; and requests about turbulence modeling. The panel discussion session was designed for organizing committee members to answer management and technical questions from the audience and to make concluding remarks.
MODEL IDENTIFICATION AND COMPUTER ALGEBRA.
Bollen, Kenneth A; Bauldry, Shawn
2010-10-07
Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods.
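The paper's approach is symbolic, via a CAS; a numeric analogue of the local identification check (not the authors' procedure) is to test whether the Jacobian of the implied moments with respect to the parameters has full column rank at a given point. The example below uses an illustrative one-factor model with three indicators, factor variance fixed to 1, and parameters (l1, l2, l3, t1, t2, t3).

```python
import numpy as np

def implied_moments(p):
    """Implied variances and covariances of a 3-indicator, 1-factor model
    with factor variance fixed to 1: var_i = l_i^2 + t_i, cov_ij = l_i*l_j."""
    l1, l2, l3, t1, t2, t3 = p
    return np.array([
        l1 * l1 + t1, l2 * l2 + t2, l3 * l3 + t3,  # variances
        l1 * l2, l1 * l3, l2 * l3,                 # covariances
    ])

def jacobian(f, p, eps=1e-6):
    """Central finite-difference Jacobian of f at p."""
    p = np.asarray(p, float)
    J = np.zeros((len(f(p)), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return J

point = np.array([0.8, 0.7, 0.9, 0.4, 0.5, 0.3])
rank = np.linalg.matrix_rank(jacobian(implied_moments, point))
print(rank == len(point))  # True: full column rank -> locally identified here
```

A CAS performs the same rank computation symbolically, which avoids the limitations of numerical rank assessment at sample estimates that the authors highlight.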
Computing Models for FPGA-Based Accelerators
Herbordt, Martin C.; Gu, Yongfeng; VanCourt, Tom; Model, Josh; Sukhwani, Bharat; Chiu, Matt
2011-01-01
Field-programmable gate arrays are widely considered as accelerators for compute-intensive applications. A critical phase of FPGA application development is finding and mapping to the appropriate computing model. FPGA computing enables models with highly flexible fine-grained parallelism and associative operations such as broadcast and collective response. Several case studies demonstrate the effectiveness of using these computing models in developing FPGA applications for molecular modeling. PMID:21603152
Control of Grasp and Manipulation by Soft Fingers with 3-Dimensional Deformation
NASA Astrophysics Data System (ADS)
Nakashima, Akira; Shibata, Takeshi; Hayakawa, Yoshikazu
In this paper, we consider control of grasp and manipulation of an object in a 3-dimensional space by a 3-fingered robot hand with soft fingertips. We first propose a 3-dimensional deformation model of a hemispherical soft fingertip and verify its relevance with experimental data. Second, we consider the contact kinematics and derive the dynamical equations of the fingers and the object, in which the 3-dimensional deformation is considered. Third, we propose a method to regulate the object and the internal force using the information of the hand, the object, and the deformation. A simulation result is presented to show the effectiveness of the control method.
Computational modeling of epithelial tissues.
Smallwood, Rod
2009-01-01
There is an extensive literature on the computational modeling of epithelial tissues at all levels from subcellular to whole tissue. This review concentrates on behavior at the individual cell to whole tissue level, and particularly on organizational aspects, and provides an indication of where information from other areas, such as the modeling of angiogenesis, is relevant. The skin, and the lining of all of the body cavities (lung, gut, cervix, bladder, etc.) are epithelial tissues, which in a topological sense are the boundary between inside and outside the body. They are thin sheets of cells (usually of the order of 0.5 mm thick) without extracellular matrix, have a relatively simple structure, and contain few types of cells. They have important barrier, secretory and transport functions, which are essential for the maintenance of life, so homeostasis and wound healing are important aspects of the behavior of epithelial tissues. Carcinomas originate in epithelial tissues. There are essentially two approaches to modeling tissues--to start at the level of the tissue (i.e., a length scale of the order of 1 mm) and develop generalized equations for behavior (a continuum approach); or to start at the level of the cell (i.e., a length scale of the order of 10 µm) and develop tissue behavior as an emergent property of cellular behavior (an individual-based approach). As will be seen, these are not mutually exclusive approaches, and they come in a variety of flavors.
Computational model for chromosomal instability
NASA Astrophysics Data System (ADS)
Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina
2015-03-01
Faithful segregation of genetic material during cell division requires alignment of the chromosomes between the spindle poles and attachment of their kinetochores to each of the poles. Failure of these complex dynamical processes leads to chromosomal instability (CIN), a characteristic feature of several diseases including cancer. While a multitude of biological factors regulating chromosome congression and bi-orientation have been identified, it is still unclear how they are integrated into a coherent picture. Here we address this issue with a three-dimensional computational model of motor-driven chromosome congression and bi-orientation. Our model reveals that successful cell division requires control of the total number of microtubules: if this number is too small, bi-orientation fails, while if it is too large, not all the chromosomes are able to congress. The optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles. Our results shed new light on the origin of several pathological conditions related to chromosomal instability.
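A toy Monte Carlo (far simpler than the 3D motor-driven model above, and capturing only the "too few microtubules" failure mode) illustrates why attachment success depends on microtubule number: each pole emits a fixed number of microtubules that each capture a random chromosome, and a chromosome is bi-oriented only when attached to both poles. All numbers here are illustrative assumptions.

```python
import random

def biorientation_rate(n_mt, n_chrom=23, trials=2000, seed=1):
    """Fraction of trials in which every chromosome receives at least one
    attachment from each of the two poles, given n_mt microtubules per pole
    that each capture a uniformly random chromosome."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        left = {rng.randrange(n_chrom) for _ in range(n_mt)}   # chromosomes hit from left pole
        right = {rng.randrange(n_chrom) for _ in range(n_mt)}  # chromosomes hit from right pole
        if len(left) == n_chrom and len(right) == n_chrom:
            ok += 1
    return ok / trials

print(biorientation_rate(30) < biorientation_rate(200))  # True: too few MTs, bi-orientation fails
```

The full model's second failure mode (too many microtubules blocking congression) requires the spatial mechanics that this coverage-only sketch deliberately omits.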
Scientific visualization of 3-dimensional optimized stellarator configurations
Spong, D.A.
1998-01-01
The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood.
Computational modeling of membrane proteins
Leman, Julia Koehler; Ulmschneider, Martin B.; Gray, Jeffrey J.
2014-01-01
The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the protein databank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, only highlighting how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefitted from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade. PMID:25355688
Cupola Furnace Computer Process Model
Seymour Katz
2004-12-31
The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to enable very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).
Disciplines, models, and computers: the path to computational quantum chemistry.
Lenhard, Johannes
2014-12-01
Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market is much bigger than the community of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), arguably the pivotal theory in the turn to computational quantum chemistry around 1990.
Automated feature extraction for 3-dimensional point clouds
NASA Astrophysics Data System (ADS)
Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.
2016-05-01
Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a LIDAR survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.
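A drastically simplified sketch of the terrain/non-terrain labeling step (not the TEXAS algorithm itself): approximate the bare-earth surface by the minimum elevation in each grid cell, then label points more than a height threshold above that surface as non-terrain. The cell size and threshold are assumed values.

```python
import numpy as np

def label_points(xyz, cell=1.0, height_thresh=0.5):
    """Label each (x, y, z) point as non-terrain (True) when it sits more
    than height_thresh above the minimum elevation of its grid cell."""
    xyz = np.asarray(xyz, float)
    ix = np.floor(xyz[:, 0] / cell).astype(int)
    iy = np.floor(xyz[:, 1] / cell).astype(int)
    ground = {}  # per-cell minimum elevation as a crude bare-earth estimate
    for cx, cy, z in zip(ix, iy, xyz[:, 2]):
        key = (cx, cy)
        ground[key] = min(ground.get(key, z), z)
    return np.array([z - ground[(cx, cy)] > height_thresh
                     for cx, cy, z in zip(ix, iy, xyz[:, 2])])

pts = [(0.2, 0.3, 10.0),   # ground point
       (0.6, 0.4, 14.0),   # canopy point 4 m above ground in the same cell
       (1.5, 0.5, 10.1)]   # ground point in a neighboring cell
print(label_points(pts).tolist())  # [False, True, False]
```

Real bare-earth extraction must additionally interpolate ground under dense canopy and reject low outliers, which a per-cell minimum cannot do.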
NASA Technical Reports Server (NTRS)
Zhang, Ming
2005-01-01
The primary goal of this project was to perform theoretical calculations of the propagation of cosmic rays and energetic particles in 3-dimensional heliospheric magnetic fields. We used Markov stochastic process simulation to achieve this goal. We developed computation software that can be used to study particle propagation in, as two examples of heliospheric magnetic fields that have to be treated in 3 dimensions, a heliospheric magnetic field suggested by Fisk (1996) and a global heliosphere including the region beyond the termination shock. The results from our model calculations were compared with particle measurements from Ulysses, Earth-based spacecraft such as IMP-8, WIND and ACE, and Voyagers and Pioneers in the outer heliosphere to test the magnetic field models. We particularly looked for features of particle variations that can allow us to significantly distinguish the Fisk magnetic field from the conventional Parker spiral field. The computer code will eventually lead to a new generation of integrated software for solving complicated problems of particle acceleration, propagation and modulation in a realistic 3-dimensional heliosphere with realistic magnetic fields and solar wind within a single computational approach.
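The Markov stochastic process approach integrates stochastic differential equations whose ensemble statistics solve the transport equation. The sketch below reduces this idea to one dimension (an illustration, not the project's 3D code): pseudo-particle trajectories for a convection-diffusion equation follow dx = V dt + sqrt(2*kappa) dW, with all parameter values assumed.

```python
import math
import random

def simulate_paths(n_paths=4000, n_steps=200, dt=0.01, V=1.0, kappa=0.5, seed=2):
    """Euler-Maruyama integration of dx = V dt + sqrt(2*kappa) dW for an
    ensemble of pseudo-particles; returns the final positions."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += V * dt + math.sqrt(2.0 * kappa * dt) * rng.gauss(0.0, 1.0)
        finals.append(x)
    return finals

xs = simulate_paths()
mean = sum(xs) / len(xs)      # should approach V * T = 1.0 * 2.0
print(abs(mean - 2.0) < 0.1)  # True: ensemble mean drifts at the convection speed
```

The full heliospheric problem replaces V and kappa with the solar wind speed and a tensor diffusion coefficient tied to the 3D magnetic field geometry, which is where the Fisk and Parker field models enter.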
Computer Modeling of a Fusion Plasma
Cohen, B I
2000-12-15
Progress in the study of plasma physics and controlled fusion has been profoundly influenced by dramatic increases in computing capability. Computational plasma physics has become an equal partner with experiment and traditional theory. This presentation illustrates some of the progress in computer modeling of plasma physics and controlled fusion.
Reliability models for dataflow computer systems
NASA Technical Reports Server (NTRS)
Kavi, K. M.; Buckles, B. P.
1985-01-01
The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
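An illustrative sketch (not the paper's formal model) of the dataflow firing rule that such liveness and deadlock conditions are stated over: a node fires when every incoming edge holds a token, consuming one token per input edge and producing one on each output edge; a graph that can no longer fire while tokens remain is deadlocked.

```python
from collections import defaultdict

def run_dataflow(edges, initial_tokens, max_firings=100):
    """Simulate token firing on a dataflow graph.
    edges: list of (src, dst) pairs; initial_tokens: dict edge -> token count.
    Returns the sequence of node firings until no node is enabled."""
    tokens = defaultdict(int, initial_tokens)
    inputs, outputs = defaultdict(list), defaultdict(list)
    for e in edges:
        src, dst = e
        outputs[src].append(e)
        inputs[dst].append(e)
    firings = []
    for _ in range(max_firings):
        ready = [n for n in inputs
                 if inputs[n] and all(tokens[e] > 0 for e in inputs[n])]
        if not ready:
            break  # no enabled node: finished, or deadlocked if tokens remain
        node = ready[0]
        for e in inputs[node]:
            tokens[e] -= 1   # consume one token per input edge
        for e in outputs[node]:
            tokens[e] += 1   # produce one token per output edge
        firings.append(node)
    return firings

# A -> B -> C pipeline with one token on the A->B edge:
print(run_dataflow([("A", "B"), ("B", "C")], {("A", "B"): 1}))  # ['B', 'C']
```

Liveness conditions of the kind derived in the paper guarantee that some node is always eventually enabled; the `ready` test above is the operational check those conditions reason about.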
Predictive Models and Computational Toxicology
Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was l...
Connectionist Models for Intelligent Computation.
1988-08-31
Department of Physics and Astronomy and Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742. ABSTRACT (only fragments recoverable from the scan): "... distributed in the network. II. TRAINING OF THE NETWORK. The stereo vision is achieved by detecting the binocular disparity of the two images observed by ..." Sun, Y.C. Lee and H.H. Chen, Department of Physics and Astronomy and Institute for Advanced Computer Studies.
Applications of computer modeling to fusion research
Dawson, J.M.
1989-01-01
Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.
"Computational Modeling of Actinide Complexes"
Balasubramanian, K
2007-03-07
We will present our recent studies on the computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to the environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl, and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the free energies in solution and the mechanism of deprotonation have been topics of considerable uncertainty. We have computed the deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal
Leverage points in a computer model
NASA Astrophysics Data System (ADS)
Janošek, Michal
2016-06-01
This article is focused on the analysis of the leverage points (developed by D. Meadows) in a computer model. The goal is to find out if there is a possibility to find these points of leverage in a computer model (on the example of a predator-prey model) and to determine how the particular parameters, their ranges and monitored variables of the model are associated with the concept of leverage points.
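A minimal sketch of how such a leverage-point analysis might probe a predator-prey model: perturb each parameter by the same relative amount and compare the effect on a monitored variable (here, the prey population at the end of a run). The parameter values, integration scheme, and sensitivity measure are illustrative choices, not the article's method.

```python
def prey_after(T, a=1.0, b=0.5, c=0.75, d=0.25, x0=2.0, y0=1.0, dt=0.001):
    """Euler-integrated Lotka-Volterra model:
    dx/dt = a*x - b*x*y (prey), dy/dt = -c*y + d*x*y (predator).
    Returns the prey population x after time T."""
    x, y = x0, y0
    for _ in range(int(T / dt)):
        x, y = (x + dt * (a * x - b * x * y),
                y + dt * (-c * y + d * x * y))
    return x

base = prey_after(5.0)
# Perturb each parameter by +5% and measure the relative change in the output:
for name, kwargs in [("a", {"a": 1.05}), ("b", {"b": 0.525}),
                     ("c", {"c": 0.7875}), ("d", {"d": 0.2625})]:
    effect = abs(prey_after(5.0, **kwargs) - base) / base
    print(f"{name}: {effect:.3f}")  # larger value = stronger leverage at this point
```

In Meadows' terms, parameters are among the weakest leverage points; the same perturb-and-observe loop applied to feedback structure or goals would probe the stronger ones.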
Chaotic Advection in a Bounded 3-Dimensional Potential Flow
NASA Astrophysics Data System (ADS)
Metcalfe, Guy; Smith, Lachlan; Lester, Daniel
2012-11-01
3-dimensional potential, or Darcy, flows are central to understanding and designing laminar transport in porous media; however, chaotic advection in 3-dimensional, volume-preserving flows is still not well understood. We show results of advecting passive scalars in a transient 3-dimensional potential flow that consists of a steady dipole flow and periodic reorientation. Even for the most symmetric reorientation protocol, neither of the two invariants of the motion is conserved; however, one invariant is closely shadowed by a surface of revolution constructed from particle paths of the steady flow, creating in practice an adiabatic surface. A consequence is that chaotic regions cover 3-dimensional space, though tubular regular regions are still transport barriers. This appears to be a new mechanism generating 3-dimensional chaotic orbits. These results contrast with the experimental and theoretical results for chaotic scalar transport in 2-dimensional Darcy flows (Wiggins, J. Fluid Mech. 654 (2010)).
Model Railroading and Computer Fundamentals
ERIC Educational Resources Information Center
McCormick, John W.
2007-01-01
Less than one half of one percent of all processors manufactured today end up in computers. The rest are embedded in other devices such as automobiles, airplanes, trains, satellites, and nearly every modern electronic device. Developing software for embedded systems requires a greater knowledge of hardware than developing for a typical desktop…
Computational modeling of peripheral pain: a commentary.
Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S
2015-06-11
This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.
Vaquerizo, Beatriz; Theriault-Lauzier, Pascal; Piazza, Nicolo
2015-12-01
Mitral regurgitation is the most prevalent valvular heart disease worldwide. Despite the widespread availability of curative surgical intervention, a considerable proportion of patients with severe mitral regurgitation are not referred for treatment, largely due to the presence of left ventricular dysfunction, advanced age, and comorbid illnesses. Transcatheter mitral valve replacement is a promising therapeutic alternative to traditional surgical valve replacement. The complex anatomical and pathophysiological nature of the mitral valvular complex, however, presents significant challenges to the successful design and implementation of novel transcatheter mitral replacement devices. Patient-specific 3-dimensional computer-based models enable accurate assessment of the mitral valve anatomy and preprocedural simulations for transcatheter therapies. Such information may help refine the design features of novel transcatheter mitral devices and enhance procedural planning. Herein, we describe a novel medical image-based processing tool that facilitates accurate, noninvasive assessment of the mitral valvular complex, by creating precise three-dimensional heart models. The 3-dimensional computer reconstructions are then converted to a physical model using 3-dimensional printing technology, thereby enabling patient-specific assessment of the interaction between device and patient. It may provide new opportunities for a better understanding of the mitral anatomy-pathophysiology-device interaction, which is of critical importance for the advancement of transcatheter mitral valve replacement.
Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...
Geometric Modeling for Computer Vision
1974-10-01
Vision and Artificial Intelligence could lead to robots, androids and cyborgs which will be able to see, to think and to feel conscious ... the construction of computer representations of physical objects, cameras, images and light for the sake of simulating their behavior. In Artificial ... specifically, I wish to exclude the connotation that the theory is a natural theory of vision. Perhaps there can be such a thing as an artificial theory
Computational Model for Armor Penetration
1987-10-01
the penetration calculation with a slide line in the target, the impact velocity was artificially raised to avoid impact of the projectile sides onto ... Lagrangian equations governing motion of a continuous medium. The solution technique is called the method of artificial viscosity because of the ... fronts, although no discontinuities occur in the computed flow field. With this artificial viscosity method, the equations of continuous flow can be
NASA Astrophysics Data System (ADS)
Joosten, A.; Bochud, F.; Moeckli, R.
2014-08-01
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and all solid organs. Then, different general non-linear risk models were applied to the whole body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially considering the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which then translates into inconsistent results on the potential higher risk of one technique compared to another. We found however that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated even though the magnitude of this reduction varied substantially with the different approaches investigated. Based on the epidemiological data available, a reasonable
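The organ-specific linear risk calculation has a simple structure: total risk is the sum over organs of a risk coefficient times the mean organ dose. The sketch below uses invented coefficients and doses for illustration only, not ICRP or BEIR VII values.

```python
def linear_risk(organ_doses, risk_per_gray):
    """Linear, organ-specific secondary cancer risk estimate:
    sum over organs of (risk coefficient) x (mean organ dose).
    organ_doses in Gy; risk_per_gray in excess cases per person per Gy."""
    return sum(risk_per_gray[o] * d for o, d in organ_doses.items())

# Illustrative (assumed) numbers only:
doses = {"lung": 2.0, "contralateral_breast": 1.0, "thyroid": 0.1}
coeff = {"lung": 0.01, "contralateral_breast": 0.009, "thyroid": 0.002}

risk_a = linear_risk(doses, coeff)
# A hypothetical second technique that lowers every organ dose by 20%:
risk_b = linear_risk({o: 0.8 * d for o, d in doses.items()}, coeff)
print(round(risk_b / risk_a, 2))  # 0.8: the risk ratio between the two techniques
```

Note how, under a linear model, a uniform dose reduction maps directly onto the same risk ratio; the study's point is that non-linear models break this simple correspondence, which is why the ratio itself varies with the modeling approach.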
3DIVS: 3-Dimensional Immersive Virtual Sculpting
Kuester, F; Duchaineau, M A; Hamann, B; Joy, K I; Uva, A E
2001-10-03
Virtual Environments (VEs) have the potential to revolutionize traditional product design by enabling the transition from conventional CAD to fully digital product development. The presented prototype system targets closing the "digital gap" introduced by the need for physical models, such as clay models or mockups, in the traditional product design and evaluation cycle. We describe a design environment that provides an intuitive human-machine interface for the creation and manipulation of three-dimensional (3D) models in a semi-immersive design space, focusing on ease of use and increased productivity for both designers and CAD engineers.
Ranked retrieval of Computational Biology models
2010-01-01
Background The study of biological systems demands computational support. When targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding among potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to choose the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Results Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. Conclusions The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models. PMID:20701772
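The retrieval-and-ranking approach described above can be sketched with plain TF-IDF weighting and cosine similarity, the textbook Information Retrieval scoring scheme. This is an illustration of the general technique, not the actual scoring used by BioModels Database:

```python
import math
from collections import Counter

def tfidf_rank(query, documents):
    """Rank documents against a query by TF-IDF cosine similarity.
    Returns document indices, best match first."""
    tokenized = [doc.lower().split() for doc in documents]
    n = len(tokenized)
    # Document frequency: number of documents containing each term
    df = Counter(term for doc in tokenized for term in set(doc))

    def vector(tokens):
        tf = Counter(tokens)
        # Log-scaled term frequency times inverse document frequency
        return {t: (1 + math.log(c)) * math.log(n / df[t])
                for t, c in tf.items() if t in df}

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    qv = vector(query.lower().split())
    scores = [(cosine(qv, vector(doc)), i) for i, doc in enumerate(tokenized)]
    return [i for _, i in sorted(scores, reverse=True)]
```

In a real model repository the "documents" would be the models' MIRIAM annotations and meta-information rather than raw text, but the relevance-ranking principle is the same.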
Computational Model for Corneal Transplantation
NASA Astrophysics Data System (ADS)
Cabrera, Delia
2003-10-01
We evaluated the refractive consequences of corneal transplants using a biomechanical model with homogeneous and inhomogeneous Young's modulus distributions within the cornea, taking into account ablation of some stromal tissue. A finite-element (FEM) model was used to simulate corneal transplants in diseased corneas. The diseased cornea was modeled as an axisymmetric structure using a nonlinearly elastic, isotropic formulation. The model simulating the penetrating keratoplasty procedure predicts a larger change in postoperative corneal curvature than the models simulating the anterior and posterior lamellar graft procedures. When a lenticle-shaped volume of tissue was ablated from the graft during anterior and posterior keratoplasty, the models provided an additional correction of about -3.85 and -4.45 diopters, respectively. Despite the controversy around treating corneal thinning disorders with volume-removal procedures, the results indicate that significant changes in corneal refractive power could be introduced by a corneal transplantation combined with myopic laser ablation.
Computational Model Optimization for Enzyme Design Applications
2007-11-02
naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.
Computer modeling of human decision making
NASA Technical Reports Server (NTRS)
Gevarter, William B.
1991-01-01
Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included as well as models which include motivation. Both models which have associated computer programs, and those that do not, are considered. Since flow diagrams, that assist in constructing computer simulation of such models, were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in construction of more realistic future simulations of human decision making.
A new epidemic model of computer viruses
NASA Astrophysics Data System (ADS)
Yang, Lu-Xing; Yang, Xiaofan
2014-06-01
This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincare-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to prohibit the virus prevalence.
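The structural claim above, that accounting for removable storage media leaves no virus-free equilibrium and a single, globally stable endemic equilibrium, can be illustrated with a toy SIS-style model carrying a constant reinfection channel. The equations here are illustrative stand-ins, not the paper's model:

```python
def simulate_epidemic(beta=0.3, theta=0.05, gamma=0.1,
                      i0=0.01, dt=0.01, steps=50000):
    """Euler integration of a toy SIS-style epidemic:
        di/dt = beta*i*(1-i) + theta*(1-i) - gamma*i
    where theta models reinfection via removable media. Because
    theta > 0, di/dt > 0 whenever i = 0, so the infected fraction
    can never settle at zero: there is no virus-free equilibrium."""
    i = i0
    for _ in range(steps):
        di = beta * i * (1 - i) + theta * (1 - i) - gamma * i
        i += dt * di
    return i
```

For these illustrative parameters the quadratic for di/dt = 0 has a single admissible root near i ≈ 0.73, and trajectories from very different initial infection levels converge to it, mirroring the global asymptotic stability the paper proves for its model.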
Applications of Computational Modeling in Cardiac Surgery
Lee, Lik Chuan; Genet, Martin; Dang, Alan B.; Ge, Liang; Guccione, Julius M.; Ratcliffe, Mark B.
2014-01-01
Although computational modeling is common in many areas of science and engineering, only recently have advances in experimental techniques and medical imaging allowed this tool to be applied in cardiac surgery. Despite its infancy in cardiac surgery, computational modeling has been useful in calculating the effects of clinical devices and surgical procedures. In this review, we present several examples that demonstrate the capabilities of computational cardiac modeling in cardiac surgery. Specifically, we demonstrate its ability to simulate surgery, predict myofiber stress and pump function, and quantify changes to regional myocardial material properties. In addition, issues that would need to be resolved in order for computational modeling to play a greater role in cardiac surgery are discussed. PMID:24708036
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
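The balance COSP models, oxide gained parabolically while hot and a fraction lost to spalling on each cooldown, can be caricatured with the uniform-spall case in a few lines. This is a toy sketch of the mechanism, not the COSP code or its kinetics:

```python
def cyclic_oxidation(kp=0.5, spall_frac=0.1, cycles=50):
    """Toy uniform-spall cyclic oxidation. Each cycle grows oxide
    parabolically from the retained scale, then spalls a fixed
    fraction of the scale on cooling. Returns (retained scale,
    cumulative oxide formed, a proxy for metal consumed)."""
    scale = 0.0      # retained oxide mass per unit area
    consumed = 0.0   # total oxide ever formed
    for _ in range(cycles):
        # Parabolic growth over one hot cycle: scale^2 grows by kp
        grown = (scale ** 2 + kp) ** 0.5 - scale
        scale += grown
        consumed += grown
        # Uniform spalling on cooldown removes a fixed fraction
        scale *= (1.0 - spall_frac)
    return scale, consumed
```

The retained scale approaches a steady state where per-cycle growth balances per-cycle spallation, while the metal consumed keeps increasing roughly linearly, which is the qualitative signature of cyclic-oxidation weight-change curves.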
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Enhanced absorption cycle computer model
NASA Astrophysics Data System (ADS)
Grossman, G.; Wilk, M.
1993-09-01
Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
Computer Model Locates Environmental Hazards
NASA Technical Reports Server (NTRS)
2008-01-01
Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.
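The core image-processing step described above, retaining only the darkest parts of an image, amounts to a percentile threshold. A minimal sketch on a grayscale image represented as nested lists (the Tire Identification from Reflectance model itself works on satellite imagery and is not public here):

```python
def darkest_pixels(image, keep_fraction=0.1):
    """Keep only the darkest keep_fraction of pixels in a grayscale
    image (row-major list of lists, 0 = black); mask the rest to None."""
    flat = sorted(v for row in image for v in row)
    # Brightness cutoff below which pixels are retained
    cutoff = flat[max(0, int(len(flat) * keep_fraction) - 1)]
    return [[v if v <= cutoff else None for v in row] for row in image]
```

Dark, low-reflectance clusters that survive the mask are then candidate tire piles for follow-up inspection.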
Computational Viscoplasticity Based on Overstress (CVBO) Model
NASA Astrophysics Data System (ADS)
Yuan, Zheng; Ruggles-wrenn, Marina; Fish, Jacob
2014-03-01
This article presents an efficient computational viscoplasticity based on an overstress (CVBO) model, including three-dimensional formulation, implicit stress update procedures, consistent tangent, and systematic calibration of the model parameters to experimental data. The model has been validated for PMR 15 neat resin, including temperature and aging dependence.
Computer Modeling of Liquid Crystals
NASA Astrophysics Data System (ADS)
Hashim, Rauzah
This chapter outlines the methodologies and models which are commonly used in the simulation of liquid crystals. The approach in the simulation of liquid crystals has always been to understand the nature of the phase and to relate this to fundamental molecular features such as geometry and intermolecular forces, before important properties related to certain applications are elucidated. Hence, preceding the description of the main "molecular-based" models for liquid crystals, a general but brief outline of the nature of liquid crystals and their historical development is given. Three main model classes, namely the coarse-grained single-site lattice and Gay-Berne models and the full atomistic model will be described here where for each a brief review will be given followed by assessment of its application in describing the phase phenomena with an emphasis on understanding the molecular organization in liquid crystal phases and the prediction of their bulk properties. Variants and hybrid models derived from these classes and their applications are given.
ESPC Computational Efficiency of Earth System Models
2014-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Figure 1 of the report plots wallclock seconds per forecast day for a T639L64 (~21 km at the equator) NAVGEM run.
Comprehensive silicon solar-cell computer modeling
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1984-01-01
A comprehensive silicon solar cell computer modeling scheme was developed to perform the following tasks: (1) modeling and analysis of the net charge distribution in quasineutral regions; (2) analysis of the experimentally determined temperature behavior of Spire Corp. n+pp+ solar cells, where the n+-emitter is formed by ion implantation of 75As or 31P; and (3) initial validation of the computer simulation program using Spire Corp. n+pp+ cells.
Parallel computing in atmospheric chemistry models
Rotman, D.
1996-02-01
Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.
A Computational Framework for Realistic Retina Modeling.
Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco
2016-11-01
Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.
Computer Modeling of Direct Metal Laser Sintering
NASA Technical Reports Server (NTRS)
Cross, Matthew
2014-01-01
A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is determining the temperature history of parts fabricated using DMLS, in order to evaluate residual stresses in finished pieces and to assess manufacturing process strategies that reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
Computational study of lattice models
NASA Astrophysics Data System (ADS)
Zujev, Aleksander
This dissertation is composed of descriptions of a few projects undertaken to complete my doctorate at the University of California, Davis. Different as they are, they share a common feature: all deal with simulations of lattice models and with the physics that results from interparticle interactions. For example, both the Feynman-Kikuchi model (Chapter 3) and the Bose-Fermi mixture (Chapter 4) address the conditions under which superfluid transitions occur. The dissertation is divided into two parts. Part I (Chapters 1-2) is theoretical. It describes the systems we study, superfluidity and particularly superfluid helium, and optical lattices, along with the numerical methods for working with them. The use of Monte Carlo methods is another unifying theme of the different projects in this thesis. Part II (Chapters 3-6) deals with applications. It consists of four chapters describing different projects. Two of them, the Feynman-Kikuchi model and the Bose-Fermi mixture, are finished and published. The work done on the t-J model, described in Chapter 5, is more preliminary, and the project is far from complete. A preliminary report on it was given at the 2009 APS March Meeting. The Isentropic project, described in the last chapter, is finished. A report on it was given at the 2010 APS March Meeting, and a paper is in preparation. The quantum simulation program used for the Bose-Fermi mixture project was written by our collaborators Valery Rousseau and Peter Denteneer. I wrote my own code for the other projects.
Invasive 3-Dimensional Organotypic Neoplasia from Multiple Normal Human Epithelia
Ridky, Todd W.; Chow, Jennifer M.; Wong, David J.; Khavari, Paul A.
2013-01-01
Refined cancer models are required to assess the burgeoning number of potential targets for cancer therapeutics within a rapid and clinically relevant context. Here we utilize tumor-associated genetic pathways to transform primary human epithelial cells from epidermis, oropharynx, esophagus, and cervix into genetically defined tumors within a human 3-dimensional (3-D) tissue environment incorporating cell-populated stroma and intact basement membrane. These engineered organotypic tissues recapitulated natural features of tumor progression, including epithelial invasion through basement membrane, a complex process critically required for biologic malignancy in 90% of human cancers. Invasion was rapid, and potentiated by stromal cells. Oncogenic signals in 3-D tissue, but not 2-D culture, resembled gene expression profiles from spontaneous human cancers. Screening well-characterized signaling pathway inhibitors in 3-D organotypic neoplasia helped distil a clinically faithful cancer gene signature. Multi-tissue 3-D human tissue cancer models may provide an efficient and relevant complement to current approaches to characterize cancer progression. PMID:21102459
Computational modeling of peptide-aptamer binding.
Rhinehardt, Kristen L; Mohan, Ram V; Srinivas, Goundla
2015-01-01
Evolution is the progressive process that holds each living creature in its grasp. From strands of DNA, evolution shapes life in response to our ever-changing environment and time. It is the continued study of this most primitive process that has led to the advancement of modern biology. The success and failure in the reading, processing, replication, and expression of genetic code and its resulting biomolecules keep the delicate balance of life. Investigations into these fundamental processes continue to make headlines as science explores smaller-scale interactions of increasing complexity. New applications and advanced understanding of DNA, RNA, peptides, and proteins are pushing technology and science forward together. Today, the addition of computers and advances in science have led to the fields of computational biology and chemistry. Through these computational advances it is now possible not only to quantify the end results but also to visualize, analyze, and fully understand mechanisms by gaining deeper insights. The biomolecular motion that governs physical and chemical phenomena can now be analyzed with computational modeling. Ever-increasing computational power combined with efficient algorithms is further expanding the fidelity and scope of such modeling and simulations. This chapter discusses computational methods applied to biological processes, in particular computational modeling of peptide-aptamer binding.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone
Evaluation and Comparison of Computational Models
Myung, Jay; Tang, Yun; Pitt, Mark A.
2009-01-01
Computational models are powerful tools that can enhance the understanding of scientific phenomena. The enterprise of modeling is most productive when the reasons underlying the adequacy of a model, and possibly its superiority to other models, are understood. This chapter begins with an overview of the main criteria that must be considered in model evaluation and selection, in particular explaining why generalizability is the preferred criterion for model selection. This is followed by a review of measures of generalizability. The final section demonstrates the use of five versatile and easy-to-use selection methods for choosing between two mathematical models of protein folding. PMID:19216931
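Generalizability-based selection, as advocated above, trades goodness of fit against model complexity. The simplest such criteria are AIC and BIC, shown here as a generic illustration rather than as the chapter's five specific methods:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L. Lower is better;
    extra parameters must buy enough likelihood to pay for themselves."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k ln(n) - 2 ln L. Penalizes
    parameters more heavily as the sample size grows."""
    return n_params * math.log(n_obs) - 2 * log_likelihood
```

A model with a slightly higher likelihood but several more parameters can score worse on both criteria, which is exactly the sense in which generalizability, not raw fit, drives the selection.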
Mechanistic models in computational social science
NASA Astrophysics Data System (ADS)
Holme, Petter; Liljeros, Fredrik
2015-09-01
Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have an over 60 years long history. They have been used for many different purposes—to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for social and natural sciences, and look forward to possible future information flow across the social-natural divide.
A computational model of the cerebellum
Travis, B.J.
1990-01-01
The need for realistic computational models of neural microarchitecture is growing increasingly apparent. While traditional neural networks have made inroads on understanding cognitive functions, more realism (in the form of structural and connectivity constraints) is required to explain processes such as vision or motor control. A highly detailed computational model of mammalian cerebellum has been developed. It is being compared to physiological recordings for validation purposes. The model is also being used to study the relative contributions of each component to cerebellar processing. 28 refs., 4 figs.
A Computational Model of Selection by Consequences
ERIC Educational Resources Information Center
McDowell, J. J.
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…
Modeling User Behavior in Computer Learning Tasks.
ERIC Educational Resources Information Center
Mantei, Marilyn M.
Model building techniques from Artifical Intelligence and Information-Processing Psychology are applied to human-computer interface tasks to evaluate existing interfaces and suggest new and better ones. The model is in the form of an augmented transition network (ATN) grammar which is built by applying grammar induction heuristics on a sequential…
Computer Modeling and Visualization in Design Technology: An Instructional Model.
ERIC Educational Resources Information Center
Guidera, Stan
2002-01-01
Design visualization can increase awareness of issues related to perceptual and psychological aspects of design that computer-assisted design and computer modeling may not allow. A pilot university course developed core skills in modeling and simulation using visualization. Students were consistently able to meet course objectives. (Contains 16…
Do's and Don'ts of Computer Models for Planning
ERIC Educational Resources Information Center
Hammond, John S., III
1974-01-01
Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)
EWE: A computer model for ultrasonic inspection
NASA Astrophysics Data System (ADS)
Douglas, S. R.; Chaplin, K. R.
1991-11-01
The computer program EWE simulates the propagation of elastic waves in solids and liquids. It was applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues.
CDF computing and event data models
Snider, F.D.; /Fermilab
2005-12-01
The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.
Computational disease modeling – fact or fiction?
Tegnér, Jesper N; Compte, Albert; Auffray, Charles; An, Gary; Cedersund, Gunnar; Clermont, Gilles; Gutkin, Boris; Oltvai, Zoltán N; Stephan, Klaas Enno; Thomas, Randy; Villoslada, Pablo
2009-01-01
Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational-modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems. PMID:19497118
An improved computational constitutive model for glass
NASA Astrophysics Data System (ADS)
Holmquist, Timothy J.; Johnson, Gordon R.; Gerlach, Charles A.
2017-01-01
In 2011, Holmquist and Johnson presented a model for glass subjected to large strains, high strain rates and high pressures. It was later shown that this model produced solutions that were severely mesh dependent, converging to a solution that was much too strong. This article presents an improved model for glass that uses a new approach to represent the interior and surface strength that is significantly less mesh dependent. This new formulation allows for the laboratory data to be accurately represented (including the high tensile strength observed in plate-impact spall experiments) and produces converged solutions that are in good agreement with ballistic data. The model also includes two new features: one that decouples the damage model from the strength model, providing more flexibility in defining the onset of permanent deformation; the other provides for a variable shear modulus that is dependent on the pressure. This article presents a review of the original model, a description of the improved model and a comparison of computed and experimental results for several sets of ballistic data. Of special interest are computed and experimental results for two impacts onto a single target, and the ability to compute the damage velocity in agreement with experiment data. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.
A computational model of selection by consequences.
McDowell, J J
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied over wide ranges in these experiments, and many of the qualitative features of the model also were varied. The digital organism consistently showed a hyperbolic relation between response and reinforcement rates, and this hyperbolic description of the data was consistently better than the description provided by other, similar, function forms. In addition, the parameters of the hyperbola varied systematically with the quantitative, and some of the qualitative, properties of the model in ways that were consistent with findings from biological organisms. These results suggest that the material events responsible for an organism's responding on RI schedules are computationally equivalent to Darwinian selection by consequences. They also suggest that the computational model developed here is worth pursuing further as a possible dynamic account of behavior. PMID:15357512
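The hyperbolic relation the abstract refers to is Herrnstein's hyperbola, R = k r / (r + r_e), relating response rate R to reinforcement rate r. A minimal sketch of evaluating and fitting that form to synthetic rate data (a stand-in for the digital organism's output, not McDowell's actual model) follows; the data values and grid ranges are illustrative assumptions.

```python
import numpy as np

def hyperbola(r, k, re):
    """Herrnstein's hyperbola: predicted response rate for reinforcement rate r."""
    return k * r / (r + re)

# Synthetic rate data standing in for the digital organism's behavior
rng = np.random.default_rng(0)
r = np.array([10.0, 30.0, 60.0, 120.0, 240.0])   # reinforcements per hour
true_k, true_re = 100.0, 50.0
obs = hyperbola(r, true_k, true_re) * (1 + 0.01 * rng.standard_normal(r.size))

# Fit k and re by least squares over a parameter grid
ks = np.linspace(50, 150, 101)
res = np.linspace(10, 100, 91)
K, RE = np.meshgrid(ks, res)
sse = ((obs[:, None, None] - hyperbola(r[:, None, None], K, RE)) ** 2).sum(axis=0)
i, j = np.unravel_index(sse.argmin(), sse.shape)
k_hat, re_hat = K[i, j], RE[i, j]
```

Comparing the residuals of this fit against those of alternative function forms (as the study does) is what supports the claim that the hyperbola describes the data best.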
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l1 penalty term on the parameter vector of the traditional l2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
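The structure-detection mechanics of the LASSO can be sketched with a generic proximal-gradient (ISTA) solver for the l1-penalized least-squares problem; this is an illustration of the soft-thresholding idea that zeroes out inactive terms, not the authors' aeroelastic identification code.

```python
import numpy as np

def lasso_ista(X, y, lam, step=None, iters=5000):
    """LASSO by iterative soft-thresholding (proximal gradient descent).
    Minimizes 0.5*||y - X b||^2 + lam*||b||_1; many b_j shrink exactly to zero."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = largest eigenvalue of X^T X
    b = np.zeros(p)
    for _ in range(iters):
        g = X.T @ (X @ b - y)                   # gradient of the smooth part
        z = b - step * g
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return b

# Structure-detection demo: only terms 0 and 3 of 6 candidate terms are active
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = X @ np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0]) + 0.01 * rng.standard_normal(200)
b = lasso_ista(X, y, lam=5.0)
selected = np.flatnonzero(np.abs(b) > 1e-6)
```

The exact zeros in `b` are what make the resulting system description parsimonious.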
Efficient Calibration of Computationally Intensive Hydrological Models
NASA Astrophysics Data System (ADS)
Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.
2015-12-01
A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
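The DDS half of the hybrid can be sketched as follows: the probability of perturbing each dimension decays as 1 - ln(i)/ln(N), moving the search from global exploration toward the local refinement that MADS then handles rigorously. The test function and all parameter values are illustrative, not the HYDROTEL calibration setup.

```python
import numpy as np

def dds(f, lo, hi, n_evals=1000, r=0.2, seed=0):
    """Dynamically Dimensioned Search: greedy, single-solution blackbox heuristic."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x_best = lo + rng.random(lo.size) * (hi - lo)
    f_best = f(x_best)
    for i in range(1, n_evals):
        p = 1.0 - np.log(i) / np.log(n_evals)   # perturbation probability decays
        mask = rng.random(lo.size) < p
        if not mask.any():
            mask[rng.integers(lo.size)] = True  # always perturb at least one dim
        x = x_best.copy()
        x[mask] += r * (hi[mask] - lo[mask]) * rng.standard_normal(mask.sum())
        # reflect perturbations that leave the bounds
        x = np.where(x < lo, 2 * lo - x, x)
        x = np.where(x > hi, 2 * hi - x, x)
        x = np.clip(x, lo, hi)
        fx = f(x)
        if fx <= f_best:                        # greedy acceptance
            x_best, f_best = x, fx
    return x_best, f_best

# Toy "calibration": a sphere objective standing in for a hydrological misfit
sphere = lambda v: float(((v - 3.0) ** 2).sum())
x_opt, f_opt = dds(sphere, lo=[-10] * 5, hi=[10] * 5, n_evals=2000)
```

Each iteration costs one objective evaluation, which is why the evaluation budget (2000 here) maps directly onto model run time for computationally intensive models.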
Computational algebraic geometry of epidemic models
NASA Astrophysics Data System (ADS)
Rodríguez Vega, Martín.
2014-06-01
Computational Algebraic Geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension and Hilbert polynomials. These computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the changes in Groebner bases, the changes in Hilbert dimension, and the changes in Hilbert polynomials. It is hoped that the results obtained in this paper prove useful for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
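The same machinery is available outside Maple. As a minimal sketch (an illustrative SIR-type steady-state system, not one of the paper's Schistosomiasis or Dengue models, with parameters fixed to rational values), one can form the Groebner basis of the steady-state ideal and check R0 with SymPy:

```python
import sympy as sp

# Illustrative SIR-type model with normalized population N = 1 and
# assumed parameters beta = 3, gamma = 1, mu = 1/10
S, I = sp.symbols('S I')
beta, gamma, mu = sp.Rational(3), sp.Rational(1), sp.Rational(1, 10)
eqs = [mu - beta * S * I - mu * S,          # dS/dt = 0
       beta * S * I - (gamma + mu) * I]     # dI/dt = 0

# Lex-order Groebner basis of the steady-state ideal (Maple in the paper, SymPy here)
G = sp.groebner(eqs, S, I, order='lex')

# Basic reproductive number for this model: R0 = beta / (gamma + mu)
R0 = beta / (gamma + mu)

# Equilibria: disease-free (S = 1, I = 0) and endemic (S = 1/R0)
S_star = 1 / R0
I_star = mu * (1 - S_star) / (beta * S_star)
```

Both equilibria lie on the variety of the ideal, so every basis polynomial vanishes at them; changes in the basis under added control terms are the kind of structural change the paper tracks.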
Empirical Movement Models for Brain Computer Interfaces.
Matlack, Charles; Chizeck, Howard; Moritz, Chet T
2016-06-30
For brain-computer interfaces (BCIs) which provide the user continuous position control, there is little standardization of performance metrics or evaluative tasks. One candidate metric is Fitts's law, which has been used to describe aimed movements across a range of computer interfaces, and has recently been applied to BCI tasks. Reviewing selected studies, we identify two basic problems with Fitts's law: its predictive performance is fragile, and the estimation of 'information transfer rate' from the model is unsupported. Our main contribution is the adaptation and validation of an alternative model to Fitts's law in the BCI context. We show that the Shannon-Welford model outperforms Fitts's law, showing robust predictive power when target distance and width have disproportionate effects on difficulty. Building on a prior study of the Shannon-Welford model, we show that identified model parameters offer a novel approach to quantitatively assess the role of control-display gain in speed/accuracy performance tradeoffs during brain control.
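The core comparison can be sketched on synthetic data: when distance D and width W affect movement time disproportionately, a one-parameter index of difficulty underfits, while a Welford-style two-part model (separate log terms for distance and width, which is our shorthand for the idea; the paper's exact Shannon-Welford formulation may differ) recovers the data. All data values below are illustrative assumptions.

```python
import numpy as np

def fit_linear(Phi, mt):
    """Ordinary least squares; returns coefficients and R^2."""
    coef, *_ = np.linalg.lstsq(Phi, mt, rcond=None)
    pred = Phi @ coef
    ss_res = ((mt - pred) ** 2).sum()
    ss_tot = ((mt - mt.mean()) ** 2).sum()
    return coef, 1 - ss_res / ss_tot

# Synthetic trials where distance and width have disproportionate effects
rng = np.random.default_rng(2)
D = rng.uniform(50, 800, 200)   # target distance (px)
W = rng.uniform(5, 100, 200)    # target width (px)
mt = 0.2 + 0.10 * np.log2(D) + 0.25 * np.log2(1.0 / W) \
     + 0.01 * rng.standard_normal(200)

# Fitts's law (Shannon formulation): MT = a + b * log2(D/W + 1)
phi_fitts = np.column_stack([np.ones_like(D), np.log2(D / W + 1)])
_, r2_fitts = fit_linear(phi_fitts, mt)

# Welford-style two-part model: separate coefficients for distance and width
phi_w = np.column_stack([np.ones_like(D), np.log2(D), np.log2(1.0 / W)])
_, r2_welford = fit_linear(phi_w, mt)
```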
3-Dimensional wireless sensor network localization: A review
NASA Astrophysics Data System (ADS)
Najib, Yasmeen Nadhirah Ahmad; Daud, Hanita; Aziz, Azrina Abd; Razali, Radzuan
2016-11-01
The proliferation of wireless sensor network (WSN) has shifted the focus to 3-Dimensional geometry rather than 2-Dimensional geometry. Since exact location of sensors has been the fundamental issue in wireless sensor network, node localization is essential for any wireless sensor network applications. Most algorithms mainly focus on 2-Dimensional geometry, where the application of this algorithm will decrease the accuracy on 3-Dimensional geometry. The low rank attribute in WSN's node estimation makes the application of nuclear norm minimization as a viable solution for dimensionality reduction problems. This research proposes a novel localization algorithm for 3-Dimensional WSN which is nuclear norm minimization. The node localization is formulated via Euclidean Distance Matrix (EDM) and is then optimized using Nuclear-Norm Minimization (NNM).
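The low-rank structure the abstract exploits can be made concrete: a Euclidean Distance Matrix of n nodes in 3-D has a doubly centered form of rank 3, and classical MDS recovers the node geometry from it (up to rotation and translation). The sketch below uses exact distances and classical MDS as a stand-in; the paper's nuclear-norm minimization generalizes this to noisy, incomplete EDMs.

```python
import numpy as np

# Hypothetical 3-D sensor positions: 8 nodes in a 100 m cube (illustrative)
rng = np.random.default_rng(3)
P = rng.uniform(0, 100, size=(8, 3))

# Euclidean Distance Matrix: D[i, j] = ||p_i - p_j||^2
G = P @ P.T                                   # Gram matrix
d2 = np.diag(G)
D = d2[:, None] + d2[None, :] - 2 * G

# Classical MDS: -J D J / 2 equals the centered Gram matrix, whose rank 3
# is exactly the low-rank attribute that nuclear-norm methods exploit.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D @ J
w, V = np.linalg.eigh(B)
X = V[:, -3:] * np.sqrt(np.maximum(w[-3:], 0))  # top-3 eigenpairs -> 3-D embedding

# Pairwise distances are reproduced even though absolute positions are not
G2 = X @ X.T
d2b = np.diag(G2)
D_rec = d2b[:, None] + d2b[None, :] - 2 * G2
```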
Computational Spectrum of Agent Model Simulation
Perumalla, Kalyan S
2010-01-01
The study of human social behavioral systems is finding renewed interest in military, homeland security and other applications. Simulation is the most generally applied approach to studying complex scenarios in such systems. Here, we outline some of the important considerations that underlie the computational aspects of simulation-based study of human social systems. The fundamental imprecision underlying questions and answers in social science makes it necessary to carefully distinguish among different simulation problem classes and to identify the most pertinent set of computational dimensions associated with those classes. We identify a few such classes and present their computational implications. The focus is then shifted to the most challenging combinations in the computational spectrum, namely, large-scale entity counts at moderate to high levels of fidelity. Recent developments in furthering the state-of-the-art in these challenging cases are outlined. A case study of large-scale agent simulation is provided in simulating large numbers (millions) of social entities at real-time speeds on inexpensive hardware. Recent computational results are identified that highlight the potential of modern high-end computing platforms to push the envelope with respect to speed, scale and fidelity of social system simulations. Finally, the problem of shielding the modeler or domain expert from the complex computational aspects is discussed and a few potential solution approaches are identified.
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational process and material modeling of powder-bed additive manufacturing of IN 718. Objectives: optimize material build parameters with reduced time and cost through modeling; increase understanding of build properties; increase reliability of builds; decrease time to adoption of the process for critical hardware; potentially decrease post-build heat treatments. Approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using the data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using metallography findings; run STK models using AO thermal profiles and report STK modeling results; validate modeling with an additional build. Findings: photodiode intensity measurements are highly linear with power input; melt-pool intensity is highly correlated with melt-pool size; melt-pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.
Computational Modeling of Inflammation and Wound Healing
Ziraldo, Cordelia; Mi, Qi; An, Gary; Vodovotz, Yoram
2013-01-01
Objective Inflammation is both central to proper wound healing and a key driver of chronic tissue injury via a positive-feedback loop incited by incidental cell damage. We seek to derive actionable insights into the role of inflammation in wound healing in order to improve outcomes for individual patients. Approach To date, dynamic computational models have been used to study the time evolution of inflammation in wound healing. Emerging clinical data on histo-pathological and macroscopic images of evolving wounds, as well as noninvasive measures of blood flow, suggested the need for tissue-realistic, agent-based, and hybrid mechanistic computational simulations of inflammation and wound healing. Innovation We developed a computational modeling system, Simple Platform for Agent-based Representation of Knowledge, to facilitate the construction of tissue-realistic models. Results A hybrid equation–agent-based model (ABM) of pressure ulcer formation in both spinal cord-injured and -uninjured patients was used to identify control points that reduce stress caused by tissue ischemia/reperfusion. An ABM of arterial restenosis revealed new dynamics of cell migration during neointimal hyperplasia that match histological features, but contradict the currently prevailing mechanistic hypothesis. ABMs of vocal fold inflammation were used to predict inflammatory trajectories in individuals, possibly allowing for personalized treatment. Conclusions The intertwined inflammatory and wound healing responses can be modeled computationally to make predictions in individuals, simulate therapies, and gain mechanistic insights. PMID:24527362
Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan
2016-08-01
Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created, which was manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpture an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculptured autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures.
NASA Astrophysics Data System (ADS)
Richer, E.; Chanteur, G. M.; Modolo, R.; Dubinin, E.
2012-09-01
The reflection of solar wind protons on the Martian bow shock (BS) is investigated by means of three-dimensional simulation models. A two-step approach is adopted to allow a detailed analysis of the reflected population. Firstly, the 3-dimensional hybrid model of Modolo et al. (2005) is used to compute a stationary state of the interaction of the solar wind (SW) with Mars. Secondly, the motion of test particles is followed in the electromagnetic field computed by the hybrid simulation, while detection criteria defined to identify reflected protons are applied. This study demonstrates some effects of the large curvature of a planetary BS on the structure of the foreshock. Reflected protons encounter the BS in a region encompassing parts of the quasi-perpendicular and quasi-parallel shocks, and exit the shock mainly from the quasi-parallel region. The energy spectrum of all reflected protons extends from 0 to almost 15 keV. A virtual omnidirectional detector (VOD) is used to compute the local omnidirectional flux of reflected protons at various locations upstream of the BS. Spatial variations of this omnidirectional flux indicate the location and spatial extent of the proton foreshock and demonstrate its shift, increasing with the distance downstream, in the direction opposite to the motional electric field of the SW. Local energy spectra computed from the VOD observations demonstrate the existence of an energy gradient along the direction of the convection electric field.
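The second step, tracing test particles through a prescribed electromagnetic field, is conventionally done with the Boris scheme, which conserves kinetic energy exactly in a pure magnetic field. A minimal sketch follows for a proton gyrating in a uniform field of roughly solar-wind magnitude; the field values and step counts are illustrative, and the actual study uses the spatially varying hybrid-simulation fields.

```python
import numpy as np

Q_M = 9.58e7  # proton charge-to-mass ratio (C/kg)

def boris_push(x, v, E, B, dt, steps):
    """Standard Boris scheme for test-particle motion in given E and B fields."""
    traj = [x.copy()]
    for _ in range(steps):
        v_minus = v + 0.5 * dt * Q_M * E          # half electric kick
        t = 0.5 * dt * Q_M * B
        s = 2 * t / (1 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)  # magnetic rotation
        v_plus = v_minus + np.cross(v_prime, s)
        v = v_plus + 0.5 * dt * Q_M * E           # second half kick
        x = x + dt * v
        traj.append(x.copy())
    return np.array(traj), v

# Proton gyration in a uniform 3 nT field, 400 km/s perpendicular velocity
B = np.array([0.0, 0.0, 3e-9])
E = np.zeros(3)
v0 = np.array([4.0e5, 0.0, 0.0])
omega = Q_M * 3e-9                     # gyrofrequency
dt = (2 * np.pi / omega) / 200         # 200 steps per gyroperiod
traj, v_end = boris_push(np.zeros(3), v0.copy(), E, B, dt, steps=200)
```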
Utilizing computer models for optimizing classroom acoustics
NASA Astrophysics Data System (ADS)
Hinckley, Jennifer M.; Rosenberg, Carl J.
2002-05-01
The acoustical conditions in a classroom play an integral role in establishing an ideal learning environment. Speech intelligibility is dependent on many factors, including speech loudness, room finishes, and background noise levels. The goal of this investigation was to use computer modeling techniques to study the effect of acoustical conditions on speech intelligibility in a classroom. This study focused on a simulated classroom which was generated using the CATT-acoustic computer modeling program. The computer was utilized as an analytical tool in an effort to optimize speech intelligibility in a typical classroom environment. The factors that were focused on were reverberation time, location of absorptive materials, and background noise levels. Speech intelligibility was measured with the Rapid Speech Transmission Index (RASTI) method.
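The reverberation-time side of such a study can be sketched with the Sabine equation, RT60 = 0.161 V / A, where A sums area times absorption coefficient over all surfaces. The room dimensions and coefficients below are illustrative assumptions, not values from the CATT-Acoustic study.

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Sabine reverberation time: RT60 = 0.161 * V / A (SI units),
    where A is total absorption (sum of area * absorption coefficient)."""
    A = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / A

# Hypothetical 9 m x 7 m x 3 m classroom (all coefficients illustrative)
surfaces = [
    (9 * 7, 0.70),            # acoustical ceiling tile
    (9 * 7, 0.05),            # vinyl floor
    (2 * (9 + 7) * 3, 0.05),  # painted gypsum walls
]
rt60 = sabine_rt60(9 * 7 * 3, surfaces)
```

Moving the absorptive material (e.g. from ceiling to walls) changes A little under Sabine but changes early reflections substantially, which is why ray-tracing models and RASTI are needed beyond this first-order estimate.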
Global detailed geoid computation and model analysis
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Vincent, S.
1974-01-01
Comparisons and analyses were carried out through the use of detailed gravimetric geoids which we have computed by combining models with a set of 26,000 1 deg x 1 deg mean free air gravity anomalies. The accuracy of the detailed gravimetric geoid computed using the most recent Goddard earth model (GEM-6) in conjunction with the set of 1 deg x 1 deg mean free air gravity anomalies is assessed at + or - 2 meters on the continents of North America, Europe, and Australia, 2 to 5 meters in the Northeast Pacific and North Atlantic areas, and 5 to 10 meters in other areas where surface gravity data are sparse. The R.M.S. differences between this detailed geoid and the detailed geoids computed using the other satellite gravity fields in conjunction with the same set of surface data range from 3 to 7 meters.
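The R.M.S. difference statistic used for these comparisons is straightforward to compute on gridded geoid heights. The grids below are synthetic illustrations, not GEM-6 values.

```python
import numpy as np

def rms_difference(geoid_a, geoid_b):
    """R.M.S. of the pointwise difference between two geoid height grids (meters)."""
    d = np.asarray(geoid_a) - np.asarray(geoid_b)
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative 1 deg x 1 deg global grids of geoid undulations (synthetic)
rng = np.random.default_rng(4)
lat = np.linspace(0, np.pi, 180)[:, None]
lon = np.linspace(0, 2 * np.pi, 360)[None, :]
base = 30 * np.sin(lat) * np.cos(lon)                 # smooth synthetic geoid
other = base + rng.normal(0, 4.0, base.shape)          # a field differing by ~4 m RMS
rms = rms_difference(base, other)
```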
Integrating interactive computational modeling in biology curricula.
Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A
2015-03-01
While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
Computational models for synthetic marine infrared clutter
NASA Astrophysics Data System (ADS)
Constantikes, Kim T.; Zysnarski, Adam H.
1996-06-01
The next generation of ship defense missiles will need to engage stealthy, passive, sea-skimming missiles. Detection and guidance will occur against a background of sea surface and horizon which can present significant clutter problems for infrared seekers, particularly when targets are comparatively dim. We need a variety of sea clutter models: statistical image models for signal processing algorithm design, clutter occurrence models for systems effectiveness assessment, and constructive image models for synthesizing very large field-of-view (FOV) images with high spatial and temporal resolution. We have implemented and tested such a constructive model. First principle models of water waves and light transport provide a computationally intensive clutter model implemented as a raytracer. Our models include sea, sky, and solar radiance; reflectance; attenuating atmospheres; constructive solid geometry targets; target and water wave dynamics; and simple sensor image formation.
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
Optical Computing Based on Neuronal Models
1988-05-01
walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled ... collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is ... with architectures, implementations, and programming; and materials research is called for. Our future research in neurodynamics will continue to ...
Computer Modelling of Photochemical Smog Formation
ERIC Educational Resources Information Center
Huebert, Barry J.
1974-01-01
Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)
Applications of computational modeling in ballistics
NASA Technical Reports Server (NTRS)
Sturek, Walter B.
1987-01-01
The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas was traditionally highly dependent on experimental testing. A considerable emphasis was placed on the development of computational modeling to augment the experimental testing in the development cycle; however, the impact of the computational modeling to date has been modest. With the availability of supercomputer computational resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved. An attempt was made to give some information as to the degree of success achieved and to indicate the areas of greatest need.
Informing Mechanistic Toxicology with Computational Molecular Models
Computational molecular models of chemicals interacting with biomolecular targets provide toxicologists a valuable, affordable, and sustainable source of in silico molecular-level information that augments, enriches, and complements in vitro and in vivo effo...
Evaluating computational models of cholesterol metabolism.
Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K
2015-10-01
Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered. Yet important gaps in our knowledge remain. For example, knowledge on intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism making use of ordinary differential equations and address whether they use assumptions and make predictions in line with current knowledge on cholesterol homeostasis. Having studied the results described by the authors, we have also tested their models. This was done primarily by testing the effect of statin treatment in each model. Ten out of eleven models tested made assumptions in line with current knowledge of cholesterol metabolism. Three of these ten models made correct predictions, i.e., predicted a decrease in plasma total and LDL cholesterol or an increased uptake of LDL upon statin treatment. In conclusion, few models of cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles and welcome the increased understanding of cholesterol metabolism these are likely to bring.
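The kind of functional test applied here can be illustrated with a deliberately tiny one-compartment ODE (our toy construction, far simpler than any of the reviewed models): statins are represented as inhibited synthesis plus upregulated clearance, and the model "passes" if plasma LDL falls. All rate constants are illustrative.

```python
def plasma_ldl(synthesis, clearance, days=500, dt=0.01):
    """Toy one-compartment model: dC/dt = synthesis - clearance * C,
    integrated with forward Euler. Steady state approaches synthesis / clearance."""
    C = 0.0
    for _ in range(int(days / dt)):
        C += dt * (synthesis - clearance * C)
    return C

baseline = plasma_ldl(synthesis=3.0, clearance=0.02)
# Statin effect modeled as 45% synthesis inhibition and 30% clearance upregulation
on_statin = plasma_ldl(synthesis=3.0 * 0.55, clearance=0.02 * 1.3)
```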
Research and Development Project Prioritization - Computer Model
1980-04-01
... for aggregation of multiple criteria and rank-ordered requirements for products' priorities. Reduced-length lists (down to ... Quantities of 50 and 51, respectively, were reduced by one each, without loss of generality, to permit model computation. ... contrived examples from the literature ... generally failed to find aggregation methods that ... The model then was demonstrated for an extensive R & D ...
The 3-dimensional construction of the Rae craton, central Canada
NASA Astrophysics Data System (ADS)
Snyder, David B.; Craven, James A.; Pilkington, Mark; Hillier, Michael J.
2015-10-01
Reconstruction of the 3-dimensional tectonic assembly of early continents, first as Archean cratons and then Proterozoic shields, remains poorly understood. In this paper, all readily available geophysical and geochemical data are assembled in a 3-D model with the most accurate bedrock geology in order to understand better the geometry of major structures within the Rae craton of central Canada. Analysis of geophysical observations of gravity and seismic wave speed variations revealed several lithospheric-scale discontinuities in physical properties. Where these discontinuities project upward to correlate with mapped upper crustal geological structures, the discontinuities can be interpreted as shear zones. Radiometric dating of xenoliths provides estimates of rock types and ages at depth beneath sparse kimberlite occurrences. These ages can also be correlated to surface rocks. The 3.6-2.6 Ga Rae craton comprises at least three smaller continental terranes, which "cratonized" during a granitic bloom. Cratonization probably represents final differentiation of early crust into a relatively homogeneous, uniformly thin (35-42 km), tonalite-trondhjemite-granodiorite crust with pyroxenite layers near the Moho. The peak thermotectonic event at 1.86-1.7 Ga was associated with the Hudsonian orogeny that assembled several cratons and lesser continental blocks into the Canadian Shield using a number of southeast-dipping megathrusts. This orogeny metasomatized, mineralized, and recrystallized mantle and lower crustal rocks, apparently making them more conductive by introducing or concentrating sulfides or graphite. Little evidence exists of thin slabs similar to modern oceanic lithosphere in this Precambrian construction history whereas underthrusting and wedging of continental lithosphere is inferred from multiple dipping discontinuities.
Differential Cross Section Kinematics for 3-dimensional Transport Codes
NASA Technical Reports Server (NTRS)
Norbury, John W.; Dick, Frank
2008-01-01
In support of the development of 3-dimensional transport codes, this paper derives the relevant relativistic particle kinematic theory. Formulas are given for invariant, spectral and angular distributions in both the lab (spacecraft) and center of momentum frames, for collisions involving 2-, 3- and n-body final states.
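The frame bookkeeping underlying such formulas rests on Lorentz invariants; for example, the center-of-momentum energy of a fixed-target collision follows from s = m_proj^2 + m_targ^2 + 2 m_targ E_lab. A minimal sketch (natural units, GeV; the beam energy is an arbitrary illustration):

```python
import math

def invariant_mass(particles):
    """sqrt(s) for a list of (E, px, py, pz) four-vectors (natural units, GeV)."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

def cm_energy_fixed_target(E_lab, m_proj, m_targ):
    """CM energy for a beam of total lab energy E_lab hitting a target at rest:
    s = m_proj^2 + m_targ^2 + 2 * m_targ * E_lab."""
    return math.sqrt(m_proj**2 + m_targ**2 + 2 * m_targ * E_lab)

m_p = 0.938                                   # proton mass, GeV
p_beam = math.sqrt(10.0**2 - m_p**2)          # momentum of a 10 GeV (total energy) proton
s1 = invariant_mass([(10.0, 0, 0, p_beam), (m_p, 0, 0, 0)])
s2 = cm_energy_fixed_target(10.0, m_p, m_p)
```

Because sqrt(s) is frame-invariant, the two routes must agree, which makes this a convenient unit test for transport-code kinematics.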
Computer Model Of Fragmentation Of Atomic Nuclei
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.
1995-01-01
High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.
Queuing theory models for computer networks
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. The impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed using them despite the lack of fine detail about the network traffic rates, traffic patterns, and the hardware used to implement the networks. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
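The building block of such spreadsheet models is typically the M/M/1 queue, whose average behavior has closed-form expressions. A minimal sketch (the traffic numbers are illustrative, not Ames network data):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Average behavior of an M/M/1 queue: utilization rho, mean number in
    system L, and mean response time W (Little's law: L = lambda * W)."""
    rho = arrival_rate / service_rate
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)                       # mean number in system
    W = 1.0 / (service_rate - arrival_rate)   # mean response time
    return rho, L, W

# A channel serving 125 packets/s with 100 packets/s of offered load
rho, L, W = mm1_metrics(100.0, 125.0)
```

Chaining such formulas per LAN segment and backbone channel is exactly the kind of coarse average-response model a spreadsheet can carry.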
A computational model of bleb formation
Strychalski, Wanda; Guy, Robert D.
2013-01-01
Blebbing occurs when the cytoskeleton detaches from the cell membrane, resulting in the pressure-driven flow of cytosol towards the area of detachment and the local expansion of the cell membrane. Recent interest has focused on cells that use blebbing for migrating through 3D fibrous matrices. In particular, metastatic cancer cells have been shown to use blebs for motility. A dynamic computational model of the cell is presented that includes mechanics of and the interactions between the intracellular fluid, the actin cortex and the cell membrane. The computational model is used to explore the relative roles in bleb formation time of cytoplasmic viscosity and drag between the cortex and the cytosol. A regime of values for the drag coefficient and cytoplasmic viscosity values that match bleb formation timescales is presented. The model results are then used to predict the Darcy permeability and the volume fraction of the cortex. PMID:22294562
Computational Modeling of Vortex Generators for Turbomachinery
NASA Technical Reports Server (NTRS)
Chima, R. V.
2002-01-01
In this work computational models were developed and used to investigate applications of vortex generators (VGs) to turbomachinery. The work was aimed at increasing the efficiency of compressor components designed for the NASA Ultra Efficient Engine Technology (UEET) program. Initial calculations were used to investigate the physical behavior of VGs. A parametric study of the effects of VG height was done using 3-D calculations of isolated VGs. A body force model was developed to simulate the effects of VGs without requiring complicated grids. The model was calibrated using 2-D calculations of the VG vanes and was validated using the 3-D results. Then three applications of VGs to a compressor rotor and stator were investigated: 1) The results of the 3-D calculations were used to simulate the use of small casing VGs used to generate rotor preswirl or counterswirl. Computed performance maps were used to evaluate the effects of VGs. 2) The body force model was used to simulate large part-span splitters on the casing ahead of the stator. Computed loss buckets showed the effects of the VGs. 3) The body force model was also used to investigate the use of tiny VGs on the stator suction surface for controlling secondary flows. Near-surface particle traces and exit loss profiles were used to evaluate the effects of the VGs.
Concepts to accelerate water balance model computation
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Casper, Markus; Gemmar, Peter
2010-05-01
Computation time of water balance models has decreased with the increasing performance of CPUs over the last decades. Often, these gains have been used to enhance the models, e.g. by increasing spatial resolution or by using smaller simulation time steps. During the last few years, CPU development has tended to focus on strong multi-core concepts rather than on simply being generally faster. Additionally, computer clusters or even computer clouds have become much more commonly available. All these facts again extend our degrees of freedom in simulating water balance models - if the models are able to use the computer infrastructure efficiently. In the following, we present concepts to optimize especially repeated runs, and we discuss concepts of parallel computing more generally. Surveyed model: In our examinations, we focused on the water balance model LARSIM. In this model, the catchment is subdivided into elements, each of which represents a certain section of a river and its contributory area. Each element is again subdivided into single compartments of homogeneous land use. During the simulation, the relevant hydrological processes are simulated individually for each compartment. The simulated runoff of all compartments leads into the river channel of the corresponding element. Finally, channel routing is simulated for all elements. Optimizing repeated runs: During a typical simulation, several input files have to be read before the simulation starts: the model structure, the initial model state, and meteorological input files. Furthermore, some calculations have to be performed, such as interpolating meteorological values. Thus, e.g., an application of Monte Carlo methods will typically use the following algorithm: 1) choose parameters, 2) set parameters in control files, 3) run model, 4) save result, 5) repeat from step 1. Obviously, the third step always includes the previously mentioned steps of reading and preprocessing. Consequently, the model can be
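The repeated-runs optimization described above, hoisting input reading and preprocessing out of the Monte Carlo loop, can be sketched as follows (`run_model` and the parameter choice are placeholders for illustration, not LARSIM code):

```python
import random

def run_model(params, static_inputs):
    """Placeholder for the actual water balance simulation step."""
    return sum(static_inputs) * params["k"]

def monte_carlo(n_runs, static_inputs):
    """Read and preprocess inputs ONCE (passed in as static_inputs),
    then reuse them for every run, instead of re-reading model
    structure, initial state, and meteorology inside each run."""
    results = []
    for _ in range(n_runs):
        params = {"k": random.uniform(0.5, 1.5)}          # 1) choose parameters
        results.append(run_model(params, static_inputs))  # 3) run, 4) save result
    return results
```

The loop body then contains only the parameter draw and the simulation itself, which is the speedup the authors target.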
Computational modeling of foveal target detection.
Witus, Gary; Ellis, R Darin
2003-01-01
This paper presents the VDM2000, a computational model of target detection designed for use in military developmental test and evaluation settings. The model integrates research results from the fields of early vision, object recognition, and psychophysics. The VDM2000 is image based and provides a criterion-independent measure of target conspicuity, referred to as the vehicle metric (VM). A large data set of human responses to photographs of military vehicles in a field setting was used to validate the model. The VM adjusted by a single calibration parameter accounts for approximately 80% of the variance in the validation data. The primary application of this model is to predict detection of military targets in daylight with the unaided eye. The model also has application to target detection prediction using infrared night vision systems. The model has potential as a tool to evaluate the visual properties of more general task settings.
Molecular Sieve Bench Testing and Computer Modeling
NASA Technical Reports Server (NTRS)
Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.
1995-01-01
The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to changes in the parameters which influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with linear driving force for 5X sorbent and pore diffusion for silica gel is then applied to test data. A more complex model, a non-Darcian (two-dimensional) model, has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
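The linear driving force (LDF) approximation mentioned for the 5X sorbent reduces intraparticle mass transfer to a single rate equation, dq/dt = k(q* - q), where q* is the equilibrium loading and k the mass transfer coefficient the bench tests estimate. A minimal time-stepping sketch (all coefficient values are illustrative, not from the test data):

```python
def ldf_uptake(q0, q_eq, k, dt, steps):
    """Integrate the linear driving force model dq/dt = k * (q_eq - q)
    with explicit Euler steps; returns the sorbent loading history."""
    q, profile = q0, []
    for _ in range(steps):
        q += dt * k * (q_eq - q)   # uptake rate proportional to distance from equilibrium
        profile.append(q)
    return profile
```

Starting from a clean bed, the loading rises exponentially toward q*, which is the characteristic breakthrough behavior the finite difference column models build on.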
Computational Modeling of Pollution Transmission in Rivers
NASA Astrophysics Data System (ADS)
Parsaie, Abbas; Haghiabi, Amir Hamzeh
2015-08-01
Modeling of river pollution contributes to better management of water quality, and this leads to the improvement of human health. The advection-dispersion equation (ADE) is the governing equation for pollutant transmission in a river. Modeling pollution transmission involves numerically solving the ADE and estimating the longitudinal dispersion coefficient (LDC). In this paper, a novel approach is proposed for numerical modeling of pollution transmission in rivers: the finite volume method is used as the numerical solver, and an artificial neural network (ANN) is used as a soft computing technique, together in one simulation. In this approach, the LDC predicted by the ANN is used as an input parameter for the numerical solution of the ADE. To validate the model performance on a real engineering problem, pollutant transmission in the Severn River was simulated. Comparison of the final model results with measured data from the Severn River showed that the model performs well. Predicting the LDC with the ANN model significantly improved the accuracy of the computer simulation of pollution transmission in the river.
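A minimal sketch of the numerical half of this approach: one explicit update step of the 1-D advection-dispersion equation dC/dt + u dC/dx = D d²C/dx², on a uniform grid with upwind advection and central dispersion, where D stands in for the ANN-predicted LDC (the grid, scheme, and values are assumptions for illustration, not the paper's finite-volume formulation):

```python
import numpy as np

def ade_step(c, u, D, dx, dt):
    """One explicit step of the 1-D advection-dispersion equation.
    Upwind differencing for advection (assumes u >= 0), central
    differencing for dispersion; boundary cells are held fixed."""
    cn = c.copy()
    adv = -u * (c[1:-1] - c[:-2]) / dx                       # advective transport
    disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2      # dispersive spreading
    cn[1:-1] = c[1:-1] + dt * (adv + disp)
    return cn
```

Repeated steps spread and translate an initial pollutant pulse downstream; stability requires dt small enough that u·dt/dx and D·dt/dx² stay within the explicit-scheme limits.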
Computational continuum modeling of solder interconnects: Applications
Burchett, S.N.; Neilsen, M.K.; Frear, D.R.
1997-04-01
The most commonly used solder for electrical interconnections in electronic packages is the near-eutectic 60Sn-40Pb alloy. This alloy has a number of processing advantages (a suitable melting point of 183°C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes the prediction of solder joint lifetime complex. A viscoplastic, microstructure-dependent constitutive model for solder, which is currently under development, was implemented into a finite element code. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of a mini ball grid array solder interconnect. In this paper, the constitutive model is first briefly discussed. The results of computational studies to determine the thermomechanical response of a mini ball grid array solder interconnect are then presented.
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.
1993-01-01
Over the past several years, it has been the primary goal of this grant to design and implement software to be used in the conceptual design of aerospace vehicles. The work carried out under this grant was performed jointly with members of the Vehicle Analysis Branch (VAB) of NASA LaRC, Computer Sciences Corp., and Vigyan Corp. This has resulted in the development of several packages and design studies. Primary among these are the interactive geometric modeling tool, the Solid Modeling Aerospace Research Tool (smart), and the integration and execution tools provided by the Environment for Application Software Integration and Execution (EASIE). In addition, it is the purpose of the personnel of this grant to provide consultation in the areas of structural design, algorithm development, and software development and implementation, particularly in the areas of computer aided design, geometric surface representation, and parallel algorithms.
Computational modeling of material aging effects
Fang, H.E.
1996-07-01
Progress is being made in our efforts to develop computational models for predicting material property changes in weapon components due to aging. The first version of a two-dimensional lattice code for modeling thermomechanical fatigue, such as has been observed in solder joints on electronic components removed from the stockpile, has been written and tested. The code does a good qualitative job of presenting intergranular and/or transgranular cracking in a polycrystalline material when under thermomechanical deformation. The current progress is an encouraging start for our long term effort to develop multi-level simulation capabilities, with the technology of high performance computing, for predicting age-related effects on the reliability of weapons.
A computer model of auditory stream segregation.
Beauvois, M W; Meddis, R
1991-08-01
A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
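The "leaky integration" principle described above, channel excitation accumulating with input and dissipating over time, is a first-order low-pass filter; a minimal sketch of one channel (the step size and time constant are illustrative, not values from the model):

```python
def leaky_integrate(inputs, dt=0.001, tau=0.04):
    """Leaky integration of one channel's excitation: activity y is
    driven toward the instantaneous input x and decays ('leaks')
    with time constant tau when the input falls away."""
    y, trace = 0.0, []
    for x in inputs:
        y += dt * (x - y) / tau   # accumulate toward x, dissipate toward 0
        trace.append(y)
    return trace
```

Under a sustained input the excitation builds toward the input level; when the input stops, it decays, which is the mechanism that lets the attentional stage favor a recently active channel.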
Computer modeling and simulation of human movement.
Pandy, M G
2001-01-01
Recent interest in using modeling and simulation to study movement is driven by the belief that this approach can provide insight into how the nervous system and muscles interact to produce coordinated motion of the body parts. With the computational resources available today, large-scale models of the body can be used to produce realistic simulations of movement that are an order of magnitude more complex than those produced just 10 years ago. This chapter reviews how the structure of the neuromusculoskeletal system is commonly represented in a multijoint model of movement, how modeling may be combined with optimization theory to simulate the dynamics of a motor task, and how model output can be analyzed to describe and explain muscle function. Some results obtained from simulations of jumping, pedaling, and walking are also reviewed to illustrate the approach.
Multiscale Computational Models of Complex Biological Systems
Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.
2014-01-01
Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247
Wild Fire Computer Model Helps Firefighters
Canfield, Jesse
2012-09-04
A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.
Computational Biology: Modeling Chronic Renal Allograft Injury
Stegall, Mark D.; Borrows, Richard
2015-01-01
New approaches are needed to develop more effective interventions to prevent long-term rejection of organ allografts. Computational biology provides a powerful tool to assess the large amount of complex data that is generated in longitudinal studies in this area. This manuscript outlines how our two groups are using mathematical modeling to analyze predictors of graft loss using both clinical and experimental data and how we plan to expand this approach to investigate specific mechanisms of chronic renal allograft injury. PMID:26284070
AMAR: A Computational Model of Autosegmental Phonology
1993-10-01
...the 8th International Joint Conference on Artificial Intelligence, 683-5. Koskenniemi, K. 1984. A general computational model for word-form recognition... Massachusetts Institute of Technology Artificial Intelligence Laboratory, AI-TR 1450, 545 Technology Square, Cambridge, Massachusetts 02139... To give the reader a feel for the workings of AMAR, this chapter will begin with a very simple example based on an artificial tone language with only t
Computed structures of polyimides model compounds
NASA Technical Reports Server (NTRS)
Tai, H.; Phillips, D. H.
1990-01-01
Using a semi-empirical approach, a computer study was made of 8 model compounds of polyimides. The compounds represent subunits from which NASA Langley Research Center has successfully synthesized polymers for aerospace high performance material application, including one of the most promising, LARC-TPI polymer. Three-dimensional graphic display as well as important molecular structure data pertaining to these 8 compounds are obtained.
Lambros, Maria P.; Kondapalli, Lavanya; Parsa, Cyrus; Mulamalla, Hari Chandana; Orlando, Robert; Pon, Doreen; Huang, Ying; Chow, Moses S. S.
2015-01-01
Qingre Liyan decoction (QYD), a Traditional Chinese medicine, and N-acetyl cysteine (NAC) have been used to prevent radiation induced mucositis. This work evaluates the protective mechanisms of QYD, NAC, and their combination (NAC-QYD) at the cellular and transcriptional level. A validated organotypic model of oral mucosa consisting of a three-dimensional (3D) cell tissue-culture of primary human keratinocytes exposed to X-ray irradiation was used. Six hours after the irradiation, the tissues were evaluated by hematoxylin and eosin (H and E) staining and a TUNEL assay to assess histopathology and apoptosis, respectively. Total RNA was extracted and used for microarray gene expression profiling. The tissue-cultures treated with NAC-QYD preserved their integrity and showed no apoptosis. Microarray results revealed that the NAC-QYD caused the upregulation of genes encoding metallothioneins, HMOX1, and other components of the Nrf2 pathway, which protects against oxidative stress. DNA repair genes (XCP, GADD45G, RAD9, and XRCC1), protective genes (EGFR and PPARD), and genes of the NFκB pathway were upregulated. Finally, tissue-cultures treated prophylactically with NAC-QYD showed significant downregulation of apoptosis, cytokine, and chemokine genes, and constrained damage-associated molecular patterns (DAMPs). NAC-QYD treatment involves the protective effect of Nrf2, NFκB, and DNA repair factors. PMID:25705238
NASA Astrophysics Data System (ADS)
Capar, Laure
2013-04-01
Within the framework of the transnational project GeoMol, geophysical and geological information on the entire Molasse Basin and on the Po Basin is gathered to build consistent cross-border 3D geological models based on borehole evidence and seismic data. Benefiting from important progress in seismic processing, these new models will provide some answers to various questions regarding the usage of subsurface resources, such as geothermal energy, CO2 and gas storage, and oil and gas production, and will support decision-making by national and local administrations as well as by industries. More than 28,000 km of 2D seismic lines are compiled, reprocessed, and harmonized. This work faces various problems, such as the vertical drop of more than 700 meters between the west and east of the Molasse Basin (and, to a lesser extent, in the Po Plain), the heterogeneities of the substratum, the large disparities between the periods and parameters of seismic acquisition, and, depending on their availability, the use of two types of seismic data, raw and processed. The main challenge is to harmonize all lines at the same reference level, amplitude, and step of signal processing from France to Austria, spanning more than 1000 km, to avoid misfits at crossing points between seismic lines and artifacts at the country borders, facilitating the interpretation of the various geological layers in the Molasse Basin and Po Basin. A generalized stratigraphic column for the two basins is set up, representing all geological layers relevant to subsurface usage. This stratigraphy constitutes the harmonized framework for seismic reprocessing. In general, processed seismic data is available on paper at the stack stage, and the information needed to take these seismic lines to the final stage of processing, the migration step, is the datum plane and the replacement velocity. However, several datum planes and replacement velocities were used during previous processing projects. Our processing sequence is to
Computational fluid dynamics modelling in cardiovascular medicine
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
Fast Apriori-based Graph Mining Algorithm and application to 3-dimensional Structure Analysis
NASA Astrophysics Data System (ADS)
Nishimura, Yoshio; Washio, Takashi; Yoshida, Tetsuya; Motoda, Hiroshi; Inokuchi, Akihiro; Okada, Takashi
The Apriori-based Graph Mining (AGM) algorithm efficiently extracts all the subgraph patterns which frequently appear in graph structured data. The algorithm can deal with general graph structured data with multiple labels of vertices and edges, and is capable of analyzing the topological structure of graphs. In this paper, we propose a new method to analyze graph structured data with 3-dimensional coordinates by AGM. In this method the distance between each pair of vertices of a graph is calculated and added to the edge label so that AGM can handle 3-dimensional graph structured data. One problem in our approach is that the number of edge labels increases, which results in an increase in the computational time to extract subgraph patterns. To alleviate this problem, we also propose a faster AGM algorithm that adds an extra constraint to reduce the number of generated candidates when seeking frequent subgraphs. Chemical compounds with dopamine antagonist activity in the MDDR database were analyzed by AGM to characterize their 3-dimensional chemical structure and its correlation with physiological activity.
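The distance-labeling step can be sketched as follows: compute the Euclidean distance between the endpoints of each edge and fold a discretized version of it into the edge label, so a frequent-subgraph miner such as AGM sees geometry as just another label (the bin width, data layout, and function name are illustrative, not from the paper):

```python
import math

def label_edges_with_distance(coords, edges, bin_width=0.5):
    """Augment each edge label with a discretized inter-vertex distance.
    coords: vertex id -> (x, y, z); edges: list of (i, j, label).
    Binning keeps the label alphabet finite, at the cost of more
    distinct edge labels (the problem the faster AGM variant targets)."""
    labeled = []
    for (i, j, label) in edges:
        d = math.dist(coords[i], coords[j])       # Euclidean distance in 3D
        labeled.append((i, j, (label, round(d / bin_width))))
    return labeled
```

Two edges then match as the "same" edge only if both their original labels and their distance bins agree, which is how 3-D structure constrains the mined patterns.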
COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS
Ibrahim, Essam A
2013-01-09
Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, an exit, connecting elbows, and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe heights to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are used in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computational meshes, and solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.
Computational fire modeling for aircraft fire research
Nicolette, V.F.
1996-11-01
This report summarizes work performed by Sandia National Laboratories for the Federal Aviation Administration. The technical issues involved in fire modeling for aircraft fire research are identified, as well as computational fire tools for addressing those issues and the research which is needed to advance those tools in order to address long-range needs. Fire field models are briefly reviewed, and the VULCAN model is selected for further evaluation. Calculations are performed with VULCAN to demonstrate its applicability to aircraft fire problems, and also to gain insight into the complex problem of fires involving aircraft. Simulations are conducted to investigate the influence of fire on an aircraft in a cross-wind. The interaction of the fuselage, wind, fire, and ground plane is investigated. Calculations are also performed utilizing a large eddy simulation (LES) capability to describe the large-scale turbulence instead of the more common k-{epsilon} turbulence model. Additional simulations are performed to investigate the static pressure and velocity distributions around a fuselage in a cross-wind, with and without fire. The results of these simulations provide qualitative insight into the complex interaction of a fuselage, fire, wind, and ground plane. Reasonable quantitative agreement is obtained in the few cases for which data or other modeling results exist. Finally, VULCAN is used to quantify the impact of simplifying assumptions inherent in a risk assessment compatible fire model developed for open pool fire environments. The assumptions are seen to be of minor importance for the particular problem analyzed. This work demonstrates the utility of using a fire field model for assessing the limitations of simplified fire models. In conclusion, the application of computational fire modeling tools herein provides both qualitative and quantitative insights into the complex problem of aircraft in fires.
Computational acoustic modeling of cetacean vocalizations
NASA Astrophysics Data System (ADS)
Gurevich, Michael Dixon
A framework for computational acoustic modeling of hypothetical vocal production mechanisms in cetaceans is presented. As a specific example, a model of a proposed source in the larynx of odontocetes is developed. Whales and dolphins generate a broad range of vocal sounds, but the exact mechanisms they use are not conclusively understood. In the fifty years since it has become widely accepted that whales can and do make sound, how they do so has remained particularly confounding. Cetaceans' highly divergent respiratory anatomy, along with the difficulty of internal observation during vocalization have contributed to this uncertainty. A variety of acoustical, morphological, ethological and physiological evidence has led to conflicting and often disputed theories of the locations and mechanisms of cetaceans' sound sources. Computational acoustic modeling has been used to create real-time parametric models of musical instruments and the human voice. These techniques can be applied to cetacean vocalizations to help better understand the nature and function of these sounds. Extensive studies of odontocete laryngeal morphology have revealed vocal folds that are consistently similar to a known but poorly understood acoustic source, the ribbon reed. A parametric computational model of the ribbon reed is developed, based on simplified geometrical, mechanical and fluid models drawn from the human voice literature. The physical parameters of the ribbon reed model are then adapted to those of the odontocete larynx. With reasonable estimates of real physical parameters, both the ribbon reed and odontocete larynx models produce sounds that are perceptually similar to their real-world counterparts, and both respond realistically under varying control conditions. Comparisons of acoustic features of the real-world and synthetic systems show a number of consistencies. While this does not on its own prove that either model is conclusively an accurate description of the source, it
Computational Modeling in Structural Materials Processing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
High temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in aerospace, automotive, and machine tool industries and in high speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, deposition inside cavities and on complex objects, and infiltration within preforms, called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and scale-up for large scale manufacturing is limited. In this regard, computational modeling of the processes is valuable since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.
Computational Fluid Dynamics Modeling of Bacillus anthracis ...
Journal Article Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predict
3-dimensional (3D) fabricated polymer based drug delivery systems.
Moulton, Simon E; Wallace, Gordon G
2014-11-10
Drug delivery from 3-dimensional (3D) structures is a rapidly growing area of research. It is essential to achieve structures wherein drug stability is ensured, the drug loading capacity is appropriate and the desired controlled release profile can be attained. Attention must also be paid to the development of appropriate fabrication machinery that allows 3D drug delivery systems (DDS) to be produced in a simple, reliable and reproducible manner. The range of fabrication methods currently being used to form 3D DDSs include electrospinning (solution and melt), wet-spinning and printing (3-dimensional). The use of these techniques enables production of DDSs from the macro-scale down to the nano-scale. This article reviews progress in these fabrication techniques to form DDSs that possess desirable drug delivery kinetics for a wide range of applications.
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
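The central claim above, exponentially fast convergence to a stationary distribution from any initial state, can be illustrated with a toy discrete Markov chain. The circuits in the paper are far richer; this sketch only demonstrates the convergence property itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random ergodic transition matrix over 4 "network states": strictly
# positive entries guarantee irreducibility and aperiodicity.
P = rng.random((4, 4)) + 0.1
P /= P.sum(axis=1, keepdims=True)

def evolve(P, init, t):
    """Push an initial distribution (row vector) forward t steps."""
    d = np.asarray(init, dtype=float)
    for _ in range(t):
        d = d @ P
    return d

# Two opposite initial conditions converge to the same stationary law.
d_a = evolve(P, [1.0, 0.0, 0.0, 0.0], 500)
d_b = evolve(P, [0.0, 0.0, 0.0, 1.0], 500)
```

After 500 steps the two distributions agree to numerical precision and are fixed points of `P`, i.e. the chain's unique stationary distribution, mirroring the paper's claim for (non-reversible) stochastic network dynamics.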
Computer model of in situ leaching hydrology
Not Available
1981-05-01
A computer program developed by the US Bureau of Mines simulates the hydrologic activity associated with in situ mining. Its purpose is to determine the site-specific flow behavior of leachants and groundwater during the development, production, and restoration phases of an in situ leaching operation. Model capabilities include arbitrary well patterns and pumping schedules, partially penetrating well screens, directionally anisotropic permeability, and natural groundwater flow, in either leaky or nonleaky confined aquifers and under steady-state or time-dependent flow conditions. In addition to extensive laboratory testing, the Twin Cities Research Center has closely monitored the application of this model at three different mine sites; at each site, the solution breakthrough time and the hydraulic head at observation wells were used to tune the model. The model was then used satisfactorily to assess the suitability of various well configurations and pumping schedules, in terms of fluid dispersion within the ore pod and fluid excursions into the surrounding aquifer. (JMT)
Computational Statistical Methods for Social Network Models
Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael
2013-01-01
We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720
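As a minimal illustration of the ERGM family the review covers: in the special case where the only sufficient statistic is the edge count, dyads are independent and exact sampling is trivial; richer terms (triangles, k-stars) generally require the MCMC methods the review discusses. This sketch is a pedagogical simplification, not code from the paper:

```python
import math, random

def sample_edge_ergm(n, theta, seed=42):
    """Draw a graph from the edge-count-only ERGM, P(G) ∝ exp(theta * E(G)).
    For this single statistic each dyad is an independent Bernoulli trial
    with p = logistic(theta), so exact sampling is possible; dependence
    terms such as triangles would require MCMC instead."""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-theta))
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

g = sample_edge_ergm(200, theta=-2.0)
density = len(g) / (200 * 199 / 2)   # expected ≈ logistic(-2) ≈ 0.119
```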
A neural computational model of incentive salience.
Zhang, Jun; Berridge, Kent C; Tindell, Amy J; Smith, Kyle S; Aldridge, J Wayne
2009-07-01
Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by incorporating
Computer model of tetrahedral amorphous diamond
NASA Astrophysics Data System (ADS)
Djordjević, B. R.; Thorpe, M. F.; Wooten, F.
1995-08-01
We computer generate a model of amorphous diamond using the Wooten-Weaire method, with fourfold coordination everywhere. We investigate two models: one where four-membered rings are allowed and the other where the four-membered rings are forbidden; each model consisting of 4096 atoms. Starting from the perfect diamond crystalline structure, we first randomize the structure by introducing disorder through random bond switches at a sufficiently high temperature. Subsequently, the temperature is reduced in stages, and the topological and geometrical relaxation of the structure takes place using the Keating potential. After a long annealing process, a random network of comparatively low energy is obtained. We calculate the pair distribution function, mean bond angle, rms angular deviation, rms bond length, rms bond-length deviation, and ring statistics for the final relaxed structures. We minimize the total strain energy by adjusting the density of the sample. We compare our results with similar computer-generated models for amorphous silicon, and with experimental measurement of the structure factor for (predominantly tetrahedral) amorphous carbon.
Computational Model of Fluorine-20 Experiment
NASA Astrophysics Data System (ADS)
Chuna, Thomas; Voytas, Paul; George, Elizabeth; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Weisshaar, Dirk; Paulauskas, Stanley; Ban, Gilles; Flechard, Xavier; Lienard, Etienne
2015-10-01
The Conserved Vector Current (CVC) hypothesis of the standard model of the electroweak interaction predicts there is a contribution to the shape of the spectrum in the beta-minus decay of 20F related to a property of the analogous gamma decay of excited 20Ne. To provide a strong test of the CVC hypothesis, a precise measurement of the 20F beta decay spectrum will be taken at the National Superconducting Cyclotron Laboratory. This measurement uses unconventional measurement techniques in that 20F will be implanted directly into a scintillator. As the emitted electrons interact with the detector material, bremsstrahlung interactions occur and the escape of the resultant photons will distort the measured spectrum. Thus, a Monte Carlo simulation has been constructed using EGSnrc radiation transport software. This computational model's intended use is to quantify and correct for distortion in the observed beta spectrum due, primarily, to the aforementioned bremsstrahlung. The focus of this presentation is twofold: the analysis of the computational model itself and the results produced by the model. Wittenberg University.
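A minimal stand-in for one ingredient of such a simulation, drawing electron energies from an allowed beta spectrum shape, is sketched below. The round-number endpoint energy and the omission of the Fermi (Coulomb) function are stated assumptions; the actual analysis uses EGSnrc radiation transport, not this toy sampler:

```python
import math, random

ME = 0.511  # electron rest mass, MeV

def beta_shape(E, Q):
    """Unnormalized allowed beta spectrum: N(E) ∝ p * W * (Q - E)^2,
    with total energy W = E + ME and momentum p = sqrt(W^2 - ME^2).
    The Fermi function and any shape factor are deliberately omitted."""
    if E <= 0.0 or E >= Q:
        return 0.0
    W = E + ME
    return math.sqrt(W * W - ME * ME) * W * (Q - E) ** 2

def sample_energies(Q, n, seed=1):
    """Rejection-sample n kinetic energies from the toy spectrum."""
    rng = random.Random(seed)
    # envelope from a coarse grid scan, with a 5% safety margin
    peak = 1.05 * max(beta_shape(Q * k / 1000.0, Q) for k in range(1, 1000))
    out = []
    while len(out) < n:
        E = rng.uniform(0.0, Q)
        if rng.uniform(0.0, peak) < beta_shape(E, Q):
            out.append(E)
    return out

# ~5.39 MeV endpoint assumed for the dominant 20F branch (illustrative)
energies = sample_energies(Q=5.39, n=5000)
```

In a fuller model, each sampled electron would then be transported through the scintillator to account for bremsstrahlung photon escape, the distortion the abstract describes.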
Computational Modeling of Distal Protection Filters
Siewiorek, Gail M.; Finol, Ender A.
2010-01-01
Purpose: To quantify the relationship between velocity and pressure gradient in a distal protection filter (DPF) and to determine the feasibility of modeling a DPF as a permeable surface using computational fluid dynamics (CFD). Methods: Four DPFs (Spider RX, FilterWire EZ, RX Accunet, and Emboshield) were deployed in a single tube representing the internal carotid artery (ICA) in an in vitro flow apparatus. Steady flow of a blood-like solution was circulated with a peristaltic pump and compliance chamber. The flow rate through each DPF was measured at physiological pressure gradients, and permeability was calculated using Darcy's equation. Two computational models representing the RX Accunet were created: an actual representation of the filter geometry and a circular permeable surface. The permeability of RX Accunet was assigned to the surface, and CFD simulations were conducted with both models using experimentally derived boundary conditions. Results: Spider RX had the largest permeability while RX Accunet was the least permeable filter. CFD modeling of RX Accunet and the permeable surface resulted in excellent agreement with the experimental measurements of velocity and pressure gradient. However, the permeable surface model did not accurately reproduce local flow patterns near the DPF deployment site. Conclusion: CFD can be used to model DPFs, yielding global flow parameters measured with bench-top experiments. CFD models of the detailed DPF geometry could be used for “virtual testing” of device designs under simulated flow conditions, which would have potential benefits in decreasing the number of design iterations leading up to in vivo testing. PMID:21142490
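The permeability step described above can be sketched as a least-squares fit of flow rate against pressure drop under Darcy's law. All numeric values below are synthetic stand-ins, not the paper's measured DPF data:

```python
import numpy as np

def darcy_permeability(flow_rates, pressure_drops, area, thickness, viscosity):
    """Estimate permeability k from paired (Q, dP) measurements via
    Darcy's law  Q = k * A * dP / (mu * L), by fitting the slope of
    Q versus dP through the origin."""
    Q = np.asarray(flow_rates, dtype=float)
    dP = np.asarray(pressure_drops, dtype=float)
    slope = (dP @ Q) / (dP @ dP)        # least-squares slope through origin
    return slope * viscosity * thickness / area

# Assumed fluid/geometry values (blood-analog viscosity, small filter)
mu, A, L = 3.5e-3, 2e-5, 1e-4           # Pa*s, m^2, m
dP = np.array([100.0, 200.0, 300.0, 400.0])
k_true = 1e-10                          # m^2, used to synthesize the data
Q = k_true * A * dP / (mu * L)
k_est = darcy_permeability(Q, dP, A, L, mu)
```

The estimated `k_est` recovers `k_true` exactly for noise-free data; with real bench-top measurements the same fit averages out measurement noise.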
3-Dimensional modeling of protein structures distinguishes closely related phytoplasmas
Technology Transfer Automated Retrieval System (TEKTRAN)
Phytoplasmas (formerly mycoplasmalike organisms, MLOs) are cell wall-less bacteria that inhabit phloem tissue of plants and are transmitted from plant-to-plant by phloem-feeding insects. Numerous diseases affecting hundreds of plant species in many botanical families are attributed to infections by...
Reiner, Jeffrey S; Brindle, Kathleen A; Khati, Nadia Juliet
2012-12-01
The intrauterine contraceptive device (IUD) is one of the most widely used reversible contraception methods throughout the world. With advancing technology, it has rapidly gained acceptance through its increased effectiveness and practicality compared with more invasive means such as laparoscopic tubal ligation. This pictorial essay will present the IUDs most commonly used today. It will illustrate both normal and abnormal positions of IUDs across all cross-sectional imaging modalities including 2-dimensional ultrasound, computed tomography, and magnetic resonance imaging, with a focus on the emerging role of 3-dimensional ultrasound as the modality of choice.
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...
Computer model for analyzing sodium cold traps
McPheeters, C C; Raue, D J
1983-05-01
A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions. The calculated pressure drop as a function of impurity mass content determines the capacity of the cold trap. The accuracy of the model was checked by comparing calculated mass distributions with experimentally determined mass distributions from literature publications and with results from our own cold trap experiments. The comparisons were excellent in all cases. A parametric study was performed to determine which design variables are most important in maximizing cold trap capacity.
Computational fluid dynamic modelling of cavitation
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.
1993-01-01
Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.
Computational modeling of Li-ion batteries
NASA Astrophysics Data System (ADS)
Grazioli, D.; Magri, M.; Salvadori, A.
2016-12-01
This review focuses on energy storage materials modeling, with particular emphasis on Li-ion batteries. Theoretical and computational analyses not only provide a better understanding of the intimate behavior of actual batteries under operational and extreme conditions, but they may tailor new materials and shape new architectures in a complementary way to experimental approaches. Modeling can therefore play a very valuable role in the design and lifetime prediction of energy storage materials and devices. Batteries are inherently multi-scale, in space and time. The macro-structural characteristic lengths (the thickness of a single cell, for instance) are order of magnitudes larger than the particles that form the microstructure of the porous electrodes, which in turn are scale-separated from interface layers at which atomistic intercalations occur. Multi-physics modeling concepts, methodologies, and simulations at different scales, as well as scale transition strategies proposed in the recent literature are here revised. Finally, computational challenges toward the next generation of Li-ion batteries are discussed.
Computer Model Used to Help Customize Medicine
NASA Technical Reports Server (NTRS)
Stauber, Laurel J.; Veris, Jenise
2001-01-01
Dr. Radhakrishnan, a researcher at the NASA Glenn Research Center, in collaboration with biomedical researchers at the Case Western Reserve University School of Medicine and Rainbow Babies and Children's Hospital, is developing computational models of human physiology that quantitate metabolism and its regulation, in both healthy and pathological states. These models can help predict the effects of stresses or interventions, such as drug therapies, and contribute to the development of customized medicine. Customized medical treatment protocols can give more comprehensive evaluations and lead to more specific and effective treatments for patients, reducing treatment time and cost. Commercial applications of this research may help the pharmaceutical industry identify therapeutic needs and predict drug-drug interactions. Researchers will be able to study human metabolic reactions to particular treatments while in different environments as well as establish more definite blood metabolite concentration ranges in normal and pathological states. These computational models may help NASA provide the background for developing strategies to monitor and safeguard the health of astronauts and civilians in space stations and colonies. They may also help to develop countermeasures that ameliorate the effects of both acute and chronic space exposure.
Dual-code quantum computation model
NASA Astrophysics Data System (ADS)
Choi, Byung-Soo
2015-08-01
In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.
Model-based neuroimaging for cognitive computing.
Poznanski, Roman R
2009-09-01
The continuity of the mind is suggested to mean the continuous spatiotemporal dynamics arising from the electrochemical signature of the neocortex: (i) globally through volume transmission in the gray matter as fields of neural activity, and (ii) locally through extrasynaptic signaling between fine distal dendrites of cortical neurons. If the continuity of dynamical systems across spatiotemporal scales defines a stream of consciousness then intentional metarepresentations as templates of dynamic continuity allow qualia to be semantically mapped during neuroimaging of specific cognitive tasks. When interfaced with a computer, such model-based neuroimaging requiring new mathematics of the brain will begin to decipher higher cognitive operations not possible with existing brain-machine interfaces.
Some queuing network models of computer systems
NASA Technical Reports Server (NTRS)
Herndon, E. S.
1980-01-01
Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
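Closed queuing networks of the kind described are classically solved with exact Mean Value Analysis; a minimal single-class version is sketched below. This is a standard textbook algorithm chosen for illustration, not a transcription of the report's SR-52 programs:

```python
def mva(service_demands, n_customers, think_time=0.0):
    """Exact Mean Value Analysis for a closed, single-class product-form
    network of fixed-rate FCFS devices.  service_demands[k] is the total
    service demand (visit ratio x mean service time) at device k.
    Returns system throughput and per-device mean queue lengths."""
    K = len(service_demands)
    q = [0.0] * K            # mean queue lengths at population n - 1
    X = 0.0
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving customer sees the (n-1)-population
        # queue lengths, giving these per-device residence times.
        R = [service_demands[k] * (1.0 + q[k]) for k in range(K)]
        X = n / (think_time + sum(R))       # system throughput
        q = [X * R[k] for k in range(K)]
    return X, q

# Two devices and 10 interactive terminals with 1 s think time (assumed)
X, q = mva([0.05, 0.02], n_customers=10, think_time=1.0)
```

Throughput is bounded by the bottleneck device (here 1/0.05 = 20 jobs/s), and Little's law holds exactly at every population, which makes the recursion easy to sanity-check.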
Computational modeling of the amphibian thyroid axis ...
In vitro screening of chemicals for bioactivity together with computational modeling are beginning to replace animal toxicity testing in support of chemical risk assessment. To facilitate this transition, an amphibian thyroid axis model has been developed to describe thyroid homeostasis during Xenopus laevis pro-metamorphosis. The model simulates the dynamic relationships of normal thyroid biology throughout this critical period of amphibian development and includes molecular initiating events (MIEs) for thyroid axis disruption to allow in silico simulations of hormone levels following chemical perturbations. One MIE that has been formally described using the adverse outcome pathway (AOP) framework is thyroperoxidase (TPO) inhibition. The goal of this study was to refine the model parameters and validate model predictions by generating dose-response and time-course biochemical data following exposure to three TPO inhibitors, methimazole, 6-propylthiouracil and 2-mercaptobenzothiazole. Key model variables including gland and blood thyroid hormone (TH) levels were compared to empirical values measured in biological samples at 2, 4, 7 and 10 days following initiation of exposure at Nieuwkoop and Faber (NF) stage 54 (onset of pro-metamorphosis). The secondary objective of these studies was to relate depleted blood TH levels to delayed metamorphosis, the adverse apical outcome. Delayed metamorphosis was evaluated by continuing exposure with a subset of larvae until a
Computational social dynamic modeling of group recruitment.
Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.
2004-01-01
The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents that permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.
Park, Jong-Tae; Lee, Jae-Gi; Won, Sung-Yoon; Lee, Sang-Hee; Cha, Jung-Yul; Kim, Hee-Jin
2013-07-01
Masticatory muscles are closely involved in mastication, pronunciation, and swallowing, and it is therefore important to study the specific functions and dynamics of the mandibular and masticatory muscles. However, the shortness of muscle fibers and the diversity of movement directions make it difficult to study and simplify the dynamics of mastication. The purpose of this study was to use 3-dimensional (3D) simulation to observe the functions and movements of each of the masticatory muscles and the mandible while chewing. To simulate the masticatory movement, computed tomographic images were taken from a single Korean volunteer (30-year-old man), and skull image data were reconstructed in 3D (Mimics; Materialise, Leuven, Belgium). The 3D-reconstructed masticatory muscles were then attached to the 3D skull model. The masticatory movements were animated using Maya (Autodesk, San Rafael, CA) based on the mandibular motion path. During unilateral chewing, the mandible was found to move laterally toward the functional side by contracting the contralateral lateral pterygoid and ipsilateral temporalis muscles. During the initial mouth opening, only hinge movement was observed at the temporomandibular joint. During this period, the entire mandible rotated approximately 13 degrees toward the bicondylar horizontal plane. Continued movement of the mandible to full mouth opening occurred simultaneously with sliding and hinge movements, and the mandible rotated approximately 17 degrees toward the center of the mandibular ramus. The described approach can yield data for use in face animation and other simulation systems and for elucidating the functional components related to contraction and relaxation of muscles during mastication.
Teaching 1H NMR Spectrometry Using Computer Modeling.
ERIC Educational Resources Information Center
Habata, Yoichi; Akabori, Sadatoshi
2001-01-01
Molecular modeling by computer is used to display stereochemistry, molecular orbitals, structure of transition states, and progress of reactions. Describes new ideas for teaching 1H NMR spectroscopy using computer modeling. (Contains 12 references.) (ASK)
Computational Model Builder for Multi-Dimensional Models
2015-08-12
Tools were developed to support ERDC's Environmental Simulation (ES) effort as well as ERDC's Engineered Resilient Systems (ERS) effort, including various ParaView extensions for the Computational Model Builder Suite. Distribution Statement A: Approved for public release; distribution unlimited.
Computational models of intergroup competition and warfare.
Letendre, Kenneth; Abbott, Robert G.
2011-11-01
This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.
Computer modeling of thermoelectric generator performance
NASA Technical Reports Server (NTRS)
Chmielewski, A. B.; Shields, V.
1982-01-01
Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
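In the simplest constant-property case (no Thomson term, no segmentation), the electrical side of such a model reduces to elementary circuit relations. The sketch below, with assumed material and temperature values rather than DEGRA 2's inputs, shows that delivered power peaks at a matched load:

```python
def teg_power(seebeck, t_hot, t_cold, r_internal, r_load):
    """Electrical output of a thermoelectric couple in the constant-property
    approximation: open-circuit voltage V = alpha * (Th - Tc), current
    through the load, and delivered power.  A sketch, not the DEGRA 2
    segmented-leg model."""
    v_oc = seebeck * (t_hot - t_cold)
    i = v_oc / (r_internal + r_load)
    return i * i * r_load

# Assumed values: 400 uV/K couple, 900 K / 400 K junctions, 10 mohm legs
alpha, th, tc, ri = 400e-6, 900.0, 400.0, 0.01
p_matched = teg_power(alpha, th, tc, ri, ri)       # matched load
p_off = teg_power(alpha, th, tc, ri, 2 * ri)       # mismatched load
```

With these numbers the open-circuit voltage is 0.2 V and the matched-load power is 1.0 W; any other load resistance delivers less.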
Electromagnetic physics models for parallel computing architectures
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2016-11-21
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.
Electromagnetic Physics Models for Parallel Computing Architectures
NASA Astrophysics Data System (ADS)
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2016-10-01
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
Computational continuum modeling of solder interconnects
Burchett, S.N.; Neilsen, M.K.; Frear, D.R.; Stephens, J.J.
1997-03-01
The most commonly used solder for electrical interconnections in electronic packages is the near eutectic 60Sn-40Pb alloy. This alloy has a number of processing advantages (suitable melting point of 183 C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes prediction of solder joint lifetime complex. A viscoplastic, microstructure dependent, constitutive model for solder which is currently in development was implemented into a finite element code. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of various leadless chip carrier solder interconnects to determine the effects of variations in geometry and loading. In this paper, the constitutive model will first be briefly discussed. The results of computational studies to determine the effect of geometry and loading variations on leadless chip carrier solder interconnects then will be presented.
Radiative cooling computed for model atmospheres
NASA Astrophysics Data System (ADS)
Eriksson, T. S.; Granqvist, C. G.
1982-12-01
The radiative cooling power and temperature drop of horizontal surfaces are evaluated on the basis of calculations of spectral radiance from model atmospheres representative of various climatic conditions. Calculations of atmospheric radiance from the zenith and from off-zenith angles were performed with the LOWTRAN 5 atmospheric transmittance/radiance computer code (Kneizys et al., 1980) for model atmospheres corresponding to the tropics, midlatitude summer, midlatitude winter, subarctic summer, subarctic winter, and the 1962 U.S. standard atmosphere. Comparison of the computed spectral radiance curves with the radiative fluxes from blackbody surfaces and ideal infrared-selective surfaces (having zero reflectance in the 8-13 micron range and unity reflectance elsewhere) at various ambient-surface temperature differences shows cooling powers to lie between 58 and 113 W/sq m at ambient temperature for a freely radiating surface, with maximum temperature differences of 11-21 C for a blackbody and 18-33 C for an infrared-selective surface. Both cooling powers and temperature differences were higher for surfaces exposed only to atmospheric zenith radiance. In addition, water vapor content is found to affect the radiative cooling strongly, while ozone and aerosol contents have little effect.
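The emission side of such a calculation reduces to integrating the Planck exitance over the 8-13 micron atmospheric window. The sketch below shows only the surface-emitted flux for an ideal selective emitter versus the full spectrum; the atmospheric back-radiation that LOWTRAN supplies (and that sets the net cooling powers quoted above) is deliberately omitted, and function names are assumptions.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck_exitance(lam, T):
    """Blackbody spectral exitance [W m^-2 per m of wavelength]."""
    return (2.0 * math.pi * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def band_exitance(T, lam1, lam2, n=20000):
    """Trapezoidal integral of the Planck exitance over [lam1, lam2]."""
    dl = (lam2 - lam1) / n
    s = 0.5 * (planck_exitance(lam1, T) + planck_exitance(lam2, T))
    s += sum(planck_exitance(lam1 + i * dl, T) for i in range(1, n))
    return s * dl

T = 300.0
window = band_exitance(T, 8e-6, 13e-6)   # ideal 8-13 um selective emitter
total = band_exitance(T, 1e-7, 1e-3)     # ~full spectrum, approx. sigma*T^4
```

At 300 K roughly a third of the total blackbody exitance falls in the 8-13 micron window, which is why the selective surface reaches larger temperature drops even though its gross emitted power is smaller.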
Multiscale computational modelling of the heart
NASA Astrophysics Data System (ADS)
Smith, N. P.; Nickerson, D. P.; Crampin, E. J.; Hunter, P. J.
A computational framework is presented for integrating the electrical, mechanical and biochemical functions of the heart. Finite element techniques are used to solve the large-deformation soft tissue mechanics using orthotropic constitutive laws based on the measured fibre-sheet structure of myocardial (heart muscle) tissue. The reaction-diffusion equations governing electrical current flow in the heart are solved on a grid of deforming material points which access systems of ODEs representing the cellular processes underlying the cardiac action potential. Navier-Stokes equations are solved for coronary blood flow in a system of branching blood vessels embedded in the deforming myocardium and the delivery of oxygen and metabolites is coupled to the energy-dependent cellular processes. The framework presented here for modelling coupled physical conservation laws at the tissue and organ levels is also appropriate for other organ systems in the body and we briefly discuss applications to the lungs and the musculo-skeletal system. The computational framework is also designed to reach down to subcellular processes, including signal transduction cascades and metabolic pathways as well as ion channel electrophysiology, and we discuss the development of ontologies and markup language standards that will help link the tissue and organ level models to the vast array of gene and protein data that are now available in web-accessible databases.
Direct modeling for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun
2015-06-01
All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct
Computational Modeling and Simulation of Genital Tubercle ...
Hypospadias is a developmental defect of urethral tube closure that has a complex etiology. Here, we describe a multicellular agent-based model of genital tubercle development that simulates urethrogenesis from the urethral plate stage to urethral tube closure in differentiating male embryos. The model, constructed in CompuCell3D, implemented spatially dynamic signals from SHH, FGF10, and androgen signaling pathways. These signals modulated stochastic cell behaviors, such as differential adhesion, cell motility, proliferation, and apoptosis. Urethral tube closure was an emergent property of the model that was quantitatively dependent on SHH and FGF10 induced effects on mesenchymal proliferation and endodermal apoptosis, ultimately linked to androgen signaling. In the absence of androgenization, simulated genital tubercle development defaulted to the female condition. Intermediate phenotypes associated with partial androgen deficiency resulted in incomplete closure. Using this computer model, complex relationships between urethral tube closure defects and disruption of underlying signaling pathways could be probed theoretically in multiplex disturbance scenarios and modeled into probabilistic predictions for individual risk for hypospadias and potentially other developmental defects of the male genital tubercle. We identify the minimal molecular network that determines the outcome of male genital tubercle development in mice.
Computer Modeling of Non-Isothermal Crystallization
NASA Technical Reports Server (NTRS)
Kelton, K. F.; Narayan, K. Lakshmi; Levine, L. E.; Cull, T. C.; Ray, C. S.
1996-01-01
A realistic computer model for simulating isothermal and non-isothermal phase transformations proceeding by homogeneous and heterogeneous nucleation and interface-limited growth is presented. A new treatment for particle size effects on the crystallization kinetics is developed and is incorporated into the numerical model. Time-dependent nucleation rates, size-dependent growth rates, and surface crystallization are also included. Model predictions are compared with experimental measurements of DSC/DTA peak parameters for the crystallization of lithium disilicate glass as a function of particle size, Pt doping levels, and water content. The quantitative agreement that is demonstrated indicates that the numerical model can be used to extract key kinetic data from easily obtained calorimetric data. The model can also be used to probe nucleation and growth behavior in regimes that are otherwise inaccessible. Based on a fit to data, an earlier prediction that the time-dependent nucleation rate in a DSC/DTA scan can rise above the steady-state value at a temperature higher than the peak in the steady-state rate is demonstrated.
3-dimensional electronic structures of CaC6
NASA Astrophysics Data System (ADS)
Kyung, Wonshik; Kim, Yeongkwan; Han, Garam; Leem, Choonshik; Kim, Junsung; Kim, Yeongwook; Kim, Keunsu; Rotenberg, Eli; Kim, Changyoung; Postech Collaboration; Advanced Light Source Collaboration; Yonsei University Team
2014-03-01
There are still unresolved issues concerning the origin of superconductivity in graphite intercalation compounds (GICs), especially in CaC6, whose transition temperature is relatively high compared with other GICs. Two competing theories place the superconductivity in this material either in the intercalant metal layer or in the charge-doped graphene layer. To elucidate this issue, it is necessary to confirm the existence of an intercalant-driven band. We therefore performed 3-dimensional electronic structure studies with ARPES to search for a 3D dispersive intercalant band. We did not observe such a band; instead, we observed a 3D dispersive carbon band. This supports the charge-doped graphene picture of the superconductivity rather than the intercalant-driven picture.
Statistics, Computation, and Modeling in Cosmology
NASA Astrophysics Data System (ADS)
Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology
2017-01-01
Current and future ground and space based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
ERIC Educational Resources Information Center
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling is that the current trends of strengthening the general-education and worldview functions of computer science define the necessity of additional research of the…
Computational modeling of intraocular gas dynamics
NASA Astrophysics Data System (ADS)
Noohi, P.; Abdekhodaie, M. J.; Cheng, Y. L.
2015-12-01
The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted the intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. Both pure SF6 and SF6 diluted with air were considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Head positioning for the treatment of the retinal tear also influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: more bubble expansion and a smaller retinal tear gave a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle with pure SF6 is 1.4 times that with SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.
Continuum and computational modeling of flexoelectricity
NASA Astrophysics Data System (ADS)
Mao, Sheng
Flexoelectricity refers to the linear coupling of strain gradient and electric polarization. Early studies of this subject mostly looked at liquid crystals and biomembranes. Recently, the advent of nanotechnology revealed its importance also in solid structures, such as flexible electronics, thin films, energy harvesters, etc. The energy storage function of a flexoelectric solid depends not only on polarization and strain, but also on the strain gradient. This is our basis to formulate a consistent model of flexoelectric solids under small deformation. We derive a higher-order Navier equation for linear isotropic flexoelectric materials which resembles that of Mindlin in gradient elasticity. Closed-form solutions can be obtained for problems such as beam bending, pressurized tube, etc. Flexoelectric coupling can be enhanced in the vicinity of defects due to strong gradients and decays away in the far field. We quantify this expectation by computing elastic and electric fields near different types of defects in flexoelectric solids. For point defects, we recover some well-known results of non-local theories. For dislocations, we make connections with experimental results on NaCl, ice, etc. For cracks, we perform a crack-tip asymptotic analysis and the results share features from gradient elasticity and piezoelectricity. We compute the J integral and use it for determining fracture criteria. Conventional finite element methods formulated solely on displacement are inadequate to treat flexoelectric solids due to higher order governing equations. Therefore, we introduce a mixed formulation which uses displacement and displacement-gradient as separate variables. Their known relation is constrained in a weighted integral sense. We derive a variational formulation for boundary value problems for piezo- and/or flexoelectric solids. We validate this computational framework against exact solutions. With this method more complex problems, including a plate with an elliptical hole
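In standard continuum treatments of flexoelectricity, the bulk energy density augments the elastic and dielectric terms with a polarization-strain-gradient coupling; the notation below is a conventional sketch, not taken verbatim from this thesis:

```latex
U(\varepsilon, \nabla\varepsilon, \mathbf{P})
  = \tfrac{1}{2}\, c_{ijkl}\, \varepsilon_{ij}\varepsilon_{kl}
  + \tfrac{1}{2}\, a_{ij}\, P_i P_j
  + f_{ijkl}\, P_i\, \partial_j \varepsilon_{kl}
  + \tfrac{1}{2}\, g_{ijklmn}\, \partial_i \varepsilon_{jk}\, \partial_l \varepsilon_{mn}
```

Here $c_{ijkl}$ is the elasticity tensor, $a_{ij}$ the reciprocal dielectric susceptibility, $f_{ijkl}$ the flexoelectric coupling, and $g_{ijklmn}$ the strain-gradient (Mindlin-type) elasticity; the $f$ term is what makes the stored energy depend on the strain gradient as described above.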
Preliminary Phase Field Computational Model Development
Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep
2014-12-15
This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution to the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of large enough systems that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
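The Landau-Lifshitz-Gilbert dynamics underlying such phase-field models can be illustrated with a single macrospin. The toy integrator below (units, parameters, and names are illustrative assumptions, not the report's code) shows how the Gilbert damping term relaxes the magnetization toward an applied field while precession winds it around the field axis:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def llg_step(m, h, gamma, alpha, dt):
    """One explicit Euler step of the Landau-Lifshitz-Gilbert equation,
    dm/dt = -gamma/(1+alpha^2) * [m x H + alpha * m x (m x H)],
    followed by renormalization to keep |m| = 1."""
    pre = -gamma / (1.0 + alpha**2)
    mxh = cross(m, h)                  # precession torque
    mxmxh = cross(m, mxh)              # damping torque
    m = [m[i] + dt * pre * (mxh[i] + alpha * mxmxh[i]) for i in range(3)]
    norm = sum(c * c for c in m) ** 0.5
    return [c / norm for c in m]

m = [1.0, 0.0, 0.0]                    # magnetization initially along x
h = [0.0, 0.0, 1.0]                    # applied field along z
for _ in range(5000):                  # damping drags m toward the field
    m = llg_step(m, h, gamma=1.0, alpha=0.1, dt=0.01)
```

A phase-field micromagnetic code solves this same equation at every grid point, with the effective field also carrying exchange, anisotropy, and magnetostatic contributions from the surrounding microstructure.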
Incorporating a 3-dimensional printer into the management of early-stage cervical cancer.
Baek, Min-Hyun; Kim, Dae-Yeon; Kim, Namkug; Rhim, Chae Chun; Kim, Jong-Hyeok; Nam, Joo-Hyun
2016-08-01
We used a 3-dimensional (3D) printer to create anatomical replicas of real lesions and tested its application in cervical cancer. Our study patient decided to undergo radical hysterectomy after seeing her 3D model, which was then used to plan and simulate this surgery. Using 3D printers to create patient-specific 3D tumor models may help cervical cancer patients make treatment decisions. This technology will lead to better surgical and oncological outcomes for cervical cancer patients. J. Surg. Oncol. 2016;114:150-152. © 2016 Wiley Periodicals, Inc.
Cummings, P. T.
2010-02-08
This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.
A computer model of the earth's magnetosphere
NASA Astrophysics Data System (ADS)
Ogino, Tatsuki; Walker, Raymond J.; Ashour-Abdalla, Maha
1988-03-01
The interaction of the solar wind with the earth magnetosphere is investigated theoretically by means of three-dimensional MHD simulations, with a focus on the effects of changes in the Bz component of the IMF. A high-resolution (0.5 earth radii) version of the model of Ogino et al. (1986) is employed, and the results are presented in a series of computer-generated maps and diagrams and characterized in detail. Bz of -5 nT is found to be associated with dipolar magnetic-field lines near the earth and very concave lines in the magnetotail, while Bz of +5 nT produces a narrow finger of closed field lines extending into the polar cap. Both IMF orientations have sunward convection near the noon-midnight meridian and region-1-type field-aligned currents on both sides of the plasma-sheet extension.
A computational model for dynamic vision
NASA Technical Reports Server (NTRS)
Moezzi, Saied; Weymouth, Terry E.
1990-01-01
This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.
Comprehensive silicon solar cell computer modeling
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1984-01-01
The development of an efficient, comprehensive Si solar cell modeling program that has the capability of simulation accuracy of 5 percent or less is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) analytical method used to represent the physical system; (2) phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) modularized simulation program with respect to structures that may be analyzed, addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.
Modeling groundwater flow on massively parallel computers
Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.
1994-12-31
The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They also will demonstrate the code's scalability.
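The simplest preconditioner mentioned above, diagonal (Jacobi) scaling, drops into the conjugate gradient iteration as shown in this dense toy version. It is a structural sketch only (names are assumptions), not the parallel code described in the abstract:

```python
def pcg(A, b, tol=1e-10, maxit=500):
    """Conjugate gradients with diagonal (Jacobi) preconditioning.
    A is a dense symmetric positive-definite matrix (list of rows)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual r = b - A*0
    Minv = [1.0 / A[i][i] for i in range(n)]    # inverse of diag(A)
    z = [Minv[i] * r[i] for i in range(n)]      # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Demo: 1-D Laplacian (tridiagonal 2, -1), a miniature stand-in for the
# sparse systems arising from discretized saturated-zone flow.
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n
x = pcg(A, b)
```

In the production setting the matrix-vector product and inner products are the distributed operations; swapping `Minv` for a polynomial or multigrid application changes only the `z = ...` lines.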
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.
1992-01-01
The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand alone analysis codes that result in the streamlining of the exchange of data between programs, reducing errors and improving efficiency.
Computational model of heterogeneous heating in melanin
NASA Astrophysics Data System (ADS)
Kellicker, Jason; DiMarzio, Charles A.; Kowalski, Gregory J.
2015-03-01
Melanin particles often present as an aggregate of smaller melanin pigment granules and have a heterogeneous surface morphology. When irradiated with light within the absorption spectrum of melanin, these heterogeneities produce measurable concentrations of the electric field that result in temperature gradients from thermal effects that are not seen with spherical or ellipsoidal modeling of melanin. Modeling melanin without taking into consideration the heterogeneous surface morphology yields results that underestimate the strongest signals or overestimate their spatial extent. We present a new technique to image phase changes induced by heating using a computational model of melanin that exhibits these surface heterogeneities. From this analysis, we demonstrate the heterogeneous energy absorption and resulting heating that occurs at the surface of the melanin granule that is consistent with three-photon absorption. Using the three-photon fluorescence as a beacon, we propose a method for detecting the extents of the melanin granule using photothermal microscopy to measure the phase changes resulting from the heating of the melanin.
Computational Process Modeling for Additive Manufacturing (OSU)
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2015-01-01
Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
Gravothermal Star Clusters - Theory and Computer Modelling
NASA Astrophysics Data System (ADS)
Spurzem, Rainer
2010-11-01
In the George Darwin Lecture, delivered to the Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote that the evolution of stellar systems can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk shows how this physical concept has inspired theoretical modelling of star clusters in the following decades up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable equilibrium state of a gravitating star cluster. The trend toward local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was discovered later. The state of the art in modelling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (the Fokker-Planck approximation) is reviewed, and recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also been used very successfully to study dense star clusters around massive black holes in galactic nuclei and even, in a few cases, relativistic supermassive dense objects in the centres of galaxies (again briefly touching one of V. A. Ambartsumian's many research fields). In the present era of high-speed supercomputing, in which direct N-body simulations of star clusters are becoming tractable, we show that such direct modelling supports and confirms the statistical models based on Fokker-Planck theory, and that both theoretical concepts and direct computer simulations are necessary to support each other and make scientific progress in the study of star cluster evolution.
Lee, S.Y.
1997-06-01
One of the interim storage configurations being considered for aluminum-clad foreign research reactor fuel, such as the Material and Testing Reactor (MTR) design, is a dry storage facility. To support design studies of storage options, a computational and experimental program was conducted at the Savannah River Site (SRS). The objective was to develop computational fluid dynamics (CFD) models benchmarked against data obtained from a full-scale heat transfer experiment conducted in the SRS Experimental Thermal Fluids Laboratory. The current work documents the CFD approach and compares the results with experimental data. The CFDS-FLOW3D (version 3.3) CFD code was used to model the 3-dimensional convective velocity and temperature distributions within a single dry storage canister of MTR fuel elements. For the present analysis, the Boussinesq approximation was used to account for buoyancy-driven natural convection. The comparison shows that the CFD code can predict, with reasonable accuracy, the flow and thermal behavior of typical foreign research reactor fuel stored in a dry storage facility.
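For reference, the Boussinesq approximation used above treats density as constant everywhere except in the gravity term, where it varies linearly with temperature. A minimal sketch with illustrative, air-like property values (this is the textbook form of the approximation, not the CFDS-FLOW3D implementation):

```python
# Boussinesq buoyancy: rho is held at rho0 everywhere except the gravity
# term, where rho = rho0 * (1 - beta * (T - T0)). The net upward force
# density is then rho0 * g * beta * (T - T0). Property values below are
# illustrative (roughly air at room temperature), not canister conditions.

def buoyancy_per_volume(T, T0=300.0, rho0=1.18, beta=3.4e-3, g=9.81):
    """Upward buoyancy force density (N/m^3); positive when T > T0."""
    return rho0 * g * beta * (T - T0)
```

Fluid warmer than the reference temperature rises, which is the driving term for the natural-convection loops inside the canister.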
Sansoni, Giovanna; Cattaneo, Cristina; Trebeschi, Marco; Gibelli, Daniele; Poppa, Pasquale; Porta, Davide; Maldarella, Monica; Picozzi, Massimo
2011-09-01
Analysis and detailed registration of the crime scene are of the utmost importance during investigations. However, this phase of activity is often affected by the risk of loss of evidence due to the limits of traditional scene-of-crime registration methods (i.e., photos and videos). This technical note shows the utility of applying a 3-dimensional optical digitizer to different crime scenes. The study aims at verifying the importance and feasibility of contactless 3-dimensional reconstruction and modeling by optical digitization to achieve an optimal registration of the crime scene.
Climate Change Modeling: Computational Opportunities and Challenges
Wang, Dali; Post, Wilfred M; Wilson, Bruce E
2011-01-01
High-fidelity climate models are the workhorses of modern climate change science. In this article, the authors focus on several computational issues associated with climate change modeling, covering simulation methodologies, temporal and spatial modeling restrictions, the role of high-end computing, and the importance of data-driven regional climate impact modeling.
Computational Modeling of Uranium Hydriding and Complexes
Balasubramanian, K; Siekhaus, W J; McLean, W
2003-02-03
et al. have studied U-hydriding in ultrahigh vacuum and obtained linear rate data over a wide range of temperatures and pressures. They found reversible hydrogen sorption on the UH₃ reaction product from kinetic effects at 21 °C. This demonstrates restarting of the hydriding process in the presence of the UH₃ reaction product. DeMint and Leckey have shown that Si impurities dramatically accelerate U-hydriding rates. We report recent results of relativistic computations, ranging from complete active space multi-configuration self-consistent field (CAS-MCSCF) to multi-reference configuration interaction (MRSDCI) calculations that included up to 50 million configurations, for the modeling of uranium hydriding with cluster models.
Computer modeling of complete IC fabrication process
NASA Astrophysics Data System (ADS)
Dutton, Robert W.
1987-05-01
The development of fundamental algorithms for process and device modeling, as well as novel integration of the tools for advanced Integrated Circuit (IC) technology design, is discussed. The development of the first complete 2D process simulator, SUPREM 4, is reported. The algorithms are discussed, as well as their application to the local-oxidation and extrinsic diffusion conditions that occur in CMOS and BiCMOS technologies. The evolution of 1D (SEDAN) and 2D (PISCES) device analysis is discussed. The application of SEDAN to a variety of non-silicon technologies (GaAs and HgCdTe) is considered. A new multi-window analysis capability for PISCES, which exploits Monte Carlo analysis of hot carriers, has been demonstrated and used to characterize a variety of silicon MOSFET and GaAs MESFET effects. A parallel computer implementation of PISCES has been achieved using a Hypercube architecture. The PISCES program has been used for a range of important device studies, including latchup, analog switch analysis, MOSFET capacitance studies, and bipolar transient device analysis for ECL gates. The program is broadly applicable to RAM and BiCMOS technology analysis and design. In the analog switch technology area, this research effort has produced a variety of important modeling advances.
Computational modeling of acute myocardial infarction
Sáez, P.; Kuhl, E.
2015-01-01
Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocytes death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step towards simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size. PMID:26583449
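The continuum theory of multiplicative growth used in this model splits the total deformation gradient into an elastic part and a growth part, F = Fe · Fg. A minimal sketch of a wall-thinning growth tensor is below; the growth multiplier theta and the wall-normal direction are illustrative assumptions, not the paper's calibrated model:

```python
import numpy as np

# Multiplicative growth: F = Fe @ Fg. For infarct-driven wall thinning,
# a simple growth tensor shrinks the tissue along the wall normal n:
#   Fg = I + (theta - 1) * outer(n, n),   theta < 1 => volume loss.
# det(Fg) = theta is the fraction of muscle volume that remains.
# theta here is an illustrative value, not a fitted parameter.

def growth_tensor(n, theta):
    """Transversely isotropic growth tensor along unit direction n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return np.eye(3) + (theta - 1.0) * np.outer(n, n)

Fg = growth_tensor([0.0, 0.0, 1.0], theta=0.7)   # 30% through-wall thinning
```

The elastic part Fe then carries the stress response, so the simulated wall thins without generating spurious elastic stress from the lost volume.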
Computer modelling of metal-oxide interfaces
NASA Astrophysics Data System (ADS)
Purton, J.; Parker, S. C.; Bullett, D. W.
1997-07-01
We have used atomistic simulations to model oxide-metal interfaces and have, for the first time, allowed the atoms on both sides of the interface to relax. The efficiency of the computational method means that calculations can be performed on complex interfaces containing several thousand atoms and do not require an arbitrary definition of the image plane to model the electrostatics across the dielectric discontinuity. We demonstrate the viability of the approach and the effect of relaxation on a range of MgO-Ag interfaces. Defective and faceted interfaces, as well as the ideal case, have been studied; the latter was chosen for comparison with previous theoretical calculations and experimental results. The wetting angle and work of adhesion for MgO{100}-Ag{100} are in reasonable agreement with experiment. As with ab initio electronic structure calculations, the silver atoms have been shown to favour the position above the oxygen site.
Random matrix model of adiabatic quantum computing
Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.
2005-05-15
We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
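The Brody parameter referenced above interpolates between the two spacing statistics: q = 0 recovers the Poisson (regular) distribution and q = 1 the Wigner-type (chaotic) surmise. A short sketch of the distribution of normalized nearest-neighbor spacings:

```python
import math

def brody_pdf(s, q):
    """Brody distribution of normalized nearest-neighbor level spacings s.

    q = 0 gives the Poisson case, P(s) = exp(-s) (regular spectra);
    q = 1 gives the Wigner surmise, P(s) = (pi/2) s exp(-pi s^2 / 4)
    (chaotic spectra). Intermediate q measures partial level repulsion.
    """
    b = math.gamma((q + 2.0) / (q + 1.0)) ** (q + 1.0)
    return (q + 1.0) * b * s**q * math.exp(-b * s ** (q + 1.0))
```

Fitting q to the observed spacing histogram at each interpolation point is what lets the authors locate the irregular (chaotic) region where avoided crossings, and hence Landau-Zener failures, concentrate.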
3-Dimensional Immersive Visualization For Regional Water Planning
NASA Astrophysics Data System (ADS)
Block, J.; Razdan, A.; Shangraw, R.; Arrowsmith, R.
2005-12-01
As the population in the southwestern US grows, water planning requires increasingly creative solutions to manage valuable water resources at the local and regional level. The East Valley Water Forum (EVWF) is a regional cooperative of water providers east of Phoenix, Arizona, designing their water management plan for the next 25 years. Water resources in this region come from the Colorado River, the Salt River Project, groundwater, and other local and regional sources which provide resources that are subject to climatic variability. In order to best understand the physical and political relationships between water resources and their management, the Arizona Department of Water Resources (ADWR) analyzes hydrologic data in the region using USGS's MODFLOW software, which computes the status of groundwater resources in the region. However, in order to improve policy decision making using MODFLOW outputs, a comprehensive scientific understanding of the inputs, outputs and their uncertainties is needed. These uncertainties include intrinsic hydrologic uncertainty as well as uncertainties in external controls such as drought and urban growth. The Decision Theater (DT) is a new facility at Arizona State University (ASU) that specializes in high resolution 3D immersive visualization of scientific data and models. The facility includes a room with a seven-paneled screen surrounding the viewers by 260 degrees for an immersive experience. It is an innovative tool for visualization of datasets from disparate sources for synthesis of complex spatial problems, and its staff is collaborating with the EVWF and the Bureau of Reclamation to better visualize their modeled water supply and demand scenarios under various drought conditions. The space provides a neutral setting for a workflow of data and model integration in which groups can iteratively assess, interact with, and gain intuition about the relevant data and models. This data integration results in visualizations that
Computational modeling of composite material fires.
Brown, Alexander L.; Erickson, Kenneth L.; Hubbard, Joshua Allen; Dodd, Amanda B.
2010-10-01
condition is examined to study the propagation of decomposition fronts of the epoxy and carbon fiber and their dependence on ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small-scale experimental efforts to attain adequate data for validating model predictions are ongoing. The goal of this paper is to demonstrate the progress of the capability for a typical composite material and to emphasize the path forward.
Computational and Modeling Strategies for Cell Motility
NASA Astrophysics Data System (ADS)
Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory
A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback
Computational modeling of solid oxide fuel cell
NASA Astrophysics Data System (ADS)
Penmetsa, Satish Kumar
In the ongoing search for alternative and environmentally friendly power generation facilities, the solid oxide fuel cell (SOFC) is considered one of the prime candidates for the next generation of energy conversion devices due to its capability to provide environmentally friendly and highly efficient power generation. Moreover, SOFCs are less sensitive to the composition of the fuel than other types of fuel cells, and internal reforming of hydrocarbon fuels can be performed because of the higher operating temperature range of 700°C-1000°C. This allows different types of hydrocarbon fuels to be used in SOFCs. The objective of this study is to develop a three-dimensional computational model for the simulation of a solid oxide fuel cell unit, to analyze the complex internal transport mechanisms and the sensitivity of the cell to different operating conditions, and to develop an SOFC with a higher operating current density, more uniform gas distributions in the electrodes, and lower ohmic losses. This model includes mass transfer processes due to convection and diffusion in the gas flow channels based on the Navier-Stokes equations, combined diffusion and advection in the electrodes using Brinkman's hydrodynamic equation, and the associated electrochemical reactions in the trilayer of the SOFC. Gas transport characteristics, in terms of three-dimensional spatial distributions of reactant gases and their effects on the electrochemical reactions at the electrode-electrolyte interface and on the resulting polarizations, are evaluated for varying pressure conditions. Results show the significance of Brinkman's hydrodynamic model in the electrodes for achieving more uniform gas concentration distributions while using a higher operating pressure and over a higher range of operating current densities.
Computer formulations of aircraft models for simulation studies
NASA Technical Reports Server (NTRS)
Howard, J. C.
1979-01-01
Recent developments in formula manipulation compilers and the design of several symbol manipulation languages, enable computers to be used for symbolic mathematical computation. A computer system and language that can be used to perform symbolic manipulations in an interactive mode are used to formulate a mathematical model of an aeronautical system. The example demonstrates that once the procedure is established, the formulation and modification of models for simulation studies can be reduced to a series of routine computer operations.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
A 3-dimensional finite-difference method for calculating the dynamic coefficients of seals
NASA Technical Reports Server (NTRS)
Dietzen, F. J.; Nordmann, R.
1989-01-01
A method to calculate the dynamic coefficients of seals with arbitrary geometry is presented. The Navier-Stokes equations are used in conjunction with the k-ε turbulence model to describe the turbulent flow. These equations are solved by a full 3-dimensional finite-difference procedure instead of the normally used perturbation analysis. The time dependence of the equations is introduced by working with a coordinate system rotating with the precession frequency of the shaft. The results of this theory are compared with coefficients calculated by a perturbation analysis and with experimental results.
Luján, J. Luis; Noecker, Angela M.; Butson, Christopher R.; Cooper, Scott E.; Walter, Benjamin L.; Vitek, Jerrold L.; McIntyre, Cameron C.
2009-01-01
Objective Deep brain stimulation (DBS) surgeries commonly rely on brain atlases and microelectrode recordings (MER) to help identify the target location for electrode implantation. We present an automated method for optimally fitting a 3-dimensional brain atlas to intraoperative MER and predicting a target DBS electrode location in stereotactic coordinates for the patient. Methods We retrospectively fit a 3-dimensional brain atlas to MER points from 10 DBS surgeries targeting the subthalamic nucleus (STN). We used a constrained optimization algorithm to maximize the MER points correctly fitted (i.e., contained) within the appropriate atlas nuclei. We compared our optimization approach to conventional anterior commissure-posterior commissure (AC/PC) scaling, and to manual fits performed by four experts. A theoretical DBS electrode target location in the dorsal STN was customized to each patient as part of the fitting process and compared to the location of the clinically defined therapeutic stimulation contact. Results The human expert and computer optimization fits achieved significantly better fits than the AC/PC scaling (80, 81, and 41% of correctly fitted MER, respectively). However, the optimization fits were performed in less time than the expert fits and converged to a single solution for each patient, eliminating interexpert variance. Conclusions and Significance DBS therapeutic outcomes are directly related to electrode implantation accuracy. Our automated fitting techniques may aid in the surgical decision-making process by optimally integrating brain atlas and intraoperative neurophysiological data to provide a visual guide for target identification. PMID:19556832
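The fitting objective described above, maximizing the fraction of MER points contained in the appropriate atlas nuclei, can be illustrated with a toy 2D stand-in. Everything below is a hypothetical simplification: a circle stands in for the nucleus surface, the coordinates are made up, and a brute-force translation search replaces the paper's constrained optimization:

```python
import math

# Toy version of the atlas-fitting score: the fraction of microelectrode
# recording (MER) points that a candidate rigid shift places inside the
# atlas "nucleus" (a circle here; coordinates are illustrative).

def fraction_inside(points, dx, dy, center=(0.0, 0.0), radius=3.0):
    cx, cy = center
    inside = sum(1 for (x, y) in points
                 if math.hypot(x + dx - cx, y + dy - cy) <= radius)
    return inside / len(points)

def best_translation(points, steps=21, span=5.0):
    """Brute-force search over 2D shifts maximizing contained points."""
    grid = [-span + 2.0 * span * i / (steps - 1) for i in range(steps)]
    return max((fraction_inside(points, dx, dy), dx, dy)
               for dx in grid for dy in grid)
```

The real method searches over a constrained rigid-plus-scaling transform in 3D with multiple nuclei, but the score being maximized has the same shape: count points correctly contained, normalize, and pick the transform with the highest fraction.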
NASA Astrophysics Data System (ADS)
Rahn, Helene; Alexiou, Christoph; Trahms, Lutz; Odenbach, Stefan
2014-06-01
X-ray computed tomography is nowadays used for a wide range of applications in medicine, science, and technology. X-ray microcomputed tomography (XμCT) follows the same principles used in conventional medical CT scanners but improves the spatial resolution to a few micrometers. We present an example of an application of X-ray microtomography: a study of the 3-dimensional biodistribution, along with quantification, of nanoparticle content in tumoral tissue after minimally invasive cancer therapy. One of these minimally invasive cancer treatments is magnetic drug targeting, in which magnetic nanoparticles are used as controllable drug carriers. The quantification is based on a calibration of the XμCT equipment. The calibration procedure is based on a phantom system that allows discrimination among the various gray values of the data set. The phantoms consist of a biological tissue substitute and magnetic nanoparticles. The phantoms have been studied with XμCT and examined magnetically. The obtained gray values and nanoparticle concentrations lead to a calibration curve, which can be applied to tomographic data sets. Accordingly, this calibration enables a voxel-wise assignment of the gray values in the digital tomographic data set to nanoparticle content. Thus, the calibration procedure enables a 3-dimensional study of nanoparticle distribution as well as concentration.
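The phantom-based calibration described above amounts to fitting a gray-value-versus-concentration line and then inverting it voxel by voxel. The sketch below uses made-up phantom numbers, not the authors' measured values:

```python
# Calibration sketch: phantoms with known nanoparticle concentrations
# yield mean CT gray values; a least-squares line through them is then
# inverted to assign a concentration to each voxel's gray value.
# All numbers below are illustrative, not measured data.

def fit_line(x, y):
    """Least-squares slope and intercept (no external libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

conc = [0.0, 5.0, 10.0, 20.0]        # phantom concentrations (mg/mL)
gray = [100.0, 150.0, 200.0, 300.0]  # corresponding mean gray values
slope, intercept = fit_line(conc, gray)

def concentration(gray_value):
    """Invert the calibration curve for a single voxel."""
    return (gray_value - intercept) / slope
```

Applied over a whole tomographic volume, this voxel-wise inversion is what turns a gray-value data set into a 3-dimensional nanoparticle concentration map.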
Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models
The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...
Modelling, abstraction, and computation in systems biology: A view from computer science.
Melham, Tom
2013-04-01
Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.
NASA Astrophysics Data System (ADS)
Bowles, C.
2013-12-01
Ecological engineering, or eco engineering, is an emerging field concerned with integrating ecology and engineering in the design, monitoring, and construction of ecosystems. According to Mitsch (1996), 'the design of sustainable ecosystems intends to integrate human society with its natural environment for the benefit of both'. Eco engineering emerged as a new idea in the early 1960s, and the concept has seen refinement since then; as a commonly practiced field of engineering it is relatively novel. Howard Odum (1963) and others first introduced it as 'utilizing natural energy sources as the predominant input to manipulate and control environmental systems'. Mitsch and Jorgensen (1989) were the first to define eco engineering and to provide eco engineering principles and conceptual eco engineering models. Later they refined the definition and increased the number of principles. They suggested that the goals of eco engineering are: a) the restoration of ecosystems that have been substantially disturbed by human activities such as environmental pollution or land disturbance, and b) the development of new sustainable ecosystems that have both human and ecological values. Here a more detailed overview of eco engineering is provided, particularly with regard to how engineers and ecologists are utilizing multi-dimensional computational models to link ecology and engineering, resulting in increasingly successful project implementation. Descriptions are provided of 1-, 2-, and 3-dimensional hydrodynamic models and their use in small- and large-scale applications. A range of conceptual models that have been developed to aid in the creation of linkages between ecology and engineering are discussed. Finally, several case studies that link ecology and engineering via computational modeling are provided. These studies include localized stream rehabilitation, spawning gravel enhancement on a large river system, and watershed-wide floodplain modeling of
Precise orbit computation and sea surface modeling
NASA Technical Reports Server (NTRS)
Wakker, Karel F.; Ambrosius, B. A. C.; Rummel, R.; Vermaat, E.; Deruijter, W. P. M.; Vandermade, J. W.; Zimmerman, J. T. F.
1991-01-01
The research project described below is part of a long-term program at Delft University of Technology aiming at the application of European Remote Sensing satellite (ERS-1) and TOPEX/POSEIDON altimeter measurements for geophysical purposes. This program started in 1980 with the processing of Seasat laser range and altimeter height measurements and concentrates today on the analysis of Geosat altimeter data. The objectives of the TOPEX/POSEIDON research project are the tracking of the satellite by the Dutch mobile laser tracking system MTLRS-2, the computation of precise TOPEX/POSEIDON orbits, the analysis of the spatial and temporal distribution of the orbit errors, the improvement of ERS-1 orbits through the information obtained from the altimeter crossover difference residuals for crossing ERS-1 and TOPEX/POSEIDON tracks, the combination of ERS-1 and TOPEX/POSEIDON altimeter data into a single high-precision data set, and the application of this data set to model the sea surface. The latter application will focus on the determination of detailed regional mean sea surfaces, sea surface variability, ocean topography, and ocean currents in the North Atlantic, the North Sea, the seas around Indonesia, the West Pacific, and the oceans around South Africa.
Dynamical Properties of Polymers: Computational Modeling
CURRO, JOHN G.; ROTTACH, DANA; MCCOY, JOHN D.
2001-01-01
The free volume distribution has been a qualitatively useful concept by which dynamical properties of polymers, such as the penetrant diffusion constant, viscosity, and glass transition temperature, could be correlated with static properties. In an effort to put this on a more quantitative footing, we define the free volume distribution as the probability of finding a spherical cavity of radius R in a polymer liquid. This is identical to the insertion probability in scaled particle theory, and is related to the chemical potential of hard spheres of radius R in a polymer in the Henry's law limit. We used the Polymer Reference Interaction Site Model (PRISM) theory to compute the free volume distribution of semiflexible polymer melts as a function of chain stiffness. Good agreement was found with the corresponding free volume distributions obtained from MD simulations. Surprisingly, the free volume distribution was insensitive to the chain stiffness, even though the single chain structure and the intermolecular pair correlation functions showed a strong dependence on chain stiffness. We also calculated the free volume distributions of polyisobutylene (PIB) and polyethylene (PE) at 298K and at elevated temperatures from PRISM theory. We found that PIB has more of its free volume distributed in smaller size cavities than for PE at the same temperature.
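The insertion probability defined above, the chance that a spherical cavity of radius R fits into the liquid without overlapping any interaction site, can be estimated by naive Monte Carlo. The sketch below uses a toy periodic box with hand-placed sites; it is not a PRISM or MD calculation, and all geometry values are illustrative:

```python
import random

# Monte Carlo estimate of the free-volume / insertion probability:
# randomly place a test sphere of radius R in a periodic box and count
# the fraction of placements that avoid overlap with all sites.

def insertion_probability(sites, site_radius, R, box=10.0,
                          trials=20000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y, z = (rng.uniform(0.0, box) for _ in range(3))
        ok = True
        for (sx, sy, sz) in sites:
            # minimum-image squared distance in the periodic box
            d2 = 0.0
            for a, b in ((x, sx), (y, sy), (z, sz)):
                dd = abs(a - b)
                dd = min(dd, box - dd)
                d2 += dd * dd
            if d2 < (R + site_radius) ** 2:
                ok = False
                break
        if ok:
            hits += 1
    return hits / trials
```

Sweeping R and recording this probability traces out exactly the free volume distribution the abstract compares between PRISM theory and MD: larger cavities are exponentially rarer, and the decay rate encodes the hard-sphere chemical potential in the Henry's law limit.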
Computational modeling of ion transport through nanopores.
Modi, Niraj; Winterhalter, Mathias; Kleinekathöfer, Ulrich
2012-10-21
Nanoscale pores are ubiquitous in biological systems while artificial nanopores are being fabricated for an increasing number of applications. Biological pores are responsible for the transport of various ions and substrates between the different compartments of biological systems separated by membranes while artificial pores are aimed at emulating such transport properties. As an experimental method, electrophysiology has proven to be an important nano-analytical tool for the study of substrate transport through nanopores utilizing ion current measurements as a probe for the detection. Independent of the pore type, i.e., biological or synthetic, and objective of the study, i.e., to model cellular processes of ion transport or electrophysiological experiments, it has become increasingly important to understand the dynamics of ions in nanoscale confinements. To this end, numerical simulations have established themselves as an indispensable tool to decipher ion transport processes through biological as well as artificial nanopores. This article provides an overview of different theoretical and computational methods to study ion transport in general and to calculate ion conductance in particular. Potential new improvements in the existing methods and their applications are highlighted wherever applicable. Moreover, representative examples are given describing the ion transport through biological and synthetic nanopores as well as the high selectivity of ion channels. Special emphasis is placed on the usage of molecular dynamics simulations which already have demonstrated their potential to unravel ion transport properties at an atomic level.
Computational modeling of epidural cortical stimulation
NASA Astrophysics Data System (ADS)
Wongsarnpigoon, Amorn; Grill, Warren M.
2008-12-01
Epidural cortical stimulation (ECS) is a developing therapy to treat neurological disorders. However, it is not clear how the cortical anatomy or the polarity and position of the electrode affects current flow and neural activation in the cortex. We developed a 3D computational model simulating ECS over the precentral gyrus. With the electrode placed directly above the gyrus, about half of the stimulus current flowed through the crown of the gyrus while current density was low along the banks deep in the sulci. Beneath the electrode, neurons oriented perpendicular to the cortical surface were depolarized by anodic stimulation, and neurons oriented parallel to the boundary were depolarized by cathodic stimulation. Activation was localized to the crown of the gyrus, and neurons on the banks deep in the sulci were not polarized. During regulated voltage stimulation, the magnitude of the activating function was inversely proportional to the thickness of the CSF and dura. During regulated current stimulation, the activating function was not sensitive to the thickness of the dura but was slightly more sensitive than during regulated voltage stimulation to the thickness of the CSF. Varying the width of the gyrus and the position of the electrode altered the distribution of the activating function due to changes in the orientation of the neurons beneath the electrode. Bipolar stimulation, although often used in clinical practice, reduced spatial selectivity as well as selectivity for neuron orientation.
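The polarization pattern described above is commonly predicted with the activating function, the discrete second difference of the extracellular potential along a fiber; positive values predict depolarization of the local compartment. The point-source potential below is an illustrative assumption, not the finite-element field from the paper's 3D gyrus model.

```python
import math

def activating_function(v_e, dx):
    """Discrete activating function along a straight fiber:
    f[n] = (Ve[n-1] - 2*Ve[n] + Ve[n+1]) / dx**2."""
    return [(v_e[n - 1] - 2 * v_e[n] + v_e[n + 1]) / dx ** 2
            for n in range(1, len(v_e) - 1)]

# Illustrative potential from a point source at height h above the fiber:
# Ve proportional to 1/r (anodic sign convention, arbitrary units).
xs = [0.1 * n - 1.0 for n in range(21)]          # fiber positions, mm
h = 0.5                                          # electrode height, mm
ve = [1.0 / math.sqrt(x ** 2 + h ** 2) for x in xs]
f = activating_function(ve, dx=0.1)
# Under the anode (x = 0) the second difference is negative, i.e. fibers
# parallel to the surface are hyperpolarized there, while the flanks are
# depolarized -- the polarity-dependent pattern the abstract describes.
```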
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.; Olariu, Stephen
1995-01-01
The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual-level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes. The result is a streamlined exchange of data between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.
Review of computational thermal-hydraulic modeling
Keefer, R.H.; Keeton, L.W.
1995-12-31
Corrosion of heat transfer tubing in nuclear steam generators has been a persistent problem in the power generation industry, assuming many different forms over the years depending on chemistry and operating conditions. Whatever the corrosion mechanism, a fundamental understanding of the process is essential to establish effective management strategies. To gain this fundamental understanding requires an integrated investigative approach that merges technology from many diverse scientific disciplines. An important aspect of an integrated approach is characterization of the corrosive environment at high temperature. This begins with a thorough understanding of local thermal-hydraulic conditions, since they affect deposit formation, chemical concentration, and ultimately corrosion. Computational Fluid Dynamics (CFD) can and should play an important role in characterizing the thermal-hydraulic environment and in predicting the consequences of that environment. The evolution of CFD technology now allows accurate calculation of steam generator thermal-hydraulic conditions and the resulting sludge deposit profiles. Similar calculations are also possible for model boilers, so that tests can be designed to be prototypic of the heat exchanger environment they are supposed to simulate. This paper illustrates the utility of CFD technology by way of examples in each of these two areas. This technology can be further extended to produce more detailed local calculations of the chemical environment in support plate crevices, beneath thick deposits on tubes, and deep in tubesheet sludge piles. Knowledge of this local chemical environment will provide the foundation for development of mechanistic corrosion models, which can be used to optimize inspection and cleaning schedules and focus the search for a viable fix.
Predictive Capability Maturity Model for computational modeling and simulation.
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not, however, assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies the specified application requirements.
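As a sketch of how the PCMM's element/level structure might be recorded, the snippet below scores the six elements from the abstract on the four maturity levels (0-3) and takes the minimum as the overall maturity. The scores and the minimum-aggregation rule are illustrative assumptions, not part of the PCMM itself.

```python
# Hypothetical maturity scores (levels 0-3) for the six PCMM elements
# named in the abstract; the numbers are invented for illustration.
pcmm_scores = {
    "representation and geometric fidelity": 2,
    "physics and material model fidelity": 1,
    "code verification": 3,
    "solution verification": 2,
    "model validation": 1,
    "uncertainty quantification and sensitivity analysis": 0,
}

# One conservative aggregation choice (an assumption, not prescribed by
# the PCMM): overall maturity is limited by the weakest element.
overall = min(pcmm_scores.values())
weakest = [name for name, score in pcmm_scores.items() if score == overall]
```

A real assessment would keep the per-element profile rather than a single number, since the point of the model is to expose which elements lag behind.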
A Model of Computation for Bit-Level Concurrent Computing and Programming: APEC
NASA Astrophysics Data System (ADS)
Ajiro, Takashi; Tsuchida, Kensei
A concurrent model of computation and a language based on the model for bit-level operation are useful for developing asynchronous and concurrent programs compositionally, which frequently use bit-level operations. Some examples are programs for video games, hardware emulation (including virtual machines), and signal processing. However, few models and languages are optimized and oriented to bit-level concurrent computation. We previously developed a visual programming language called A-BITS for bit-level concurrent programming. The language is based on a dataflow-like model that computes using processes that provide serial bit-level operations and FIFO buffers connected to them. It can express bit-level computation naturally and develop compositionally. We then devised a concurrent computation model called APEC (Asynchronous Program Elements Connection) for bit-level concurrent computation. This model enables precise and formal expression of the process of computation, and a notion of primitive program elements for controlling and operating can be expressed synthetically. Specifically, the model is based on a notion of uniform primitive processes, called primitives, that have three terminals and four ordered rules at most, as well as on bidirectional communication using vehicles called carriers. A new notion is that a carrier moving between two terminals can briefly express some kinds of computation such as synchronization and bidirectional communication. The model's properties make it most applicable to bit-level computation compositionally, since the uniform computation elements are enough to develop components that have practical functionality. Through future application of the model, our research may enable further research on a base model of fine-grain parallel computer architecture, since the model is suitable for expressing massive concurrency by a network of primitives.
Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira
2008-10-01
Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We previously proposed an autonomous molecular computer with high computational ability, now named the Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we report the first experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for it. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behavior was analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate the construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance research on autonomous DNA computers.
Learning Anatomy: Do New Computer Models Improve Spatial Understanding?
ERIC Educational Resources Information Center
Garg, Amit; Norman, Geoff; Spero, Lawrence; Taylor, Ian
1999-01-01
Assesses desktop-computer models that rotate in virtual three-dimensional space. Compares spatial learning with a computer carpal-bone model horizontally rotating at 10-degree views with the same model rotating at 90-degree views. (Author/CCM)
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... codes are free of coding errors and produce stable solutions; (v) Conceptual models have undergone...
Graph Partitioning Models for Parallel Computing
Hendrickson, B.; Kolda, T.G.
1999-03-02
Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
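The mismatch between the edge-cut metric and actual communication can be seen in a few lines: the edge cut counts every cut edge, while the communication volume counts each boundary vertex once per remote part it touches. The small star graph below is an illustrative example, not taken from the paper.

```python
def edge_cut(edges, part):
    """Number of edges whose endpoints lie in different parts."""
    return sum(1 for u, v in edges if part[u] != part[v])

def communication_volume(edges, part):
    """For each vertex, count the distinct remote parts among its
    neighbors: a vertex's value is sent once per remote part, no matter
    how many cut edges lead there."""
    nbr_parts = {}
    for u, v in edges:
        nbr_parts.setdefault(u, set())
        nbr_parts.setdefault(v, set())
        if part[u] != part[v]:
            nbr_parts[u].add(part[v])
            nbr_parts[v].add(part[u])
    return sum(len(s) for s in nbr_parts.values())

# Vertex 0 in part 0; vertices 1-3 in part 1; a star across the boundary.
edges = [(0, 1), (0, 2), (0, 3)]
part = {0: 0, 1: 1, 2: 1, 3: 1}
cut = edge_cut(edges, part)              # 3 cut edges
vol = communication_volume(edges, part)  # vertex 0 sends once, 1-3 send once each
```

A naive per-edge exchange would move 2 × cut = 6 words, but only 4 distinct values actually cross the boundary here, which is the sense in which the standard methodology "minimizes the wrong metric".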
Performance Models for Split-execution Computing Systems
Humble, Travis S; McCaskey, Alex; Schrock, Jonathan; Seddiqi, Hadayat; Britt, Keith A; Imam, Neena
2016-01-01
Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
Computational Modeling of Magnetically Actuated Propellant Orientation
NASA Technical Reports Server (NTRS)
Hochstein, John I.
1996-01-01
sufficient performance to support cryogenic propellant management tasks. In late 1992, NASA MSFC began a new investigation in this technology, commencing with the design of the Magnetically-Actuated Propellant Orientation (MAPO) experiment. A mixture of ferrofluid and water is used to simulate the paramagnetic properties of LOX, and the experiment is being flown on the KC-135 aircraft to provide a reduced gravity environment. The influence of a 0.4 Tesla ring magnet on flow into and out of a subscale Plexiglas tank is being recorded on video tape. The most efficient approach to evaluating the feasibility of MAPO is to complement the experimental program with the development of a computational tool to model the process of interest. The goal of the present research is to develop such a tool. Once confidence in its fidelity is established by comparison to data from the MAPO experiment, it can be used to assist in the design of future experiments and to study the parameter space of the process. Ultimately, it is hoped that the computational model can serve as a design tool for full-scale spacecraft applications.
Model for personal computer system selection.
Blide, L
1987-12-01
Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug-free, and that does what you want done. Next, you select the computer, printer, and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility and that allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.
Idealized Computational Models for Auditory Receptive Fields
Lindeberg, Tony; Friberg, Anders
2015-01-01
We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
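The Gammatone filter mentioned above has the textbook impulse response g(t) = t^(n-1) e^(-2πbt) cos(2πft), which can be sketched directly; the sample rate, center frequency, and bandwidth below are illustrative choices, not values from the paper.

```python
import math

def gammatone(t, f, b, n=4):
    """Unnormalized Gammatone impulse response
    g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t),
    with center frequency f (Hz), bandwidth b (Hz), and order n."""
    if t < 0:
        return 0.0  # causal filter: zero response before t = 0
    return t ** (n - 1) * math.exp(-2 * math.pi * b * t) * math.cos(2 * math.pi * f * t)

fs = 16000                  # sample rate, Hz (illustrative)
f0, bw = 1000.0, 125.0      # 1 kHz channel with an ERB-scale-like bandwidth
kernel = [gammatone(k / fs, f0, bw) for k in range(256)]
```

Convolving a signal with such kernels for a bank of center frequencies yields a time-frequency representation; the paper's generalized Gammatone family adds further parameters trading spectral selectivity against temporal delay.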
Computer modeling of a convective steam superheater
NASA Astrophysics Data System (ADS)
Trojan, Marcin
2015-03-01
A superheater generates superheated steam from the saturated steam leaving the evaporator. In a pulverized-coal-fired boiler, even a relatively small amount of ash causes fouling of the heating surfaces, including the superheaters. In the convection pass of the boiler, the flue gas temperature is lower and ash deposits can be loose or sintered. Ash fouling not only reduces heat transfer from the flue gas to the steam, but also increases the pressure drop along the flue gas path; the greater the pressure drop, the more power the fan consumes. If the superheater surfaces are covered with ash, the steam temperature at the outlet of the superheater stages falls and the flow rate of water injected into the attemperator must be reduced. The flue gas temperature after the individual superheater stages also increases. Consequently, boiler efficiency is reduced. This paper presents the results of computational fluid dynamics simulations, performed with commercial software, of the first-stage superheater of the OP-210M boiler. The temperature distributions of the steam and flue gas along their flow paths are determined, together with the temperatures of the tube walls and of the ash deposits. The calculated steam temperature is compared with measurement results. Knowledge of these temperatures is of great practical importance because it guides the choice of steel grade for a given superheater stage. Using the developed model of the superheater to determine its degree of ash fouling in on-line mode, one can control how often the steam sootblowers are activated.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Using Virtual Reality Computer Models to Support Student Understanding of Astronomical Concepts
ERIC Educational Resources Information Center
Barnett, Michael; Yamagata-Lynch, Lisa; Keating, Tom; Barab, Sasha A.; Hay, Kenneth E.
2005-01-01
The purpose of this study was to examine how 3-dimensional (3-D) models of the Solar System supported student development of conceptual understandings of various astronomical phenomena that required a change in frame of reference. In the course described in this study, students worked in teams to design and construct 3-D virtual reality computer…
The emerging role of cloud computing in molecular modelling.
Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W
2013-07-01
There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways.
Thermal crosstalk in 3-dimensional RRAM crossbar array
NASA Astrophysics Data System (ADS)
Sun, Pengxiao; Lu, Nianduan; Li, Ling; Li, Yingtao; Wang, Hong; Lv, Hangbing; Liu, Qi; Long, Shibing; Liu, Su; Liu, Ming
2015-08-01
High-density 3-dimensional (3D) crossbar resistive random access memory (RRAM) is a major focus of emerging memory technologies. To compete with ultra-high-density NAND and NOR memories, an understanding of the reliability mechanisms and scaling potential of 3D RRAM crossbar arrays is needed. Thermal crosstalk is one of the most critical effects to consider in 3D crossbar array applications. The Joule heat generated inside an RRAM device determines its own switching behavior, and in dense memory arrays the resulting temperature rise may degrade the resistance of neighboring devices. In this work, the thermal crosstalk effect and the scaling potential under thermal effects in 3D RRAM crossbar arrays are systematically investigated. It is revealed that the reset process in a 3D RRAM array is dominated by transient thermal effects. More importantly, thermal crosstalk can deteriorate the retention performance of a disturbed RRAM cell and even cause its stored state to fail from the low resistance state (LRS) to the high resistance state (HRS). In addition, the resistance-state degradation becomes more serious as the feature size is scaled down. Possible methods for alleviating the thermal crosstalk effect while further advancing the scaling potential are also provided and verified by numerical simulation.
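The heat spreading from a switching cell to its neighbors can be caricatured with a 1D explicit finite-difference diffusion sketch; the diffusivity, cell pitch, and temperatures below are assumed round numbers for illustration, not fitted device parameters.

```python
# 1D explicit finite-difference sketch of heat spreading from an active
# RRAM cell to its neighbors along a word line (illustrative parameters).
alpha = 1e-6                    # thermal diffusivity, m^2/s (assumed)
dx = 20e-9                      # cell pitch, m (assumed)
dt = 0.2 * dx ** 2 / alpha      # stable explicit step (must be <= dx^2/(2*alpha))
n, steps = 21, 200
T = [300.0] * n                 # ambient temperature, K
T[n // 2] = 600.0               # Joule-heated switching cell (held fixed)

for _ in range(steps):
    Tn = T[:]
    for i in range(1, n - 1):
        # discrete heat equation: dT/dt = alpha * d2T/dx2
        Tn[i] = T[i] + alpha * dt / dx ** 2 * (T[i - 1] - 2 * T[i] + T[i + 1])
    Tn[n // 2] = 600.0          # active cell clamped at its switching temperature
    Tn[0] = Tn[-1] = 300.0      # far boundaries stay at ambient
    T = Tn

neighbor_rise = T[n // 2 + 1] - 300.0   # temperature rise of the adjacent cell
```

Even this crude model shows the adjacent cell heating substantially toward the clamped temperature, which is the retention-disturb mechanism the abstract describes; a real analysis would be 3D and include electrode and dielectric conductivities.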
The first 3-dimensional assemblies of organotin-functionalized polyanions.
Piedra-Garza, Luis Fernando; Reinoso, Santiago; Dickman, Michael H; Sanguineti, Michael M; Kortz, Ulrich
2009-08-21
Reaction of the (CH₃)₂Sn²⁺ electrophile with the trilacunary [A-α-XW₉O₃₄]ⁿ⁻ Keggin polytungstates (X = P(V), As(V), Si(IV)), with guanidinium as the templating cation, resulted in the isostructural compounds Na[C(NH₂)₃]₂[{(CH₃)₂Sn(H₂O)}₃(A-α-PW₉O₃₄)]·9H₂O (1), Na[C(NH₂)₃]₂[{(CH₃)₂Sn(H₂O)}₃(A-α-AsW₉O₃₄)]·8H₂O (2), and Na₂[C(NH₂)₃]₂[{(CH₃)₂Sn(H₂O)}₃(A-α-SiW₉O₃₄)]·10H₂O (3). Compounds 1-3 constitute the first 3-dimensional assemblies of organotin-functionalized polyanions, and 3 is additionally the first example of a dimethyltin-containing tungstosilicate. All three show a similar chiral architecture based on tetrahedrally arranged {(CH₃)₂Sn}₃(A-α-XW₉O₃₄) monomeric building blocks connected via intermolecular Sn-O=W bridges, regardless of the size and/or charge of the heteroatom.
In vitro measurement of muscle volume with 3-dimensional ultrasound.
Delcker, A; Walker, F; Caress, J; Hunt, C; Tegeler, C
1999-05-01
The aim was to test the accuracy of muscle volume measurements with a new 3-dimensional (3-D) ultrasound system that allows freehand scanning of the transducer with improved ultrasound image quality and therefore clearer muscle outlines. Five resected cadaveric hand muscles were insonated, and the muscle volumes were calculated by 3-D reconstruction of the acquired 2-D ultrasound sections. Intra-reader, inter-reader, and follow-up variability were calculated, and the volume of the muscle tissue was also measured by water displacement. The 3-D ultrasound and water displacement measurements showed an average deviation of 10.1%; for the 3-D ultrasound measurements, intra-reader variability was 2.8%, inter-reader variability 2.4%, and follow-up variability 2.3%. 3-D measurements of muscle volume are valid and reliable. Serial sonographic measurements of muscle may be able to quantify changes in muscle volume that occur in disease and recovery.
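At its core, the volume computation behind such serial-section 3-D reconstructions is numerical integration of cross-section areas along the scan direction. The sketch below applies the trapezoidal rule to a spherical phantom and reports a percent deviation against the known volume, analogous to the study's comparison with water displacement; the phantom and slice spacing are illustrative assumptions.

```python
import math

def volume_from_slices(areas, dz):
    """Trapezoidal-rule volume from parallel 2-D cross-section areas
    spaced dz apart, as in serial-section 3-D reconstruction."""
    return dz * sum((a + b) / 2.0 for a, b in zip(areas, areas[1:]))

def percent_deviation(measured, reference):
    return 100.0 * abs(measured - reference) / reference

# Spherical phantom of radius 1: the slice at height z has area pi*(1 - z^2).
n_slices = 41
dz = 2.0 / (n_slices - 1)
areas = [math.pi * max(0.0, 1.0 - (-1.0 + dz * k) ** 2) for k in range(n_slices)]
v = volume_from_slices(areas, dz)                 # close to (4/3)*pi
dev = percent_deviation(v, 4.0 / 3.0 * math.pi)   # small discretization error
```

For real muscles the per-slice areas come from manually or automatically traced outlines, so segmentation error, not the integration rule, dominates the 10.1% deviation reported above.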
Evaluation of aerothermal modeling computer programs
NASA Technical Reports Server (NTRS)
Hsieh, K. C.; Yu, S. T.
1987-01-01
Various computer programs based upon the SIMPLE or SIMPLER algorithm were studied and compared for numerical accuracy, efficiency, and grid dependency. Four two-dimensional codes and one three-dimensional code, originally developed by a number of research groups, were considered. In general, the accuracy and computational efficiency of these TEACH-type programs were improved by modifying the differencing schemes and their solvers. A brief description of each program is given. Error-reduction, spline-flux, and second-upwind-differencing programs are covered.
Dissemination of computer skills among physicians: the infectious process model.
Quinn, F B; Hokanson, J A; McCracken, M M; Stiernberg, C M
1984-08-01
While the potential utility of computer technology to medicine is often acknowledged, little is known about the best methods to actually teach physicians about computers. The current variability in physician computer fluency implies there is no accepted minimum required level of computer skills for physicians. Special techniques are needed to instill these skills in the physician and to measure their effects within the medical profession. This hypothesis is suggested by the development of a specialized course for the new physician. In a population of physicians where medical computing usage was considered nonexistent, intense interest developed following exposure to a role model with strong credentials in both medicine and computer science. This produced an atmosphere in which there was a perceived benefit in being knowledgeable about medical computer usage. The subsequent increase in computer systems use was the result of the availability of resources and the development of computer skills that could be exchanged among the students and faculty. This growth in computer use is described using the parameters of an infectious process model. While other approaches may also be useful, the infectious process model permits the growth of medical computer usage to be quantitatively described, evaluates specific determinants of use patterns, and allows the future growth of computer utilization in medicine to be predicted.
NASA Astrophysics Data System (ADS)
Georgiev, K.; Zlatev, Z.
2010-11-01
The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that the model must be run on high-performance computer architectures. Implementing and tuning such a complex large-scale model on each different computer is a non-trivial task. Here, results are presented comparing runs of the model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), distributed-memory parallel computers (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), shared-memory parallel computers (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.). The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the caches and hierarchical memories of modern computers is discussed, along with the performance, speed-ups, and efficiency achieved. The parallel code of DEM, created using the standard MPI library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are presented in brief.
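The domain partitioning idea underlying the parallel DEM can be sketched as a 1D decomposition that assigns each MPI rank a near-equal contiguous block of grid rows; the grid size and rank count below are illustrative, not the actual DEM grid.

```python
def partition_rows(n_rows, n_procs):
    """1D domain partitioning: split n_rows grid rows as evenly as possible
    among n_procs processes, returning a (start, stop) half-open range per
    rank; block sizes differ by at most one row."""
    base, extra = divmod(n_rows, n_procs)
    bounds, start = [], 0
    for rank in range(n_procs):
        stop = start + base + (1 if rank < extra else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

# Illustrative 96-row grid split across 10 ranks.
parts = partition_rows(96, 10)
```

In the actual model each rank would also exchange halo rows with its neighbors every time step, which is where cache-friendly data layout and the two-level (MPI plus threads) parallelism discussed above pay off.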
Integrated Multiscale Modeling of Molecular Computing Devices
Jerzy Bernholc
2011-02-03
will some day reach a miniaturization limit, forcing designers of Si-based electronics to pursue increased performance by other means. Any other alternative approach would have the unenviable task of matching the ability of Si technology to pack more than a billion interconnected and addressable devices on a chip the size of a thumbnail. Nevertheless, the prospects of developing alternative approaches to fabricate electronic devices have spurred an ever-increasing pace of fundamental research. One of the promising possibilities is molecular electronics (ME), self-assembled molecular-based electronic systems composed of single-molecule devices in ultra dense, ultra fast molecular-sized components. This project focused on developing accurate, reliable theoretical modeling capabilities for describing molecular electronics devices. The participants in the project are given in Table 1. The primary outcomes of this fundamental computational science grant are publications in the open scientific literature. As listed below, 62 papers have been published from this project. In addition, the research has also been the subject of more than 100 invited talks at conferences, including several plenary or keynote lectures. Many of the goals of the original proposal were completed. Specifically, the multi-disciplinary group developed a unique set of capabilities and tools for investigating electron transport in fabricated and self-assembled nanostructures at multiple length and time scales.
Computer Center: BASIC String Models of Genetic Information Transfer.
ERIC Educational Resources Information Center
Spain, James D., Ed.
1984-01-01
Discusses some of the major genetic information processes which may be modeled by computer program string manipulation, focusing on replication and transcription. Also discusses instructional applications of using string models. (JN)
Ocean Modeling and Visualization on Massively Parallel Computer
NASA Technical Reports Server (NTRS)
Chao, Yi; Li, P. Peggy; Wang, Ping; Katz, Daniel S.; Cheng, Benny N.
1997-01-01
Climate modeling is one of the grand challenges of computational science, and ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change.
Using Computational Simulations to Confront Students' Mental Models
ERIC Educational Resources Information Center
Rodrigues, R.; Carvalho, P. Simeão
2014-01-01
In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…
Investigating College and Graduate Students' Multivariable Reasoning in Computational Modeling
ERIC Educational Resources Information Center
Wu, Hsin-Kai; Wu, Pai-Hsing; Zhang, Wen-Xin; Hsu, Ying-Shao
2013-01-01
Drawing upon the literature in computational modeling, multivariable reasoning, and causal attribution, this study aims at characterizing multivariable reasoning practices in computational modeling and revealing the nature of understanding about multivariable causality. We recruited two freshmen, two sophomores, two juniors, two seniors, four…
A Model for Guiding Undergraduates to Success in Computational Science
ERIC Educational Resources Information Center
Olagunju, Amos O.; Fisher, Paul; Adeyeye, John
2007-01-01
This paper presents a model for guiding undergraduates to success in computational science. A set of integrated, interdisciplinary training and research activities is outlined for use as a vehicle to increase and produce graduates with research experiences in computational and mathematical sciences. The model is responsive to the development of…
Overview of ASC Capability Computing System Governance Model
Doebling, Scott W.
2012-07-11
This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.
Generate rigorous pyrolysis models for olefins production by computer
Klein, M.T.; Broadbelt, L.J.; Grittman, D.H.
1997-04-01
With recent advances in the automation of the model-building process for large networks of kinetic equations, it may become feasible to generate computer pyrolysis models for naphthas and gas oil feedstocks. The potential benefit of a rigorous mechanistic model for these relatively complex liquid feedstocks is great, due to their diverse characterizations and yield spectra. An ethane pyrolysis example is used to illustrate the computer generation of reaction mechanism models.
A computational model of the human hand 93-ERI-053
Hollerbach, K.; Axelrod, T.
1996-03-01
The objectives of the Computational Hand Modeling project were to prove the feasibility of applying the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never been successfully attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopedic modeling.
Crossover from 2-dimensional to 3-dimensional aggregations of clusters on square lattice substrates
NASA Astrophysics Data System (ADS)
Cheng, Yi; Zhu, Yu-Hong; Pan, Qi-Fa; Yang, Bo; Tao, Xiang-Ming; Ye, Gao-Xiang
2015-11-01
A Monte Carlo study on the crossover from 2-dimensional to 3-dimensional aggregations of clusters is presented. Based on the traditional cluster-cluster aggregation (CCA) simulation, a modified growth model is proposed. The clusters (including single particles and their aggregates) diffuse with diffusion step length l (1 ≤ l ≤ 7) and aggregate on a square lattice substrate. If the number of particles contained in a cluster is larger than a critical size sc, the particles at the edge of the cluster have a possibility to jump onto the upper layer, which results in the crossover from 2-dimensional to 3-dimensional aggregations. Our simulation results are in good agreement with the experimental findings. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374082 and 11074215), the Science Foundation of Zhejiang Province Department of Education, China (Grant No. Y201018280), the Fundamental Research Funds for Central Universities, China (Grant No. 2012QNA3010), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20100101110005).
Rouwkema, Jeroen; de Boer, Jan; Van Blitterswijk, Clemens A
2006-09-01
To engineer tissues with clinically relevant dimensions, one must overcome the challenge of rapidly creating functional blood vessels to supply cells with oxygen and nutrients and to remove waste products. We tested the hypothesis that endothelial cells, cocultured with osteoprogenitor cells, can organize into a prevascular network in vitro. When cultured in a spheroid coculture model with human mesenchymal stem cells, human umbilical vein endothelial cells (HUVECs) form a 3-dimensional prevascular network within 10 days of in vitro culture. The formation of the prevascular network was promoted by seeding 2% or fewer HUVECs. Moreover, the addition of endothelial cells resulted in a 4-fold upregulation of the osteogenic marker alkaline phosphatase. The addition of mouse embryonic fibroblasts did not result in stabilization of the prevascular network. Upon implantation, the prevascular network developed further and structures including lumen could be seen regularly. However, anastomosis with the host vasculature was limited. We conclude that endothelial cells are able to form a 3-dimensional (3D) prevascular network in vitro in a bone tissue engineering setting. This finding is a strong indication that in vitro prevascularization is a promising strategy to improve implant vascularization in bone tissue engineering.
Ambient temperature modelling with soft computing techniques
Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo
2010-07-15
This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
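The hybrid training scheme described above, in which back-propagation-trained individuals seed a GA population, can be sketched on a toy problem (all data, model shape, and GA parameters below are illustrative, not from the paper; the "network" is reduced to a two-parameter linear model to keep the example short):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 20)
y = 2.0 * X + 1.0                      # toy target: w = 2, b = 1

def mse(ind):                          # individual = [w, b]
    w, b = ind
    return np.mean((w * X + b - y) ** 2)

def backprop_seed(steps=200, lr=0.1):
    """Gradient descent (the 'BP' role) from a random start,
    producing one good individual for the GA population."""
    w, b = rng.normal(size=2)
    for _ in range(steps):
        err = w * X + b - y
        w -= lr * np.mean(2 * err * X)
        b -= lr * np.mean(2 * err)
    return np.array([w, b])

# GA population: a few BP-initialised individuals, the rest random.
pop = [backprop_seed() for _ in range(3)]
pop += [rng.normal(size=2) for _ in range(17)]

for gen in range(30):                  # simple elitist GA with mutation
    pop.sort(key=mse)
    parents = pop[:5]                  # keep the best five
    children = [p + rng.normal(scale=0.05, size=2)
                for p in parents for _ in range(3)]
    pop = parents + children

best = min(pop, key=mse)
print(best, mse(best))                 # near [2, 1] with a tiny error
```

Seeding only a few individuals preserves the GA's diversity while guaranteeing at least one good starting point, which is the motivation the abstract describes.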
3-Dimensional shear wave elastography of breast lesions
Chen, Ya-ling; Chang, Cai; Zeng, Wei; Wang, Fen; Chen, Jia-jian; Qu, Ning
2016-01-01
Abstract Color patterns of 3-dimensional (3D) shear wave elastography (SWE) have recently emerged as a promising method for differentiating tumoral nodules. This study evaluated the diagnostic accuracy of color patterns of 3D SWE in breast lesions, with special emphasis on coronal planes. A total of 198 consecutive women with 198 breast lesions (125 malignant and 73 benign) were included, who underwent conventional ultrasound (US), 3D B-mode, and 3D SWE before surgical excision. SWE color patterns of Views A (transverse), T (sagittal), and C (coronal) were determined. Sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were calculated. The distribution of SWE color patterns was significantly different between malignant and benign lesions (P = 0.001). In malignant lesions, “Stiff Rim” was significantly more frequent in View C (crater sign, 60.8%) than in View A (51.2%, P = 0.013) and View T (54.1%, P = 0.035). The AUC for the combination of “Crater Sign” and conventional US was significantly higher than for View A (0.929 vs 0.902, P = 0.004) and View T (0.929 vs 0.907, P = 0.009), and specificity significantly increased (90.4% vs 78.1%, P = 0.013) without significant change in sensitivity (85.6% vs 88.0%, P = 0.664) as compared with conventional US. In conclusion, the combination of conventional US with 3D SWE color patterns significantly increased diagnostic accuracy, with the “Crater Sign” in the coronal plane of the highest value. PMID:27684820
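The sensitivity and specificity figures quoted above follow directly from confusion-matrix counts; a small sketch (with counts chosen to be consistent with the reported 125 malignant and 73 benign lesions, not taken from the paper) shows the arithmetic:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the figures reported above
# (125 malignant, 73 benign; 85.6% sensitivity, 90.4% specificity):
sensitivity, specificity = sens_spec(tp=107, fn=18, tn=66, fp=7)
print(round(100 * sensitivity, 1), round(100 * specificity, 1))  # 85.6 90.4
```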
A new preclinical 3-dimensional agarose colony formation assay.
Kajiwara, Yoshinori; Panchabhai, Sonali; Levin, Victor A
2008-08-01
The evaluation of new drug treatments and combination treatments for gliomas and other cancers requires a robust means to interrogate wide dose ranges and varying times of drug exposure without stain-inactivation of the cells (colonies). To this end, we developed a 3-dimensional (3D) colony formation assay that makes use of GelCount technology, a new cell colony counter for gels and soft agars. We used U251MG, SNB19, and LNZ308 glioma cell lines and MiaPaCa pancreas adenocarcinoma and SW480 colon adenocarcinoma cell lines. Colonies were grown in a two-tiered agarose that had 0.7% agarose on the bottom and 0.3% agarose on top. We then studied the effects of DFMO, carboplatin, and SAHA over a 3-log dose range and over multiple days of drug exposure. Using GelCount, we approximated the area under the curve (AUC) of colony volumes as the sum of colony volumes (μm² × OD) in each plate to calculate IC50 values. Adenocarcinoma colonies were recognized by GelCount scanning at 3-4 days, while it took 6-7 days to detect glioma colonies. The growth rate of MiaPaCa and SW480 cells was rapid, with 100 colonies counted in 5-6 days; glioma cells grew more slowly, with 100 colonies counted in 9-10 days. Reliable log dose versus AUC curves were observed for all drugs studied. In conclusion, the GelCount method that we describe is more quantitative than traditional colony assays and allows precise study of drug effects with respect to both dose and time of exposure using fewer culture plates.
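The AUC-as-sum-of-colony-volumes idea and the IC50 read-out can be sketched as follows (toy dose-response numbers, not GelCount output; the log-linear interpolation is one simple choice, not necessarily the authors' exact method):

```python
import math

def plate_auc(colony_volumes):
    """Approximate a plate's 'AUC' as the sum of its colony volumes."""
    return sum(colony_volumes)

def ic50(doses, responses):
    """Dose giving 50% of the lowest-dose response, by log-linear
    interpolation. Doses must be increasing and strictly positive."""
    half = 0.5 * responses[0]
    pairs = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if r0 >= half >= r1:
            t = (r0 - half) / (r0 - r1)
            return math.exp(math.log(d0) + t * (math.log(d1) - math.log(d0)))
    raise ValueError("response never falls to 50%")

doses     = [0.1, 1.0, 10.0, 100.0]        # a 3-log dose range
responses = [1000.0, 800.0, 300.0, 50.0]   # summed colony volumes per plate
print(ic50(doses, responses))              # falls between 1 and 10
```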
Bringing computational models of bone regeneration to the clinic.
Carlier, Aurélie; Geris, Liesbet; Lammens, Johan; Van Oosterwyck, Hans
2015-01-01
Although the field of bone regeneration has experienced great advancements in the last decades, integrating all the relevant, patient-specific information into a personalized diagnosis and optimal treatment remains a challenging task due to the large number of variables that affect bone regeneration. Computational models have the potential to cope with this complexity and to improve the fundamental understanding of the bone regeneration processes as well as to predict and optimize the patient-specific treatment strategies. However, the current use of computational models in daily orthopedic practice is very limited or nonexistent. We have identified three key hurdles that limit the translation of computational models of bone regeneration from bench to bedside. First, there exists a clear mismatch between the scope of the existing and the clinically required models. Second, most computational models are confronted with limited quantitative information of insufficient quality, thereby hampering the determination of patient-specific parameter values. Third, current computational models are only corroborated with animal models, whereas a thorough (retrospective and prospective) assessment of the computational model will be crucial to convince the health care providers of the capabilities thereof. These challenges must be addressed so that computational models of bone regeneration can reach their true potential, resulting in the advancement of individualized care and reduction of the associated health care costs.
Computer Modeling and Research in the Classroom
ERIC Educational Resources Information Center
Ramos, Maria Joao; Fernandes, Pedro Alexandrino
2005-01-01
We report on a computational chemistry course for undergraduate students that successfully incorporated a research project on the design of new contrast agents for magnetic resonance imaging and shift reagents for in vivo NMR. Course outcomes were positive: students were quite motivated during the whole year--they learned what was required of…
Integrating Cloud-Computing-Specific Model into Aircraft Design
NASA Astrophysics Data System (ADS)
Zhimin, Tian; Qi, Lin; Guangwen, Yang
Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.
Candidate gene analyses of 3-dimensional dentoalveolar phenotypes in subjects with malocclusion
Weaver, Cole A.; Miller, Steven F.; da Fontoura, Clarissa S. G.; Wehby, George L.; Amendt, Brad A.; Holton, Nathan E.; Allareddy, Veeratrishul; Southard, Thomas E.; Moreno Uribe, Lina M.
2017-01-01
Introduction Genetic studies of malocclusion etiology have identified 4 deleterious mutations in genes, DUSP6, ARHGAP21, FGF23, and ADAMTS1 in familial Class III cases. Although these variants may have large impacts on Class III phenotypic expression, their low frequency (<1%) makes them unlikely to explain most malocclusions. Thus, much of the genetic variation underlying the dentofacial phenotypic variation associated with malocclusion remains unknown. In this study, we evaluated associations between common genetic variations in craniofacial candidate genes and 3-dimensional dentoalveolar phenotypes in patients with malocclusion. Methods Pretreatment dental casts or cone-beam computed tomographic images from 300 healthy subjects were digitized with 48 landmarks. The 3-dimensional coordinate data were submitted to a geometric morphometric approach along with principal component analysis to generate continuous phenotypes including symmetric and asymmetric components of dentoalveolar shape variation, fluctuating asymmetry, and size. The subjects were genotyped for 222 single-nucleotide polymorphisms in 82 genes/loci, and phenotype-genotype associations were tested via multivariate linear regression. Results Principal component analysis of symmetric variation identified 4 components that explained 68% of the total variance and depicted anteroposterior, vertical, and transverse dentoalveolar discrepancies. Suggestive associations (P < 0.05) were identified with PITX2, SNAI3, 11q22.2-q22.3, 4p16.1, ISL1, and FGF8. Principal component analysis for asymmetric variations identified 4 components that explained 51% of the total variance and captured left-to-right discrepancies resulting in midline deviations, unilateral crossbites, and ectopic eruptions. Suggestive associations were found with TBX1 AJUBA, SNAI3 SATB2, TP63, and 1p22.1. Fluctuating asymmetry was associated with BMP3 and LATS1. Associations for SATB2 and BMP3 with asymmetric variations remained significant
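The landmark-based phenotyping step above (principal component analysis of 3D coordinate data) can be sketched with synthetic data (the Procrustes superimposition used in geometric morphometrics is omitted for brevity; subject and landmark counts are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in: 30 subjects x (48 landmarks * 3 coordinates), as in the
# study's 48-landmark digitization but with random data and fewer subjects.
X = rng.normal(size=(30, 48 * 3))
Xc = X - X.mean(axis=0)          # mean-centre each coordinate

# PCA via SVD: rows of Vt are the principal component directions.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
scores = Xc @ Vt.T               # subject coordinates in PC space

# Fraction of total shape variance captured by the first 4 PCs,
# the quantity the abstract reports (68% symmetric, 51% asymmetric).
print(var_explained[:4].sum())
```

The per-subject PC scores (`scores`) are the continuous phenotypes that would then be regressed on genotypes.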
Computer modeling of ORNL storage tank sludge mobilization and mixing
Terrones, G.; Eyler, L.L.
1993-09-01
This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks.
Implementing and assessing computational modeling in introductory mechanics
NASA Astrophysics Data System (ADS)
Caballero, Marcos D.; Kohlmyer, Matthew A.; Schatz, Michael F.
2012-12-01
Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term, 1357 students in this course solved a suite of 14 computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated with a proctored assignment involving a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation, and the implications for computational instruction in introductory science, technology, engineering, and mathematics (STEM) courses.
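A central-force problem of the kind the students programmed can be sketched in plain Python (the course used VPython; this version drops the visualization, and the force constant, time step, and initial conditions are arbitrary illustrative choices):

```python
# Euler-Cromer integration of motion under an attractive inverse-square
# central force, F = -k r_hat / r^2 (arbitrary units).
def simulate(pos, vel, k=1.0, dt=1e-4, steps=100_000):
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        r = (x * x + y * y) ** 0.5
        ax, ay = -k * x / r**3, -k * y / r**3
        vx += ax * dt; vy += ay * dt     # update velocity first...
        x += vx * dt;  y += vy * dt      # ...then position (Euler-Cromer)
    return (x, y), (vx, vy)

# A circular orbit: at r = 1 the circular speed is sqrt(k / r) = 1,
# so the radius should stay close to 1 throughout the run.
p, v = simulate((1.0, 0.0), (0.0, 1.0))
print((p[0] ** 2 + p[1] ** 2) ** 0.5)    # ~1.0
```

Euler-Cromer is a natural first integrator here because, unlike plain Euler, it keeps the orbit bounded over many periods.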
Limits on the Power of Some Models of Quantum Computation
NASA Astrophysics Data System (ADS)
Ortiz, Gerardo; Somma, Rolando; Barnum, Howard; Knill, Emanuel
2006-09-01
We consider quantum computational models defined via a Lie-algebraic theory. In these models, specified initial states are acted on by Lie-algebraic quantum gates and the expectation values of Lie algebra elements are measured at the end. We show that these models can be efficiently simulated on a classical computer in time polynomial in the dimension of the algebra, regardless of the dimension of the Hilbert space where the algebra acts. Similar results hold for the computation of the expectation value of operators implemented by a gate-sequence. We introduce a Lie-algebraic notion of generalized mean-field Hamiltonians and show that they are efficiently (exactly) solvable by means of a Jacobi-like diagonalization method. Our results generalize earlier ones on fermionic linear optics computation and provide insight into the source of the power of the conventional model of quantum computation.
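The "polynomial in the dimension of the algebra" claim has a well-known concrete instance in free-fermion (quadratic-Hamiltonian) dynamics, where an n x n correlation matrix replaces the 2^n-dimensional Hilbert space; the following numerical sketch uses illustrative numbers, not anything from the paper:

```python
import numpy as np

# For a quadratic Hamiltonian H = sum_ij h_ij c_i^dag c_j, the n x n
# correlation matrix C_ij = <c_i^dag c_j> evolves by conjugation with
# U = exp(-i h t) -- an n x n computation, not a 2^n one.
rng = np.random.default_rng(2)
n, t = 6, 0.7
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (A + A.conj().T) / 2                 # Hermitian single-particle matrix

w, V = np.linalg.eigh(h)                 # h = V diag(w) V^dag
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # exp(-i h t)

C0 = np.diag([1.0, 1, 0, 0, 1, 0]).astype(complex)  # 3 modes occupied
Ct = U @ C0 @ U.conj().T

print(np.trace(Ct).real)   # particle number is conserved: stays 3
```

The same conjugation trick is what makes fermionic linear optics, which the abstract cites as a special case, classically simulable.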
Computer Aided Modeling and Post Processing with NASTRAN Analysis
NASA Technical Reports Server (NTRS)
Boroughs, R. R.
1984-01-01
Computer aided engineering systems are invaluable tools in performing NASTRAN finite element analysis. These techniques are implemented in both the pre-processing and post-processing phases of the NASTRAN analysis. The finite element model development, or pre-processing phase, was automated with a computer aided modeling program called Supertabl, and the review and interpretation of the results of the NASTRAN analysis, or post-processing phase, was automated with a computer aided plotting program called Output Display. An intermediate program, Nasplot, which was developed in-house, has also helped to cut down on the model checkout time and reduce errors in the model. An interface has been established between the finite element computer aided engineering system and the Learjet computer aided design system whereby data can be transferred back and forth between the two. These systems have significantly improved productivity and the ability to perform NASTRAN analysis in response to product development requests.
Sheth, Ujash; Theodoropoulos, John; Abouali, Jihad
2015-01-01
Recurrent anterior shoulder instability often results from large bony Bankart or Hill-Sachs lesions. Preoperative imaging is essential in guiding our surgical management of patients with these conditions. However, we are often limited to making an attempt to interpret a 3-dimensional (3D) structure using conventional 2-dimensional imaging. In cases in which complex anatomy or bony defects are encountered, this type of imaging is often inadequate. We used 3D printing to produce a solid 3D model of a glenohumeral joint from a young patient with recurrent anterior shoulder instability and complex Bankart and Hill-Sachs lesions. The 3D model from our patient was used in the preoperative planning stages of an arthroscopic Bankart repair and remplissage to determine the depth of the Hill-Sachs lesion and the degree of abduction and external rotation at which the Hill-Sachs lesion engaged. PMID:26759768
Operation of the computer model for microenvironment atomic oxygen exposure
NASA Technical Reports Server (NTRS)
Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.
1995-01-01
A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.
Ground Motion Models and Computer Techniques
1972-04-01
…tectonic stress-strain distributions induced by changing the pore water pressure. A general computer subroutine (TAMEOS) is described which … interactions, material phase changes, and dependence of strength parameters on the thermodynamic state. This report describes improved techniques … in terms of logarithmic measures of the principal stretches (Eqs. 3.11-3.13). It is shown in Ref. 24 that the rate of change of compression …
Computer Modeling for Optical Waveguide Sensors.
1987-12-15
Subject terms: optical waveguide sensors; computer … reflection. The resultant probe beam transmission may be plotted as a function of changes in the refractive index of the surrounding fluid medium. BASIC … all angles of incidence about the critical angle θcr. It should be noted that N in Equation (3) is a function of θ …
A Computational Dual-Process Model of Social Interaction
2014-01-30
…to facilitate the building of computational models of agents, visualized as avatars, which pursue goals that drive their behaviors in social … employed for over 20 years. OMAR was used to facilitate the building of the computational models in which the agents, visualized as avatars, pursue the … overview of the visualization of the scenarios' human performance models as avatars that portray the social interactions of the individuals involved.
A stirling engine computer model for performance calculations
NASA Technical Reports Server (NTRS)
Tew, R.; Jefferies, K.; Miao, D.
1978-01-01
To support the development of the Stirling engine as a possible alternative to the automobile spark-ignition engine, the thermodynamic characteristics of the Stirling engine were analyzed and modeled on a computer. The modeling techniques used are presented. The performance of an existing rhombic-drive Stirling engine was simulated by use of this computer program, and some typical results are presented. Engine tests are planned in order to evaluate this model.
Multiphase Turbulence Modeling for Computational Ship Hydrodynamics
2014-05-30
…to the SGS model as bubbles become under-resolved, passing through the numerical Hinze scale. … URANS closure modeling by analysis of the … variable-density turbulence) for URANS models have been developed and tested a priori for turbulent mass flux and kinetic energy. The iLES … as well as established the importance of turbulent mass flux and anisotropy in the wake that has guided the development of URANS closure models.
Student Models in Computer-Aided Instruction
ERIC Educational Resources Information Center
Self, J. A.
1974-01-01
A student model is proposed consisting of a set of programs to represent the student's knowledge state. Teaching proceeds after a comparative evaluation of student and teacher programs, and learning is represented by direct modification of the student model. The advantages of an explicit procedural model are illustrated by considering a program which…
Computational neurorehabilitation: modeling plasticity and learning to predict recovery.
Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas
2016-04-30
Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
Computational technology of multiscale modeling the gas flows in microchannels
NASA Astrophysics Data System (ADS)
Podryga, V. O.
2016-11-01
The work is devoted to modeling gas mixture flows in engineering microchannels under conditions involving many scales in the computational domain. A computational technology using a multiscale approach that combines macro- and microscopic models is presented. At the macrolevel, the nature of the flow and the external influence on it are considered; as a model, the system of quasigasdynamic equations is selected. At the microlevel, the correction of gasdynamic parameters and the determination of boundary conditions are made; as a numerical model, Newton's equations and the molecular dynamics method are selected. Different algorithm types used for the implementation of multiscale modeling are considered. The results of the model problems for separate stages are given.
Ku-Band rendezvous radar performance computer simulation model
NASA Technical Reports Server (NTRS)
Magnusson, H. G.; Goff, M. F.
1984-01-01
All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescope.
Establishing a Cloud Computing Success Model for Hospitals in Taiwan.
Lian, Jiunn-Woei
2017-01-01
The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.
GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique
NASA Astrophysics Data System (ADS)
Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.
2015-12-01
Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium, and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data, and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to a local vertical datum. This research presents a package called GRAVTool, developed in MATLAB, to compute local geoid models by the RCR technique, and its application to a study area. The study area comprises the Federal District of Brazil (~6000 km² of undulating relief, with heights varying from 600 m to 1340 m), located between coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical example shows the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data with 3 arc-second resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m, minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by geometric leveling supported by GNSS positioning. The results were also better than those of the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m, minimum = -0.040 m).
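The remove-compute-restore arithmetic described in the abstract can be sketched in a few lines. This is a toy illustration with made-up scalar values; the real GRAVTool package operates on gridded gravity data and performs a Stokes integration, so every number below is hypothetical:

```python
# Sketch of the Remove-Compute-Restore (RCR) idea for geoid computation.
# Hypothetical scalar values stand in for gridded data.

def remove(dg_obs, dg_ggm, dg_topo):
    """Remove the long-wavelength (global geopotential model) and
    short-wavelength (topography) parts from the observed anomaly (mGal)."""
    return dg_obs - dg_ggm - dg_topo

def restore(n_res, n_ggm, n_ind):
    """Restore the geopotential-model and indirect-effect contributions
    to the residual geoid height (m)."""
    return n_res + n_ggm + n_ind

# Remove step: observed anomaly minus model and terrain contributions
dg_res = remove(dg_obs=35.2, dg_ggm=30.1, dg_topo=2.4)   # residual, mGal

# Compute step (placeholder factor standing in for the Stokes integral, m)
n_res = 0.012 * dg_res

# Restore step (hypothetical model and indirect-effect heights, m)
N = restore(n_res, n_ggm=-12.430, n_ind=0.015)
print(round(N, 3))  # -12.383
```

The three functions mirror the three stages the abstract names; only the Stokes integration is elided.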
Dynamic Stall Computations Using a Zonal Navier-Stokes Model
1988-06-01
The code is used to calculate the flow field about a NACA 0012 airfoil oscillating in pitch. Surface pressure distributions and integrated lift, pitching-moment, and drag coefficients versus angle of attack are compared with existing experimental data for four cases and with existing computational results.
Computational Electromagnetic Modeling of SansEC(Trade Mark) Sensors
NASA Technical Reports Server (NTRS)
Smith, Laura J.; Dudley, Kenneth L.; Szatkowski, George N.
2011-01-01
This paper describes the preliminary effort to apply computational design tools to aid in the development of an electromagnetic SansEC resonant sensor composite materials damage detection system. The computational methods and models employed on this research problem will evolve in complexity over time and will lead to the development of new computational methods and experimental sensor systems that demonstrate the capability to detect, diagnose, and monitor the damage of composite materials and structures on aerospace vehicles.
Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.; Erdemir, Ahmet; Guess, Trent; Reinbolt, Jeff
2009-05-01
Abstract In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.
Lin, Xiaozhen; Liu, Yanpu; Edwards, Sean P
2013-10-01
Our aim was to investigate the potential effect of advancement by bilateral sagittal split osteotomy (BSSO) on the natural head position by using 3-dimensional cephalometric analysis. Seven consecutive patients who had had only BSSO advancement, with preoperative and 6-week postoperative cone-beam computed tomography (CT) scans, were recruited to this retrospective study. Two variables, SNB and SNC2, were used to indicate craniomandibular alignment and craniocervical inclination, respectively, in the midsagittal plane. Using 3-dimensional cephalometric analysis software, SNB and SNC2 were recorded in the volume and measured in the midsagittal plane at 3 independent time points. Reliability was measured, and a paired t test was used to assess the significance of differences between the means of SNB and SNC2 before and after operation. The 3-dimensional cephalometric measurements showed good reliability. SNB increased as planned in all the advanced mandibles, the cervical vertebrae were brought forward after BSSO, and SNC2 increased significantly in 6 of the 7 patients. Three-dimensional cephalometric analysis may provide an alternative way of assessing cephalometrics. After BSSO advancement, the natural head position changed by increasing the craniocervical inclination in an anteroposterior direction.
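As a sketch of how such angular variables are derived from 3D landmark coordinates, the following computes the angle at a vertex landmark (e.g., SNB is the angle at nasion between the rays to sella and B point). All coordinates are hypothetical; the study used dedicated cephalometric software:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` formed by the rays to p1 and p2."""
    v1 = [a - b for a, b in zip(p1, vertex)]
    v2 = [a - b for a, b in zip(p2, vertex)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical landmark coordinates (mm) from a CBCT volume
sella = (0.0, 0.0, 0.0)
nasion = (0.0, 70.0, 0.0)
b_point = (-5.0, 5.0, 60.0)   # hypothetical B point
snb = angle_at(nasion, sella, b_point)
```

Measuring the same three landmarks on pre- and postoperative volumes and differencing the angles gives the change the study reports.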
Stress analysis in platform-switching implants: a 3-dimensional finite element study.
Pellizzer, Eduardo Piza; Verri, Fellippo Ramos; Falcón-Antenucci, Rosse Mary; Júnior, Joel Ferreira Santiago; de Carvalho, Paulo Sérgio Perri; de Moraes, Sandra Lúcia Dantas; Noritomi, Pedro Yoshito
2012-10-01
The aim of this study was to evaluate the influence of the platform-switching technique on stress distribution in implant, abutment, and peri-implant tissues, through a 3-dimensional finite element study. Three 3-dimensional mandibular models were fabricated using the SolidWorks 2006 and InVesalius software. Each model was composed of a bone block with one implant 10 mm long and of different diameters (3.75 and 5.00 mm). The UCLA abutments also ranged in diameter from 4.1 mm to 5.00 mm. After obtaining the geometries, the models were transferred to the software FEMAP 10.0 for pre- and postprocessing of finite elements to generate the mesh, loading, and boundary conditions. A total load of 200 N was applied in axial (0°), oblique (45°), and lateral (90°) directions. The models were solved by the software NeiNastran 9.0 and transferred to the software FEMAP 10.0 to obtain the results, which were visualized through von Mises and maximum principal stress maps. Model A (implant of 3.75 mm/abutment of 4.1 mm) exhibited the largest area of stress concentration under all loadings (axial, oblique, and lateral) for the implant and the abutment. All models presented stress areas at the abutment level and at the implant/abutment interface. Models B (implant of 5.0 mm/abutment of 5.0 mm) and C (implant of 5.0 mm/abutment of 4.1 mm) presented smaller areas of stress concentration and a similar distribution pattern. For the cortical bone, low stress concentration was observed in the peri-implant region for models B and C compared with model A. The trabecular bone exhibited low stress that was well distributed in models B and C. Model A presented the highest stress concentration; model B exhibited the best stress distribution. There was no significant difference between the large-diameter implants (models B and C).
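The von Mises maps mentioned above reduce the six Cauchy stress components at each element to one equivalent stress. A minimal sketch of that reduction (the values below are hypothetical; the study used FEMAP/NeiNastran):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Sanity check: pure uniaxial stress returns itself (hypothetical 200 MPa)
print(von_mises(200.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 200.0
```

Postprocessors color each element by this scalar, which is what produces the stress-concentration maps compared across models A, B, and C.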
Modeling Trait Anxiety: From Computational Processes to Personality
Raymond, James G.; Steele, J. Douglas; Seriès, Peggy
2017-01-01
Computational methods are increasingly being applied to the study of psychiatric disorders. Often, this involves fitting models to the behavior of individuals with subclinical character traits that are known vulnerability factors for the development of psychiatric conditions. Anxiety disorders can be examined with reference to the behavior of individuals high in “trait” anxiety, which is a known vulnerability factor for the development of anxiety and mood disorders. However, it is not clear how this self-report measure relates to neural and behavioral processes captured by computational models. This paper reviews emerging computational approaches to the study of trait anxiety, specifying how interacting processes susceptible to analysis using computational models could drive a tendency to experience frequent anxious states and promote vulnerability to the development of clinical disorders. Existing computational studies are described in the light of this perspective and appropriate targets for future studies are discussed. PMID:28167920
Super-Micro Computer Weather Prediction Model
1990-06-01
Report contents: a. Model equations; b. Grid domain and horizontal nesting; c. Time integration and outer lateral boundary condition; d. Coupling of the model; eddy diffusion sensitivity tests; domain for prototype testing; and a comparison of the boundary-layer parameterizations. The comparison, including radiation calculations, with other boundary-layer work is presented in section 5, and the report concludes with section 6.
Computational modeling in cognitive science: a manifesto for change.
Addyman, Caspar; French, Robert M
2012-07-01
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes it almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
A model for computing at the SSC (Superconducting Super Collider)
Baden, D. (Dept. of Physics); Grossman, R. (Lab. for Advanced Computing)
1990-06-01
High energy physics experiments at the Superconducting Super Collider (SSC) will show a substantial increase in complexity and cost over existing forefront experiments, and computing needs may no longer be met via simple extrapolations from the previous experiments. We propose a model for computing at the SSC based on technologies common in private industry involving both hardware and software. 11 refs., 1 fig.
Computer Mediated Social Justice: A New Model for Educators.
ERIC Educational Resources Information Center
Tettegah, Sharon
2002-01-01
Introduces a new model for analyzing teachers' conversations in computer-mediated communication (CMC), based on information from Bakhtin (1981), Freire (1993), social identity theory, psychological capital, cultural consciousness, and CMC theoretical frameworks. Considers CMC and human-computer interaction (HCI) to address cultural differences that…
Computer Simulation (Microcultures): An Effective Model for Multicultural Education.
ERIC Educational Resources Information Center
Nelson, Jorge O.
This paper presents a rationale for using high-fidelity computer simulation in planning for and implementing effective multicultural education strategies. Using computer simulation, educators can begin to understand and plan for the concept of cultural sensitivity in delivering instruction. The model promises to emphasize teachers' understanding…
Several Computational Opportunities and Challenges Associated with Climate Change Modeling
Wang, Dali; Post, Wilfred M; Wilson, Bruce E
2010-01-01
One of the key factors in the improved understanding of climate science is the development and improvement of high fidelity climate models. These models are critical for projections of future climate scenarios, as well as for highlighting the areas where further measurement and experimentation are needed for knowledge improvement. In this paper, we focus on several computing issues associated with climate change modeling. First, we review a fully coupled global simulation and a nested regional climate model to demonstrate key design components, and then we explain the underlying restrictions associated with the temporal and spatial scales of climate change modeling. We then discuss the role of high-end computers in climate change science. Finally, we explain the importance of fostering regional, integrated climate impact analysis. While we discuss the computational challenges associated with climate change modeling, we hope these considerations can also benefit many other modeling research programs involving multiscale system dynamics.
Computational Morphodynamics: A modeling framework to understand plant growth
Chickarmane, Vijay; Roeder, Adrienne H.K.; Tarr, Paul T.; Cunha, Alexandre; Tobin, Cory; Meyerowitz, Elliot M.
2014-01-01
Computational morphodynamics utilizes computer modeling to understand the development of living organisms over space and time. Results from biological experiments are used to construct accurate and predictive models of growth. These models are then used to make novel predictions that provide further insight into the processes in question, which can be tested experimentally to either confirm or rule out the validity of the computational models. This review highlights two fundamental issues: (1) models should span and integrate single-cell behavior with tissue development, and (2) the necessity of understanding the feedback between the mechanics of growth and chemical or molecular signaling. We review different approaches to modeling plant growth and discuss a variety of model types that can be implemented, with the aim of demonstrating how this methodology can be used to explore the morphodynamics of plant development. PMID:20192756
Cancer evolution: mathematical models and computational inference.
Beerenwinkel, Niko; Schwarz, Roland F; Gerstung, Moritz; Markowetz, Florian
2015-01-01
Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy.
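As an illustration of the population-dynamics models the review surveys, here is a minimal two-subclone sketch: a wild-type tumor population under logistic growth seeds a faster-growing (e.g., drug-resistant) subclone by mutation. All rates and sizes are hypothetical placeholders, not values from the review:

```python
def simulate(steps=200, dt=0.1, k=1e9, r_wt=0.05, r_mut=0.08, mu=1e-4):
    """Forward-Euler integration of two competing subclones sharing a
    carrying capacity k; mutation converts wild-type cells at rate mu."""
    wt, mut = 1e6, 0.0          # initial cell counts (hypothetical)
    for _ in range(steps):
        crowding = 1.0 - (wt + mut) / k
        d_wt = r_wt * wt * crowding - mu * wt
        d_mut = r_mut * mut * crowding + mu * wt
        wt += dt * d_wt
        mut += dt * d_mut
    return wt, mut

wt, mut = simulate()  # mutant subclone emerges but is still a minority
```

Even this toy model exhibits the qualitative behavior the review discusses: a subclone with a growth advantage accumulates and will eventually dominate on longer time horizons.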
Computer modeling of tactical high frequency antennas
NASA Astrophysics Data System (ADS)
Gregory, Bobby G., Jr.
1992-06-01
The purpose of this thesis was to compare the performance of three tactical high frequency antennas to be used as possible replacements for the Tactical Data Communications Central (TDCC) antennas. The antennas were modeled using the Numerical Electromagnetics Code, Version 3 (NEC3), and the Eyring Low Profile and Buried Antenna Modeling Program (PAT7) for several different frequencies and ground conditions. The performance was evaluated by comparing gain at the desired takeoff angles, the voltage standing wave ratio of each antenna, and its omnidirectional capability. The buried antenna models, the ELPA-302 and the horizontal dipole, were most effective when employed over poor ground conditions. The best performance under all conditions tested was demonstrated by the HT-20T. Each of these antennas has tactical advantages and disadvantages and can optimize communications under certain conditions; the selection of the best antenna is situation dependent. An experimental test of these models is recommended to verify the modeling results.
Computational model for Halorhodopsin photocurrent kinetics
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Stefanescu, Roxana; Talathi, Sachin
2013-03-01
Optogenetics is a rapidly developing optical stimulation technique that employs light-activated ion channels to excite (using channelrhodopsin (ChR)) or suppress (using halorhodopsin (HR)) impulse activity in neurons with high temporal and spatial resolution. This technique holds enormous potential to externally control activity states in neuronal networks. The channel kinetics of ChR and HR are well understood and amenable to mathematical modeling. Significant progress has been made in recent years to develop models for ChR channel kinetics. To date, however, there is no model to mimic photocurrents produced by HR. Here, we report the first model developed for HR photocurrents, based on a four-state model of the HR photocurrent kinetics. The model provides an excellent fit (root-mean-square error of 3.1862 × 10⁻⁴) to an empirical profile of experimentally measured HR photocurrents. In combination, mathematical models for ChR and HR photocurrents can provide effective means to design and test light-based control systems to regulate neural activity, which in turn may have implications for the development of novel light-based stimulation paradigms for brain disease control. I would like to thank the University of Florida and the Physics Research Experience for Undergraduates (REU) program, funded through NSF DMR-1156737. This research was also supported through start-up funds provided to Dr. Sachin Talathi.
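To make the four-state idea concrete, here is a generic photocycle with two conducting (open) and two non-conducting (closed) states, integrated with forward Euler. The state topology follows the general structure of such channel models, but every rate constant below is an illustrative placeholder, not a fitted HR value from the paper:

```python
# Generic four-state photocycle sketch: C1 -> O1 -> O2 <-> C2 -> C1,
# with light-driven activation C1->O1 and C2->O2. Rates are hypothetical.

def step(states, rates, light_on, dt):
    c1, o1, o2, c2 = states
    ka1 = rates["ka1"] if light_on else 0.0   # C1 -> O1 (light-driven)
    ka2 = rates["ka2"] if light_on else 0.0   # C2 -> O2 (light-driven)
    d_c1 = rates["kr"] * c2 - ka1 * c1
    d_o1 = ka1 * c1 - (rates["kd1"] + rates["k12"]) * o1 + rates["k21"] * o2
    d_o2 = rates["k12"] * o1 + ka2 * c2 - (rates["kd2"] + rates["k21"]) * o2
    d_c2 = rates["kd1"] * o1 + rates["kd2"] * o2 - (rates["kr"] + ka2) * c2
    return [c1 + dt * d_c1, o1 + dt * d_o1, o2 + dt * d_o2, c2 + dt * d_c2]

rates = {"ka1": 0.5, "ka2": 0.1, "kd1": 0.1, "kd2": 0.05,
         "k12": 0.05, "k21": 0.02, "kr": 0.01}   # 1/ms, illustrative
states = [1.0, 0.0, 0.0, 0.0]     # all channels start closed in C1
for _ in range(2000):             # 2 s of illumination at dt = 1 ms
    states = step(states, rates, light_on=True, dt=0.001)
current = states[1] + states[2]   # photocurrent ~ open-state fraction
```

The fitted model in the paper plays the same role: the open-state occupancy, scaled by conductance and driving force, reproduces the measured photocurrent profile.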
Computational modeling and engineering in pediatric and congenital heart disease
Marsden, Alison L.; Feinstein, Jeffrey A.
2015-01-01
Purpose of review Recent methodological advances in computational simulations are enabling increasingly realistic simulations of hemodynamics and physiology, driving increased clinical utility. We review recent developments in the use of computational simulations in pediatric and congenital heart disease, describe the clinical impact of modeling in single ventricle patients, and provide an overview of emerging areas. Recent Findings Multiscale modeling combining patient-specific hemodynamics with reduced-order (i.e., mathematically and computationally simplified) circulatory models has become the de facto standard for modeling local hemodynamics and "global" circulatory physiology. We review recent advances that have enabled faster solutions, discuss new methods (e.g., fluid-structure interaction and uncertainty quantification) that lend computational and clinical realism to results, highlight novel computationally derived surgical methods for single ventricle patients, and discuss areas in which modeling has begun to exert its influence, including Kawasaki disease, fetal circulation, tetralogy of Fallot (and the pulmonary tree), and circulatory support. Summary Computational modeling is emerging as a crucial tool for clinical decision-making and evaluation of novel surgical methods and interventions in pediatric cardiology and beyond. Continued development of modeling methods, with an eye towards clinical needs, will enable clinical adoption in a wide range of pediatric and congenital heart diseases. PMID:26262579
COMPUTATIONAL MODELING OF TCDD DISRUPTION OF B CELL TERMINAL DIFFERENTIATION
In this study, we established a computational model describing the molecular circuit underlying B cell terminal differentiation and how TCDD may affect this process by impinging upon various molecular targets.
Computational Modeling, Formal Analysis, and Tools for Systems Biology
Bartocci, Ezio; Lió, Pietro
2016-01-01
As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest in systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science. PMID:26795950
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
Computer models and output, Spartan REM: Appendix B
NASA Technical Reports Server (NTRS)
Marlowe, D. S.; West, E. J.
1984-01-01
A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.
An analysis of symbolic linguistic computing models in decision making
NASA Astrophysics Data System (ADS)
Rodríguez, Rosa M.; Martínez, Luis
2013-01-01
It is common for experts involved in complex real-world decision problems to use natural language to express their knowledge in uncertain frameworks. Language is inherently vague, so probabilistic decision models are not well suited to such cases. Therefore, other tools such as fuzzy logic and fuzzy linguistic approaches have been successfully used to model and manage such vagueness. The use of linguistic information implies operating on that type of information, i.e., processes of computing with words (CWW). Different schemes have been proposed to deal with those processes, and diverse symbolic linguistic computing models have been introduced to accomplish the linguistic computations. In this paper, we overview the relationship between decision making and CWW, and focus on the symbolic linguistic computing models that have been widely used in linguistic decision making, analysing whether all of them can be considered within the CWW paradigm.
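One widely used symbolic linguistic computing model of the kind surveyed is the 2-tuple representation, which pairs a label with a symbolic translation so that aggregating label indices loses no information. A minimal sketch (the seven-label scale below is illustrative, not taken from the paper):

```python
# Sketch of the 2-tuple symbolic linguistic computing model: a numeric
# aggregation result beta in [0, g] becomes (label, alpha) with
# alpha in [-0.5, 0.5), so no information is lost in aggregation.

LABELS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

def to_two_tuple(beta):
    """Map beta onto the closest label plus a symbolic translation."""
    i = round(beta)
    return LABELS[i], beta - i

def aggregate(indices):
    """Mean of label indices, returned as an exact 2-tuple."""
    return to_two_tuple(sum(indices) / len(indices))

# Three experts rate an alternative as low (2), medium (3), and high (4)
label, alpha = aggregate([2, 3, 4])
print(label, alpha)  # medium 0.0
```

The symbolic translation alpha is what distinguishes this model from plain label rounding: intermediate results such as 3.4 keep their fractional part instead of collapsing to a label.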
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.
Predictive Models and Computational Toxicology (II IBAMTOX)
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
A computational model for a regenerator
NASA Technical Reports Server (NTRS)
Gary, J.; Daney, D. E.; Radebaugh, R.
1985-01-01
This paper concerns a numerical model of a regenerator running at very low temperatures. The model consists of the usual three equations for a compressible fluid with an additional equation for a matrix temperature. The main difficulty with the model is the very low Mach number (approximately 1.E-3). The divergence of the velocity is not small, the pressure divergence is small, and the pressure fluctuation in time is not small. An asymptotic expansion based on the bounded derivative method of Kreiss is used to give a reduced model which eliminates acoustic waves. The velocity is then determined by a two-point boundary value problem which does not contain a time derivative. The solution obtained from the reduced system is compared with the numerical solution of the original system.
Predictive Computational Modeling of Chromatin Folding
NASA Astrophysics Data System (ADS)
di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.
In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of loop forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning the model was able to predict accurately the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types and a preference of chromatin of type A to sit at the periphery of the chromosomes.
Enhanced absorption cycle computer model. Final report
Grossman, G.; Wilk, M.
1993-09-01
Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of the cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the User's Manual.
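At the cycle level, the COP the code reports is simple bookkeeping over the computed heat duties. A toy sketch with hypothetical single-effect duties (the real code first solves the unit equations to obtain these values):

```python
# Cycle-level COP from unit heat duties, as described in the abstract.
# Duties below are hypothetical placeholders, not simulation output.

def cop_cooling(q_evaporator, q_generator, w_pump=0.0):
    """Cooling COP of an absorption cycle: useful evaporator duty per
    unit of driving heat (plus pump work, usually negligible)."""
    return q_evaporator / (q_generator + w_pump)

# Hypothetical single-effect LiBr-H2O duties in kW
print(round(cop_cooling(q_evaporator=10.5, q_generator=14.0), 3))  # 0.75
```

Multistage configurations change only which duties enter the numerator and denominator; the bookkeeping is the same.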
Computer modeling of high intensity solar cells
NASA Astrophysics Data System (ADS)
Gray, J. L.; Lundstrom, M. S.; Schwartz, R. J.
1987-01-01
The purpose of this program is to provide general analytic support to Sandia National Laboratories' effort to develop high efficiency, high concentration solar cells. This report covers work performed between November 5, 1984, and December 31, 1985, and includes reprints of three papers presented at the 18th IEEE Photovoltaic Specialists' Conference. In the first paper, the factors that presently prevent achieving the predicted theoretical efficiencies (in excess of 30% at concentration) are examined. It is demonstrated, by two-dimensional computer simulations, that these efficiencies might be obtained by improved light trapping techniques and by fabrication of low resistance heteroface contacts. The second paper examines the Rose-Weaver lifetime and surface recombination velocity measurement technique. It is shown that even very small uncertainties in the measured quantities lead to large uncertainties in the computed lifetime and surface recombination velocity. This leads to radically different interpretations of how the recombination is distributed throughout the device, and therefore limits the usefulness of the measurement technique. Design options and constraints of GaAs concentrator cells are examined in the third paper, and the effectiveness of various design options is assessed. It is shown that although such design options are of little use in increasing the efficiency of heteroface cells, they can improve the efficiency of shallow junction cells so that it is comparable to that of heteroface cells. In addition, documentation describing the use of both the one- and two-dimensional silicon codes, SCAP1D and SCAP2D, as well as the one-dimensional AlGaAs solar cell simulation code, is included.
Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.
ERIC Educational Resources Information Center
Baker, Richard
A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…
Supersonic jet and crossflow interaction: Computational modeling
NASA Astrophysics Data System (ADS)
Hassan, Ez; Boles, John; Aono, Hikaru; Davis, Douglas; Shyy, Wei
2013-02-01
The supersonic jet-in-crossflow problem, which involves shocks, turbulent mixing, and large-scale vortical structures, requires special treatment of turbulence to obtain accurate solutions. Different turbulence modeling techniques are reviewed and compared in terms of their performance in predicting results consistent with the experimental data. Reynolds-averaged Navier-Stokes (RANS) models are limited in prediction of fuel structure due to their inability to accurately capture unsteadiness in the flow. Large eddy simulation (LES) is not yet practical due to prohibitively large grid requirements near the wall. Hybrid RANS/LES can offer a reasonable compromise between accuracy and efficiency. The hybrid models are based on various approaches such as explicit blending of RANS and LES, detached eddy simulation (DES), and filter-based multi-scale models. In particular, they can be used to evaluate the turbulent Schmidt number modeling techniques used in jet-in-crossflow simulations. Specifically, an adaptive approach can be devised by utilizing the information obtained from the resolved field to help assign the value of turbulent Schmidt number in the sub-filter field. The adaptive approach combined with the multi-scale model improves the results, especially when highly refined grids are needed to resolve small structures involved in the mixing process.
Actors: A Model of Concurrent Computation in Distributed Systems.
1985-06-01
Technical Report 844: Actors: A Model of Concurrent Computation in Distributed Systems. Gul A. Agha, MIT Artificial Intelligence Laboratory. Approved for public release; distribution unlimited.
Revised OPTSA Model. Volume 2. Computer Program Documentation
1975-06-01
Paper P-1111: Revised OPTSA Model, Volume 2: Computer Program Documentation. Lowell Bruce Anderson, Jerome Bracken, Eleanor L. Schwartz.
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
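The global summation the abstract singles out is the classic logarithmic-depth reduction. As an illustrative sketch only (not the CM-2 compiler's code), a pairwise tree reduction in Python shows why its cost grows as O(log p) in the number of processors p:

```python
# Illustrative sketch of an O(log p) tree reduction for a global
# summation across p processors (not the original CM-2 implementation).
def tree_sum(values):
    """Sum a sequence by pairwise combination, as a SIMD machine would:
    each round halves the number of partial sums, so a p-element
    reduction finishes in ceil(log2(p)) rounds."""
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        # Pair up partial sums; an odd leftover element passes through.
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)] + \
               (vals[-1:] if len(vals) % 2 else [])
        rounds += 1
    return vals[0], rounds

total, rounds = tree_sum(range(64))  # 64 "processors"
# 64 partial sums combine in 6 rounds (log2(64) = 6)
```

A serial sum of p values takes p - 1 additions; the tree needs the same number of additions but only log2(p) dependent steps, which is what matters on a parallel machine.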
Cogeneration computer model assessment: Advanced cogeneration research study
NASA Technical Reports Server (NTRS)
Rosenberg, L.
1983-01-01
Cogeneration computer simulation models were assessed to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.
Computational model of miniature pulsating heat pipes
Martinez, Mario J.; Givler, Richard C.
2013-01-01
The modeling work described herein represents the Sandia National Laboratories (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground plane (TGP), a device of planar configuration that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g. dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio, and orientation.
Practical Use of Computationally Frugal Model Analysis Methods.
Hill, Mary C; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen
2016-03-01
Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics make it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring thousands, tens of thousands, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and obtain greater scientific insight from ongoing and future modeling efforts.
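As a hedged illustration of what "computationally frugal" means in practice (not code from the paper), one-at-a-time local sensitivity analysis needs only 1 + n model runs for n parameters; the toy model and parameter names below are invented for the example:

```python
# Sketch of a computationally frugal analysis: local (one-at-a-time)
# sensitivities by finite differences need only 1 + n_parameters model
# runs, versus thousands for many sampling-based methods.
def local_sensitivities(model, params, rel_step=0.01):
    """Scaled sensitivities (response to a 1% parameter change),
    one model run per parameter plus one base run."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1 + rel_step)
        # scaled sensitivity ~ param * d(output)/d(param)
        sens[name] = (model(perturbed) - base) / rel_step
    return sens

# Hypothetical groundwater-style model: drawdown from two parameters.
def toy_model(p):
    return p["pumping"] / (p["transmissivity"] + 1.0)

s = local_sensitivities(toy_model, {"pumping": 10.0, "transmissivity": 4.0})
```

Such scaled sensitivities make a quick, cheap diagnostic of which parameters dominate the output, flagging where the more expensive sampling-based methods are actually worth their cost.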
Computational quantum chemistry and adaptive ligand modeling in mechanistic QSAR.
De Benedetti, Pier G; Fanelli, Francesca
2010-10-01
Drugs are adaptive molecules. They realize this peculiarity by generating different ensembles of prototropic forms and conformers that depend on the environment. Among the impressive amount of available computational drug discovery technologies, quantitative structure-activity relationship approaches that rely on computational quantum chemistry descriptors are the most appropriate to model adaptive drugs. Indeed, computational quantum chemistry descriptors are able to account for the variation of the intramolecular interactions of the training compounds, which reflect their adaptive intermolecular interaction propensities. This enables the development of causative, interpretive and reasonably predictive quantitative structure-activity relationship models, and, hence, sound chemical information finalized to drug design and discovery.
Computer models to study uterine activation at labour.
Sharp, G C; Saunders, P T K; Norman, J E
2013-11-01
Improving our understanding of the initiation of labour is a major aim of modern obstetric research, in order to better diagnose and treat pregnant women in whom the process occurs abnormally. In particular, increased knowledge will help us identify the mechanisms responsible for preterm labour, the single biggest cause of neonatal morbidity and mortality. Attempts to improve our understanding of the initiation of labour have been restricted by the inaccessibility of gestational tissues to study during pregnancy and at labour, and by the lack of fully informative animal models. However, computer modelling provides an exciting new approach to overcome these restrictions and offers new insights into uterine activation during term and preterm labour. Such models could be used to test hypotheses about drugs to treat or prevent preterm labour. With further development, an effective computer model could be used by healthcare practitioners to develop personalized medicine for patients on a pregnancy-by-pregnancy basis. Very promising work is already underway to build computer models of the physiology of uterine activation and contraction. These models aim to predict changes and patterns in uterine electrical excitation during term labour. There have been far fewer attempts to build computer models of the molecular pathways driving uterine activation and there is certainly scope for further work in this area. The integration of computer models of the physiological and molecular mechanisms that initiate labour will be particularly useful.
Computer modeling of electrical performance of detonators
Furnberg, C.M.; Peevy, G.R.; Brigham, W.P.; Lyons, G.R.
1995-05-01
An empirical model of detonator electrical performance which describes the resistance of the exploding bridgewire (EBW) or exploding foil initiator (EFI or slapper) as a function of energy deposition will be described. This model features many parameters that can be adjusted to obtain a close fit to experimental data. This has been demonstrated using recent experimental data taken with the cable discharge system located at Sandia National Laboratories. This paper is a continuation of the paper entitled "Cable Discharge System for Fundamental Detonator Studies" presented at the 2nd NASA/DOD/DOE Pyrotechnic Workshop.
Computer Models Simulate Fine Particle Dispersion
NASA Technical Reports Server (NTRS)
2010-01-01
Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.
Computational social network modeling of terrorist recruitment.
Berry, Nina M.; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.
2004-10-01
The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.
Manipulating Heat Flow through 3 Dimensional Nanoscale Phononic Crystal Structure
2014-06-02
Contract FA23861214047; author: Baowen Li. The report shows, through computer simulation, how three-dimensional (3D) phononic crystal structures can confine phonons and thus reduce thermal conductivity. A 3D phononic crystal (PnC) with spherical pores can reduce the thermal conductivity of bulk Si by a factor of up to 10,000 at room temperature.
A Computer Model for Direct Carbonate Fuel Cells
Ding, J.; Patel, P.S.; Farooque, M.; Maru, H.C.
1997-04-01
A 3-D computer model, describing fluid flow, heat and mass transfer, and chemical and electrochemical reaction processes, has been developed for guiding the direct carbonate fuel cell (DFC) stack design. This model is able to analyze the direct internal reforming (DIR) as well as the integrated IIR (indirect internal reforming)-DIR designs. Reasonable agreement between computed results and fuel cell test data, such as flow variations, temperature distributions, cell potentials, and exhaust gas compositions as well as methane conversions, was obtained. Details of the model and comparisons of the modeling results with experimental DFC stack data are presented in the paper.
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology have established the need for unified methods to evaluate computing system performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis, and evaluation of degradable computing systems is considered. Within this framework, several user-oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time-varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
Computational Modeling Develops Ultra-Hard Steel
NASA Technical Reports Server (NTRS)
2007-01-01
Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.
Computer Modeling of Ceramic Boride Composites
2014-11-01
Fragmentary abstract recovered from the scanned report: ... draw conclusions about the structure of the interface (to build geometric and energy models); 3) electronic, which give a quantum-mechanical description ... mechanical properties of quasi-binary composites, taking into account interaction between phases at the interface. Calculation of linear thermal ... difficult task in the framework of quantum-mechanical calculations; still a question of the dependency of the mechanical characteristics on the temperature ...
A Computational Model of Spatial Development
NASA Astrophysics Data System (ADS)
Hiraki, Kazuo; Sashima, Akio; Phillips, Steven
Psychological experiments on children's development of spatial knowledge suggest experience at self-locomotion with visual tracking as important factors. Yet, the mechanism underlying development is unknown. We propose a robot that learns to mentally track a target object (i.e., maintaining a representation of an object's position when outside the field-of-view) as a model for spatial development. Mental tracking is considered as prediction of an object's position given the previous environmental state and motor commands, and the current environment state resulting from movement. Following Jordan & Rumelhart's (1992) forward modeling architecture, the system consists of two components: an inverse model of sensory input to desired motor commands; and a forward model of motor commands to desired sensory input (goals). The robot was tested on the 'three cups' paradigm (where children are required to select the cup containing the hidden object under various movement conditions). Consistent with child development, without the capacity for self-locomotion the robot's errors are self-centered. When given the ability of self-locomotion the robot responds allocentrically.
Computer Model Simulates Air Pollution Over Roads
ERIC Educational Resources Information Center
Environmental Science and Technology, 1972
1972-01-01
A sophisticated modeling technique which predicts pollutant movement accurately and may aid in the design of new freeways is reported. EXPLOR (Examination of Pollution Levels of Roadways) was developed specifically to predict pollutant concentrations in a milewide corridor traversed by a roadway. (BL)
Enabling Grid Computing resources within the KM3NeT computing model
NASA Astrophysics Data System (ADS)
Filippidis, Christos
2016-04-01
KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.
2014-12-07
Fragmentary excerpt from the report: research efforts in this project focused on the synergistic coupling of computational material science and mechanics of hybrid and light-weight polymeric composite structures, including atomistic modeling in polymer nanocomposite systems.
Interactive computational models of particle dynamics using virtual reality
Canfield, T.; Diachin, D.; Freitag, L.; Heath, D.; Herzog, J.; Michels, W.
1996-12-31
An increasing number of industrial applications rely on computational models to reduce costs in product design, development, and testing cycles. Here, the authors discuss an interactive environment for the visualization, analysis, and modification of computational models used in industrial settings. In particular, they focus on interactively placing massless, massed, and evaporating particulate matter in computational fluid dynamics applications. They discuss the numerical model used to compute the particle pathlines in the fluid flow for display and analysis. They briefly describe the toolkits developed for vector and scalar field visualization, interactive particulate source placement, and a three-dimensional GUI interface. This system is currently used in two industrial applications, and they present the tools in the context of these applications. They summarize the current state of the project and offer directions for future research.
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix.
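The article's worked examples use MATLAB and R; the same embarrassingly parallel pattern looks like this in Python (the toy reliability model below is invented for illustration):

```python
# Embarrassingly parallel Monte Carlo: each replication is independent,
# so a process pool can run them concurrently (Python analogue of the
# article's MATLAB/R examples; the toy model here is our own).
from multiprocessing import Pool
import random

def one_replication(seed):
    """One simulation run: estimate a failure probability by sampling."""
    rng = random.Random(seed)          # per-task RNG keeps runs reproducible
    load = rng.gauss(100, 15)          # hypothetical demand
    capacity = rng.gauss(130, 10)      # hypothetical capacity
    return 1.0 if load > capacity else 0.0

if __name__ == "__main__":
    with Pool() as pool:               # one worker per CPU core by default
        outcomes = pool.map(one_replication, range(10_000))
    print(f"estimated failure probability: {sum(outcomes)/len(outcomes):.3f}")
```

Because the replications never communicate, the speedup is close to the number of cores; the only caveats are the per-task overhead of pickling arguments and the need for independent random streams, handled here by seeding each task separately.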
Computational challenges in modeling and simulating living matter
NASA Astrophysics Data System (ADS)
Sena, Alexandre C.; Silva, Dilson; Marzulo, Leandro A. J.; de Castro, Maria Clicia Stelling
2016-12-01
Computational modeling has been successfully used to help scientists understand physical and biological phenomena. Recent technological advances allow the simulation of larger systems with greater accuracy. However, devising those systems requires new approaches and novel architectures, such as the use of parallel programming, so that the application can run in the new high performance environments, which are often computer clusters composed of different computation devices such as traditional CPUs, GPGPUs, Xeon Phis, and even FPGAs. It is expected that scientists take advantage of the increasing computational power to model and simulate more complex structures and even merge different models into larger and more extensive ones. This paper aims at discussing the challenges of using those devices to simulate such complex systems.
A Lumped Computational Model for Sodium Sulfur Battery Analysis
NASA Astrophysics Data System (ADS)
Wu, Fan
Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy, and charge throughout the battery has been developed. The computation processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential, and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
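As a minimal sketch (not the thesis code) of how Faraday's law couples cell current to species transport in a time-marching lumped model:

```python
# Minimal sketch of a lumped, time-marching discharge model: Faraday's
# law converts cell current into a rate of species consumption, and an
# explicit Euler step updates the lumped composition each time step.
F = 96485.0  # Faraday constant, C/mol

def march(n_sodium_mol, current_a, dt_s, steps, electrons_per_mol=1):
    """Deplete the sodium inventory of a Na-S cell at constant current."""
    history = [n_sodium_mol]
    for _ in range(steps):
        # Faraday's law: dn/dt = -I / (z * F)
        n_sodium_mol -= current_a * dt_s / (electrons_per_mol * F)
        history.append(n_sodium_mol)
    return history

# 2 A drawn for 1000 s consumes I*t/(z*F) ~= 0.0207 mol of Na
h = march(n_sodium_mol=1.0, current_a=2.0, dt_s=100.0, steps=10)
```

In the full model each step would also update temperature and potential from the new composition before computing the next current; the species update shown here is the Faraday's-law kernel of that loop.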
Images as drivers of progress in cardiac computational modelling.
Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A; Bishop, Martin J; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente
2014-08-01
Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved.
A distributed computing model for telemetry data processing
NASA Astrophysics Data System (ADS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
Instability phenomena in plasticity: Modelling and computation
NASA Astrophysics Data System (ADS)
Stein, E.; Steinmann, P.; Miehe, C.
1995-12-01
We presented aspects and results related to the broad field of strain localization, with special focus on large strain elastoplastic response. Therefore, we first re-examined issues related to the classification of discontinuities and the classical description of localization, with a particular emphasis on an Eulerian geometric representation. We touched on the problem of mesh objectivity and discussed results of a particular regularization method, namely the micropolar approach. Generally, regularization has to preserve ellipticity and to reflect the underlying physics. For example, ductile materials have to be modelled including viscous effects, whereas geomaterials are adequately described by the micropolar approach. Then we considered localization phenomena within solids undergoing large strain elastoplastic deformations. Here, we documented the influence of isotropic damage on the failure analysis. Next, the interesting influence of an orthotropic yield condition on the spatial orientation of localized zones has been studied. Finally, we investigated the localization condition for an algorithmic model of finite strain single crystal plasticity.
Computer models for amorphous silicon hydrides
NASA Astrophysics Data System (ADS)
Mousseau, Normand; Lewis, Laurent J.
1990-02-01
A procedure for generating fully coordinated model structures appropriate to hydrogenated amorphous semiconductors is described. The hydrogen is incorporated into an amorphous matrix using a bond-switching process similar to that proposed by Wooten, Winer, and Weaire, which ensures that fourfold coordination is preserved. After each inclusion of hydrogen, the structure is relaxed using a finite-temperature Monte Carlo algorithm. The method is applied to a-Si:H at various hydrogen concentrations. The resulting model structures are found to be in excellent agreement with recent neutron-scattering measurements on a sample with 12 at. % H. Our prescription, which is essentially nonlocal, allows great flexibility and can easily be extended to related systems.
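The finite-temperature Monte Carlo relaxation described above can be illustrated with a minimal Metropolis sketch. The one-dimensional toy energy, step size, and temperature below are illustrative placeholders, not the interatomic potential or the bond-switching topology used in the paper:

```python
import math
import random

def metropolis_accept(e_old, e_new, temperature, rng):
    """Metropolis criterion: always accept downhill moves; accept
    uphill moves with probability exp(-dE/T) (Boltzmann constant
    folded into the temperature unit)."""
    if e_new <= e_old:
        return True
    return rng.random() < math.exp(-(e_new - e_old) / temperature)

def relax(energy, x0, temperature, steps, step_size, seed=0):
    """Finite-temperature Monte Carlo relaxation of one coordinate,
    standing in for the relaxation applied after each hydrogen
    insertion in the procedure described above."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        trial = x + rng.uniform(-step_size, step_size)
        if metropolis_accept(energy(x), energy(trial), temperature, rng):
            x = trial
    return x
```

Relaxing a toy quadratic energy from a distant starting point drives the coordinate toward the minimum, with the small temperature leaving only residual thermal fluctuations.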
Integrated Multiscale Modeling of Molecular Computing Devices
Gregory Beylkin
2012-03-23
Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.
Computational modeling of nuclear thermal rockets
NASA Technical Reports Server (NTRS)
Peery, Steven D.
1993-01-01
The topics are presented in viewgraph form and include the following: rocket engine transient simulation (ROCETS) system; ROCETS performance simulations composed of integrated component models; ROCETS system architecture significant features; ROCETS engineering nuclear thermal rocket (NTR) modules; ROCETS system easily adapts Fortran engineering modules; ROCETS NTR reactor module; ROCETS NTR turbomachinery module; detailed reactor analysis; predicted reactor power profiles; turbine bypass impact on system; and ROCETS NTR engine simulation summary.
Computer Modeling of Complete IC Fabrication Process.
1984-01-01
Computational Modeling of Lipid Metabolism in Yeast
Schützhold, Vera; Hahn, Jens; Tummler, Katja; Klipp, Edda
2016-01-01
Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism characterized by many different species with only slight differences and by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. All lipid species are treated as objects in the model and can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach makes it possible to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation, and different headgroups over time, and to analyze the effect of parameter changes, potential mutations in the catalyzing enzymes, or provision of different precursors. Applied to yeast metabolism during one cell cycle period, we could analyze the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach makes it possible to treat the complexity of cellular lipid metabolism efficiently and to derive conclusions on the time- and location-dependent distributions of lipid species and their properties such as saturation. It is widely applicable, easily extendable, and will provide further insights into healthy and diseased states of cell metabolism. PMID:27730126
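The object-plus-rule idea described above can be sketched in a few lines: each lipid is an object carrying its headgroup and fatty-acid chains, and a reaction rule produces a modified copy. The class and rule names below are hypothetical illustrations, not the authors' code:

```python
from dataclasses import dataclass

@dataclass
class Lipid:
    """A lipid species as an object: headgroup plus per-chain
    (carbon count, double-bond count) tuples."""
    headgroup: str
    chains: tuple

def desaturate(lipid, chain_idx):
    """Illustrative reaction rule: add one double bond to the chosen
    chain, returning a new Lipid object and leaving the input intact."""
    chains = list(lipid.chains)
    carbons, double_bonds = chains[chain_idx]
    chains[chain_idx] = (carbons, double_bonds + 1)
    return Lipid(lipid.headgroup, tuple(chains))
```

Because rules act on whatever objects match, the same `desaturate` rule applies uniformly across species that differ only in headgroup or chain length, which is the point of the rule-based approach.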
Oxygen and seizure dynamics: II. Computational modeling
Wei, Yina; Ullah, Ghanim; Ingram, Justin
2014-01-01
Electrophysiological recordings show intense neuronal firing during epileptic seizures leading to enhanced energy consumption. However, the relationship between oxygen metabolism and seizure patterns has not been well studied. Recent studies have developed fast and quantitative techniques to measure oxygen microdomain concentration during seizure events. In this article, we develop a biophysical model that accounts for these experimental observations. The model is an extension of the Hodgkin-Huxley formalism and includes the neuronal microenvironment dynamics of sodium, potassium, and oxygen concentrations. Our model accounts for metabolic energy consumption during and following seizure events. We can further account for the experimental observation that hypoxia can induce seizures, with seizures occurring only within a narrow range of tissue oxygen pressure. We also reproduce the interplay between excitatory and inhibitory neurons seen in experiments, accounting for the different oxygen levels observed during seizures in excitatory vs. inhibitory cell layers. Our findings offer a more comprehensive understanding of the complex interrelationship among seizures, ion dynamics, and energy metabolism. PMID:24671540
A cognitive model for problem solving in computer science
NASA Astrophysics Data System (ADS)
Parham, Jennifer R.
According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in solving them. Approaching assessment from this perspective would reveal potential errors leading to incorrect solutions. This dissertation proposes a model describing how people solve computational problems by storing, retrieving, and manipulating information and knowledge. It describes how metacognition interacts with schemata representing conceptual and procedural knowledge, as well as with the external sources of information that might be needed to arrive at a solution. Metacognition includes higher-order, executive processes responsible for controlling and monitoring schemata, which in turn represent the algorithmic knowledge needed for organizing and adapting concepts to a specific domain. The model illustrates how metacognitive processes interact with the knowledge represented by schemata as well as the information from external sources. This research investigates the differences in the way computer science novices use their metacognition and schemata to solve a computer programming problem. After J. Parham and L. Gugerty reached an 85% reliability for six metacognitive processes and six domain-specific schemata for writing a computer program, the resulting vocabulary provided the foundation for supporting the existence of and the interaction between metacognition, schemata, and external sources of information in computer programming. Overall, the participants in this research used their schemata 6% more than their metacognition, applying metacognitive processes to control and monitor the schemata used to write a computer program. This research has potential implications in computer science education and software
Simulation model of load balancing in distributed computing systems
NASA Astrophysics Data System (ADS)
Botygin, I. A.; Popov, V. N.; Frolov, S. G.
2017-02-01
The availability of high-performance computing, high-speed data transfer over networks, and the widespread use of software for design and pre-production in mechanical engineering have led to the fact that, at present, large industrial enterprises and small engineering companies alike implement complex computer systems for efficient solutions of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficient distribution (balancing) of the computational load and the accommodation of input, intermediate, and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which the user's request is transferred in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks among the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system that dynamically changes its infrastructure is an important task.
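The abstract leaves the node-selection algorithm unspecified; one common choice is greedy least-loaded dispatch, sketched below as an assumption rather than the paper's actual policy:

```python
def pick_node(loads):
    """Select the id of the least-loaded compute node.

    `loads` maps node id -> current load (e.g. queued task cost);
    ties resolve to the smallest id for determinism."""
    return min(sorted(loads), key=lambda n: loads[n])

def assign(tasks, loads):
    """Greedy least-loaded dispatch: each (task, cost) pair goes to
    the currently least-loaded node. Mutates `loads` in place and
    returns the task -> node placement."""
    placement = {}
    for task, cost in tasks:
        node = pick_node(loads)
        loads[node] += cost
        placement[task] = node
    return placement
```

With two idle nodes and tasks of cost 2, 1, 1, the heavy task lands on node 0 and the two light tasks balance onto node 1, ending with equal loads.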
Minimum-fuel, 3-dimensional flightpath guidance of transfer jets
NASA Technical Reports Server (NTRS)
Neuman, F.; Kreindler, E.
1984-01-01
Minimum fuel, three dimensional flightpaths for commercial jet aircraft are discussed. The theoretical development is divided into two sections. In both sections, the necessary conditions of optimal control, including singular arcs and state constraints, are used. One section treats the initial and final portions (below 10,000 ft) of long optimal flightpaths. Here all possible paths can be derived by generating fields of extremals. Another section treats the complete intermediate length, three dimensional terminal area flightpaths. Here only representative sample flightpaths can be computed. Sufficient detail is provided to give the student of optimal control a complex example of a useful application of optimal control theory.
Computer generation of structural models of amorphous Si and Ge
NASA Astrophysics Data System (ADS)
Wooten, F.; Winer, K.; Weaire, D.
1985-04-01
We have developed and applied a computer algorithm that generates realistic random-network models of a-Si with periodic boundary conditions. These are the first models to have correlation functions that show no serious discrepancy with experiment. The algorithm provides a much-needed systematic approach to model construction that can be used to generate models of a large class of amorphous materials.
Computer model of cardiovascular control system responses to exercise
NASA Technical Reports Server (NTRS)
Croston, R. C.; Rummel, J. A.; Kay, F. J.
1973-01-01
Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.
Computational models for the nonlinear analysis of reinforced concrete plates
NASA Technical Reports Server (NTRS)
Hinton, E.; Rahman, H. H. A.; Huq, M. M.
1980-01-01
A finite element computational model for the nonlinear analysis of reinforced concrete solid, stiffened and cellular plates is briefly outlined. Typically, Mindlin elements are used to model the plates whereas eccentric Timoshenko elements are adopted to represent the beams. The layering technique, common in the analysis of reinforced concrete flexural systems, is incorporated in the model. The proposed model provides an inexpensive and reasonably accurate approach which can be extended for use with voided plates.
Computational needs for modelling accelerator components
Hanerfeld, H.
1985-06-01
The particle-in-cell code MASK is being used to model several different electron accelerator components. These studies are being used both to design new devices and to understand particle behavior within existing structures. Studies include the injector for the Stanford Linear Collider and the 50 megawatt klystron currently being built at SLAC. MASK is a 2D electromagnetic code which is being used by SLAC both on our own IBM 3081 and on the CRAY X-MP at the NMFECC. Our experience with running MASK illustrates the need for supercomputers to continue work of the kind described. 3 refs., 2 figs.
Paradox of integration-A computational model
NASA Astrophysics Data System (ADS)
Krawczyk, Małgorzata J.; Kułakowski, Krzysztof
2017-02-01
The paradoxical aspect of integration of a social group has been highlighted by Blau (1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire for approval, a sharp transition breaks all friendly relations. However, as described by Blau, people with high status are inclined to care more about the acceptance of others; this is achieved by praising others and revealing their own weak points. In our model, this action smooths the transition and improves interpersonal relations.
Revisions to the hydrogen gas generation computer model
Jerrell, J.W.
1992-08-31
Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith in the BASIC language and is described in the report A Computer Model of Gas Generation and Transport Within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by these data is used to improve the accuracy of the model's predictions. Also, several modifications have been made to the model to enlarge the scope of problems that can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.
Optimal allocation of computational resources in hydrogeological models under uncertainty
NASA Astrophysics Data System (ADS)
Moslehi, Mahsa; Rajagopal, Ram; de Barros, Felipe P. J.
2015-09-01
Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical-numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the
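The trade-off described above, a discretization error that shrinks with finer grids versus a Monte Carlo sampling error that shrinks with more realizations, all under a fixed compute budget, can be sketched with a brute-force search. The error-model constants, exponents, and cost model below are illustrative assumptions, not the paper's calibrated expressions:

```python
def total_error(h, n, c_h=1.0, p=2, c_mc=1.0):
    """Overall error: a discretization term c_h * h**p plus a Monte
    Carlo statistical term c_mc / sqrt(n) (illustrative constants)."""
    return c_h * h**p + c_mc / n**0.5

def best_allocation(budget, cost_per_cell=1e-6, dim=2, grid=None):
    """Search grid spacings h for the (h, n) pair minimizing total
    error under a fixed compute budget.

    Assumed cost model: one realization costs cost_per_cell / h**dim,
    so the budget buys n = budget * h**dim / cost_per_cell runs.
    Returns (error, h, n) for the best feasible choice."""
    grid = grid or [0.5 ** i for i in range(1, 12)]
    best = None
    for h in grid:
        n = int(budget * h**dim / cost_per_cell)
        if n < 1:
            continue  # budget cannot afford even one run at this h
        err = total_error(h, n)
        if best is None or err < best[0]:
            best = (err, h, n)
    return best
```

The optimum sits at an interior grid spacing: refining further starves the Monte Carlo ensemble, while coarsening wastes realizations on a biased solution, which is the joint statistical-numerical balance the abstract describes.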
The computation of standard solar models
NASA Technical Reports Server (NTRS)
Ulrich, Roger K.; Cox, Arthur N.
1991-01-01
Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.
A Perspective on Computational Human Performance Models as Design Tools
NASA Technical Reports Server (NTRS)
Jones, Patricia M.
2010-01-01
The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.
A propagation model of computer virus with nonlinear vaccination probability
NASA Astrophysics Data System (ADS)
Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi
2014-01-01
This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
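A compartment model of this kind can be sketched with Euler integration. The paper does not state its equations here, so the SIS-with-vaccination structure and the saturating vaccination probability p(i) = pmax * i / (i + k) below are illustrative assumptions, not the authors' model:

```python
def simulate(beta, gamma, pmax, k, i0, dt=0.01, steps=50000):
    """Euler integration of an illustrative virus-spread model.

    s, i, v: susceptible, infected, vaccinated fractions (sum to 1).
    beta: infection rate; gamma: cure rate; the vaccination
    probability rises nonlinearly with prevalence i (assumed form)."""
    s, i, v = 1.0 - i0, i0, 0.0
    for _ in range(steps):
        p = pmax * i / (i + k)          # nonlinear vaccination probability
        ds = -beta * s * i - p * s + gamma * i
        di = beta * s * i - gamma * i
        dv = p * s
        s, i, v = s + dt * ds, i + dt * di, v + dt * dv
    return s, i, v
```

With beta/gamma below 1 (the basic-reproduction-number condition the abstract refers to), the infected fraction decays toward the virus-free equilibrium, and the three fractions remain a partition of the population.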
Computational Neuroscience: Modeling the Systems Biology of Synaptic Plasticity
Kotaleski, Jeanette Hellgren; Blackwell, Kim T.
2016-01-01
Synaptic plasticity is a mechanism proposed to underlie learning and memory. The complexity of the interactions between ion channels, enzymes, and genes involved in synaptic plasticity impedes a deep understanding of this phenomenon. Computer modeling is an approach to investigate the information processing that is performed by signaling pathways underlying synaptic plasticity. In the past few years, new software developments that blend computational neuroscience techniques with systems biology techniques have allowed large-scale, quantitative modeling of synaptic plasticity in neurons. We highlight significant advancements produced by these modeling efforts and introduce promising approaches that utilize advancements in live cell imaging. PMID:20300102
Computational science: shifting the focus from tools to models
Hinsen, Konrad
2014-01-01
Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728
Computational modeling of brain tumors: discrete, continuum or hybrid?
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Deisboeck, Thomas S.
In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.
Computational Modeling of Laser-Cell Biochemical Interactions
2010-12-31
Fragments of the extracted record describe the modeled response of the retinal pigment epithelium (RPE) cell to Vitamin C (ascorbate) and reactive oxygen species, and cite an ex vivo and computer-model study on retinal thermal laser-induced damage (Husinsky, Seiser, Edthofer, Fekete, Farmer, and Lund).
Operation of the computer model for microenvironment solar exposure
NASA Technical Reports Server (NTRS)
Gillis, J. R.; Bourassa, R. J.; Gruenbaum, P. E.
1995-01-01
A computer model for microenvironmental solar exposure was developed to predict solar exposure to satellite surfaces which may shadow or reflect on one another. This document describes the technical features of the model as well as instructions for the installation and use of the program.
Bootstrapping the Lexicon: A Computational Model of Infant Speech Segmentation.
ERIC Educational Resources Information Center
Batchelder, Eleanor Olds
2002-01-01
Details BootLex, a model using distributional cues to build a lexicon and achieving significant segmentation results with English, Japanese, and Spanish; child- and adult-directed speech, and written text; and variations in coding structure. Compares BootLex with three groups of computational models of the infant segmentation process. Discusses…
Computer Modelling of Biological Molecules: Free Resources on the Internet.
ERIC Educational Resources Information Center
Millar, Neil
1996-01-01
Describes a three-dimensional computer modeling system for biological molecules which is suitable for sixth-form teaching. Consists of the modeling program "RasMol" together with structure files of proteins, DNA, and small biological molecules. Describes how the whole system can be downloaded from various sites on the Internet.…
Modeling civil violence: An agent-based computational approach
Epstein, Joshua M.
2002-01-01
This article presents an agent-based computational model of civil violence. Two variants of the civil violence model are presented. In the first a central authority seeks to suppress decentralized rebellion. In the second a central authority seeks to suppress communal violence between two warring ethnic groups. PMID:11997450
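The core agent rule of Epstein's model is often summarized as: rebel when grievance minus net risk exceeds a threshold, with grievance G = H(1 - L) and an estimated arrest probability driven by the local cop-to-active ratio. The sketch below follows that published summary; the parameter values (k = 2.3, threshold = 0.1) are conventional choices reported for the model, not values verified against this article:

```python
import math

def arrest_probability(cops, actives, k=2.3):
    """Estimated arrest probability from the local cop-to-active
    ratio, using the commonly cited form 1 - exp(-k * floor(C/A))."""
    return 1.0 - math.exp(-k * (cops // max(actives, 1)))

def is_active(hardship, legitimacy, risk_aversion, cops, actives,
              threshold=0.1):
    """Agent activation rule: rebel when grievance minus net risk
    exceeds the threshold T."""
    grievance = hardship * (1.0 - legitimacy)        # G = H * (1 - L)
    net_risk = risk_aversion * arrest_probability(cops, actives)
    return grievance - net_risk > threshold
```

The rule captures the model's central dynamic: when legitimacy drops or cops thin out locally, net risk collapses and previously quiescent agents tip into rebellion in bursts.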
Computational 3-D Model of the Human Respiratory System
We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...
Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models
ERIC Educational Resources Information Center
Pallant, Amy; Lee, Hee-Sun
2015-01-01
Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation…
Industry-Wide Workshop on Computational Turbulence Modeling
NASA Technical Reports Server (NTRS)
Shabbir, Aamir (Compiler)
1995-01-01
This publication contains the presentations made at the Industry-Wide Workshop on Computational Turbulence Modeling which took place on October 6-7, 1994. The purpose of the workshop was to initiate the transfer of technology developed at Lewis Research Center to industry and to discuss the current status and the future needs of turbulence models in industrial CFD.
Interrogative Model of Inquiry and Computer-Supported Collaborative Learning.
ERIC Educational Resources Information Center
Hakkarainen, Kai; Sintonen, Matti
2002-01-01
Examines how the Interrogative Model of Inquiry (I-Model), developed for the purposes of epistemology and philosophy of science, could be applied to analyze elementary school students' process of inquiry in computer-supported learning. Suggests that the interrogative approach to inquiry can be productively applied for conceptualizing inquiry in…
Computer Simulation of Small Group Decisions: Model Three.
ERIC Educational Resources Information Center
Hare, A.P.; Scheiblechner, Hartmann
In a test of three computer models to simulate group decisions, data were used from 31 American and Austrian groups on a total of 307 trials. The task for each group was to predict a series of answers of an unknown subject on a value-orientation questionnaire, after being given a sample of his typical responses. The first model used the mean of…
Computational fluid dynamics modeling for emergency preparedness & response
Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.
1995-07-01
Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly-coupled, nonlinear equations whose solution requires a significant level of computational power. Until recently, such computer power could be found only in CRAY-class supercomputers. Recent advances in computer hardware and software have enabled modestly priced, high-performance workstations to exhibit the equivalent computation power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release and Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust and require minimal time for execution. Such models have been the cornerstones of the ARAC operational system for the past ten years. Kamada (1992) provides a review of diagnostic models and their applications to dispersion problems. However, because these models typically contain little physics beyond mass-conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, can be applied with confidence to only modestly complex flows.
Revisions to the hydrogen gas generation computer model
Jerrell, J.W.
1992-08-31
Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by these data is used to improve the accuracy of the model's predictions. Also, several modifications have been made to the model to enlarge the scope of problems that can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.
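To make the drum gas balance concrete, here is a minimal steady-state sketch in the spirit of the generation-versus-venting balance described above. It is illustrative only: the function, parameter names, and numbers are assumptions, not taken from DP-1754 or the H2GAS code.

```python
# Steady-state H2 mole fraction in a vented drum: generation balances the
# flow through the filter vent (illustrative sketch, not the TRUGAS model).

def steady_state_h2_fraction(gen_rate, vent_conductance, ambient_fraction=0.0):
    """Mole fraction of H2 inside the drum at steady state.

    gen_rate         : H2 generation rate (mol/s) -- assumed value
    vent_conductance : filter vent conductance (mol/s per unit mole fraction)
    """
    return ambient_fraction + gen_rate / vent_conductance

# Example: a drum generating 1e-9 mol/s H2 through a 1e-7 mol/s vent
x = steady_state_h2_fraction(1e-9, 1e-7)
LFL = 0.04  # lower flammability limit of hydrogen in air (~4 vol%)
print(f"steady-state H2 fraction: {x:.3f}, flammable: {x >= LFL}")
```

The same one-line balance shows directly how a larger filter vent (higher conductance) lowers the steady-state concentration, which is the kind of parameter study the report describes.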
Analysis of computational modeling techniques for complete rotorcraft configurations
NASA Astrophysics Data System (ADS)
O'Brien, David M., Jr.
Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time-consuming process, where much of the effort is spent generating a high-quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high-fidelity configuration models. The simplest rotor model is the steady-state actuator disk approximation. By transforming the unsteady rotor problem into a steady-state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated, but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with efficiencies and limitations of each method.
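The actuator disk approximation rests on classical momentum theory. A short textbook-level sketch of the hover case follows (it illustrates the underlying theory, not the solver discussed above; the numbers are made up):

```python
import math

# Momentum theory behind a steady actuator-disk rotor model in hover:
# ideal induced velocity v_i = sqrt(T / (2*rho*A)) and induced power P = T*v_i.

def hover_induced_velocity(thrust, rho, radius):
    """Ideal induced velocity at the disk (m/s)."""
    area = math.pi * radius ** 2
    return math.sqrt(thrust / (2.0 * rho * area))

def ideal_hover_power(thrust, rho, radius):
    """Ideal (induced) hover power P = T * v_i (W)."""
    return thrust * hover_induced_velocity(thrust, rho, radius)

# Example: T = 10 kN, sea-level air density, rotor radius 5 m (assumed values)
T, rho, R = 10_000.0, 1.225, 5.0
vi = hover_induced_velocity(T, rho, R)
print(f"v_i = {vi:.2f} m/s, ideal power = {ideal_hover_power(T, rho, R) / 1e3:.1f} kW")
```

This is why the actuator disk is so cheap: the rotor's effect reduces to a momentum source over the disk area rather than a resolved, unsteady blade.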
3-dimensional imaging system using crystal diffraction lenses
Smither, R.K.
1999-02-09
A device for imaging a plurality of sources of x-ray and gamma-ray radiation is provided. Diffracting crystals are used for focusing the radiation and directing it to a detector, which analyzes the radiation to collect data as to the location of the source of radiation. A computer is used for converting the data to an image. The invention also provides a method for imaging x-ray and gamma radiation by supplying a plurality of sources of radiation; focusing the radiation onto a detector; analyzing the focused radiation to collect data as to the type and location of the radiation; and producing an image using the data. 18 figs.
3-dimensional imaging system using crystal diffraction lenses
Smither, Robert K.
1999-01-01
A device for imaging a plurality of sources of x-ray and gamma-ray radiation is provided. Diffracting crystals are used for focusing the radiation and directing it to a detector, which analyzes the radiation to collect data as to the location of the source of radiation. A computer is used for converting the data to an image. The invention also provides a method for imaging x-ray and gamma radiation by supplying a plurality of sources of radiation; focusing the radiation onto a detector; analyzing the focused radiation to collect data as to the type and location of the radiation; and producing an image using the data.
A 3-dimensional theory of free electron lasers
Webb, S.D.; Wang, G.; Litvinenko, V.N.
2010-08-23
In this paper, we present an analytical three-dimensional theory of free electron lasers. Under several assumptions, we arrive at an integral equation similar to earlier work carried out by Ching, Kim and Xie, but using a formulation better suited for the initial value problem of Coherent Electron Cooling. We use this model in later papers to obtain analytical results for gain guiding, as well as to develop a complete model of Coherent Electron Cooling.
Applying Performance Models to Understand Data-Intensive Computing Efficiency
2010-05-01
data-intensive computing, cloud computing, analytical modeling, Hadoop, MapReduce, performance and efficiency 1 Introduction "Data-intensive scalable...the writing of the output data to disk. In systems that replicate data across multiple nodes, such as the GFS [11] and HDFS [3] distributed file...evenly distributed across all participating nodes in the cluster, that nodes are homogeneous, and that each node retrieves its initial input from local
Identification of Computational and Experimental Reduced-Order Models
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.
2003-01-01
The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
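The core of the ROM idea above is that, once an impulse response of the unsteady aerodynamic system has been identified, the response to any other input (such as a pitching oscillation) follows by discrete convolution. A minimal sketch with a made-up first-order impulse response (a stand-in, not an identified CFL3Dv6.0 response):

```python
import math

# Predict a response to a sinusoidal input by convolving it with an
# identified impulse response. The exponential h here is a toy stand-in.

dt = 0.01
n = 200
t = [i * dt for i in range(n)]
h = [math.exp(-5.0 * ti) * dt for ti in t]    # toy impulse response (scaled by dt)
u = [math.sin(2 * math.pi * ti) for ti in t]  # 1 Hz "pitching" input

# y[k] = sum_j h[j] * u[k-j]  (truncated discrete convolution)
y = [sum(h[j] * u[k - j] for j in range(k + 1)) for k in range(n)]
print(f"peak predicted response: {max(y):.4f}")
```

The same convolution, recast in state-space form, is what makes the MATLAB/SIMULINK ROM fast: one identification run replaces many direct CFD simulations.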
Computational modelling of memory retention from synapse to behaviour
NASA Astrophysics Data System (ADS)
van Rossum, Mark C. W.; Shippi, Maria
2013-03-01
One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists.
Special Issue: Big data and predictive computational modeling
NASA Astrophysics Data System (ADS)
Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.
2016-09-01
The motivation for this special issue stems from the symposium on ;Big Data and Predictive Computational Modeling; that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.
Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays.
Galati, Domenico F; Abuin, David S; Tauber, Gabriel A; Pham, Andrew T; Pearson, Chad G
2015-12-23
Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional geometry (3D). Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs.
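Quantifying basal-body spacing, as in the surveillance-mechanism analysis above, reduces at its simplest to nearest-neighbour distances between 3D points. A minimal sketch with made-up coordinates (not the authors' image-analysis pipeline):

```python
import math

# Nearest-neighbour distances between 3D points, the basic spacing
# statistic for detecting gaps and closely spaced basal bodies.

def nearest_neighbor_distances(points):
    """For each 3D point, the distance to its closest other point."""
    out = []
    for i, p in enumerate(points):
        d = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(d)
    return out

# Toy coordinates: three regularly spaced "basal bodies" and one outlier gap
bbs = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (5, 5, 0)]
dists = nearest_neighbor_distances(bbs)
print([round(d, 2) for d in dists])
```

An unusually large value (the outlier here) flags a gap in the array; an excess of unusually small values flags the compensating close spacing the mutant analysis describes.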
Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays
Galati, Domenico F.; Abuin, David S.; Tauber, Gabriel A.; Pham, Andrew T.; Pearson, Chad G.
2016-01-01
ABSTRACT Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional geometry (3D). Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate, Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis to understand the principles that guide the 3D organization of MCCs. PMID:26700722
Cerebral Degeneration in Amyotrophic Lateral Sclerosis Revealed by 3-Dimensional Texture Analysis
Maani, Rouzbeh; Yang, Yee-Hong; Emery, Derek; Kalra, Sanjay
2016-01-01
Introduction: Routine MR images do not consistently reveal pathological changes in the brain in ALS. Texture analysis, a method to quantitate voxel intensities and their patterns and interrelationships, can detect changes in images not apparent to the naked eye. Our objective was to evaluate cerebral degeneration in ALS using 3-dimensional texture analysis of MR images of the brain. Methods: In a case-control design, voxel-based texture analysis was performed on T1-weighted MR images of 20 healthy subjects and 19 patients with ALS. Four texture features, namely, autocorrelation, sum of squares variance, sum average, and sum variance were computed. Texture features were compared between the groups by statistical parametric mapping and correlated with clinical measures of disability and upper motor neuron dysfunction. Results: Texture features were different in ALS in motor regions including the precentral gyrus and corticospinal tracts. To a lesser extent, changes were also found in the thalamus, cingulate gyrus, and temporal lobe. Texture features in the precentral gyrus correlated with disease duration, and in the corticospinal tract they correlated with finger tapping speed. Conclusions: Changes in MR image textures are present in motor and non-motor regions in ALS and correlate with clinical features. Whole brain texture analysis has potential in providing biomarkers of cerebral degeneration in ALS. PMID:27064416
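The four features named above are standard statistics of a grey-level co-occurrence matrix (GLCM). Two of them are sketched below on a tiny normalised GLCM with toy numbers (not the study's data or exact implementation):

```python
# Two GLCM texture features on a normalised co-occurrence matrix p,
# using 1-based grey levels: autocorrelation = sum i*j*p(i,j) and
# sum average = sum_k k * p_{x+y}(k), with p_{x+y}(k) = sum_{i+j=k} p(i,j).

def autocorrelation(p):
    n = len(p)
    return sum((i + 1) * (j + 1) * p[i][j] for i in range(n) for j in range(n))

def sum_average(p):
    n = len(p)
    return sum((i + 1 + j + 1) * p[i][j] for i in range(n) for j in range(n))

glcm = [[0.25, 0.25],
        [0.25, 0.25]]  # uniform 2-level GLCM (toy example)
print(autocorrelation(glcm), sum_average(glcm))
```

In the study, such features are computed voxel-wise over 3D neighbourhoods and then compared between groups, but the per-matrix arithmetic is as simple as this.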
3-Dimensional analysis for class III malocclusion patients with facial asymmetry
Ki, Eun-Jung; Cheon, Hae-Myung; Choi, Eun-Joo; Kwon, Kyung-Hwan
2013-01-01
Objectives The aim of this study is to investigate the correlation between 2-dimensional (2D) cephalometric measurement and 3-dimensional (3D) cone beam computed tomography (CBCT) measurement, and to evaluate the availability of 3D analysis for asymmetry patients. Materials and Methods A total of Twenty-seven patients were evaluated for facial asymmetry by photograph and cephalometric radiograph, and CBCT. The 14 measurements values were evaluated and those for 2D and 3D were compared. The patients were classified into two groups. Patients in group 1 were evaluated for symmetry in the middle 1/3 of the face and asymmetry in the lower 1/3 of the face, and those in group 2 for asymmetry of both the middle and lower 1/3 of the face. Results In group 1, significant differences were observed in nine values out of 14 values. Values included three from anteroposterior cephalometric radiograph measurement values (cant and both body height) and six from lateral cephalometric radiographs (both ramus length, both lateral ramal inclination, and both gonial angles). In group 2, comparison between 2D and 3D showed significant difference in 10 factors. Values included four from anteroposterior cephalometric radiograph measurement values (both maxillary height, both body height) and six from lateral cephalometric radiographs (both ramus length, both lateral ramal inclination, and both gonial angles). Conclusion Information from 2D analysis was inaccurate in several measurements. Therefore, in asymmetry patients, 3D analysis is useful in diagnosis of asymmetry. PMID:24471038
The distribution of particles in the plane dispersed by a simple 3-dimensional diffusion process.
Stockmarr, Anders
2002-11-01
Populations of particles dispersed in the 2-dimensional plane from a single point source may be grouped as focus expansion patterns, with an exponentially decreasing density, and more diffuse patterns with thicker tails. Exponentially decreasing distributions are often modelled as the result of 2-dimensional diffusion processes acting to disperse the particles, while thick-tailed distributions tend to be modelled by purely descriptive distributions. Models based on the Cauchy distribution have been suggested, but these have not been related to diffusion modelling. However, the distribution of particles dispersed from a point source by a 3-dimensional Brownian motion that incorporates a constant drift, under the condition that the particle starts at a given height and is stopped when it reaches the xy plane (zero height), may be shown to result in both slim-tailed exponentially decreasing densities and thick-tailed polynomially decreasing densities with infinite mean travel distance from the source, depending on parameter values. The drift in the third coordinate represents gravitation, while the drift in the first and second represents a (constant) wind. Conditions for the density having exponentially decreasing tails are derived in terms of gravitation and wind, with a special emphasis on applications to lightweight particles such as fungal spores.
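The mechanism above is easy to simulate: a 3D Brownian particle with a constant drift (wind in x, gravity in -z), released at a given height and stopped at the plane z = 0. The Monte-Carlo sketch below is illustrative only; the step size, drift, and diffusion values are assumptions, not the paper's parameters.

```python
import math
import random

# Monte-Carlo first-passage sketch: 3D Brownian motion with drift,
# stopped when it hits z = 0; we record the landing distance in the plane.

def landing_distance(z0=10.0, wind=1.0, settle=0.5, sigma=1.0, dt=0.01, rng=None):
    rng = rng or random.Random()
    x = y = 0.0
    z = z0
    s = math.sqrt(dt)  # Brownian increments scale with sqrt(dt)
    while z > 0.0:
        x += wind * dt + sigma * s * rng.gauss(0, 1)
        y += sigma * s * rng.gauss(0, 1)
        z += -settle * dt + sigma * s * rng.gauss(0, 1)
    return math.hypot(x, y)  # distance from the source in the xy plane

rng = random.Random(42)
dists = [landing_distance(rng=rng) for _ in range(200)]
print(f"mean landing distance: {sum(dists) / len(dists):.1f}")
```

Varying the settling drift relative to the diffusion strength is what moves the landing density between the slim-tailed and thick-tailed regimes the abstract describes.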
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Practical Use of Computationally Frugal Model Analysis Methods
Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen
2015-03-21
Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require 10s of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that the greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
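A classic frugal method is local sensitivity analysis by forward finite differences: one extra model run per parameter. The sketch below uses a toy algebraic model, not one of the paper's groundwater examples.

```python
# Frugal local sensitivity analysis: scaled sensitivities via forward
# finite differences, costing len(params) extra model runs in total.

def local_sensitivities(model, params, rel_step=0.01):
    """Scaled sensitivities d(model)/d(p_i) * p_i, one bumped run per parameter."""
    base = model(params)
    sens = []
    for i, p in enumerate(params):
        h = rel_step * p if p != 0 else rel_step
        bumped = list(params)
        bumped[i] = p + h
        sens.append((model(bumped) - base) / h * p)
    return sens

# Toy "model": y = a * b**2, so the scaled sensitivities are y and 2y
model = lambda q: q[0] * q[1] ** 2
print([round(s, 2) for s in local_sensitivities(model, [2.0, 3.0])])
```

With two parameters this costs three model runs in total, which is exactly the "10s of parallelizable runs" regime the abstract contrasts with 10,000-run global methods.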
The GOURD model of human-computer interaction
Goldbogen, G.
1996-12-31
This paper presents a model, the GOURD model, that can be used to measure the goodness of "interactivity" of an interface design and indicates how to improve the design. The GOURD model describes what happens to the computer and to the human during a human-computer interaction. Since the interaction is generally repeated, the repeated traversal of the model is similar to a loop programming structure. Because the model measures interaction over part or all of the application, it can also be used as a classifier of the part or the whole application. But primarily, the model is used as a design guide and a predictor of effectiveness.
Biological networks 101: computational modeling for molecular biologists.
Scholma, Jetse; Schivo, Stefano; Urquidi Camacho, Ricardo A; van de Pol, Jaco; Karperien, Marcel; Post, Janine N
2014-01-01
Computational modeling of biological networks permits the comprehensive analysis of cells and tissues to define molecular phenotypes and novel hypotheses. Although a large number of software tools have been developed, the versatility of these tools is limited by mathematical complexities that prevent their broad adoption and effective use by molecular biologists. This study clarifies the basic aspects of molecular modeling, how to convert data into useful input, as well as the number of time points and molecular parameters that should be considered for molecular regulatory models with both explanatory and predictive potential. We illustrate the necessary experimental preconditions for converting data into a computational model of network dynamics. This model requires neither a thorough background in mathematics nor precise data on intracellular concentrations, binding affinities or reaction kinetics. Finally, we show how an interactive model of crosstalk between signal transduction pathways in primary human articular chondrocytes allows insight into processes that regulate gene expression.
Using GPUs to Meet Next Generation Weather Model Computational Requirements
NASA Astrophysics Data System (ADS)
Govett, M.; Hart, L.; Henderson, T.; Middlecoff, J.; Tierney, C.
2008-12-01
Weather prediction goals within the Earth Science Research Laboratory at NOAA require significant increases in model resolution (~1 km) and forecast durations (~60 days) to support expected requirements in 5 years or less. However, meeting these goals will likely require at least 100k dedicated cores. Few systems will exist that could even run such a large problem, much less house a facility that could provide the necessary power and cooling requirements. To meet our goals we are exploring alternative technologies, including Graphics Processing Units (GPUs), that could provide significantly more computational performance and reduced power and cooling requirements, at a lower cost than traditional high-performance computing solutions. Our current global numerical weather prediction model, the Flow-following finite-volume Icosahedral Model (FIM, http://fim.noaa.gov), is still early in its development but is already demonstrating good fidelity and excellent scalability to 1000s of cores. The icosahedral grid has several complexities not present in more traditional Cartesian grids, including polygons with different numbers of sides (five and six) and non-trivial computation of the locations of neighboring grid cells. FIM uses an indirect addressing scheme that yields very compact code despite these complexities. We have extracted computational kernels that encompass functions likely to take the most time at higher resolutions, including all that have horizontal dependencies. Kernels implement equations for computing anti-diffusive flux-corrected transport across cell edges, calculating forcing terms and time-step differencing, and re-computing time-dependent vertical coordinates. We are extending these kernels to explore performance of GPU-specific optimizations. We will present initial performance results from the computational kernels of the FIM model, as well as the challenges related to porting code with indirect memory references to NVIDIA GPUs. Results of this
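The indirect addressing mentioned above means each cell carries a table of its 5 or 6 neighbour indices, and horizontal operators gather through that table rather than through fixed array strides. A minimal sketch with a toy neighbour table (not FIM's actual grid or kernels):

```python
# Indirect addressing on an icosahedral-style grid: each cell lists its
# neighbours (5 for "pentagon" cells, 6 elsewhere), and a horizontal
# operator gathers values through that indirection table.

def neighbor_average(field, neighbors):
    """Average of each cell's neighbours, gathered via the indirection table."""
    return [
        sum(field[j] for j in nbrs) / len(nbrs)
        for nbrs in neighbors
    ]

field = [1.0, 2.0, 3.0, 4.0]
# Toy 4-cell table; cells 0 and 3 are "pentagon-like" with fewer neighbours
neighbors = [[1, 2], [0, 2, 3], [0, 1, 3], [1, 2]]
print(neighbor_average(field, neighbors))
```

These data-dependent gathers are precisely what complicates a GPU port: memory accesses cannot be coalesced as easily as on a regular Cartesian grid.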
Evaluation of a Computational Model of Situational Awareness
NASA Technical Reports Server (NTRS)
Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)
2000-01-01
Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.
Learning from humans: computational modeling of face recognition.
Wallraven, Christian; Schwaninger, Adrian; Bülthoff, Heinrich H
2005-12-01
In this paper, we propose a computational architecture of face recognition based on evidence from cognitive research. Several recent psychophysical experiments have shown that humans process faces by a combination of configural and component information. Using an appearance-based implementation of this architecture based on low-level features and their spatial relations, we were able to model aspects of human performance found in psychophysical studies. Furthermore, results from additional computational recognition experiments show that our framework is able to achieve excellent recognition performance even under large view rotations. Our interdisciplinary study is an example of how results from cognitive research can be used to construct recognition systems with increased performance. Finally, our modeling results also make new experimental predictions that will be tested in further psychophysical studies, thus effectively closing the loop between psychophysical experimentation and computational modeling.
Computational ocean acoustics: Advances in 3D ocean acoustic modeling
NASA Astrophysics Data System (ADS)
Schmidt, Henrik; Jensen, Finn B.
2012-11-01
The numerical models of ocean acoustic propagation developed in the 1980s are still in widespread use today, and the field of computational ocean acoustics is often considered a mature field. However, the explosive increase in computational power available to the community has created opportunities for modeling phenomena that were earlier beyond reach. Most notably, three-dimensional propagation and scattering problems have been computationally prohibitive, but are now addressed routinely using brute-force numerical approaches such as the finite element method, in particular for target scattering problems, where they are being combined with the traditional wave-theory propagation models in hybrid modeling frameworks. Also, recent years have seen the development of hybrid approaches coupling oceanographic circulation models with acoustic propagation models, enabling the forecasting of sonar performance uncertainty in dynamic ocean environments. These and other advances made over the last couple of decades support the notion that the field of computational ocean acoustics is far from mature. [Work supported by the Office of Naval Research, Code 321OA.]
DNA computation model to solve 0-1 programming problem.
Zhang, Fengyue; Yin, Zhixiang; Liu, Bo; Xu, Jin
2004-01-01
The 0-1 programming problem is an important problem in operations research with very widespread applications. In this paper, a new DNA computation model utilizing solution-based and surface-based methods is presented to solve the 0-1 programming problem. This model combines the major benefits of both solution-based and surface-based methods, including vast parallelism, extraordinary information density, and ease of operation. The result, verified by biological experimentation, revealed the potential of DNA computation in solving complex programming problems.
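For reference, the problem the DNA model attacks is: maximise c·x subject to A·x ≤ b with x ∈ {0,1}ⁿ. A tiny brute-force silicon solver (for illustration of the problem, not of the DNA procedure; the instance below is made up):

```python
from itertools import product

# Brute-force reference solver for a 0-1 (binary) program:
# maximise c.x subject to A.x <= b, x in {0,1}^n.

def solve_01_program(c, A, b):
    best, best_x = None, None
    for x in product((0, 1), repeat=len(c)):
        feasible = all(
            sum(a_i * x_i for a_i, x_i in zip(row, x)) <= bj
            for row, bj in zip(A, b)
        )
        if feasible:
            val = sum(c_i * x_i for c_i, x_i in zip(c, x))
            if best is None or val > best:
                best, best_x = val, x
    return best, best_x

# maximise 3x1 + 2x2 + 4x3  s.t.  x1 + x2 + x3 <= 2,  2x1 + x3 <= 2
print(solve_01_program([3, 2, 4], [[1, 1, 1], [2, 0, 1]], [2, 2]))
```

The 2ⁿ enumeration here is exactly the search the DNA model performs in massive parallel, with each candidate bit-string encoded as a distinct DNA strand.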
Computational nanomedicine: modeling of nanoparticle-mediated hyperthermal cancer therapy
Kaddi, Chanchala D; Phan, John H; Wang, May D
2016-01-01
Nanoparticle-mediated hyperthermia for cancer therapy is a growing area of cancer nanomedicine because of the potential for localized and targeted destruction of cancer cells. Localized hyperthermal effects are dependent on many factors, including nanoparticle size and shape, excitation wavelength and power, and tissue properties. Computational modeling is an important tool for investigating and optimizing these parameters. In this review, we focus on computational modeling of magnetic and gold nanoparticle-mediated hyperthermia, followed by a discussion of new opportunities and challenges. PMID:23914967
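A common first estimate in such models is the steady-state temperature rise around a single heated particle in an infinite homogeneous medium, ΔT(r) = Q / (4πkr) for r ≥ R. The sketch below uses this standard conduction result with illustrative numbers; neither the formula's application here nor the values are taken from the review.

```python
import math

# Steady-state temperature rise a distance r from a point-like heat
# source of power Q in a medium of thermal conductivity k.

def temperature_rise(Q, k, r):
    """Q: absorbed power (W), k: conductivity (W/m/K), r: distance (m)."""
    return Q / (4.0 * math.pi * k * r)

# Example: a 50 nm particle absorbing 100 nW in water (k ~ 0.6 W/m/K);
# evaluating at r = R gives the particle-surface temperature rise.
dT_surface = temperature_rise(100e-9, 0.6, 50e-9)
print(f"surface temperature rise: {dT_surface:.2f} K")
```

The 1/r decay explains why single-particle heating is so localized, and why collective heating of many particles (or higher absorbed power) is needed for a therapeutically relevant bulk temperature rise.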
Mathematical modelling in the computer-aided process planning
NASA Astrophysics Data System (ADS)
Mitin, S.; Bochkarev, P.
2016-04-01
This paper presents new approaches to the organization of manufacturing preparation and mathematical models related to the development of a computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some peculiarities compared to existing computer-aided process planning (CAPP) systems: fully formalized development of machining operations; the capacity to create and formalize the interrelationships among design, process planning, and process implementation; and procedures for consideration of real manufacturing conditions. The paper describes the structure of the CAMPP system and presents the mathematical models and methods used to formalize the design procedures.
Computation of eigenfrequencies for equilibrium models including turbulent pressure
NASA Astrophysics Data System (ADS)
Sonoi, T.; Belkacem, K.; Dupret, M.-A.; Samadi, R.; Ludwig, H.-G.; Caffau, E.; Mosser, B.
2017-03-01
Context. The space-borne missions CoRoT and Kepler have provided a wealth of highly accurate data. However, our inability to properly model the upper-most region of solar-like stars prevents us from making the best of these observations. This problem is called the "surface effect", and a key ingredient to solving it is turbulent pressure in the computation of both the equilibrium models and the oscillations. While 3D hydrodynamic simulations help to properly include turbulent pressure in the equilibrium models, the way this surface effect is included in the computation of stellar oscillations is still subject to uncertainties. Aims: We aim at determining how to properly include the effect of turbulent pressure and its Lagrangian perturbation in the adiabatic computation of the oscillations. We also discuss the validity of the gas-gamma model and reduced gamma model approximations, which have been used to compute adiabatic oscillations of equilibrium models including turbulent pressure. Methods: We use a patched model of the Sun with an inner part constructed by a 1D stellar evolution code (CESTAM) and an outer part by the 3D hydrodynamical code CO5BOLD. Then, the adiabatic oscillations are computed using the ADIPLS code for the gas-gamma and reduced gamma model approximations and with the MAD code imposing the adiabatic condition on an existing time-dependent convection formalism. Finally, all those results are compared to the observed solar frequencies. Results: We show that the computation of the oscillations using the time-dependent convection formalism in the adiabatic limit significantly improves the agreement with the observed frequencies compared to the gas-gamma and reduced gamma model approximations. Of the components of the perturbation of the turbulent pressure, the perturbation of the density and advection term is found to contribute most to the frequency shift. Conclusions: The turbulent pressure is certainly the dominant factor responsible for the