Science.gov

Sample records for 3-dimensional computer model

  1. The Effectiveness of an Interactive 3-Dimensional Computer Graphics Model for Medical Education

    PubMed Central

    Konishi, Takeshi; Tamura, Yoko; Moriguchi, Hiroki

    2012-01-01

    Background Medical students often have difficulty achieving a conceptual understanding of 3-dimensional (3D) anatomy, such as bone alignment, muscles, and complex movements, from 2-dimensional (2D) images. To this end, animated and interactive 3-dimensional computer graphics (3DCG) can provide better visual information to users. In medical fields, research on the advantages of 3DCG in medical education is relatively new. Objective To determine the educational effectiveness of interactive 3DCG. Methods We divided 100 participants (27 men, mean (SD) age 17.9 (0.6) years, and 73 women, mean (SD) age 18.1 (1.1) years) from the Health Sciences University of Mongolia (HSUM) into 3DCG (n = 50) and textbook-only (control) (n = 50) groups. The control group used a textbook and 2D images, while the 3DCG group was trained to use the interactive 3DCG shoulder model in addition to a textbook. We conducted a questionnaire survey via an encrypted satellite network between HSUM and Tokushima University. The questionnaire was scored on a 5-point Likert scale from strongly disagree (score 1) to strongly agree (score 5). Results Interactive 3DCG was effective in undergraduate medical education. Specifically, there was a significant difference in mean (SD) scores between the 3DCG and control groups in their response to questionnaire items regarding content (4.26 (0.69) vs 3.85 (0.68), P = .001) and teaching methods (4.33 (0.65) vs 3.74 (0.79), P < .001), but no significant difference in the Web category. Participants also provided meaningful comments on the advantages of interactive 3DCG. Conclusions Interactive 3DCG materials have positive effects on medical education when properly integrated into conventional education. In particular, our results suggest that interactive 3DCG is more efficient than textbooks alone in medical education and can motivate students to understand complex anatomical structures. PMID:23611759
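
    Readers can check the reported comparisons directly from the summary statistics in the abstract. The sketch below (Python with SciPy) recomputes the teaching-methods comparison; the pooled-variance two-sample t test is an assumption, since the abstract does not state the exact test used.

      from scipy import stats

      # Teaching-methods scores: 3DCG group vs textbook-only control.
      # A pooled-variance t test is assumed; the abstract reports only
      # means (SDs), n = 50 per group, and P < .001.
      t, p = stats.ttest_ind_from_stats(
          mean1=4.33, std1=0.65, nobs1=50,   # 3DCG group
          mean2=3.74, std2=0.79, nobs2=50,   # control group
          equal_var=True,
      )
      print(f"t = {t:.2f}, P = {p:.2g}")     # P comes out well below .001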

  2. Contributions of the Musculus Uvulae to Velopharyngeal Closure Quantified With a 3-Dimensional Multimuscle Computational Model.

    PubMed

    Inouye, Joshua M; Lin, Kant Y; Perry, Jamie L; Blemker, Silvia S

    2016-02-01

    The convexity of the dorsal surface of the velum is critical for normal velopharyngeal (VP) function and is largely attributed to the levator veli palatini (LVP) and musculus uvulae (MU). Studies have correlated a concave or flat nasal velar surface to symptoms of VP dysfunction including hypernasality and nasal air emission. In the context of surgical repair of cleft palates, the MU has been given relatively little attention in the literature compared with the larger LVP. A greater understanding of the mechanics of the MU will provide insight into understanding the influence of a dysmorphic MU, as seen in cleft palate, as it relates to VP function. The purpose of this study was to quantify the contributions of the MU to VP closure in a computational model. We created a novel 3-dimensional (3D) finite element model of the VP mechanism from magnetic resonance imaging data collected from an individual with healthy noncleft VP anatomy. The model components included the velum, posterior pharyngeal wall (PPW), LVP, and MU. Simulations were based on the muscle and soft tissue mechanical properties from the literature. We found that, similar to previous hypotheses, the MU acts as (i) a space-occupying structure and (ii) a velar extensor. As a space-occupying structure, the MU helps to nearly triple the midline VP contact length. As a velar extensor, the MU acting alone without the LVP decreases the VP distance 62%. Furthermore, activation of the MU decreases the LVP activation required for closure almost 3-fold, from 20% (without MU) to 8% (with MU). Our study suggests that any possible salvaging and anatomical reconstruction of viable MU tissue in a cleft patient may improve VP closure due to its mechanical function. In the absence or dysfunction of MU tissue, implantation of autologous or engineered tissues at the velar midline, as a possible substitute for the MU, may produce a geometric convexity more favorable to VP closure. In the future, more complex models will

  3. User's manual for MASTER: Modeling of Aerodynamic Surfaces by 3-dimensional Explicit Representation [input to three-dimensional computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Gibson, S. G.

    1983-01-01

    A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.
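
    Since the report models surfaces as sets of parametric bicubic patches, a minimal evaluation sketch may help. The Bernstein (Bezier) basis below is an assumed concrete choice; the manual's exact patch formulation is not given in this abstract (Python with NumPy).

      import numpy as np

      def bicubic_patch_point(P, u, v):
          """Evaluate a parametric bicubic patch at (u, v) in [0,1]^2.

          P is a 4x4x3 array of control points; the Bernstein (Bezier)
          basis is an illustrative choice of bicubic representation.
          """
          def bernstein(t):
              return np.array([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])
          Bu, Bv = bernstein(u), bernstein(v)
          # Tensor-product surface: S(u,v) = sum_ij Bu[i]*Bv[j]*P[i,j]
          return np.einsum("i,j,ijk->k", Bu, Bv, P)

      # Example: a flat patch whose control net spans the unit square.
      P = np.zeros((4, 4, 3))
      P[..., 0], P[..., 1] = np.meshgrid(np.linspace(0, 1, 4),
                                         np.linspace(0, 1, 4), indexing="ij")
      print(bicubic_patch_point(P, 0.5, 0.5))   # -> [0.5 0.5 0. ]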

  4. Incorporating 3-dimensional models in online articles

    PubMed Central

    Cevidanes, Lucia H. S.; Ruellas, Antonio C. O.; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-01-01

    Introduction The aims of this article were to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Methods Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. Results All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article’s online version for viewing and downloading using the reader’s software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader’s software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. Conclusions When submitting manuscripts, authors can

  5. Unification of color postprocessing techniques for 3-dimensional computational mechanics

    NASA Technical Reports Server (NTRS)

    Bailey, Bruce Charles

    1985-01-01

    To facilitate the understanding of complex three-dimensional numerical models, advanced interactive color postprocessing techniques are introduced. These techniques are sufficiently flexible so that postprocessing difficulties arising from model size, geometric complexity, response variation, and analysis type can be adequately overcome. Finite element, finite difference, and boundary element models may be evaluated with the prototype postprocessor. Elements may be removed from parent models to be studied as independent subobjects. Discontinuous responses may be contoured including responses which become singular, and nonlinear color scales may be input by the user for the enhancement of the contouring operation. Hit testing can be performed to extract precise geometric, response, mesh, or material information from the database. In addition, stress intensity factors may be contoured along the crack front of a fracture model. Stepwise analyses can be studied, and the user can recontour responses repeatedly, as if he were paging through the response sets. As a system, these tools allow effective interpretation of complex analysis results.

  6. Particle trajectory computation on a 3-dimensional engine inlet. Final Report Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kim, J. J.

    1986-01-01

    A 3-dimensional particle trajectory computer code was developed to compute the distribution of water droplet impingement efficiency on a 3-dimensional engine inlet. The computed results provide the essential droplet impingement data required for the engine inlet anti-icing system design and analysis. The droplet trajectories are obtained by solving the trajectory equation using the fourth order Runge-Kutta and Adams predictor-corrector schemes. A compressible 3-D full potential flow code is employed to obtain a cylindrical grid definition of the flowfield on and about the engine inlet. The inlet surface is defined mathematically through a system of bi-cubic parametric patches in order to compute the droplet impingement points accurately. Analysis results of the 3-D trajectory code obtained for an axisymmetric droplet impingement problem are in good agreement with NACA experimental data. Experimental data are not yet available for the engine inlet impingement problem analyzed. Applicability of the method to solid particle impingement problems, such as engine sand ingestion, is also demonstrated.
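
    The trajectory integration the abstract describes is easy to sketch. Below, the droplet equation of motion is stepped with classical fourth-order Runge-Kutta; a Stokes-drag response time and a uniform freestream stand in for the code's empirical drag law and full-potential flowfield, so all values are illustrative (Python with NumPy).

      import numpy as np

      def droplet_rhs(state, air_velocity, tau):
          """RHS of the trajectory equation; state = [x, y, z, u, v, w].
          Stokes drag with response time tau is an assumed simplification."""
          pos, vel = state[:3], state[3:]
          accel = (air_velocity(pos) - vel) / tau
          return np.concatenate([vel, accel])

      def rk4_step(state, dt, air_velocity, tau):
          """One classical fourth-order Runge-Kutta step."""
          k1 = droplet_rhs(state, air_velocity, tau)
          k2 = droplet_rhs(state + 0.5 * dt * k1, air_velocity, tau)
          k3 = droplet_rhs(state + 0.5 * dt * k2, air_velocity, tau)
          k4 = droplet_rhs(state + dt * k3, air_velocity, tau)
          return state + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

      freestream = lambda pos: np.array([100.0, 0.0, 0.0])  # m/s, toward +x
      state = np.zeros(6)                                   # released from rest
      for _ in range(1000):
          state = rk4_step(state, 1e-4, freestream, tau=1e-3)
      print(state[:3])   # droplet position after 0.1 s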

  7. Using 3-dimensional printing to create presurgical models for endodontic surgery.

    PubMed

    Bahcall, James K

    2014-09-01

    Advances in endodontic surgery--from both a technological and procedural perspective--have been significant over the last 18 years. Although these technologies and procedural enhancements have significantly improved endodontic surgical treatment outcomes, there is still an ongoing challenge of overcoming the limitations of interpreting preoperative 2-dimensional (2-D) radiographic representation of a 3-dimensional (3-D) in vivo surgical field. Cone-beam computed tomography (CBCT) has helped to address this issue by providing a 3-D enhancement of the 2-D radiograph. The next logical step to further improve a presurgical case 3-D assessment is to create a surgical model from the CBCT scan. The purpose of this article is to introduce 3-D printing of CBCT scans for creating presurgical models for endodontic surgery. PMID:25197746

  8. 3-dimensional modeling of transcranial magnetic stimulation: Design and application

    NASA Astrophysics Data System (ADS)

    Salinas, Felipe Santiago

    Over the past three decades, transcranial magnetic stimulation (TMS) has emerged as an effective tool for many research, diagnostic and therapeutic applications in humans. TMS delivers highly localized brain stimulations via non-invasive externally applied magnetic fields. This non-invasive, painless technique provides researchers and clinicians a unique tool capable of stimulating both the central and peripheral nervous systems. However, a complete analysis of the macroscopic electric fields produced by TMS has not yet been performed. In this dissertation, we present a thorough examination of the total electric field induced by TMS in air and a realistic head model with clinically relevant coil poses. In the first chapter, a detailed account of TMS coil wiring geometry was shown to provide significant improvements in the accuracy of primary E-field calculations. Three-dimensional models which accounted for the TMS coil's wire width, height, shape and number of turns clearly improved the fit of calculated-to-measured E-fields near the coil body. Detailed primary E-field models were accurate up to the surface of the coil body (within 0.5% of measured values) whereas simple models were often inadequate (up to 32% different from measured). In the second chapter, we addressed the importance of the secondary E-field created by surface charge accumulation during TMS using the boundary element method (BEM). 3-D models were developed using simple head geometries in order to test the model and compare it with measured values. The effects of tissue geometry, size and conductivity were also investigated. Finally, a realistic head model was used to assess the effect of multiple surfaces on the total E-field. We found that secondary E-fields have the greatest impact at areas in close proximity to each tissue layer. Throughout the head, the secondary E-field magnitudes were predominantly between 25% and 45% of the primary E-field magnitude. The direction of the secondary E
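
    The first chapter's point, that primary E-field accuracy depends on resolving the coil winding geometry, can be illustrated with a small magnetoquasistatic sketch: E1 = -dA/dt, with the vector potential A summed over discretized thin-wire segments. The dissertation's detailed models also resolve wire width and height; the coil radius, dI/dt, and observation point below are illustrative (Python with NumPy).

      import numpy as np

      MU0 = 4e-7 * np.pi

      def primary_efield(points, loop_pts, dI_dt):
          """Primary (free-space) TMS E-field, E1 = -dA/dt, from a thin-wire
          loop discretized into straight segments (illustrative sketch)."""
          seg_mid = 0.5 * (loop_pts[1:] + loop_pts[:-1])
          seg_dl = loop_pts[1:] - loop_pts[:-1]
          E = np.zeros_like(points)
          for i, p in enumerate(points):
              r = np.linalg.norm(p - seg_mid, axis=1)
              # A = mu0*I/(4*pi) * sum(dl/r)  =>  E1 = -mu0*(dI/dt)/(4*pi) * sum(dl/r)
              E[i] = -MU0 * dI_dt / (4 * np.pi) * (seg_dl / r[:, None]).sum(axis=0)
          return E

      # One circular winding, 4 cm radius; dI/dt of order 1e8 A/s is typical of TMS.
      theta = np.linspace(0, 2 * np.pi, 201)
      loop = np.c_[0.04 * np.cos(theta), 0.04 * np.sin(theta), np.zeros_like(theta)]
      obs = np.array([[0.02, 0.0, 0.02]])      # off-axis point 2 cm from the coil plane
      print(primary_efield(obs, loop, 1e8))    # E1 in V/m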

  9. 3-dimensional current collection model. [Of Tethered Satellite System 1

    SciTech Connect

    Hwang, Kai-Shen; Shiah, A.; Wu, S. T.; Stone, N. (Alabama University, Huntsville; NASA Marshall Space Flight Center, Huntsville, AL)

    1992-07-01

    A three-dimensional, time-dependent current collection model of a satellite has been developed for the TSS-1 system. The system has been simulated particularly for the Research of Plasma Electrodynamics (ROPE) experiment. Maxwellian-distributed particles with geomagnetic field effects are applied in this numerical simulation. The preliminary results indicate that a ring current is observed surrounding the satellite in the equatorial plane. This ring current is found between the plasma sheath and the satellite surface and oscillates on a time scale of approximately 1 microsecond, equivalent to the electron plasma frequency. An hourglass-shaped electron distribution was observed when the viewing direction is perpendicular to the equatorial plane. This result is consistent with previous findings from Linson (1969) and Antoniades et al. (1990). Electron collection by the satellite from the background ionosphere is limited, as indicated by Parker and Murphy (1967). 6 refs.

  10. 3-dimensional orthodontics visualization system with dental study models and orthopantomograms

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ong, S. H.; Foong, K. W. C.; Dhar, T.

    2005-04-01

    The aim of this study is to develop a system that provides 3-dimensional visualization of orthodontic treatments. Dental plaster models and the corresponding orthopantomogram (dental panoramic tomogram) are first digitized and fed into the system. A semi-automatic segmentation technique is applied to the plaster models to detect the dental arches, tooth interstices, and gum margins, which are used to extract individual crown models. A 3-dimensional representation of the roots, generated by deforming generic tooth models to the orthopantomogram using radial basis functions, is attached to the corresponding crowns to enable visualization of complete teeth. An optional algorithm to close the gaps between deformed roots and actual crowns using multiquadric radial basis functions is also presented, which is capable of generating a smooth mesh representation of complete 3-dimensional teeth. The user interface is carefully designed to achieve a flexible system with as much user friendliness as possible. Manual calibration and correction are possible throughout the data-processing steps to compensate for occasional misbehavior of the automatic procedures. By allowing the user to move and re-arrange individual teeth (with their roots) on a full dentition, this orthodontic visualization system provides an easy and accurate way of simulating and planning orthodontic treatment. Its capability of presenting 3-dimensional root information with only study models and an orthopantomogram is especially useful for patients who do not undergo CT scanning, which is not a routine procedure in most orthodontic cases.
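
    The root-deformation step rests on radial basis function interpolation: displacements known at a few landmarks are propagated smoothly to every mesh vertex. A minimal sketch, assuming a multiquadric kernel and made-up landmark pairs (the paper drives the deformation from the orthopantomogram):

      import numpy as np

      def rbf_warp(src_landmarks, dst_landmarks, points, c=1.0):
          """Deform `points` with a multiquadric RBF fitted to landmark pairs.
          The kernel and the constant c are illustrative choices."""
          def kernel(a, b):
              d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
              return np.sqrt(d2 + c * c)
          # Weights that reproduce each landmark displacement exactly.
          W = np.linalg.solve(kernel(src_landmarks, src_landmarks),
                              dst_landmarks - src_landmarks)
          return points + kernel(points, src_landmarks) @ W

      # Toy example: lengthen a generic "root" apex while the crown stays fixed.
      src = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 2.]])
      dst = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 2.6]])
      mesh = np.array([[0.1, 0., 1.5], [0., 0.1, 2.0]])   # two sample vertices
      print(rbf_warp(src, dst, mesh))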

  11. 3-Dimensional Marine CSEM Modeling by Employing TDFEM with Parallel Solvers

    NASA Astrophysics Data System (ADS)

    Wu, X.; Yang, T.

    2013-12-01

    In this paper, a parallel implementation is developed for forward modeling of 3-dimensional controlled-source electromagnetic (CSEM) surveys using the time-domain finite element method (TDFEM). Recently, research attention has turned to the mechanisms of hydrocarbon (HC) reservoir detection in the seabed. Since China has vast ocean resources, locating hydrocarbon reservoirs has become significant to the national economy. However, traditional seismic exploration methods face a crucial obstacle in detecting hydrocarbon reservoirs in seabeds with complex structure, owing to relatively high acquisition costs and high-risk exploration. In addition, the development of EM simulations typically requires both a deep knowledge of computational electromagnetics (CEM) and a proper use of sophisticated techniques and tools from computer science, and large-scale EM simulations often require large memory, because of the large amount of data, or long solution times for the matrix solvers, function transforms, optimization, etc. The objective of this paper is to present a parallelized implementation of the time-domain finite element method for the analysis of three-dimensional (3D) marine controlled-source electromagnetic problems. First, we established a three-dimensional background model from seismic data; electromagnetic simulation of the marine CSEM survey was then carried out with the time-domain finite element method on an MPI (Message Passing Interface) platform, allowing fast detection of hydrocarbon targets in the ocean environment. To speed up the calculation, an MPI version of SuperLU, SuperLU_DIST, is employed. To represent the three-dimensional seabed terrain realistically, the region is discretized into an unstructured rather than a uniform mesh, reducing the number of unknowns. Moreover, high-order Whitney

  12. Comparison of nonnavigated and 3-dimensional image-based computer navigated balloon kyphoplasty.

    PubMed

    Sembrano, Jonathan N; Yson, Sharon C; Polly, David W; Ledonio, Charles Gerald T; Nuckley, David J; Santos, Edward R G

    2015-01-01

    Balloon kyphoplasty is a common treatment for osteoporotic and pathologic compression fractures. Advantages include minimal tissue disruption, quick recovery, pain relief, and in some cases prevention of progressive sagittal deformity. The benefit of image-based navigation in kyphoplasty has not been established. The goal of this study was to determine whether there is a difference between fluoroscopy-guided balloon kyphoplasty and 3-dimensional image-based navigation in terms of needle malposition rate, cement leakage rate, and radiation exposure time. The authors compared navigated and nonnavigated needle placement in 30 balloon kyphoplasty procedures (47 levels). Intraoperative 3-dimensional image-based navigation was used for needle placement in 21 cases (36 levels); conventional 2-dimensional fluoroscopy was used in the other 9 cases (11 levels). The 2 groups were compared for rates of needle malposition and cement leakage as well as radiation exposure time. Three of 11 (27%) nonnavigated cases were complicated by a malpositioned needle, and 2 of these had to be repositioned. The navigated group had a significantly lower malposition rate (1 of 36; 3%; P=.04). The overall rate of cement leakage was similar in both groups (P=.29). Radiation exposure time was similar in both groups (navigated, 98 s/level; nonnavigated, 125 s/level; P=.10). Navigated kyphoplasty procedures did not differ significantly from nonnavigated procedures except in terms of needle malposition rate, where navigation may have decreased the need for needle repositioning.
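
    The malposition comparison can be rechecked from the reported counts (3 of 11 nonnavigated vs 1 of 36 navigated). The abstract does not name the test behind P=.04, so the two-sided Fisher exact test below is an assumed, though natural, choice for counts this small (Python with SciPy):

      from scipy.stats import fisher_exact

      # Rows: nonnavigated, navigated; columns: malpositioned, well positioned.
      table = [[3, 11 - 3],
               [1, 36 - 1]]
      odds_ratio, p = fisher_exact(table, alternative="two-sided")
      print(f"odds ratio = {odds_ratio:.1f}, P = {p:.3f}")  # P lands near the reported .04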

  13. Use of 3-dimensional computed tomography to detect a barium-masked fish bone causing esophageal perforation.

    PubMed

    Tsukiyama, Atsushi; Tagami, Takashi; Kim, Shiei; Yokota, Hiroyuki

    2014-01-01

    Computed tomography (CT) is useful for evaluating esophageal foreign bodies and detecting perforation. However, when evaluation is difficult owing to the previous use of barium as a contrast medium, 3-dimensional CT may facilitate accurate diagnosis. A 49-year-old man was transferred to our hospital with the diagnosis of esophageal perforation. Because barium had been used as a contrast medium for an esophagram performed at a previous hospital, horizontal CT and esophageal endoscopy could not identify the foreign body or characterize the lesion. However, 3-dimensional CT clearly revealed an L-shaped foreign body and its anatomical relationships in the mediastinum. Accordingly, we removed the foreign body using an upper gastrointestinal endoscope. The foreign body was the premaxillary bone of a sea bream. The patient was discharged without complications.

  14. Computer-Aided Designed, 3-Dimensionally Printed Porous Tissue Bioscaffolds For Craniofacial Soft Tissue Reconstruction

    PubMed Central

    Zopf, David A.; Mitsak, Anna G.; Flanagan, Colleen L.; Wheeler, Matthew; Green, Glenn E.; Hollister, Scott J.

    2016-01-01

    Objectives To determine the potential of an integrated image-based computer-aided design (CAD) and 3D printing approach to engineer scaffolds for head and neck cartilaginous reconstruction for auricular and nasal reconstruction. Study Design Proof of concept revealing novel methods for bioscaffold production with in vitro and in vivo animal data. Setting Multidisciplinary effort encompassing two academic institutions. Subjects and Methods DICOM CT images are segmented and utilized in image-based computer-aided design to create porous, anatomic structures. Bioresorbable, polycaprolactone scaffolds with spherical and random porous architecture are produced using a laser-based 3D printing process. Subcutaneous in vivo implantation of auricular and nasal scaffolds was performed in a porcine model. Auricular scaffolds were seeded with chondrogenic growth factors in a hyaluronic acid/collagen hydrogel and cultured in vitro for 2 months. Results Auricular and nasal constructs with several microporous architectures were rapidly manufactured with high fidelity to human patient anatomy. Subcutaneous in vivo implantation of auricular and nasal scaffolds resulted in excellent appearance and complete soft tissue ingrowth. Histologic analysis of in vitro scaffolds demonstrated native-appearing cartilaginous growth respecting the boundaries of the scaffold. Conclusions Integrated image-based computer-aided design (CAD) and 3D printing processes generated patient-specific nasal and auricular scaffolds that supported cartilage regeneration. PMID:25281749

  15. A Modular Computer Code for Simulating Reactive Multi-Species Transport in 3-Dimensional Groundwater Systems

    SciTech Connect

    TP Clement

    1999-06-24

    RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code, MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in groundwater head distribution. The RT3D code was originally developed to support contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants, including benzene-toluene-xylene mixtures (BTEX) and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other type of user-specified reactive transport system. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
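
    RT3D's reaction modules amount to ODE systems that its implicit reaction solver integrates cell by cell inside the operator-split transport step. The sketch below shows the kind of kinetics involved, sequential first-order dechlorination of PCE to TCE, solved with SciPy rather than RT3D's own solver; the rate constants are illustrative, not site-calibrated (Python):

      import numpy as np
      from scipy.integrate import solve_ivp

      def chlorinated_decay(t, c, k_pce, k_tce):
          """Sequential first-order dechlorination: PCE -> TCE -> (daughter).
          Illustrative stand-in for an RT3D user-defined reaction module."""
          pce, tce = c
          return [-k_pce * pce,
                  k_pce * pce - k_tce * tce]

      sol = solve_ivp(chlorinated_decay, t_span=(0.0, 365.0), y0=[1.0, 0.0],
                      args=(0.01, 0.004), dense_output=True)  # rates in 1/day
      pce, tce = sol.sol(180.0)
      print(f"after 180 days: PCE = {pce:.3f}, TCE = {tce:.3f} (relative units)")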

  16. Finite element modelling of a 3 dimensional dielectrophoretic flow separator device for optimal bioprocessing conditions.

    PubMed

    Fatoyinbo, H O; Hughes, M P

    2004-01-01

    Planar 2-dimensional dielectrophoresis electrode geometries are limited to handling fluid volumes ranging from picoliters to hundreds of microliters per hour. A 3-dimensional electrode system has been developed that is capable of handling significantly larger volumes of fluid. Using finite element modeling, the electric field distribution within various bore sizes was determined. From these simulations it is possible to optimize the bioprocessing factors influencing the performance of a dielectrophoretic separator. Process calculations have shown that flow rates of 25 ml/hr or more can be attained for the separation of heterogeneous populations of bio-particles based on their dielectric properties.
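
    Separator design of this kind hinges on the time-averaged dielectrophoretic force on a spherical particle, F = 2*pi*eps_m*r^3*Re[K(w)]*grad|E|^2, where K is the Clausius-Mossotti factor. A small calculator with assumed cell-like parameters (the paper's own values are not given in this abstract; Python with NumPy):

      import numpy as np

      EPS0 = 8.854e-12

      def clausius_mossotti(eps_p, sig_p, eps_m, sig_m, freq):
          """Complex Clausius-Mossotti factor K(w) for a homogeneous sphere."""
          w = 2 * np.pi * freq
          ep = eps_p * EPS0 - 1j * sig_p / w   # complex permittivity, particle
          em = eps_m * EPS0 - 1j * sig_m / w   # complex permittivity, medium
          return (ep - em) / (ep + 2 * em)

      def dep_force(radius, eps_m, K, grad_E2):
          """Time-averaged DEP force: F = 2*pi*eps_m*r^3*Re[K]*grad|E|^2."""
          return 2 * np.pi * eps_m * EPS0 * radius**3 * K.real * grad_E2

      # Illustrative cell-like particle in a low-conductivity aqueous medium.
      K = clausius_mossotti(eps_p=60, sig_p=0.2, eps_m=78, sig_m=0.01, freq=1e6)
      F = dep_force(radius=3e-6, eps_m=78, K=K, grad_E2=1e13)   # grad|E|^2 in V^2/m^3
      print(f"Re[K] = {K.real:.2f}, F = {F:.2e} N")   # positive DEP, ~piconewtons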

  17. Experimental Validation of Plastic Mandible Models Produced by a “Low-Cost” 3-Dimensional Fused Deposition Modeling Printer

    PubMed Central

    Maschio, Federico; Pandya, Mirali; Olszewski, Raphael

    2016-01-01

    Background The objective of this study was to investigate the accuracy of 3-dimensional (3D) plastic (ABS) models generated using a low-cost 3D fused deposition modeling printer. Material/Methods Two human dry mandibles were scanned with a cone beam computed tomography (CBCT) Accuitomo device. Preprocessing consisted of 3D reconstruction with Maxilim software and STL file repair with Netfabb software. Then, the data were used to print 2 plastic replicas with a low-cost 3D fused deposition modeling printer (UP Plus 2®). Two independent observers performed the identification of 26 anatomic landmarks on the 4 mandibles (2 dry and 2 replicas) with a 3D measuring arm. Each observer repeated the identifications 20 times. The comparison between the dry and plastic mandibles was based on 13 distances: 8 distances less than 12 mm and 5 distances greater than 12 mm. Results The mean absolute difference (MAD) was 0.37 mm, and the mean dimensional error (MDE) was 3.76%. The MDE decreased to 0.93% for distances greater than 12 mm. Conclusions Plastic models generated using the low-cost 3D printer UP Plus 2® provide dimensional accuracy comparable to that of other well-established rapid prototyping technologies. Validated low-cost 3D printers could represent a step toward better accessibility of rapid prototyping technologies in the medical field. PMID:27003456
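
    The two accuracy metrics are simple to recompute from paired caliper and model distances. A sketch with made-up measurements, assuming MDE is the mean absolute difference expressed as a percentage of the dry-mandible distance (the paper's raw data are not reproduced here):

      import numpy as np

      def accuracy_metrics(d_dry, d_print):
          """Mean absolute difference (mm) and mean dimensional error (%)
          between dry-mandible and printed-replica distances."""
          d_dry, d_print = np.asarray(d_dry), np.asarray(d_print)
          mad = np.mean(np.abs(d_print - d_dry))
          mde = 100.0 * np.mean(np.abs(d_print - d_dry) / d_dry)
          return mad, mde

      dry     = [10.2, 8.7, 11.5, 25.3, 40.1]   # mm, illustrative
      printed = [10.6, 8.4, 11.1, 25.6, 40.5]
      mad, mde = accuracy_metrics(dry, printed)
      print(f"MAD = {mad:.2f} mm, MDE = {mde:.2f}%")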

  18. Surgical Classification of the Mandibular Deformity in Craniofacial Microsomia Using 3-Dimensional Computed Tomography

    PubMed Central

    Swanson, Jordan W.; Mitchell, Brianne T.; Wink, Jason A.; Taylor, Jesse A.

    2016-01-01

    Background: Grading systems of the mandibular deformity in craniofacial microsomia (CFM) based on conventional radiographs have shown low interrater reproducibility among craniofacial surgeons. We sought to design and validate a classification based on 3-dimensional CT (3dCT) that correlates features of the deformity with surgical treatment. Methods: CFM mandibular deformities were classified as normal (T0), mild (hypoplastic, likely treated with orthodontics or orthognathic surgery; T1), moderate (vertically deficient ramus, likely treated with distraction osteogenesis; T2), or severe (ramus rudimentary or absent, with either adequate or inadequate mandibular body bone stock; T3 and T4, likely treated with costochondral graft or free fibular flap, respectively). The 3dCT face scans of CFM patients were randomized and then classified by craniofacial surgeons. Pairwise agreement and Fleiss' κ were used to assess interrater reliability. Results: The 3dCT images of 43 patients with CFM (aged 0.1–15.8 years) were reviewed by 15 craniofacial surgeons, representing an average of 15.2 years of experience. Reviewers demonstrated fair interrater reliability with average pairwise agreement of 50.4 ± 9.9% (Fleiss' κ = 0.34). This represents significant improvement over the Pruzansky–Kaban classification (pairwise agreement, 39.2%; P = 0.0033). Reviewers demonstrated substantial interrater reliability with average pairwise agreement of 83.0 ± 7.6% (κ = 0.64) when distinguishing deformities requiring graft or flap reconstruction (T3 and T4) from the others. Conclusion: The proposed classification, designed for the era of 3dCT, shows improved consensus with respect to stratifying the severity of mandibular deformity and the type of operative management. PMID:27104097
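
    Fleiss' kappa, the agreement statistic used here, is easy to compute from a subjects-by-categories table of rating counts. A minimal implementation with synthetic ratings (15 raters, the five classes T0-T4; not the study's data):

      import numpy as np

      def fleiss_kappa(counts):
          """Fleiss' kappa for an (n_subjects x n_categories) count matrix,
          assuming the same number of raters for every subject."""
          counts = np.asarray(counts, dtype=float)
          n = counts.sum(axis=1)[0]                    # raters per subject
          p_cat = counts.sum(axis=0) / counts.sum()    # overall category shares
          P_i = (counts * (counts - 1)).sum(axis=1) / (n * (n - 1))
          P_bar, P_e = P_i.mean(), (p_cat ** 2).sum()
          return (P_bar - P_e) / (1 - P_e)

      ratings = [[0, 10, 5, 0, 0],    # 4 synthetic subjects, 15 raters each
                 [0,  2, 9, 4, 0],
                 [12, 3, 0, 0, 0],
                 [0,  0, 1, 6, 8]]
      print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")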

  19. Role of the Animator in the Generation of 3-Dimensional Computer Generated Animation.

    ERIC Educational Resources Information Center

    Wedge, John Christian

    This master's thesis investigates the relationship between the traditional animator and the computer, as computer animation systems now allow animators to apply traditional skills with a high degree of success. The advantages and disadvantages of traditional animation as a medium for expressing motion and character are noted, and it is argued that the…

  20. Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies

    NASA Technical Reports Server (NTRS)

    Reyhner, T. A.

    1982-01-01

    An analysis was developed and a computer code, P465 Version A, written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.

  1. Investigation of Asymmetries in Inductively Coupled Plasma Etching Reactors Using a 3-Dimensional Hybrid Model

    NASA Astrophysics Data System (ADS)

    Kushner, Mark J.; Grapperhaus, Michael J.

    1996-10-01

    Inductively Coupled Plasma (ICP) reactors have the potential for scaling to large area substrates while maintaining azimuthal symmetry or side-to-side uniformity across the wafer. Asymmetric etch properties in these devices have been attributed to transmission line properties of the coil, internal structures (such as wafer clamps) and non-uniform gas injection or pumping. To investigate the origins of asymmetric etch properties, a 3-dimensional hybrid model has been developed. The hybrid model contains electromagnetic, electric circuit, electron energy equation, and fluid modules. Continuity and momentum equations are solved in the fluid module along with Poisson's equation. We will discuss results for ion and radical flux uniformity to the substrate while varying the transmission line characteristics of the coil, symmetry of gas inlets/pumping, and internal structures. Comparisons will be made to experimental measurements of etch rates. *Work supported by SRC, NSF, ARPA/AFOSR and LAM Research.

  2. 3-Dimensional Geologic Modeling Applied to the Structural Characterization of Geothermal Systems: Astor Pass, Nevada, USA

    SciTech Connect

    Siler, Drew L; Faulds, James E; Mayhew, Brett

    2013-04-16

    Geothermal systems in the Great Basin, USA, are controlled by a variety of fault intersection and fault interaction areas. Understanding the specific geometry of the structures most conducive to broad-scale geothermal circulation is crucial to both the mitigation of the costs of geothermal exploration (especially drilling) and to the identification of geothermal systems that have no surface expression (blind systems). 3-dimensional geologic modeling is a tool that can elucidate the specific stratigraphic intervals and structural geometries that host geothermal reservoirs. Astor Pass, NV USA lies just beyond the northern extent of the dextral Pyramid Lake fault zone near the boundary between two distinct structural domains, the Walker Lane and the Basin and Range, and exhibits characteristics of each setting. Both northwest-striking, left-stepping dextral faults of the Walker Lane and kinematically linked northerly striking normal faults associated with the Basin and Range are present. Previous studies at Astor Pass identified a blind geothermal system controlled by the intersection of west-northwest and north-northwest striking dextral-normal faults. Wells drilled into the southwestern quadrant of the fault intersection yielded 94°C fluids, with geothermometers suggesting a maximum reservoir temperature of 130°C. A 3-dimensional model was constructed based on detailed geologic maps and cross-sections, 2-dimensional seismic data, and petrologic analysis of the cuttings from three wells in order to further constrain the structural setting. The model reveals the specific geometry of the fault interaction area at a level of detail beyond what geologic maps and cross-sections can provide.

  3. Virtual model surgery and wafer fabrication using 2-dimensional cephalograms, 3-dimensional virtual dental models, and stereolithographic technology.

    PubMed

    Choi, Jin-Young; Hwang, Jong-Min; Baek, Seung-Hak

    2012-02-01

    Although several 3-dimensional virtual model surgery (3D-VMS) programs have been introduced to reduce time-consuming manual laboratory steps and potential errors, these programs still require 3D-computed tomography (3D-CT) data and involve complex computerized maneuvers. Because it is difficult to obtain 3D-CT scans in all cases, a new VMS program using 2D lateral and posteroanterior cephalograms and 3D virtual dental models (2.5D-VMS program; 3Txer version 2.5, Orapix, Seoul, Korea) has recently been introduced. The purposes of this article were to present the methodology of the 2.5D-VMS program and to verify the accuracy of intermediate surgical wafers fabricated with the stereolithographic technique. Two cases successfully treated using the 2.5D-VMS program are presented. There was no significant difference in the position of the upper dentition after surgical movement between 2.5D-VMS and 3D-VMS in 18 samples (less than 0.10 mm, P > .05, Wilcoxon signed-rank test). The 2.5D-VMS can be regarded as an effective alternative to 3D-VMS for cases in which 3D-CT data are not available.

  4. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  5. Accuracy and reliability of linear measurements using 3-dimensional computed tomographic imaging software for Le Fort I Osteotomy.

    PubMed

    Gaia, Bruno Felipe; Pinheiro, Lucas Rodrigues; Umetsubo, Otávio Shoite; Santos, Oseas; Costa, Felipe Ferreira; Cavalcanti, Marcelo Gusmão Paraíso

    2014-03-01

    Our purpose was to compare the accuracy and reliability of linear measurements for Le Fort I osteotomy using volume rendering software. We studied 11 dried skulls and used cone-beam computed tomography (CT) to generate 3-dimensional images. Linear measurements were based on craniometric anatomical landmarks that were predefined as specifically used for Le Fort I osteotomy, and identified twice each by 2 radiologists, independently, using Dolphin imaging version 11.5.04.35. A third examiner then made physical measurements using digital calipers. There was a significant difference between Dolphin imaging and the gold standard, particularly in the pterygoid process. The largest difference was 1.85 mm (LLpPtg L). The mean differences between the physical and the 3-dimensional linear measurements ranged from -0.01 to 1.12 mm for examiner 1, and 0 to 1.85 mm for examiner 2. Interexaminer analysis ranged from 0.51 to 0.93. Intraexaminer correlation coefficients ranged from 0.81 to 0.96 and 0.57 to 0.92 for examiners 1 and 2, respectively. We conclude that Dolphin imaging should be used sparingly during Le Fort I osteotomy.

  6. A 3-dimensional Navier-Stokes-Euler code for blunt-body flow computations

    NASA Technical Reports Server (NTRS)

    Li, C. P.

    1985-01-01

    The shock-layer flowfield is obtained with or without viscous and heat-conducting dissipation from the conservation laws of fluid dynamics using a shock-fitting implicit finite-difference technique. The governing equations are cast in curvilinear-orthogonal coordinates and transformed to the domain between the shock and the body. Another set of equations is used for the singular coordinate axis, which, together with a cone generator away from the stagnation point, encloses the computation domain. A time-dependent alternating direction implicit factorization technique is applied to integrate the equations with local time increments until a steady solution is reached. The shock location is updated after the flowfield computation, but the wall conditions are implemented into the implicit procedure. Innovative procedures are introduced to define the initial flowfield, to treat both perfect and equilibrium gases, to advance the solution on a coarse-to-fine grid sequence, and to start viscous flow computations from their corresponding inviscid solutions. The results are obtained from a grid no greater than 28 by 18 by 7 and converged within 300 integration steps. They are of sufficient accuracy to start parabolized Navier-Stokes or Euler calculations beyond the nose region, to compare with flight and wind-tunnel data, and to evaluate conceptual designs of reentry spacecraft.

  7. Using Interior Point Method Optimization Techniques to Improve 2- and 3-Dimensional Models of Earth Structures

    NASA Astrophysics Data System (ADS)

    Zamora, A.; Gutierrez, A. E.; Velasco, A. A.

    2014-12-01

    2- and 3-dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g., core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point method constrained optimization technique, we improve the 2-D and 3-D models of Earth structures representing known density contrasts mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible), thereby reducing the solution space.
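
    The constrained-inversion idea can be sketched on a toy linear gravity problem: bound constraints encode geologic feasibility, and SciPy's 'trust-constr' solver, a trust-region interior point method, plays the role of the interior-point technique described. The forward operator, noise level, and bounds below are all illustrative (Python):

      import numpy as np
      from scipy.optimize import minimize, Bounds

      rng = np.random.default_rng(0)

      # Toy linear forward problem d = G m + noise, where m holds the density
      # contrasts of subsurface cells (sizes and values are illustrative).
      G = rng.normal(size=(30, 10))
      m_true = np.clip(rng.normal(0.3, 0.2, size=10), 0.0, 0.6)
      d = G @ m_true + 0.01 * rng.normal(size=30)

      misfit = lambda m: 0.5 * np.sum((G @ m - d) ** 2)
      grad = lambda m: G.T @ (G @ m - d)

      # Bounds rule out geologically unfeasible density contrasts.
      res = minimize(misfit, x0=np.full(10, 0.3), jac=grad,
                     method="trust-constr", bounds=Bounds(0.0, 0.6))
      print(np.round(res.x, 2))   # recovered contrasts, all within bounds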

  8. A 3-dimensional DTI MRI-based model of GBM growth and response to radiation therapy.

    PubMed

    Hathout, Leith; Patel, Vishal; Wen, Patrick

    2016-09-01

    Glioblastoma (GBM) is both the most common and the most aggressive intra-axial brain tumor, with a notoriously poor prognosis. To improve this prognosis, it is necessary to understand the dynamics of GBM growth, response to treatment, and recurrence. This study presents a mathematical diffusion-proliferation model of GBM growth and response to radiation therapy based on diffusion tensor imaging (DTI) MRI. This represents an important advance because it allows 3-dimensional tumor modeling in the anatomical context of the brain. Specifically, tumor infiltration is guided by the direction of the white matter tracts along which glioma cells infiltrate. This provides the potential to model different tumor growth patterns based on location within the brain, and to simulate the tumor's response to different radiation therapy regimens. Tumor infiltration across the corpus callosum is simulated in biologically accurate time frames. The response to radiation therapy, including changes in cell density gradients and how these compare across different radiation fractionation protocols, can be rendered. Also, the model can estimate the amount of subthreshold tumor that has extended beyond the visible MR imaging margins. When combined with the ability to estimate the biological parameters of invasiveness and proliferation of a particular GBM from serial MRI scans, it is shown that the model has the potential to simulate realistic tumor growth, response, and recurrence patterns in individual patients. To the best of our knowledge, this is the first presentation of a DTI-based GBM growth and radiation therapy treatment model. PMID:27572745
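
    The underlying diffusion-proliferation (Fisher-KPP) equation is dc/dt = div(D grad c) + rho*c*(1 - c), with the motility D built from the DTI tensor so that infiltration follows white matter tracts. A 1-D sketch in which a spatially varying scalar D stands in for the tensor; every parameter value is illustrative, not fitted to GBM data (Python with NumPy):

      import numpy as np

      def grow_step(c, D, rho, dx, dt):
          """One explicit step of dc/dt = d/dx(D dc/dx) + rho*c*(1 - c),
          with zero-flux boundaries."""
          D_face = 0.5 * (D[1:] + D[:-1])                 # motility at cell faces
          flux = np.concatenate(([0.0], D_face * np.diff(c) / dx, [0.0]))
          return c + dt * (np.diff(flux) / dx + rho * c * (1 - c))

      x = np.linspace(0.0, 10.0, 201)                     # cm
      D = np.where((x > 4) & (x < 6), 0.05, 0.01)         # cm^2/day, higher in "white matter"
      c = np.exp(-((x - 2.0) ** 2) / 0.1)                 # initial tumor bolus
      for _ in range(4000):                               # 80 days at dt = 0.02 day
          c = grow_step(c, D, rho=0.02, dx=x[1] - x[0], dt=0.02)
      print(f"extent above a 10% cell-density threshold: {x[c > 0.1].max():.1f} cm")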

  9. A simple and efficient quasi 3-dimensional viscoelastic model and software for simulation of tapping-mode atomic force microscopy

    DOE PAGES

    Solares, Santiago D.

    2015-11-26

    This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
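
    The building block of the Q3D model, a single standard linear solid element, has a closed-form step-strain response, which is what makes stress relaxation come out correctly. A tiny sketch with illustrative parameters (the paper tiles a 2-D array of such elements under the AFM tip):

      import numpy as np

      def sls_relaxation(t, strain, k_e, k_m, eta):
          """Stress response of one SLS element to a step strain:
          sigma(t) = strain * (k_e + k_m * exp(-t/tau)), with tau = eta/k_m.
          Stiffness and viscosity values here are illustrative only."""
          tau = eta / k_m
          return strain * (k_e + k_m * np.exp(-t / tau))

      t = np.linspace(0.0, 1e-3, 6)   # seconds
      sigma = sls_relaxation(t, strain=0.01, k_e=0.1, k_m=0.4, eta=4e-5)
      print(np.round(sigma, 4))       # relaxes from 0.005 toward the equilibrium 0.001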

  10. A simple and efficient quasi 3-dimensional viscoelastic model and software for simulation of tapping-mode atomic force microscopy

    SciTech Connect

    Solares, Santiago D.

    2015-11-26

    This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.

  11. A simple and efficient quasi 3-dimensional viscoelastic model and software for simulation of tapping-mode atomic force microscopy.

    PubMed

    Solares, Santiago D

    2015-01-01

    This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.

  12. A simple and efficient quasi 3-dimensional viscoelastic model and software for simulation of tapping-mode atomic force microscopy

    PubMed Central

    2015-01-01

    Summary This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments. PMID:26734515

  13. 3-Dimensional Modeling of Capacitively and Inductively Coupled Plasma Etching Systems

    NASA Astrophysics Data System (ADS)

    Rauf, Shahid

    2008-10-01

    Low temperature plasmas are widely used for thin film etching during micro- and nano-electronic device fabrication. Fluid and hybrid plasma models were developed 15-20 years ago to understand the fundamentals of these plasmas and plasma etching. These models have significantly evolved since then, and are now a major tool used for new plasma hardware design and problem resolution. Plasma etching is a complex physical phenomenon, where inter-coupled plasma, electromagnetic, fluid dynamics, and thermal effects all have a major influence. The next frontier in the evolution of fluid-based plasma models is where these models are able to self-consistently treat the inter-coupling of plasma physics with fluid dynamics, electromagnetics, heat transfer and magnetostatics. We describe one such model in this paper and illustrate its use in solving engineering problems of interest for next generation plasma etcher design. Our 3-dimensional plasma model includes the full set of Maxwell equations, transport equations for all charged and neutral species in the plasma, the Navier-Stokes equation for fluid flow, and Kirchhoff's equations for the lumped external circuit. This model also includes Monte Carlo based kinetic models for secondary electrons and stochastic heating, and can take account of plasma chemistry. This modeling formalism allows us to self-consistently treat the dynamics in commercial inductively and capacitively coupled plasma etching reactors with realistic plasma chemistries, magnetic fields, and reactor geometries. We are also able to investigate the influence of the distributed electromagnetic circuit at very high frequencies (VHF) on the plasma dynamics. The model is used to assess the impact of azimuthal asymmetries in plasma reactor design (e.g., off-center pump, 3D magnetic field, slit valve, flow restrictor) on plasma characteristics at frequencies from 2 to 180 MHz. With Jason Kenney, Ankur Agarwal, Ajit Balakrishna, Kallol Bera, and Ken Collins.

  14. Porous Media Contamination: 3-Dimensional Visualization and Quantification Using X-Ray Computed Tomography

    NASA Astrophysics Data System (ADS)

    Goldstein, L.; Prasher, S. O.; Ghoshal, S.

    2004-05-01

    Non-aqueous phase liquids (NAPLs), if spilled into the subsurface, will migrate downward, and a significant fraction will become trapped in the soil matrix. These trapped NAPL globules partition into the water and/or vapor phase, and serve as continuous sources of contamination (e.g. source zones). At present, the presence of NAPL in the subsurface is typically inferred from chemical analysis data. There are no accepted methodologies or protocols available for the direct characterization of NAPLs in the subsurface. Proven and cost-effective methodologies are needed to allow effective implementation of remediation technologies at NAPL-contaminated sites. X-ray computed tomography (CT) has the potential to non-destructively quantify NAPL mass and distribution in soil cores due to this technology's ability to detect small atomic density differences of solid, liquid, gas, and NAPL phases present in a representative volume element. We have demonstrated that environmentally significant NAPLs, such as gasoline and other oil products, chlorinated solvents, and PCBs possess a characteristic and predictable X-ray attenuation coefficient that permits their quantification in porous media at incident beam energies typical of medical and industrial X-ray CT scanners. As part of this study, methodologies were developed for generating and analyzing X-ray CT data for the study of NAPLs in natural porous media. Columns of NAPL-contaminated soils were scanned, flushed with solvents and water to remove entrapped NAPL, and re-scanned. X-ray CT data were analyzed to obtain numerical arrays of soil porosity, NAPL saturation, and NAPL volume at a spatial resolution of 1 mm. This methodology was validated using homogeneous and heterogeneous soil columns with known quantities of gasoline and tetrachloroethylene. NAPL volumes computed using X-ray CT data were compared with known volumes from volume balance calculations. Error analysis revealed that in a 5 cm long and 2.5 cm diameter soil
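
    The quantification step rests on linear mixing of attenuation: a voxel's CT number shifts in proportion to the pore volume occupied by NAPL rather than water, so two co-registered scans (water-saturated versus contaminated) give saturation voxel by voxel. A sketch with assumed calibration CT numbers for the pure fluids (Python with NumPy):

      import numpy as np

      def napl_saturation(ct_wet, ct_contam, porosity, ct_water, ct_napl):
          """Voxelwise NAPL saturation from co-registered CT scans, assuming
          linear mixing: S_n = (CT_wet - CT_contam) / (phi * (CT_water - CT_NAPL))."""
          return (ct_wet - ct_contam) / (porosity * (ct_water - ct_napl))

      # Illustrative 2x2 voxel patch; units and values are assumed.
      ct_wet    = np.array([[1850.0, 1800.0], [1900.0, 1820.0]])  # water-saturated scan
      ct_contam = np.array([[1800.0, 1795.0], [1830.0, 1770.0]])  # contaminated scan
      phi       = np.array([[0.35, 0.30], [0.40, 0.33]])          # porosity
      S_n = napl_saturation(ct_wet, ct_contam, phi, ct_water=1000.0, ct_napl=600.0)
      print(np.round(S_n, 2))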

  15. An Explicit 3-Dimensional Model for Reactive Transport of Nitrogen in Tile Drained Fields

    NASA Astrophysics Data System (ADS)

    Hill, D. J.; Valocchi, A. J.; Hudson, R. J.

    2001-12-01

    Recently, there has been increased interest in nitrate contamination of groundwater in the Midwest because of its link to surface water eutrophication, especially in the Gulf of Mexico. The vast majority of this nitrate is the product of biologically mediated transformation of fertilizers containing ammonia in the vadose zone of agricultural fields. For this reason, it is imperative that mathematical models, which can serve as useful tools to evaluate both the impact of agricultural fertilizer applications and nutrient-reducing management practices, are able to specifically address transport in the vadose zone. The development of a 3-dimensional explicit numerical model to simulate the movement and transformation of nitrogen species through the subsurface on the scale of an individual farm plot will be presented. At this scale, nitrogen fate and transport is controlled by a complex coupling among hydrologic, agricultural and biogeochemical processes. The nitrogen model is a component of a larger modeling effort that focuses upon conditions typical of those found in agricultural fields in Illinois. These conditions include non-uniform, multi-dimensional, transient flow in both saturated and unsaturated zones, geometrically complex networks of tile drains, coupled surface-subsurface-tile flow, and dynamic levels of dissolved oxygen in the soil profile. The advection-dispersion-reaction equation is solved using an operator-splitting approach, which is a flexible and straightforward strategy. Advection is modeled using a total variation diminishing scheme, dispersion is modeled using an alternating direction explicit method, and reactions are modeled using rate law equations. The model's stability and accuracy will be discussed, and test problems will be presented.
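
    The operator-splitting strategy the abstract describes can be sketched in 1-D: advect, disperse, then react, once per time step. First-order upwind stands in for the model's TVD advection scheme, and a single first-order decay stands in for the nitrogen rate laws; all coefficients are illustrative (Python with NumPy):

      import numpy as np

      def transport_step(c, u, D, k, dx, dt):
          """One operator-split step of dc/dt = -u dc/dx + D d2c/dx2 - k c."""
          # 1) advection: first-order upwind (u > 0 assumed), in place of TVD
          c = c - u * dt / dx * np.diff(c, prepend=c[0])
          # 2) dispersion: explicit central difference, zero-gradient ends
          lap = np.diff(c, n=2, prepend=c[0], append=c[-1])
          c = c + D * dt / dx**2 * lap
          # 3) reaction: first-order decay, integrated exactly over dt
          return c * np.exp(-k * dt)

      x = np.linspace(0.0, 10.0, 101)        # m
      c = np.where(x < 1.0, 1.0, 0.0)        # solute pulse near the surface
      for _ in range(200):                   # 100 days at dt = 0.5 day
          c = transport_step(c, u=0.05, D=0.005, k=0.01, dx=0.1, dt=0.5)
      print(f"peak concentration after 100 days: {c.max():.2f}")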

  16. 3-Dimensional modeling of large diameter wire array high intensity K-shell radiation sources.

    SciTech Connect

    Giuliani, J. L.; Waisman, Eduardo Mario; Chittenden, Jeremy Paul; Jennings, Christopher A.; Ampleford, David J.; Yu, Edmund P.; Thornhill, Joseph W.; Cuneo, Michael Edward; Coverdale, Christine Anne; Jones, Brent Manley; Hansen, Stephanie B.

    2010-06-01

    Large diameter nested wire array z-pinches imploded on the Z-generator at Sandia National Laboratories have been used extensively to generate high intensity K-shell radiation. Large initial radii are required to obtain the high implosion velocities needed to efficiently radiate in the K-shell. This necessitates low wire numbers and large inter-wire gaps, which introduce large azimuthal non-uniformities. Furthermore, the magneto-Rayleigh-Taylor instabilities that develop during the implosion are known to generate large axial non-uniformity. These effects motivate complete, full-circumference, 3-dimensional modeling of these systems. Such high velocity implosions also generate large voltages, which increase current losses in the power feed and limit the current delivery to these loads. Accurate representation of the generator coupling is therefore required to reliably represent the energy delivered to, and the power radiated from, these sources. We present 3D resistive MHD calculations of the implosion and stagnation of a variety of large diameter stainless steel wire arrays (hν ≈ 6.7 keV), imploded on the Z-generator both before and after its refurbishment. Use of a tabulated K-shell emission model allows us to compare total and K-shell radiated powers to available experimental measurements. Further comparison to electrical voltage and current measurements allows us to accurately assess the power delivered to these loads. These data allow us to begin to constrain and validate our 3D MHD calculations, providing insight into ways in which these sources may be further optimized.

  17. A Geometric Modelling Approach to Determining the Best Sensing Coverage for 3-Dimensional Acoustic Target Tracking in Wireless Sensor Networks

    PubMed Central

    Pashazadeh, Saeid; Sharifi, Mohsen

    2009-01-01

    Existing 3-dimensional acoustic target tracking methods that use wired/wireless networked sensor nodes to track targets based on four sensing coverage do not always compute the feasible spatio-temporal information of target objects. To investigate this discrepancy in a formal setting, we propose a geometric model of the target tracking problem alongside its equivalent geometric dual model, which is easier to solve. We then study and prove some properties of the dual model by exploiting its relationship with algebra. Based on these properties, we propose a four coverage axis line method based on four sensing coverage and prove that four sensing coverage always yields two dual correct answers, of which usually one is infeasible. By showing that the feasible answer can only sometimes be identified using a simple time test method, such as the one we propose, we prove that four sensing coverage fails to always yield the feasible spatio-temporal information of a target object. We further prove that five sensing coverage always gives the feasible position of a target object under certain conditions that are discussed in this paper. We propose three extensions to the four coverage axis line method, namely, the five coverage extent point method, the five coverage extended axis lines method, and the five coverage redundant axis lines method. The computational and time complexities of all four proposed methods are Θ(1), both on average and in the worst case. The methods proposed and the facts proved here about the capabilities of each sensing coverage degree can be used in other acoustic target tracking approaches, such as Bayesian filtering methods. PMID:22423198
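
    The two dual answers and the time-feasibility test discussed above can be reproduced numerically. Below is a hedged sketch that solves the four range equations |p - s_i| = c (t_i - t0) with a generic root finder and applies a simple time test; the sensor layout, source position, and initial guesses are hypothetical, and this is not the axis-line method of the paper.

        import numpy as np
        from scipy.optimize import fsolve

        C = 343.0  # nominal speed of sound in air [m/s]

        def residuals(q, sensors, toa):
            # four range equations |p - s_i| = C * (t_i - t0) in unknowns (x, y, z, t0)
            p, t0 = q[:3], q[3]
            return [np.linalg.norm(p - s) - C * (t - t0) for s, t in zip(sensors, toa)]

        sensors = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
        source, t_emit = np.array([3.0, 4.0, 2.0]), 0.0      # hypothetical ground truth
        toa = t_emit + np.linalg.norm(sensors - source, axis=1) / C

        for guess in ([0., 0., 5., -0.1], [5., 5., -5., 0.01]):  # different starts can
            sol = fsolve(residuals, guess, args=(sensors, toa))  # land on different roots
            p, t0 = sol[:3], sol[3]
            feasible = t0 <= toa.min()   # time test: emission must precede all arrivals
            print(np.round(p, 3), round(float(t0), 6), "feasible" if feasible else "infeasible")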

  18. Application of 3-Dimensional Printing Technology to Construct an Eye Model for Fundus Viewing Study

    PubMed Central

    Li, Xinhua; Gao, Zhishan; Yuan, Dongqing; Liu, Qinghuai

    2014-01-01

    Objective To construct a life-sized eye model using three-dimensional (3D) printing technology for fundus viewing study of the viewing system. Methods We devised our schematic model eye based on Navarro's eye and redesigned some parameters because of the change of the corneal material and the implantation of intraocular lenses (IOLs). The optical performance of our schematic model eye was compared with Navarro's schematic eye and two other reported physical model eyes using the ZEMAX optical design software. With computer-aided design (CAD) software, we designed the 3D digital model of the main structure of the physical model eye, which was used for 3D printing. Together with the main printed structure, a polymethyl methacrylate (PMMA) aspherical cornea, a variable iris, and IOLs were assembled into a physical eye model. Angle scale bars were glued from the posterior pole to the periphery of the retina. We then fabricated three other physical models with different states of ametropia. Optical parameters of these physical eye models were measured to verify the 3D printing accuracy. Results In on-axis calculations, our schematic model eye possessed a spot diagram similar in size to those of Navarro's and Bakaraju's model eyes, and much smaller than that of Arianpour's model eye. Moreover, the spherical aberration of our schematic eye was much less than that of the other three model eyes. In off-axis simulation, it possessed slightly higher coma and similar astigmatism, field curvature, and distortion. The MTF curves showed that all the model eyes diminished in resolution with increasing field of view, and the trend for our physical eye model was similar to that of Navarro's eye. The measured parameters of our eye models with different states of ametropia were in line with the theoretical values. Conclusions The schematic eye model we designed can well simulate the optical performance of the human eye, and the fabricated physical one can be used as a tool in fundus

  19. [Rapid 3-Dimensional Models of Cerebral Aneurysm for Emergency Surgical Clipping].

    PubMed

    Konno, Takehiko; Mashiko, Toshihiro; Oguma, Hirofumi; Kaneko, Naoki; Otani, Keisuke; Watanabe, Eiju

    2016-08-01

    We developed a method for manufacturing solid models of cerebral aneurysms, with a shorter printing time than that of conventional methods, using a compact 3D printer with acrylonitrile-butadiene-styrene (ABS) resin. We further investigated the application and utility of this printing system in emergency clipping surgery. A total of 16 patients diagnosed with acute subarachnoid hemorrhage resulting from cerebral aneurysm rupture were enrolled in the present study. Emergency clipping was performed on the day of hospitalization. Digital Imaging and Communications in Medicine (DICOM) data obtained from computed tomography angiography (CTA) scans were edited and converted to stereolithography (STL) file format, followed by production of 3D models of the cerebral aneurysm using the 3D printer. The mean time from hospitalization to the commencement of surgery was 242 min, whereas the mean time required for manufacturing the 3D model was 67 min. The average cost of each 3D model was 194 Japanese Yen. The time required for manufacturing the 3D models shortened to approximately 1 hour as experience with producing them increased. Favorable impressions of the use of the 3D models in clipping were reported by almost all neurosurgeons included in this study. Although 3D printing is often considered to involve huge costs and long manufacturing times, the method used in the present study requires shorter times and lower costs than conventional methods for manufacturing 3D cerebral aneurysm models, thus making it suitable for use in emergency clipping. PMID:27506842
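
    As an illustration of the DICOM-to-STL step, the sketch below extracts an isosurface from a CT angiography volume and writes a binary STL file using the open-source scikit-image and numpy-stl packages; the threshold, voxel spacing, and file name are hypothetical, and the authors' actual software pipeline is not specified in the abstract.

        import numpy as np
        from skimage import measure
        from stl import mesh  # numpy-stl

        def volume_to_stl(vol, threshold, spacing, out_path):
            """Extract an isosurface from a CT volume and save it as binary STL."""
            verts, faces, _, _ = measure.marching_cubes(vol, level=threshold, spacing=spacing)
            m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
            for i, tri in enumerate(faces):
                m.vectors[i] = verts[tri]   # three vertices per triangular facet
            m.save(out_path)

        # vol: 3-D numpy array of CT numbers (e.g., slices stacked with pydicom);
        # the threshold and 0.5 mm spacing below are placeholder values
        # volume_to_stl(vol, threshold=200.0, spacing=(0.5, 0.5, 0.5), out_path="aneurysm.stl")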

  20. A 3-Dimensional Model of Water-Bearing Sequences in the Dominguez Gap Region, Long Beach, California

    USGS Publications Warehouse

    Ponti, Daniel J.; Ehman, Kenneth D.; Edwards, Brian D.; Tinsley, John C.; Hildenbrand, Thomas; Hillhouse, John W.; Hanson, Randall T.; McDougall, Kristen; Powell, Charles L.; Wan, Elmira; Land, Michael; Mahan, Shannon; Sarna-Wojcicki, Andrei M.

    2007-01-01

    A 3-dimensional computer model of the Quaternary sequence stratigraphy in the Dominguez gap region of Long Beach, California, has been developed to provide a robust chronostratigraphic framework for hydrologic and tectonic studies. The model consists of 13 layers within a 16.5 by 16.1 km (10.25 by 10 mile) square area and extends downward to an altitude of -900 meters (-2952.76 feet). Ten sequences of late Pliocene to Holocene age are identified and correlated within the model. The primary data for building the model come from five reference core holes, extensive high-resolution seismic data obtained in San Pedro Bay, and logs from several hundred water and oil wells drilled in the region. The model is best constrained in the vicinity of the Dominguez gap seawater intrusion barrier, where a dense network of subsurface data exists. The resultant stratigraphic framework and geologic structure differ significantly from those proposed in earlier studies. An important new discovery from this approach is the recognition of ongoing tectonic deformation throughout nearly all of Quaternary time that has affected the geometry and character of the sequences. Anticlinal folding along a NW-SE trend, probably associated with Quaternary reactivation of the Wilmington anticline, has uplifted and thinned deposits along the fold crest, which intersects the Dominguez gap seawater barrier near Pacific Coast Highway. A W-NW trending fault system that approximately parallels the fold crest has also been identified. This fault progressively displaces all but the youngest sequences down to the north and serves as the southern termination of the classic Silverado aquifer. Uplift and erosion of fining-upward paralic sequences along the crest of the young fold have removed or thinned many of the fine-grained beds that serve to protect the underlying Silverado aquifer from seawater-contaminated shallow groundwater. As a result of this process, the potential exists for vertical migration of

  1. High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feiz Zarrin Ghalam, Ali

    Electron cloud is a low-density electron profile created inside the vacuum chamber of circular machines with positively charged beams. The electron cloud limits the peak current of the beam and degrades the beam's quality through luminosity degradation, emittance growth, and head-to-tail or bunch-to-bunch instability. The adverse effects of the electron cloud on long-term beam dynamics become more and more important as beams go to higher and higher energies. This problem has become a major concern in the design of many future circular machines, such as the Large Hadron Collider (LHC) under construction at the European Center for Nuclear Research (CERN). Because of the importance of the problem, several simulation models have been developed to model the long-term beam-electron cloud interaction. These models are based on the "single kick approximation", where the electron cloud is assumed to be concentrated in one thin slab around the ring. While this model is efficient in terms of computational cost, it does not reflect the real physical situation, as the forces exerted by the electron cloud on the beam are non-linear, contrary to this model's assumption. To address this limitation of existing codes, a new model is developed in this thesis to continuously model the beam-electron cloud interaction. The code is derived from a 3-D parallel Particle-In-Cell (PIC) model (QuickPIC) originally used for plasma wakefield acceleration research. To make the original model fit the circular machine environment, betatron and synchrotron equations of motion have been added to the code, and the effects of chromaticity and lattice structure have been included. QuickPIC is then benchmarked against one of the codes developed based on the single kick approximation (HEAD-TAIL) for the transverse spot size of the beam in the CERN LHC. The growth predicted by QuickPIC is less than that predicted by HEAD-TAIL. The code is then used to investigate the effect of electron cloud image charges on the long-term beam dynamics, particularly on the
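
    The difference between the single-kick approximation and continuous beam-cloud coupling can be caricatured in a few lines. Below is a toy tracker for one transverse coordinate with linear betatron focusing, where an electron-cloud kick is applied either once per turn (single-kick model) or every step (continuous model); the frequency, time step, and linear kick are hypothetical stand-ins, not QuickPIC physics.

        import numpy as np

        OMEGA_BETA = 2 * np.pi * 4.5e3   # betatron angular frequency [rad/s], illustrative

        def track(x, xp, dt, n_steps, cloud_kick, steps_per_kick):
            """Symplectic-Euler betatron tracking with an electron-cloud kick.

            steps_per_kick equal to the steps per turn mimics the single-kick model;
            steps_per_kick = 1 applies the cloud force continuously, every step.
            """
            for n in range(n_steps):
                xp -= OMEGA_BETA**2 * x * dt        # linear betatron focusing
                if n % steps_per_kick == 0:
                    xp += cloud_kick(x)             # e-cloud momentum kick
                x += xp * dt
            return x, xp

        # a toy linear cloud force (real cloud forces are strongly non-linear in x)
        linear_cloud = lambda x: -1.0e-4 * x
        x, xp = track(x=1e-3, xp=0.0, dt=1e-6, n_steps=100_000,
                      cloud_kick=linear_cloud, steps_per_kick=1000)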

  2. Use of 3-Dimensional Volumetric Modeling of Adrenal Gland Size in Patients with Primary Pigmented Nodular Adrenocortical Disease.

    PubMed

    Chrysostomou, P P; Lodish, M B; Turkbey, E B; Papadakis, G Z; Stratakis, C A

    2016-04-01

    Primary pigmented nodular adrenocortical disease (PPNAD) is a rare type of bilateral adrenal hyperplasia leading to hypercortisolemia. Adrenal nodularity is often appreciable with computed tomography (CT); however, accurate radiologic characterization of adrenal size in PPNAD has not been studied well. We used 3-dimensional (3D) volumetric analysis to characterize and compare adrenal size in PPNAD patients with and without Cushing's syndrome (CS). Patients diagnosed with PPNAD and their family members with known mutations in PRKAR1A were screened. CT scans were used to create 3D models of each adrenal. Criteria for biochemical diagnosis of CS included loss of diurnal variation and/or elevated midnight cortisol levels, and paradoxical increase in urinary free cortisol and/or urinary 17-hydroxysteroids after dexamethasone administration. Forty-five patients with PPNAD (24 females, 27.8±17.6 years) and 8 controls (19±3 years) were evaluated. 3D volumetric modeling of adrenal glands was performed in all. Thirty-eight of the 45 patients (84.4%) had CS. Their mean adrenal volume was 8.1 cc±4.1, versus 7.2 cc±4.5 for non-CS (p=0.643) and 8.0 cc±1.6 for controls. Mean values were corrected for body surface area: 4.7 cc/kg/m(2)±2.2 for CS and 3.9 cc/kg/m(2)±1.3 for non-CS (p=0.189). Adrenal volume and midnight cortisol in both groups were positively correlated, r=0.35, p=0.03. We conclude that adrenal volume measured by 3D CT in patients with PPNAD and CS was similar to that in those without CS, confirming empirical CT imaging-based observations. However, the association between adrenal volume and midnight cortisol levels may be used as a marker of which patients with PPNAD may develop CS, something that routine CT cannot do. PMID:27065461

  3. Surgical orthodontic treatment for a patient with advanced periodontal disease: evaluation with electromyography and 3-dimensional cone-beam computed tomography.

    PubMed

    Nakajima, Kan; Yamaguchi, Tetsutaro; Maki, Koutaro

    2009-09-01

    We report here the case of a woman with Class III malocclusion and advanced periodontal disease who was treated with surgical orthodontic correction. Functional recovery after orthodontic treatment is often monitored by serial electromyography of the masticatory muscles, whereas 3-dimensional cone-beam computed tomography can provide detailed structural information about, for example, periodontal bone defects. However, it is unclear whether the information obtained via these methods is sufficient to determine the treatment goal. Addressing this issue is particularly useful for patients with advanced periodontal disease, because treatment goals vary considerably between such patients. We used detailed information obtained by 3-dimensional cone-beam computed tomography to identify periodontal bone defects and set appropriate treatment goals for inclination of the incisors and mandibular surgery. Results for this patient included stable occlusion and improved facial esthetics. This case report illustrates the benefits of establishing treatment goals acceptable to the patient, based on precise 3-dimensional assessment of dentoalveolar bone, and of using masticatory muscle activity to monitor the stability of occlusion.

  4. The Keilson and Storer 3-dimensional (KS-3D) line shape model: applications to optical diagnostic in combustion media

    SciTech Connect

    Joubert, Pierre

    2008-10-22

    High-resolution infrared and Raman spectroscopies require refined spectral line shape models to account for all observed features. For instance, for gaseous mixtures of light molecules with heavy perturbers, drastic changes arise, particularly in the collision regime, resulting from the inhomogeneous effects due to the radiator speed-dependence of the collisional line broadening and line shifting parameters. Following our previous work concerning the collision regime, we have extended a new line shape model, the Keilson and Storer 3-dimensional (KS-3D) line shape model, to lower densities, where the Doppler contribution and the collisional confinement narrowing can no longer be neglected. The consequences for optical diagnostics, particularly for H2-N2 mixtures at high pressure and high temperature, are presented. The effects of collisional relaxation on the spectral line shapes are discussed.

  5. The Keilson and Storer 3-dimensional (KS-3D) line shape model: applications to optical diagnostic in combustion media

    NASA Astrophysics Data System (ADS)

    Joubert, Pierre

    2008-10-01

    High-resolution infrared and Raman spectroscopies require refined spectral line shape models to account for all observed features. For instance, for gaseous mixtures of light molecules with heavy perturbers, drastic changes arise, particularly in the collision regime, resulting from the inhomogeneous effects due to the radiator speed-dependence of the collisional line broadening and line shifting parameters. Following our previous work concerning the collision regime, we have extended a new line shape model, the Keilson and Storer 3-dimensional (KS-3D) line shape model, to lower densities, where the Doppler contribution and the collisional confinement narrowing can no longer be neglected. The consequences for optical diagnostics, particularly for H2-N2 mixtures at high pressure and high temperature, are presented. The effects of collisional relaxation on the spectral line shapes are discussed.
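
    As background for both records above, the one-dimensional Keilson and Storer collision kernel on which the KS-3D model builds is commonly quoted in the following form (notation assumed here: \bar{v} the most probable speed and \gamma \in [0, 1] the velocity-memory parameter, with \gamma \to 1 describing soft collisions and \gamma \to 0 complete thermalization):

        A(v' \to v) = \frac{1}{\sqrt{\pi}\,\tilde{v}}\,\exp\!\left[-\frac{(v - \gamma v')^{2}}{\tilde{v}^{2}}\right], \qquad \tilde{v} = \bar{v}\sqrt{1 - \gamma^{2}}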

  6. Numerical model of electromagnetic scattering off a subterranean 3-dimensional dielectric

    SciTech Connect

    Dease, C.G.; Didwall, E.M.

    1983-08-01

    As part of the effort to develop On-Site Inspection (OSI) techniques for verification of compliance to a Comprehensive Test Ban Treaty (CTBT), a computer code was developed to predict the interaction of an electromagnetic (EM) wave with an underground cavity. Results from the code were used to evaluate the use of surface electromagnetic exploration techniques for detection of underground cavities or rubble-filled regions characteristic of underground nuclear explosions.

  7. Fast time variations of supernova neutrino signals from 3-dimensional models

    DOE PAGES

    Lund, Tina; Wongwathanarat, Annop; Janka, Hans -Thomas; Muller, Ewald; Raffelt, Georg

    2012-11-19

    Here, we study supernova neutrino flux variations in the IceCube detector, using 3D models based on a simplified neutrino transport scheme. The hemispherically integrated neutrino emission shows significantly smaller variations compared with our previous study of 2D models, largely because of the reduced activity of the standing accretion shock instability in this set of 3D models which we interpret as a pessimistic extreme. For the studied cases, intrinsic flux variations up to about 100 Hz frequencies could still be detected in a supernova closer than about 2 kpc.

  8. Visualization of the 3-dimensional flow around a model with the aid of a laser knife

    NASA Technical Reports Server (NTRS)

    Borovoy, V. Y.; Ivanov, V. V.; Orlov, A. A.; Kharchenko, V. N.

    1984-01-01

    A method for visualizing the three-dimensional flow around models of various shapes in a wind tunnel at a Mach number of 5 is described. A laser provides a planar light flux such that any plane through the model can be selectively illuminated. The shape of shock waves and separation regions is then determined by the intensity of light scattered by soot particles in the flow.

  9. Remanent magnetization and 3-dimensional density model of the Kentucky anomaly region

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.; Myers, D. M.

    1984-01-01

    A three-dimensional model of the Kentucky body was developed to fit surface gravity and long wavelength aeromagnetic data. Magnetization and density parameters for the model are much like those of Mayhew et al. (1982). The magnetic anomaly due to the model at satellite altitude is shown to be much too small by itself to account for the anomaly measured by Magsat. It is demonstrated that the source region for the satellite anomaly is considerably more extensive than the Kentucky body sensu stricto. The extended source region is modeled first using prismatic model sources and then using dipole array sources. Magnetization directions for the source region found by inversion of various combinations of scalar and vector data are close to the main field direction, implying the lack of a strong remanent component. It is shown by simulation that in a case (such as this) where the geometry of the source is known, the direction of a strong remanent component, if present, is readily detectable, and by scalar data as readily as by vector data.

  10. A High Performance Pulsatile Pump for Aortic Flow Experiments in 3-Dimensional Models.

    PubMed

    Chaudhury, Rafeed A; Atlasman, Victor; Pathangey, Girish; Pracht, Nicholas; Adrian, Ronald J; Frakes, David H

    2016-06-01

    Aortic pathologies such as coarctation, dissection, and aneurysm represent a particularly urgent class of cardiovascular diseases. Computational simulations of aortic flows are growing increasingly important as tools for gaining understanding of these pathologies, as well as for planning their surgical repair. In vitro experiments are required to validate the simulations against real-world data, and the experiments require a pulsatile flow pump system that can provide physiologic flow conditions characteristic of the aorta. We designed a new piston-based pulsatile flow pump system that can generate high volume flow rates (850 mL/s), replicate physiologic waveforms, and pump high-viscosity fluids against large impedances. The system is also compatible with a broad range of fluid types, and is operable in magnetic resonance imaging environments. Performance of the system was validated using image processing-based analysis of piston motion as well as particle image velocimetry. The new system represents a more capable pumping solution for aortic flow experiments than other available designs, and can be manufactured at a relatively low cost. PMID:26983961
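
    To illustrate how a physiologic waveform can be commanded on a piston pump of this kind, the sketch below synthesizes an aortic-like flow rate from a truncated Fourier series and integrates it into a piston displacement profile; the harmonic coefficients, bore area, and clipping are hypothetical choices, not the published controller design.

        import numpy as np

        PERIOD = 0.85                                   # cardiac period [s], illustrative
        HARMONICS = [(1, 300.0, 0.0), (2, 120.0, 1.1), (3, 40.0, 2.3)]  # (n, mL/s, rad)

        def flow_waveform(t):
            """Aortic-like flow rate Q(t) [mL/s]: mean term plus a few harmonics."""
            q = np.full_like(t, 70.0)
            for n, amp, phase in HARMONICS:
                q += amp * np.cos(2 * np.pi * n * t / PERIOD - phase)
            return np.clip(q, 0.0, None)                # suppress retrograde flow in this toy

        def piston_position(t, bore_area_cm2=20.0):
            """Q = A dx/dt, so x(t) is the running integral of Q over the bore area."""
            dt = t[1] - t[0]
            return np.cumsum(flow_waveform(t)) * dt / bore_area_cm2  # mL/s == cm^3/s -> cm

        t = np.linspace(0.0, PERIOD, 1000)
        x = piston_position(t)   # displacement command for one forward stroke of the piston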

  11. 3-dimensional spatially organized PEG-based hydrogels for an aortic valve co-culture model

    PubMed Central

    Puperi, Daniel S.; Balaoing, Liezl R.; O’Connell, Ronan W.; West, Jennifer L.; Grande-Allen, K. Jane

    2015-01-01

    Physiologically relevant in vitro models are needed to study disease progression and to develop and screen potential therapeutic interventions for disease. Heart valve disease, in particular, has no early intervention or non-invasive treatment because there is a lack of understanding the cellular mechanisms which lead to disease. Here, we establish a novel, customizable synthetic hydrogel platform that can be used to study cell-cell interactions and the factors which contribute to valve disease. Spatially localized cell adhesive ligands bound in the scaffold promote cell growth and organization of valve interstitial cells and valve endothelial cells in 3D co-culture. Both cell types maintained phenotypes, homeostatic functions, and produced zonally localized extracellular matrix. This model extends the capabilities of in vitro research by providing a platform to perform direct contact co-culture with cells in their physiologically relevant spatial arrangement. PMID:26241755

  12. 3-DIMENSIONAL Geological Mapping and Modeling Activities at the Geological Survey of Norway

    NASA Astrophysics Data System (ADS)

    Jarna, A.; Bang-Kittilsen, A.; Haase, C.; Henderson, I. H. C.; Høgaas, F.; Iversen, S.; Seither, A.

    2015-10-01

    Geology and all geological structures are three-dimensional in space, and geology becomes four-dimensional when time is considered. GIS, databases, and 3D visualization software are therefore common tools used by geoscientists to view, analyse, model, interpret, and communicate geological data. The NGU (Geological Survey of Norway) is the national institution for the study of bedrock, mineral resources, surficial deposits, groundwater, and marine geology. Growing interest in 3D mapping and modelling is reflected in the increasing number of groups and researchers at NGU working with 3D geology. This paper highlights 3D geological modelling techniques and the use of these tools in bedrock, geophysical, urban, and groundwater studies at NGU, as well as online 3D visualisation. The examples show the use of a wide range of data, methods, and software, and an increased focus on the interpretation and communication of geology in 3D. The goal is to gradually expand the geospatial data infrastructure to include 3D data at the same level as 2D.

  13. Evaluation of 3-Dimensional Superimposition Techniques on Various Skeletal Structures of the Head Using Surface Models

    PubMed Central

    Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    Objectives To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. Methods Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. Results There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D<0.17 mm), as expected, followed by the AC and BZ superimpositions, which presented similar levels of accuracy (D<0.5 mm). 3P and 1Z were the least accurate superimpositions (D>0.79 mm). Although precision and reproducibility did not differ significantly (p>0.05), the detected structural changes differed significantly between techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Conclusions Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In
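
    A landmark-based rigid superimposition such as the 3P technique reduces to finding the best-fit rotation and translation between homologous point sets, for which the SVD-based Kabsch algorithm is the standard closed-form solution. The sketch below is a generic illustration with hypothetical landmark coordinates, not the software used in the study.

        import numpy as np

        def rigid_superimpose(P, Q):
            """Best-fit rotation R and translation t mapping point set P onto Q (Kabsch)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance of centered sets
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cQ - R @ cP
            return R, t

        # e.g. three homologous cranial-base landmarks per scan (coordinates hypothetical)
        pre  = np.array([[12.1,  4.0, 33.2], [18.7,  5.2, 31.0], [15.4, 10.9, 35.6]])
        post = np.array([[12.4,  4.1, 33.0], [19.0,  5.3, 30.8], [15.6, 11.0, 35.5]])
        R, t = rigid_superimpose(pre, post)
        aligned = pre @ R.T + t                     # pre-treatment landmarks in post space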

  14. A 3-dimensional in vitro model of epithelioid granulomas induced by high aspect ratio nanomaterials

    PubMed Central

    2011-01-01

    Background The most common causes of granulomatous inflammation are persistent pathogens and poorly-degradable irritating materials. A characteristic pathological reaction to intratracheal instillation, pharyngeal aspiration, or inhalation of carbon nanotubes is formation of epithelioid granulomas accompanied by interstitial fibrosis in the lungs. In the mesothelium, a similar response is induced by high aspect ratio nanomaterials, including asbestos fibers, following intraperitoneal injection. This asbestos-like behaviour of some engineered nanomaterials is a concern for their potential adverse health effects in the lungs and mesothelium. We hypothesize that high aspect ratio nanomaterials will induce epithelioid granulomas in nonadherent macrophages in 3D cultures. Results Carbon black particles (Printex 90) and crocidolite asbestos fibers were used as well-characterized reference materials and compared with three commercial samples of multiwalled carbon nanotubes (MWCNTs). Doses were identified in 2D and 3D cultures in order to minimize acute toxicity and to reflect realistic occupational exposures in humans and in previous inhalation studies in rodents. Under serum-free conditions, exposure of nonadherent primary murine bone marrow-derived macrophages to 0.5 μg/ml (0.38 μg/cm2) of crocidolite asbestos fibers or MWCNTs, but not carbon black, induced macrophage differentiation into epithelioid cells and formation of stable aggregates with the characteristic morphology of granulomas. Formation of multinucleated giant cells was also induced by asbestos fibers or MWCNTs in this 3D in vitro model. After 7-14 days, macrophages exposed to high aspect ratio nanomaterials co-expressed proinflammatory (M1) as well as profibrotic (M2) phenotypic markers. Conclusions Induction of epithelioid granulomas appears to correlate with high aspect ratio and complex 3D structure of carbon nanotubes, not with their iron content or surface area. This model offers a time- and cost

  15. Verification and transfer of thermal pollution model. Volume 3: Verification of 3-dimensional rigid-lid model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, presents model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth (e.g., natural or man-made inland lakes) because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free-surface model also provides surface height variations with time.

  16. Global simulation of canopy scale sun-induced chlorophyll fluorescence with a 3 dimensional radiative transfer model

    NASA Astrophysics Data System (ADS)

    Kobayashi, H.; Yang, W.; Ichii, K.

    2015-12-01

    Plant canopy scale sun-induced chlorophyll fluorescence (SIF) can be observed from satellites such as the Greenhouse gases Observing Satellite (GOSAT), the Orbiting Carbon Observatory-2 (OCO-2), and the Global Ozone Monitoring Experiment-2 (GOME-2), using Fraunhofer lines in the near infrared spectral domain [1]. SIF is used to infer the photosynthetic capacity of the plant canopy [2]. However, it is not well understood how leaf-level SIF emission contributes to the top-of-canopy directional SIF, because the satellites observe SIF in the near infrared spectral domain, where multiple scattering among leaves is not negligible. It is therefore necessary to quantify the fraction of emission for each satellite observation angle. The absorbed photosynthetically active radiation of sunlit leaves is about 100 times higher than that of shaded leaves, so the contributions of sunlit and shaded leaves to the canopy scale directional SIF emission should also be quantified. Here, we show the results of a global simulation of SIF using a 3-dimensional radiative transfer simulation with MODIS atmospheric (aerosol optical thickness) and land (land cover and leaf area index) products and forest landscape data sets prepared for each land cover category. The results are compared with satellite-based SIF (e.g., GOME-2) and the gross primary production empirically estimated from FLUXNET and remote sensing data.

  17. Verification and transfer of thermal pollution model. Volume 2: User's manual for 3-dimensional free-surface model

    NASA Technical Reports Server (NTRS)

    Lee, S. S.; Sengupta, S.; Tuann, S. Y.; Lee, C. R.

    1982-01-01

    The six-volume report describes the theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model, presents model verification at two sites, and provides a separate user's manual for each model. The 3-D model has two forms: free surface and rigid lid. The former, verified at Anclote Anchorage (FL), allows a free air/water interface and is suited for significant surface wave heights compared to mean water depth; e.g., estuaries and coastal regions. The latter, verified at Lake Keowee (SC), is suited for small surface wave heights compared to depth. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions.

  18. A 3-dimensional human embryonic stem cell (hESC)-derived model to detect developmental neurotoxicity of nanoparticles.

    PubMed

    Hoelting, Lisa; Scheinhardt, Benjamin; Bondarenko, Olesja; Schildknecht, Stefan; Kapitza, Marion; Tanavde, Vivek; Tan, Betty; Lee, Qian Yi; Mecking, Stefan; Leist, Marcel; Kadereit, Suzanne

    2013-04-01

    Nanoparticles (NPs) have been shown to accumulate in organs, cross the blood-brain barrier and placenta, and have the potential to elicit developmental neurotoxicity (DNT). Here, we developed a human embryonic stem cell (hESC)-derived 3-dimensional (3-D) in vitro model that allows for testing of potential developmental neurotoxicants. Early central nervous system PAX6(+) precursor cells were generated from hESCs and differentiated further within 3-D structures. The 3-D model was characterized for neural marker expression revealing robust differentiation toward neuronal precursor cells, and gene expression profiling suggested a predominantly forebrain-like development. Altered neural gene expression due to exposure to non-cytotoxic concentrations of the known developmental neurotoxicant, methylmercury, indicated that the 3-D model could detect DNT. To test for specific toxicity of NPs, chemically inert polyethylene NPs (PE-NPs) were chosen. They penetrated deep into the 3-D structures and impacted gene expression at non-cytotoxic concentrations. NOTCH pathway genes such as HES5 and NOTCH1 were reduced in expression, as well as downstream neuronal precursor genes such as NEUROD1 and ASCL1. FOXG1, a patterning marker, was also reduced. As loss of function of these genes results in severe nervous system impairments in mice, our data suggest that the 3-D hESC-derived model could be used to test for Nano-DNT.

  19. Role of preoperative 3-dimensional computed tomography reconstruction in depressed skull fractures treated with craniectomy: a case report of forensic interest.

    PubMed

    Viel, Guido; Cecchetto, Giovanni; Manara, Renzo; Cecchetto, Attilio; Montisci, Massimo

    2011-06-01

    Patients affected by cranial trauma with depressed skull fractures and increased intracranial pressure generally undergo neurosurgical intervention. Because craniotomy and craniectomy remove skull fragments and generate new fracture lines, they complicate forensic examination and sometimes prevent a clear identification of skull fracture etiology. A 3-dimensional reconstruction based on preoperative computed tomography (CT) scans, giving a picture of the injuries before surgical intervention, can help the forensic examiner identify the origin of a skull fracture and the means of its production. We report the case of a 41-year-old man presenting at the emergency department with a depressed skull fracture at the vertex and bilateral subdural hemorrhage. The patient underwent 2 neurosurgical interventions (craniotomy and craniectomy) but died after 40 days of hospitalization in an intensive care unit. At autopsy, the absence of various bone fragments did not allow us to establish whether the skull had been struck by a blunt object or had hit the ground with high kinetic energy. To analyze the bone injuries before craniectomy, a 3-dimensional CT reconstruction based on preoperative scans was performed. A comparative analysis of the autopsy and radiological data allowed us to differentiate surgical from traumatic injuries. Moreover, based on the shape and size of the depressed skull fracture (measured from the CT reformations), we inferred that the man had been struck by a cylindrical blunt object with a diameter of about 3 cm. PMID:21512384

  20. Effect of Heat-Inactivated Clostridium sporogenes and Its Conditioned Media on 3-Dimensional Colorectal Cancer Cell Models

    PubMed Central

    Bhave, Madhura Satish; Hassanbhai, Ammar Mansoor; Anand, Padmaja; Luo, Kathy Qian; Teoh, Swee Hin

    2015-01-01

    Traditional cancer treatments, such as chemotherapy and radiation therapy continue to have limited efficacy due to tumor hypoxia. While bacterial cancer therapy has the potential to overcome this problem, it comes with the risk of toxicity and infection. To circumvent these issues, this paper investigates the anti-tumor effects of non-viable bacterial derivatives of Clostridium sporogenes. These non-viable derivatives are heat-inactivated C. sporogenes bacteria (IB) and the secreted bacterial proteins in culture media, known as conditioned media (CM). In this project, the effects of IB and CM on CT26 and HCT116 colorectal cancer cells were examined on a 2-Dimensional (2D) and 3-Dimensional (3D) platform. IB significantly inhibited cell proliferation of CT26 to 6.3% of the control in 72 hours for the 2D monolayer culture. In the 3D spheroid culture, cell proliferation of HCT116 spheroids notably dropped to 26.2%. Similarly the CM also remarkably reduced the cell-proliferation of the CT26 cells to 2.4% and 20% in the 2D and 3D models, respectively. Interestingly the effect of boiled conditioned media (BCM) on the cells in the 3D model was less inhibitory than that of CM. Thus, the inhibitive effect of inactivated C. sporogenes and its conditioned media on colorectal cancer cells is established. PMID:26507312

  1. 3-dimensional Modeling of Electromagnetic and Physical Sources of Azimuthal Nonuniformities in Inductively Coupled Plasmas for Deposition

    NASA Astrophysics Data System (ADS)

    Lu, Junqing; Keiter, Eric R.; Kushner, Mark J.

    1998-10-01

    Inductively Coupled Plasmas (ICPs) are being used for a variety of deposition processes for microelectronics fabrication. Of particular concern in scaling these devices to large areas is maintaining azimuthal symmetry of the reactant fluxes. Sources of nonuniformity may be physical (e.g., gas injection and side pumping) or electromagnetic (e.g., transmission line effects in the antennas). In this paper, a 3-dimensional plasma equipment model, HPEM-3D,(M. J. Kushner, J. Appl. Phys. v.82, 5312 (1997).) is used to investigate physical and electromagnetic sources of azimuthal nonuniformities in deposition tools. An ionized metal physical vapor deposition (IMPVD) system will be investigated, in which transmission line effects in the coils produce an asymmetric plasma density. Long mean-free-path transport for sputtered neutrals and tensor conductivities have been added to HPEM-3D to address this system. Since the coil-generated ion flux drifts back to the target and sputters low-ionization-potential metal atoms, the asymmetry is reinforced by rapid ionization of the metal atoms.

  2. Molecular profiling of the invasive tumor microenvironment in a 3-dimensional model of colorectal cancer cells and ex vivo fibroblasts.

    PubMed

    Bullock, Marc D; Mellone, Max; Pickard, Karen M; Sayan, Abdulkadir Emre; Mitter, Richard; Primrose, John N; Packham, Graham K; Thomas, Gareth; Mirnezami, Alexander H

    2014-01-01

    Invading colorectal cancer (CRC) cells have acquired the capacity to break free from their sister cells, infiltrate the stroma, and remodel the extracellular matrix (ECM). Characterizing the biology of this phenotypically distinct group of cells could substantially improve our understanding of early events during the metastatic cascade. Tumor invasion is a dynamic process facilitated by bidirectional interactions between malignant epithelium and the cancer associated stroma. In order to examine cell-specific responses at the tumor stroma-interface we have combined organotypic co-culture and laser micro-dissection techniques. Organotypic models, in which key stromal constituents such as fibroblasts are 3-dimensionally co-cultured with cancer epithelial cells, are highly manipulatable experimental tools which enable invasion and cancer-stroma interactions to be studied in near-physiological conditions. Laser microdissection (LMD) is a technique which entails the surgical dissection and extraction of the various strata within tumor tissue, with micron level precision. By combining these techniques with genomic, transcriptomic and epigenetic profiling we aim to develop a deeper understanding of the molecular characteristics of invading tumor cells and surrounding stromal tissue, and in doing so potentially reveal novel biomarkers and opportunities for drug development in CRC. PMID:24836208

  3. Development of a high-throughput screening assay based on the 3-dimensional pannus model for rheumatoid arthritis.

    PubMed

    Ibold, Yvonne; Frauenschuh, Simone; Kaps, Christian; Sittinger, Michael; Ringe, Jochen; Goetz, Peter M

    2007-10-01

    The 3-dimensional (3-D) pannus model for rheumatoid arthritis (RA) is based on the interactive co-culture of cartilage and synovial fibroblasts (SFs). Besides the investigation of the pathogenesis of RA, it can be used to analyze the active profiles of antirheumatic pharmaceuticals and other bioactive substances under in vitro conditions. For a potential application in the industrial drug-screening process as a transitional step between 2-dimensional (2-D) cell-based assays and in vivo animal studies, the pannus model was developed into an in vitro high-throughput screening (HTS) assay. Using the CyBi-Disk workstation for parallel liquid handling, the main cell culture steps of cell seeding and cultivation were automated. Chondrocytes were isolated from articular cartilage and seeded directly into 96-well microplates in high-density pellets to ensure formation of cartilage-specific extracellular matrix (ECM). Cell seeding was performed automatically and manually to compare both processes regarding accuracy, reproducibility, consistency, and handling time. For automated cultivation of the chondrocyte pellet cultures, a sequential program was developed using the CyBio Control software to minimize shear forces and handling time. After 14 days of cultivation, the pannus model was completed by coating the cartilage pellets with a layer of human SFs. The effects due to automation in comparison to manual handling were analyzed by optical analysis of the pellets, histological and immunohistochemical staining, and real-time PCR. Automation of this in vitro model was successfully achieved and resulted in an improved quality of the generated pannus cultures by enhancing the formation of cartilage-specific ECM. In addition, automated cell seeding and media exchange increased the efficiency due to a reduction of labor intensity and handling time.

  4. Three-Dimensional Radiobiologic Dosimetry: Application of Radiobiologic Modeling to Patient-Specific 3-Dimensional Imaging–Based Internal Dosimetry

    PubMed Central

    Prideaux, Andrew R.; Song, Hong; Hobbs, Robert F.; He, Bin; Frey, Eric C.; Ladenson, Paul W.; Wahl, Richard L.; Sgouros, George

    2010-01-01

    Phantom-based and patient-specific imaging-based dosimetry methodologies have traditionally yielded mean organ-absorbed doses or spatial dose distributions over tumors and normal organs. In this work, radiobiologic modeling is introduced to convert the spatial distribution of absorbed dose into biologically effective dose and equivalent uniform dose parameters. The methodology is illustrated using data from a thyroid cancer patient treated with radioiodine. Methods Three registered SPECT/CT scans were used to generate 3-dimensional images of radionuclide kinetics (clearance rate) and cumulated activity. The cumulated activity image and corresponding CT scan were provided as input into an EGSnrc-based Monte Carlo calculation: The cumulated activity image was used to define the distribution of decays, and an attenuation image derived from CT was used to define the corresponding spatial tissue density and composition distribution. The rate images were used to convert the spatial absorbed dose distribution to a biologically effective dose distribution, which was then used to estimate a single equivalent uniform dose for segmented volumes of interest. Equivalent uniform dose was also calculated from the absorbed dose distribution directly. Results We validate the method using simple models; compare the dose-volume histogram with a previously analyzed clinical case; and give the mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for an illustrative case of a pediatric thyroid cancer patient with diffuse lung metastases. The mean absorbed dose, mean biologically effective dose, and equivalent uniform dose for the tumor were 57.7, 58.5, and 25.0 Gy, respectively. Corresponding values for normal lung tissue were 9.5, 9.8, and 8.3 Gy, respectively. Conclusion The analysis demonstrates the impact of radiobiologic modeling on response prediction. The 57% reduction in the equivalent dose value for the tumor reflects a high level of dose
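
    The radiobiologic relations underlying such a conversion are, in commonly used forms (symbols assumed here: \alpha, \beta the linear-quadratic parameters, \mu the repair rate, \lambda the effective clearance rate of the radionuclide, and BED_i the voxel values over N voxels):

        \mathrm{BED} = D\left(1 + \frac{\lambda}{\mu + \lambda}\,\frac{D}{\alpha/\beta}\right), \qquad \mathrm{EUD} = -\frac{1}{\alpha}\,\ln\!\left(\frac{1}{N}\sum_{i=1}^{N} e^{-\alpha\,\mathrm{BED}_i}\right)

    The first expression is the standard biologically effective dose for an exponentially decaying dose rate; the second defines the uniform dose yielding the same expected cell survival as the non-uniform distribution. Whether these are exactly the forms used in the paper is not stated in the abstract.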

  5. Evaluation of the anterior mandibular donor site one year after secondary reconstruction of an alveolar cleft: 3-dimensional analysis using cone-beam computed tomography.

    PubMed

    van Bilsen, M W T; Schreurs, R; Meulstee, J W; Kuijpers, M A R; Meijer, G J; Borstlap, W A; Bergé, S J; Maal, T J J

    2015-10-01

    The aim of this study was to analyse changes in the volume of the chin after harvest of a bone graft for secondary reconstruction of an alveolar cleft. Cone-beam computed tomographic (CT) scans of 27 patients, taken preoperatively and immediately and one year postoperatively, were analysed, and 3-dimensional hard-tissue reconstructions were made. The hard-tissue segmentation of the scan taken one year postoperatively was subtracted from the segmentation of the preoperative scan to calculate the alteration in the volume of bone at the donor site (chin). A centrally-orientated persistent concavity at the buccal side of the chin was found (mean (range) 160 (0-500) mm(3)). At the lingual side of the chin, a central concavity remained (mean (range) volume 20 (0-80) mm(3)). Remarkably, at the periphery of this concavity there was overgrowth of new bone (mean (range) volume 350 (0-1600) mm(3)). Re-attachment of the muscles of the tongue resulted in a significantly larger central lingual defect one year postoperatively (p=0.01). We also measured minor alterations in the volume of the chin at one year. Whether these alterations influence facial appearance and long-term bone quality will be the subject of further research.

  6. Normal growth and development of the lips: a 3-dimensional study from 6 years to adulthood using a geometric model

    PubMed Central

    FERRARIO, VIRGILIO F.; SFORZA, CHIARELLA; SCHMITZ, JOHANNES H.; CIUSA, VERONICA; COLOMBO, ANNA

    2000-01-01

    A 3-dimensional computerised system with landmark representation of the soft-tissue facial surface allows noninvasive and fast quantitative study of facial growth. The aims of the present investigation were (1) to provide reference data for selected dimensions of lips (linear distances and ratios, vermilion area, volume); (2) to quantify the relevant growth changes; and (3) to evaluate sex differences in growth patterns. The 3-dimensional coordinates of 6 soft-tissue landmarks on the lips were obtained by an optoelectronic instrument in a mixed longitudinal and cross-sectional study (2023 examinations in 1348 healthy subjects between 6 y of age and young adulthood). From the landmarks, several linear distances (mouth width, total vermilion height, total lip height, upper lip height), the vermilion height-to-mouth width ratio, some areas (vermilion of the upper lip, vermilion of the lower lip, total vermilion) and volumes (upper lip volume, lower lip volume, total lip volume) were calculated and averaged for age and sex. Male values were compared with female values by means of Student's t test. Within each age group all lip dimensions (distances, areas, volumes) were significantly larger in boys than in girls (P < 0.05), with some exceptions in the first age groups and coinciding with the earlier female growth spurt, whereas the vermilion height-to-mouth width ratio did not show a corresponding sexual dimorphism. Linear distances in girls had almost reached adult dimensions in the 13–14 y age group, while in boys a large increase was still to occur. The attainment of adult dimensions was faster in the upper than in the lower lip, especially in girls. The method used in the present investigation allowed the noninvasive evaluation of a large sample of nonpatient subjects, leading to the definition of 3-dimensional normative data. Data collected in the present study could represent a data base for the quantitative description of human lip morphology from childhood to

  7. Normal growth and development of the lips: a 3-dimensional study from 6 years to adulthood using a geometric model.

    PubMed

    Ferrario, V F; Sforza, C; Schmitz, J H; Ciusa, V; Colombo, A

    2000-04-01

    A 3-dimensional computerised system with landmark representation of the soft-tissue facial surface allows noninvasive and fast quantitative study of facial growth. The aims of the present investigation were (1) to provide reference data for selected dimensions of lips (linear distances and ratios, vermilion area, volume); (2) to quantify the relevant growth changes; and (3) to evaluate sex differences in growth patterns. The 3-dimensional coordinates of 6 soft-tissue landmarks on the lips were obtained by an optoelectronic instrument in a mixed longitudinal and cross-sectional study (2023 examinations in 1348 healthy subjects between 6 y of age and young adulthood). From the landmarks, several linear distances (mouth width, total vermilion height, total lip height, upper lip height), the vermilion height-to-mouth width ratio, some areas (vermilion of the upper lip, vermilion of the lower lip, total vermilion) and volumes (upper lip volume, lower lip volume, total lip volume) were calculated and averaged for age and sex. Male values were compared with female values by means of Student's t test. Within each age group all lip dimensions (distances, areas, volumes) were significantly larger in boys than in girls (P < 0.05), with some exceptions in the first age groups and coinciding with the earlier female growth spurt, whereas the vermilion height-to-mouth width ratio did not show a corresponding sexual dimorphism. Linear distances in girls had almost reached adult dimensions in the 13-14 y age group, while in boys a large increase was still to occur. The attainment of adult dimensions was faster in the upper than in the lower lip, especially in girls. The method used in the present investigation allowed the noninvasive evaluation of a large sample of nonpatient subjects, leading to the definition of 3-dimensional normative data. Data collected in the present study could represent a data base for the quantitative description of human lip morphology from childhood to

  8. Noninvasive 3-dimensional imaging of liver regeneration in a mouse model of hereditary tyrosinemia type 1 using the sodium iodide symporter gene.

    PubMed

    Hickey, Raymond D; Mao, Shennen A; Amiot, Bruce; Suksanpaisan, Lukkana; Miller, Amber; Nace, Rebecca; Glorioso, Jaime; O'Connor, Michael K; Peng, Kah Whye; Ikeda, Yasuhiro; Russell, Stephen J; Nyberg, Scott L

    2015-04-01

    Cell transplantation is a potential treatment for the many liver disorders that are currently only curable by organ transplantation. However, one of the major limitations of hepatocyte (HC) transplantation is an inability to monitor cells longitudinally after injection. We hypothesized that the thyroidal sodium iodide symporter (NIS) gene could be used to visualize transplanted HCs in a rodent model of inherited liver disease: hereditary tyrosinemia type 1. Wild-type C57Bl/6J mouse HCs were transduced ex vivo with a lentiviral vector containing the mouse Slc5a5 (NIS) gene controlled by the thyroxine-binding globulin promoter. NIS-transduced cells could robustly concentrate radiolabeled iodine in vitro, with lentiviral transduction efficiencies greater than 80% achieved in the presence of dexamethasone. Next, NIS-transduced HCs were transplanted into congenic fumarylacetoacetate hydrolase knockout mice, and this resulted in the prevention of liver failure. NIS-transduced HCs were readily imaged in vivo by single-photon emission computed tomography, and this demonstrated for the first time noninvasive 3-dimensional imaging of regenerating tissue in individual animals over time. We also tested the efficacy of primary HC spheroids engrafted in the liver. With the NIS reporter, robust spheroid engraftment and survival could be detected longitudinally after direct parenchymal injection, and this thereby demonstrated a novel strategy for HC transplantation. This work is the first to demonstrate the efficacy of NIS imaging in the field of HC transplantation. We anticipate that NIS labeling will allow noninvasive and longitudinal identification of HCs and stem cells in future studies related to liver regeneration in small and large preclinical animal models.

  9. ABSTRACTION OF INFORMATION FROM 2- AND 3-DIMENSIONAL PORFLOW MODELS INTO A 1-D GOLDSIM MODEL - 11404

    SciTech Connect

    Taylor, G.; Hiergesell, R.

    2010-11-16

    The Savannah River National Laboratory has developed a 'hybrid' approach to Performance Assessment modeling which has been used for a number of Performance Assessments. This hybrid approach uses a multi-dimensional modeling platform (PorFlow) to develop deterministic flow fields and perform contaminant transport, while the GoldSim modeling platform is used to develop the Sensitivity and Uncertainty analyses. Because these codes perform complementary tasks, it is essential that they produce very similar results for the deterministic cases. This paper discusses two very different waste forms, one with no engineered barriers and one with engineered barriers, each of which presents different challenges to the abstraction of data. The hybrid approach to Performance Assessment modeling used at the SRNL uses a 2-D unsaturated zone (UZ) and a 3-D saturated zone (SZ) model in the PorFlow modeling platform. The UZ model consists of the waste zone and the unsaturated zone between the waste zone and the water table. The SZ model consists of source cells beneath the waste form to the points of interest. Both models contain 'buffer' cells so that modeling domain boundaries do not adversely affect the calculation. The information pipeline between the two models is the contaminant flux. The domain contaminant flux from the UZ model, typically in units of moles (or Curies) per year, is used as a boundary condition for the source cells in the SZ. The GoldSim modeling component of the hybrid approach is an integrated UZ-SZ model. The model is a 1-D representation of the SZ, typically 1-D in the UZ, but, depending on the waste form being analyzed, may contain pseudo-2-D elements, as discussed below. A waste form at the Savannah River Site (SRS) which has no engineered barriers is commonly referred to as a slit trench. A slit trench, as its name implies, is an unlined trench, typically 6 m deep, 6 m wide, and 200 m long. Low level waste consisting of soil, debris, rubble, wood

  10. The use of TOUGH2 for the LBL/USGS 3-dimensional site-scale model of Yucca Mountain, Nevada

    SciTech Connect

    Bodvarsson, G.; Chen, G.; Haukwa, C.; Kwicklis, E.

    1995-12-31

    The three-dimensional site-scale numerical model of the unsaturated zone at Yucca Mountain is under continuous development and calibration through a collaborative effort between Lawrence Berkeley Laboratory (LBL) and the United States Geological Survey (USGS). The site-scale model covers an area of about 30 km² and is bounded by major fault zones to the west (Solitario Canyon Fault), east (Bow Ridge Fault), and perhaps to the north by an unconfirmed fault (Yucca Wash Fault). The model consists of about 5,000 grid blocks (elements) with nearly 20,000 connections between them; the grid was designed to represent the most prevalent geological and hydro-geological features of the site, including major faults and the layering and bedding of the hydro-geological units. Submodels are used to investigate specific hypotheses and their importance before incorporation into the three-dimensional site-scale model. The primary objectives of the three-dimensional site-scale model are to: (1) quantify moisture, gas, and heat flows under ambient conditions at Yucca Mountain, (2) help guide the site-characterization effort (primarily by USGS) in terms of additional data needs and identify regions of the mountain where sufficient data have been collected, and (3) provide a reliable model of Yucca Mountain that is validated by repeated predictions of conditions in new boreholes and the ESF and therefore has the confidence of the public and scientific community. The computer code TOUGH2, developed by K. Pruess at LBL, was used along with the three-dimensional site-scale model to generate these results. In this paper, we also describe the three-dimensional site-scale model, emphasizing the numerical grid development, and then show some results in terms of moisture, gas, and heat flow.

  11. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are also used for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less, in both time and computer memory, than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For the dense matrix multiplication and factorization related to the model update, we use the PLASMA library, which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices, tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally, we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
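
    For readers unfamiliar with the data-space formulation mentioned above, a generic statement of the idea (a sketch, not HexMT's exact expressions): with N_d data, N_m model parameters, Jacobian J, data covariance C_d, and a model covariance C_m built from the inverse of the regularization operator, the Gauss-Newton step can be obtained from an N_d x N_d system instead of an N_m x N_m one:

```latex
% Generic data-space Gauss-Newton update: solve a small N_d-by-N_d
% system for beta, then map the step back into model space.
\left( \lambda C_d + J C_m J^{\mathsf{T}} \right) \beta = \hat{d},
\qquad
\Delta m = C_m J^{\mathsf{T}} \beta
```

    Since N_d is usually far smaller than N_m in 3-D MT inversion, both factorization time and memory drop accordingly, which is the cost reduction the abstract describes; applying C_m to the Jacobian corresponds to the factored regularization matrix mentioned above.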

  12. 3DHYDROGEOCHEM: A 3-DIMENSIONAL MODEL OF DENSITY-DEPENDENT SUBSURFACE FLOW AND THERMAL MULTISPECIES-MULTICOMPONENT HYDROGEOCHEMICAL TRANSPORT

    EPA Science Inventory

    This report presents a three-dimensional finite-element numerical model designed to simulate chemical transport in subsurface systems with temperature effect taken into account. The three-dimensional model is developed to provide (1) a tool of application, with which one is able...

  13. Computation of synthetic seismograms in a 3 dimensional Earth and inversion of eigenfrequency and Q quality factor datasets of normal modes

    NASA Astrophysics Data System (ADS)

    Roch, Julien; Clevede, Eric; Roult, Genevieve

    2010-05-01

    The 26 December 2004 Sumatra-Andaman event is the third biggest earthquake ever recorded, but the first recorded with high-quality broad-band seismometers. Such an earthquake offered a good opportunity for studying the normal modes of the Earth, particularly the gravest ones (frequency lower than 1 mHz), which provide important information on the deep Earth. The splitting of some modes has been carefully analyzed. The eigenfrequencies and the Q quality factors of particular singlets have been retrieved with unprecedented precision. The eigenfrequencies of some singlets exhibit a clear shift when compared to the theoretical eigenfrequencies. Some core modes, such as the 3S2 mode, present an anomalous splitting, that is, a splitting width much larger than expected. Such anomalous splitting is presently attributed to the existence of lateral heterogeneities in the inner core. We need an accurate model of the whole Earth and a method to compute synthetic seismograms in order to compare synthetic and observed data and to explain the behavior of such modes. Synthetic seismograms are computed by normal-mode summation using a perturbative method developed up to second order in amplitude and up to third order in frequency (the HOPT method). The last step consists of inverting both eigenfrequency and Q quality factor datasets in order to better constrain the deep Earth structure, especially the inner core. In order to find models of acceptable data fit in a multidimensional parameter space, we use the neighborhood algorithm, a derivative-free search method. It is particularly well adapted to our case (a nonlinear problem) and is easy to tune, with only 2 parameters. Our purpose is to find an ensemble of models that fit the data rather than a unique model.
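
    For orientation, the lowest-order form of a synthetic seismogram built by normal-mode summation, before the second- and third-order corrections of the HOPT method, is a damped cosine sum over modes (a textbook sketch, not the authors' full expressions):

```latex
% Each mode k rings at its eigenfrequency omega_k and decays at a rate
% set by its quality factor Q_k; A_k depends on source and receiver.
u(t) = \sum_{k} A_k \cos(\omega_k t)\,
       \exp\!\left( -\frac{\omega_k t}{2 Q_k} \right)
```

    The eigenfrequencies and quality factors appearing here are precisely the singlet quantities retrieved from the data and inverted for deep-Earth structure.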

  14. Direct measurement of the 3-dimensional DNA lesion distribution induced by energetic charged particles in a mouse model tissue

    PubMed Central

    Mirsch, Johanna; Tommasino, Francesco; Frohns, Antonia; Conrad, Sandro; Durante, Marco; Scholz, Michael; Friedrich, Thomas; Löbrich, Markus

    2015-01-01

    Charged particles are increasingly used in cancer radiotherapy and contribute significantly to the natural radiation risk. The difference in the biological effects of high-energy charged particles compared with X-rays or γ-rays is determined largely by the spatial distribution of their energy deposition events. Part of the energy is deposited in a densely ionizing manner in the inner part of the track, with the remainder spread out more sparsely over the outer track region. Our knowledge about the dose distribution is derived solely from modeling approaches and physical measurements in inorganic material. Here we exploited the exceptional sensitivity of γH2AX foci technology and quantified the spatial distribution of DNA lesions induced by charged particles in a mouse model tissue. We observed that charged particles damage tissue nonhomogeneously, with single cells receiving high doses and many other cells exposed to isolated damage resulting from high-energy secondary electrons. Using calibration experiments, we transformed the 3D lesion distribution into a dose distribution and compared it with predictions from modeling approaches. We obtained a radial dose distribution with sub-micrometer resolution that decreased with increasing distance to the particle path following a 1/r² dependence. The analysis further revealed the existence of a background dose at larger distances from the particle path arising from overlapping dose deposition events from independent particles. Our study provides, to our knowledge, the first quantification of the spatial dose distribution of charged particles in biologically relevant material, and will serve as a benchmark for biophysical models that predict the biological effects of these particles. PMID:26392532
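
    In equation form, the radial profile summarized above behaves approximately as follows (an illustrative sketch; the calibrated constants belong to the paper and are not reproduced here):

```latex
% Radial dose around a particle track: a 1/r^2 fall-off plus a background
% term from overlapping deposition events of independent particles.
D(r) \approx \frac{k}{r^{2}} + D_{\mathrm{bg}}
```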

  16. New 3-dimensional CFD modeling of CO2 and H2S simultaneous stripping from water within PVDF hollow fiber membrane contactor

    NASA Astrophysics Data System (ADS)

    Bahlake, Ahmad; Farivar, Foad; Dabir, Bahram

    2016-07-01

    In this paper, a 3-dimensional model of simultaneous stripping of carbon dioxide (CO2) and hydrogen sulfide (H2S) from water using a hollow fiber membrane made of polyvinylidene fluoride is developed. Water containing CO2 and H2S enters the membrane as the feed, while pure nitrogen flows in the shell side of the shell-and-tube hollow fiber module as the solvent. In previous approaches to modeling hollow fiber membranes, just one fiber was modeled and the results were extrapolated to the whole shell-and-tube system. In this research, the whole hollow fiber shell-and-tube module is modeled to reduce these errors. Simulation results showed that increasing the velocity of the solvent flow and decreasing the velocity of the feed lead to an increase in system yield; however, the feed velocity likely affects the process more than the velocity of the gaseous solvent. In addition, H2S stripping has a higher yield than CO2 stripping. The model is compared to previous modeling methods and shown to be more accurate. Finally, the effect of feed temperature is studied using the response surface method, and the operating conditions of feed temperature, feed velocity, and solvent velocity are optimized according to their synergistic effects. Simulation results show that, at the optimum operating conditions, the removal percentages of H2S and CO2 are 27% and 21%, respectively.

  17. Comparative Validity and Reproducibility Study of Various Landmark-Oriented Reference Planes in 3-Dimensional Computed Tomographic Analysis for Patients Receiving Orthognathic Surgery

    PubMed Central

    Lin, Hsiu-Hsia; Chuang, Ya-Fang; Weng, Jing-Ling; Lo, Lun-Jou

    2015-01-01

    Background Three-dimensional computed tomographic imaging has become popular in clinical evaluation, treatment planning, surgical simulation, and outcome assessment for maxillofacial intervention. The purposes of this study were to investigate whether there is any correlation among landmark-based horizontal reference planes and to validate the reproducibility and reliability of landmark identification. Materials and Methods Preoperative and postoperative cone-beam computed tomographic images of patients who had undergone orthognathic surgery were collected. Landmark-oriented reference planes including the Frankfort horizontal plane (FHP) and the lateral semicircular canal plane (LSP) were established. Four FHPs were defined by selecting 3 points from the orbitale, porion, or midpoint of paired points. The LSP passed through both the lateral semicircular canal points and nasion. The distances between the maxillary or mandibular teeth and the reference planes were measured, and the differences between the 2 sides were calculated and compared. The precision in locating the landmarks was evaluated by performing repeated tests, and the intraobserver reproducibility and interobserver reliability were assessed. Results A total of 30 patients with facial deformity and malocclusion—10 patients with facial symmetry, 10 patients with facial asymmetry, and 10 patients with cleft lip and palate—were recruited. Comparing the differences among the 5 reference planes showed no statistically significant difference among all patient groups. Regarding intraobserver reproducibility, the mean differences in the 3 coordinates varied from 0 to 0.35 mm, with correlation coefficients between 0.96 and 1.0, showing high correlation between repeated tests. Regarding interobserver reliability, the mean differences among the 3 coordinates varied from 0 to 0.47 mm, with correlation coefficients between 0.88 and 1.0, exhibiting high correlation between the different examiners. Conclusions The

  18. Computational Modeling of Tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Tanner, John A. (Compiler)

    1995-01-01

    This document contains presentations and discussions from the joint UVA/NASA Workshop on Computational Modeling of Tires. The workshop attendees represented NASA, the Army and Air Force, tire companies, commercial software developers, and academia. The workshop objectives were to assess the state of technology in the computational modeling of tires and to provide guidelines for future research.

  19. Application of 3-dimensional printing in hand surgery for production of a novel bone reduction clamp.

    PubMed

    Fuller, Sam M; Butz, Daniel R; Vevang, Curt B; Makhlouf, Mansour V

    2014-09-01

    Three-dimensional printing is being rapidly incorporated in the medical field to produce external prosthetics for improved cosmesis and fabricated molds to aid in presurgical planning. Biomedically engineered products from 3-dimensional printers are also utilized as implantable devices for knee arthroplasty, airway orthoses, and other surgical procedures. Although at first expensive and conceptually difficult to construct, 3-dimensional printing is now becoming more affordable and widely accessible. In hand surgery, like many other specialties, new or customized instruments would be desirable; however, the overall production cost restricts their development. We are presenting our step-by-step experience in creating a bone reduction clamp for finger fractures using 3-dimensional printing technology. Using free, downloadable software, a 3-dimensional model of a bone reduction clamp for hand fractures was created based on the senior author's (M.V.M.) specific design, previous experience, and preferences for fracture fixation. Once deemed satisfactory, the computer files were sent to a 3-dimensional printing company for the production of the prototypes. Multiple plastic prototypes were made and adjusted, affording a fast, low-cost working model of the proposed clamp. Once a workable design was obtained, a printing company produced the surgical clamp prototype directly from the 3-dimensional model represented in the computer files. This prototype was used in the operating room, meeting the expectations of the surgeon. Three-dimensional printing is affordable and offers the benefits of reducing production time and nurturing innovations in hand surgery. This article presents a step-by-step description of our design process using online software programs and 3-dimensional printing services. As medical technology advances, it is important that hand surgeons remain aware of available resources, are knowledgeable about how the process works, and are able to take advantage of

  20. Cardiothoracic Applications of 3-dimensional Printing.

    PubMed

    Giannopoulos, Andreas A; Steigner, Michael L; George, Elizabeth; Barile, Maria; Hunsaker, Andetta R; Rybicki, Frank J; Mitsouras, Dimitris

    2016-09-01

    Medical 3-dimensional (3D) printing is emerging as a clinically relevant imaging tool in directing preoperative and intraoperative planning in many surgical specialties and will therefore likely lead to interdisciplinary collaboration between engineers, radiologists, and surgeons. Data from standard imaging modalities such as computed tomography, magnetic resonance imaging, echocardiography, and rotational angiography can be used to fabricate life-sized models of human anatomy and pathology, as well as patient-specific implants and surgical guides. Cardiovascular 3D-printed models can improve diagnosis and allow for advanced preoperative planning. The majority of applications reported involve congenital heart diseases and valvular and great vessels pathologies. Printed models are suitable for planning both surgical and minimally invasive procedures. Added value has been reported toward improving outcomes, minimizing perioperative risk, and developing new procedures such as transcatheter mitral valve replacements. Similarly, thoracic surgeons are using 3D printing to assess invasion of vital structures by tumors and to assist in diagnosis and treatment of upper and lower airway diseases. Anatomic models enable surgeons to assimilate information more quickly than image review, choose the optimal surgical approach, and achieve surgery in a shorter time. Patient-specific 3D-printed implants are beginning to appear and may have significant impact on cosmetic and life-saving procedures in the future. In summary, cardiothoracic 3D printing is rapidly evolving and may be a potential game-changer for surgeons. The imager who is equipped with the tools to apply this new imaging science to cardiothoracic care is thus ideally positioned to innovate in this new emerging imaging modality.

  1. Computer Models of Proteins

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Dr. Marc Pusey (seated) and Dr. Craig Kundrot use computers to analyze x-ray maps and generate three-dimensional models of protein structures. With this information, scientists at Marshall Space Flight Center can learn how proteins are made and how they work. The computer screen depicts a protein structure as a ball-and-stick model. Other models depict the actual volume occupied by the atoms, or the ribbon-like structures that are crucial to a protein's function.

  2. General design method for 3-dimensional, potential flow fields. Part 2: Computer program DIN3D1 for simple, unbranched ducts

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1985-01-01

    The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.

  3. Evaluation of the middle cerebral artery occlusion techniques in the rat by in-vitro 3-dimensional micro- and nano computed tomography

    PubMed Central

    2010-01-01

    Background Animal models of focal cerebral ischemia are widely used in stroke research. The purpose of our study was to evaluate and compare the cerebral macro- and microvascular architecture of rats in two different models of permanent middle cerebral artery occlusion using an innovative quantitative micro- and nano-CT imaging technique. Methods Four hours of middle cerebral artery occlusion was performed in rats using the macrosphere method or the suture technique. After contrast perfusion, brains were isolated and scanned en bloc using micro-CT at (8 μm)³ or nano-CT at (500 nm)³ voxel size to generate 3D images of the cerebral vasculature. The arterial vascular volume fraction and gray scale attenuation were determined, and the significance of differences in measurements was tested with analysis of variance (ANOVA). Results Micro-CT provided quantitative information on vascular morphology. Micro- and nano-CT proved able to visualize and differentiate the vascular occlusion territories produced in both models of cerebral ischemia. The suture technique leads to a remarkable decrease in the intravascular volume fraction of the middle cerebral artery perfusion territory. Blocking the middle cerebral artery with macrospheres, the vascular volume fraction of the involved hemisphere decreased significantly (p < 0.001), independently of the number of macrospheres, and was comparable to the suture method. We established gray scale measurements by which focal cerebral ischemia could be radiographically categorized (p < 0.001). Nano-CT imaging demonstrates collateral perfusion related to different occluded vessel territories after macrosphere perfusion. Conclusion Micro- and nano-CT imaging is feasible for analysis and differentiation of different models of focal cerebral ischemia in rats. PMID:20509884
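
    The arterial vascular volume fraction reported above is, in essence, the fraction of voxels occupied by contrast-filled vessels within a region-of-interest mask. A minimal sketch with invented arrays (not the authors' pipeline):

```python
import numpy as np

# Hypothetical segmented micro-CT data: boolean voxel masks.
rng = np.random.default_rng(0)
vessels = rng.random((64, 64, 64)) > 0.97      # contrast-filled vasculature
roi = np.zeros((64, 64, 64), dtype=bool)
roi[:, :, :32] = True                          # one hemisphere as the ROI

def vascular_volume_fraction(vessel_mask, roi_mask):
    """Fraction of ROI voxels occupied by vessels."""
    return vessel_mask[roi_mask].mean()

print(f"VVF = {vascular_volume_fraction(vessels, roi):.4f}")
```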

  4. Computer Modeling and Simulation

    SciTech Connect

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of the scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  5. CMS computing model evolution

    NASA Astrophysics Data System (ADS)

    Grandi, C.; Bonacorsi, D.; Colling, D.; Fisk, I.; Girone, M.

    2014-06-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.

  6. Computationally modeling interpersonal trust

    PubMed Central

    Lee, Jin Joo; Knox, W. Bradley; Wormwood, Jolie B.; Breazeal, Cynthia; DeSteno, David

    2013-01-01

    We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust. PMID:24363649
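
    A self-contained sketch of the kind of temporal modeling described above: a hidden Markov model scored with the forward algorithm over a sequence of discrete nonverbal-cue observations. The cue alphabet, state count, and probabilities are invented for illustration and are not the authors' fitted models.

```python
import numpy as np

# Toy HMM over nonverbal cues (alphabet and parameters are hypothetical).
cues = {"face_touch": 0, "arms_crossed": 1, "lean_back": 2, "hand_gesture": 3}
pi = np.array([0.6, 0.4])            # initial state distribution
A = np.array([[0.7, 0.3],            # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.1, 0.2, 0.2, 0.5],  # emission probabilities
              [0.4, 0.3, 0.2, 0.1]]) # rows: hidden states, cols: cues

def log_likelihood(obs):
    """Forward algorithm: log P(observation sequence | model)."""
    alpha = pi * B[:, obs[0]]
    log_p = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()              # rescale to avoid numerical underflow
        log_p += np.log(s)
        alpha /= s
    return log_p + np.log(alpha.sum())

seq = [cues[c] for c in ("hand_gesture", "face_touch", "arms_crossed")]
print(log_likelihood(seq))
```

    Comparing such log-likelihoods under models trained on high-trust versus low-trust interactions is one simple way to turn learned cue sequences into a classifier.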

  7. A thermodynamic and mechanical model for formation of the Solar System via 3-dimensional collapse of the dusty pre-solar nebula

    NASA Astrophysics Data System (ADS)

    Hofmeister, Anne M.; Criss, Robert E.

    2012-03-01

    The fundamental and shared rotational characteristics of the Solar System (nearly circular, co-planar orbits and mostly upright axial spins of the planets) record conditions of origin, yet are not explained by prevailing 2-dimensional disk models. Current planetary spin and orbital rotational energies (R.E.) each nearly equal and linearly depend on the gravitational self-potential of formation (Ug), revealing mechanical energy conservation. We derive -ΔUg ≅ ΔR.E. and stability criteria from thermodynamic principles, and parlay these relationships into a detailed model of simultaneous accretion of the protoSun and planets from the dust-bearing 3-d pre-solar nebula (PSN). Gravitational heating is insignificant because Ug is negative, the 2nd law of thermodynamics must be fulfilled, and ideal gas conditions pertain to the rarefied PSN until the objects were nearly fully formed. Combined conservation of angular momentum and mechanical energy during 3-dimensional collapse of spheroidal dust shells in a contracting nebula provides ΔR.E. ≅ R.E. for the central body, whereas for formation of orbiting bodies, ΔR.E. ≅ R.E._f(1 - I_f/I_i), where I is the moment of inertia and the subscripts i and f denote initial and final states. Orbital data for the inner planets follow 0.04 × R.E._f ≅ -Ug, which confirms conservation of angular momentum. Significant loss of spin, attributed to viscous dissipation during differential rotation, masks the initial spin of the un-ignited protoSun predicted by R.E. = -Ug. Heat production occurs after nearly final sizes are reached, via mechanisms such as shear during differential rotation and radioactivity. We focus on the dilute stage, showing that the PSN was compositionally graded due to light molecules diffusing preferentially, providing the observed planetary chemistry, and set limits on PSN mass, density, and temperature. From measured planetary masses and orbital characteristics, accounting for dissipation of spin, we deduce mechanisms and the sequence of converting a 3-d dusty cloud to the present 2-d

  8. [3-dimensional documentation of wound-healing].

    PubMed

    Körber, A; Grabbe, S; Dissemond, J

    2006-04-01

    The objective evaluation of the course of wound-healing is a substantial parameter for quality assurance in the modern management of chronic wounds. Established procedures are based exclusively on two-dimensional measurement of the wound surface by planimetry or digital photo documentation combined with a metric statement of size; an objective method for evaluating the volume of chronic wounds has so far been missing. By combining digital photography, an optical grid applied by means of a digital scanner, and image-processing software, in cooperation with the company RSI, we were able to produce accurate 3-dimensional documentation of chronic wounds (DigiSkin). The generated scatter-plots allow a visual, computer-assisted 3-dimensional measurement and documentation of chronic wounds. In comparison with available systems, it is now possible for the first time to objectify the volume changes of a chronic wound. On the basis of a case report of a female patient with a venous leg ulcer treated with vacuum closure therapy before and after a mesh-graft transplantation, we describe the advantages and the resulting scientific use of this new, objective wound documentation system in clinical practice. PMID:16575675

  9. Computer Modeling Of Atomization

    NASA Technical Reports Server (NTRS)

    Giridharan, M.; Ibrahim, E.; Przekwas, A.; Cheuch, S.; Krishnan, A.; Yang, H.; Lee, J.

    1994-01-01

    Improved mathematical models based on fundamental principles of conservation of mass, energy, and momentum were developed for use in computer simulation of the atomization of jets of liquid fuel in rocket engines. The models are also used to study atomization in terrestrial applications and prove especially useful in designing improved industrial sprays: humidifier water sprays, chemical process sprays, and sprays of molten metal. Because these improved mathematical models are based on first principles, they are minimally dependent on empirical correlations and better able to represent the hot-flow conditions that prevail in rocket engines and are too severe to be accessible for detailed experimentation.

  10. Understanding student computational thinking with computational modeling

    NASA Astrophysics Data System (ADS)

    Aiken, John M.; Caballero, Marcos D.; Douglas, Scott S.; Burk, John B.; Scanlon, Erin M.; Thoms, Brian D.; Schatz, Michael F.

    2013-01-01

    Recently, the National Research Council's framework for next generation science standards highlighted "computational thinking" as one of its "fundamental practices". Ninth-grade students taking a physics course that employed Arizona State University's Modeling Instruction curriculum were taught to construct computational models of physical systems. Student computational thinking was assessed using a proctored programming assignment, a written essay, and a series of think-aloud interviews in which the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Roughly a third of the students in the study were successful in completing the programming assignment. Student success on this assessment was tied to how students synthesized their knowledge of physics and computation. On the essay and interview assessments, students displayed unique views of the relationship between force and motion; those who spoke of this relationship in causal (rather than observational) terms tended to have more success in the programming exercise.
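
    As a flavor of the kind of model the students constructed (written here in plain Python rather than the VPython environment the course used), a minimal Euler-Cromer update for a baseball in motion under gravity:

```python
# Minimal computational model of a baseball in motion (Euler-Cromer update).
dt = 0.01                      # time step (s)
g = (0.0, -9.8)                # gravitational acceleration (m/s^2)
pos = [0.0, 1.0]               # initial position (m)
vel = [30.0, 10.0]             # initial velocity (m/s)

t = 0.0
while pos[1] > 0.0:            # step until the ball returns to the ground
    vel[0] += g[0] * dt        # update velocity from the net force first...
    vel[1] += g[1] * dt
    pos[0] += vel[0] * dt      # ...then update position with the new velocity
    pos[1] += vel[1] * dt
    t += dt

print(f"range = {pos[0]:.1f} m after t = {t:.2f} s")
```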

  11. Computer modeling of polymers

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1988-01-01

    A Polymer Molecular Analysis Display System (p-MADS) was developed for computer modeling of polymers. This method of modeling allows for the theoretical calculation of molecular properties such as equilibrium geometries, conformational energies, heats of formation, crystal packing arrangements, and other properties. Furthermore, p-MADS has the following capabilities: constructing molecules from internal coordinates (bond lengths, angles, and dihedral angles), Cartesian coordinates (such as X-ray structures), or stick drawings; manipulating molecules using graphics and making hard-copy representations of the molecules on a graphics printer; and performing geometry optimization calculations on molecules using the methods of molecular mechanics or molecular orbital theory.

  12. MIRO Computational Model

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2010-01-01

    A computational model calculates the excitation of water rotational levels and emission-line spectra in a cometary coma, with applications for the Microwave Instrument for Rosetta Orbiter (MIRO). MIRO is a millimeter-submillimeter spectrometer that will be used to study the nature of cometary nuclei, the physical processes of outgassing, and the formation of the head region of a comet (coma). The computational model is a means to interpret the data measured by MIRO. The model is based on the accelerated Monte Carlo method, which performs a random angular, spatial, and frequency sampling of the radiation field to calculate the local average intensity of the field. With the model, the water rotational level populations in the cometary coma and the line profiles for the emission from the water molecules are calculated as a function of cometary parameters (such as outgassing rate, gas temperature, and gas and electron density) and observation parameters (such as distance to the comet and beam width).

  13. The Antares computing model

    NASA Astrophysics Data System (ADS)

    Kopper, Claudio; Antares Collaboration

    2013-10-01

    Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.

  14. Computer modeling and CCPS

    SciTech Connect

    Apul, D.; Gardner, K.; Eighmy, T.

    2005-09-30

    Computer modeling can be an instructive tool to evaluate potential environmental impacts of coal combustion byproducts and other secondary materials used in road and embankment construction. Results from the HYDRUS2D model coupled with an uncertainty analysis suggest that the cadmium fluxes will be significantly less than the output from simpler models with worst-case scenarios. Two-dimensional analysis of leaching from the base layer also suggests that concentrations reaching ground water will not be significant for metals unless the pavement is completely damaged and built on sandy soils. Development and verification of these types of tools may lead the way to more informed decisions with respect to the beneficial use of coal combustion byproducts and other secondary materials. 5 figs., 1 tab.

  15. 3-Dimensional simulation of the grain formation in investment castings

    SciTech Connect

    Gandin, C.A.; Rappaz, M. ); Tintillier, R. . Dept. Materiaux et Procedes-Direction Technique)

    1994-03-01

    A 3-dimensional (3-D) probabilistic model which has been developed previously for the prediction of grain structure formation during solidification is applied to thin superalloy plates produced using the investment-casting process. This model considers the random nucleation and orientation of nuclei formed at the mold surface and in the bulk of the liquid, the growth kinetics of the dendrite tips, and the preferential growth directions of the dendrite trunks and arms. In the present study, the grains are assumed to nucleate at the surface of the mold only. The computed grain structures, as observed in 2-dimensional (2-D) sections made parallel to the mold surface, are compared with experimental micrographs. The grain densities are then deduced as a function of the distance from the mold surface for both the experiment and the simulation. It is shown that these values are in good agreement, thus providing validation of the grain formation mechanisms built into the 3-D probabilistic model. Finally, this model is further extended to more complex geometries and the 3-D computed grain structure of an equiaxed turbine-blade airfoil is compared with the experimental transverse section micrograph.

  16. Computational modelling of polymers

    NASA Technical Reports Server (NTRS)

    Celarier, Edward A.

    1991-01-01

    Polymeric materials and polymer/graphite composites show a very diverse range of material properties, many of which make them attractive candidates for a variety of high performance engineering applications. Their properties are ultimately determined largely by their chemical structure and the conditions under which they are processed. It is the aim of computational chemistry to be able to simulate candidate polymers on a computer and determine what their likely material properties will be. A number of commercially available software packages purport to predict the material properties of samples, given the chemical structures of their constituent molecules. One such system, Cerius, has been in use at LaRC. It comprises a number of modules, each of which performs a different kind of calculation on a molecule in the program's workspace. Of particular interest is evaluating the suitability of this program to aid in the study of microcrystalline polymeric materials. One of the first model systems examined was benzophenone. The results of this investigation are discussed.

  17. 3-dimensional bioprinting for tissue engineering applications.

    PubMed

    Gu, Bon Kang; Choi, Dong Jin; Park, Sang Jun; Kim, Min Sup; Kang, Chang Mo; Kim, Chun-Ho

    2016-01-01

    The 3-dimensional (3D) printing technologies, referred to as additive manufacturing (AM) or rapid prototyping (RP), have gained recognition over the past few years for art, architectural modeling, lightweight machines, and tissue engineering applications. Among these applications, the tissue engineering field has attracted particular attention from researchers. 3D bioprinting has advantages in the manufacture of scaffolds for tissue engineering applications because of its rapid fabrication, high precision, and customized production. In this review, we introduce the principles and the current state of 3D bioprinting methods, focusing on studies that currently apply printed 3D scaffolds in the biomedical and tissue engineering fields.

  19. On AGV's navigation in 3-dimensional space

    NASA Astrophysics Data System (ADS)

    Kusche, Jürgen

    1996-01-01

    This paper deals with position estimation and path control for Autonomous Guided Vehicles (AGVs). These techniques play an important role in enabling a vehicle or a mobile robot to follow a continuous “virtual” path without human control. The relationship between the vehicle's motion in 3-dimensional space and the shape of a curved surface is described; in particular, the introduction of a digital terrain model into dead reckoning is considered. Moreover, a possible nonlinear control is developed based on curvilinear path coordinates, and a proof of global stability is given. To achieve general validity, these topics are treated independently of the cart's particular mechanization (the configuration of steered and driven wheels). Simulation studies are presented to illustrate the investigations.
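
    A minimal sketch of the central idea, folding a digital terrain model into dead reckoning: planar odometry supplies x, y, and heading, while the terrain grid supplies z. The grid, step data, and bilinear interpolation below are illustrative assumptions, not the paper's formulation.

```python
import math
import numpy as np

# Hypothetical digital terrain model: heights (m) on a regular 1 m grid.
terrain = np.array([[0.0, 0.1, 0.3],
                    [0.1, 0.2, 0.5],
                    [0.2, 0.4, 0.8]])

def terrain_height(x, y):
    """Bilinear interpolation of the terrain grid at (x, y)."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    h = terrain[i:i + 2, j:j + 2]
    return (h[0, 0] * (1 - fx) * (1 - fy) + h[1, 0] * fx * (1 - fy)
            + h[0, 1] * (1 - fx) * fy + h[1, 1] * fx * fy)

# Dead reckoning: integrate odometry in the plane, read z from the terrain.
x, y, heading = 0.5, 0.5, math.radians(30.0)
for ds, dtheta in [(0.4, 0.0), (0.4, 0.1), (0.4, 0.1)]:  # (distance, turn)
    heading += dtheta
    x += ds * math.cos(heading)
    y += ds * math.sin(heading)
    print(f"pose: x={x:.2f} y={y:.2f} z={terrain_height(x, y):.2f}")
```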

  20. Teleportation of a 3-dimensional GHZ State

    NASA Astrophysics Data System (ADS)

    Cao, Hai-Jing; Wang, Huai-Sheng; Li, Peng-Fei; Song, He-Shan

    2012-05-01

    The process of teleportation of a completely unknown 3-dimensional GHZ state is considered. Three maximally entangled 3-dimensional Bell states serve as the quantum channel in the scheme. This teleportation scheme can be directly generalized to teleport an unknown d-dimensional GHZ state.

  1. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This document contains presentations given at the Workshop on Computational Turbulence Modeling held 15-16 September 1993. The purpose of the meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Papers cover the following topics: turbulence modeling activities at the Center for Modeling of Turbulence and Transition (CMOTT); heat transfer and turbomachinery flow physics; aerothermochemistry and computational methods for space systems; computational fluid dynamics and the k-epsilon turbulence model; propulsion systems; and inlet, duct, and nozzle flow.

  2. Design of 3-dimensional complex airplane configurations with specified pressure distribution via optimization

    NASA Technical Reports Server (NTRS)

    Kubrynski, Krzysztof

    1991-01-01

    A subcritical panel method applied to flow analysis and aerodynamic design of complex aircraft configurations is presented. The analysis method is based on linearized, compressible, subsonic flow equations and indirect Dirichlet boundary conditions. Quadratic dipole and linear source distributions on flat panels are applied. In the case of aerodynamic design, the geometry which minimizes the differences between the design and actual pressure distributions is found iteratively, using a numerical optimization technique. Geometry modifications are modeled by the surface transpiration concept. Constraints with respect to the resulting geometry can be specified. A number of complex 3-dimensional design examples are presented. The software is adapted to personal computers, and as a result an unexpectedly low cost of computations is obtained.

  3. The ATLAS computing model & distributed computing evolution

    NASA Astrophysics Data System (ADS)

    Jones, Roger W. L.; Atlas Collaboration

    2012-12-01

    Despite only a brief availability of beam-related data, the typical usage patterns and operational requirements of the ATLAS computing model have been exercised, and the model as originally constructed remains remarkably unchanged. Resource requirements have been revised, and cosmic ray running has exercised much of the model in both duration and volume. The operational model has been adapted in several ways to increase performance and meet the as-delivered functionality of the available middleware. There are also changes reflecting the emerging roles of the different data formats. The model continues to evolve with a heightened focus on end-user performance; the key tools developed in the operational system are outlined, with an emphasis on those under recent development.

  4. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...

  5. Improving Perceptual Skills with 3-Dimensional Animations.

    ERIC Educational Resources Information Center

    Johns, Janet Faye; Brander, Julianne Marie

    1998-01-01

    Describes three-dimensional computer-aided design (CAD) models for every component in a representative mechanical system; the CAD models made it easy to generate 3-D animations that are ideal for teaching perceptual skills in multimedia computer-based technical training. Fifteen illustrations are provided. (AEF)

  7. Phase diagram of quark-antiquark and diquark condensates in the 3-dimensional Gross-Neveu model with the 4-component spinor representation

    SciTech Connect

    Kohyama, Hiroaki

    2008-07-01

    We construct the phase diagram of the quark-antiquark and diquark condensates at finite temperature and density in the 2+1 dimensional (3D) two-flavor massless Gross-Neveu (GN) model with 4-component quarks. In contrast to the case of 2-component quarks, a coexisting phase of the quark-antiquark and diquark condensates appears. This is the crucial difference between the 2-component and 4-component quark cases in the 3D GN model. The coexisting phase is also seen in the 4D Nambu-Jona-Lasinio model. We therefore see that the 3D GN model with 4-component quarks bears a closer resemblance to the 4D Nambu-Jona-Lasinio model.

  8. [Posterior glass fiber-reinforced composite resin-bonded fixed partial dentures: A 3-dimensional modeling and finite element numerical analysis].

    PubMed

    Han, Jingyun; Fei, Renyuan; Li, Yansheng; Zhang, Lei

    2006-08-01

    The methods of modeling and mesh generation for a 3-unit tooth/restoration complex were established. The three-dimensional finite element models were subjected to four types of occlusal load applied to the pontic element, to evaluate 3 fiber framework designs and 3 cavity preparation configurations. By comparing the differences in stress distribution, the following conclusions were obtained: the principal stress under buccal-lingual cusp load in the traditional fiber framework pontic increased by 6.22% compared to that in the pure composite resin pontic; the optimized fiber framework clearly reduced the stress level under all loads; and the modified cavities exhibited better stress transfer and decreased shear stress at the adhesive interface compared with the traditional cavities. PMID:17002101

  9. A 3-dimensional micro- and nanoparticle transport and filtration model (MNM3D) applied to the migration of carbon-based nanomaterials in porous media

    NASA Astrophysics Data System (ADS)

    Bianco, Carlo; Tosco, Tiziana; Sethi, Rajandrea

    2016-10-01

    Engineered nanoparticles (NPs) in the environment can act both as contaminants, when they are unintentionally released, and as remediation agents, when injected on purpose at contaminated sites. In this work two carbon-based NPs are considered, namely CARBO-IRON®, a new material developed for contaminated site remediation, and single layer graphene oxide (SLGO), a potential contaminant of the near future. Understanding and modeling the transport and deposition of such NPs in aquifer systems is a key aspect in both cases, and numerical models capable of simulating NP transport in groundwater in complex 3D scenarios are necessary. To this aim, this work proposes a modeling approach based on modified advection-dispersion-deposition equations accounting for the coupled influence of flow velocity and ionic strength on particle transport. A new modeling tool (MNM3D - Micro and Nanoparticle transport Model in 3D geometries) is presented for the simulation of NP injection and transport in 3D scenarios. MNM3D is the result of the integration of the numerical code MNMs (Micro and Nanoparticle transport, filtration and clogging Model - Suite) into the well-known transport model RT3D (Clement et al., 1998). The injection in field-like conditions of CARBO-IRON® (20 g/l) amended with CMC (4 g/l) in a 2D vertical tank (0.7 × 1.0 × 0.12 m) was simulated using MNM3D and compared to experimental results under the same conditions. Column transport tests of SLGO at a concentration (10 mg/l) representative of a possible spill of SLGO-containing waste water were performed at different values of ionic strength (0.1 to 35 mM), evidencing a strong dependence of SLGO transport on IS and a reversible blocking deposition. The experimental data were fitted using the numerical code MNMs, and the ionic strength-dependent transport was up-scaled for a full-scale 3D simulation of SLGO release and long-term transport in a heterogeneous aquifer. MNM3D proved to be a potentially valid tool for
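
    A generic form of the modified advection-dispersion-deposition equations this class of models solves (a sketch under standard colloid-transport assumptions; MNM3D's specific ionic-strength corrections are given in the cited work):

```latex
% Liquid-phase (c) and solid-phase (s) particle concentrations, coupled by
% attachment/detachment kinetics k_a and k_d that depend on pore velocity v
% and ionic strength IS; n is porosity, rho_b bulk density, q Darcy flux.
\frac{\partial (n c)}{\partial t} + \rho_b \frac{\partial s}{\partial t}
  = \nabla \cdot ( n D \nabla c ) - \nabla \cdot ( q\, c ),
\qquad
\rho_b \frac{\partial s}{\partial t}
  = n\, k_a(v, \mathrm{IS})\, c - \rho_b\, k_d(\mathrm{IS})\, s
```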

  10. Computational Models for Neuromuscular Function

    PubMed Central

    Valero-Cuevas, Francisco J.; Hoffmann, Heiko; Kurse, Manish U.; Kutch, Jason J.; Theodorou, Evangelos A.

    2011-01-01

    Computational models of the neuromuscular system hold the potential to allow us to reach a deeper understanding of neuromuscular function and clinical rehabilitation by complementing experimentation. By serving as a means to distill and explore specific hypotheses, computational models emerge from prior experimental data and motivate future experimental work. Here we review computational tools used to understand neuromuscular function including musculoskeletal modeling, machine learning, control theory, and statistical model analysis. We conclude that these tools, when used in combination, have the potential to further our understanding of neuromuscular function by serving as a rigorous means to test scientific hypotheses in ways that complement and leverage experimental data. PMID:21687779

  11. The 3-Dimensional Structure of Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    King, Lindsay

    NASA's Hubble Space Telescope Multi-Cycle Treasury Program CLASH (PI Postman) has provided the community with the most detailed views ever of the central regions of massive galaxy clusters. These galaxy clusters have also been observed with NASA's Chandra X-Ray Observatory, with the ground-based Subaru telescope, and with other ground- and space-based facilities, resulting in unprecedented multi-wavelength data sets of the most massive bound structures in the universe. Fitting 3-Dimensional mass models is crucial to understanding how mass is distributed in individual clusters, investigating the properties of dark matter, and testing our cosmological model. With the exquisite data available, the time is now ideal to undertake this analysis. We propose to use algorithms that we have developed and obtain mass models for the clusters from the CLASH sample. The project would use archival gravitational lensing data, X-ray data of the cluster's hot gas and additional constraints from Sunyaev-Zel'dovich (SZ) data. Specifically, we would model the 23 clusters for which both HST and Subaru data (or in one case WFI data) are publicly available, since the exquisite imaging of HST in the clusters' central regions is beautifully augmented by the wide field coverage of Subaru imaging. If the true 3-D shapes of clusters are not properly accounted for when analysing data, this can lead to inaccuracies in the mass density profiles of individual clusters - up to 50% bias in mass for the most highly triaxial systems. Our proposed project represents an independent analysis of the CLASH sample, complementary to that of the CLASH team, probing the triaxial shapes and orientations of the cluster dark matter halos and hot gas. Our findings will be relevant to the analysis of data from future missions such as JWST and Euclid, and also to ground-based surveys to be made with telescopes such as LSST.

  12. The Spatiotemporal Stability of Dominant Frequency Sites in In-Silico Modeling of 3-Dimensional Left Atrial Mapping of Atrial Fibrillation

    PubMed Central

    Hwang, Minki; Song, Jun-Seop; Lee, Young-Seon; Joung, Boyoung; Pak, Hui-Nam

    2016-01-01

    Background We previously reported that stable rotors were observed in in-silico human atrial fibrillation (AF) models and were well represented by dominant frequency (DF). We explored the spatiotemporal stability of DF sites in 3D-AF models imported from patient CT images of the left atrium (LA). Methods We integrated 3-D CT images of the LA obtained from ten patients with persistent AF (male 80%, 61.8 ± 13.5 years old) into an in-silico AF model. After induction, we obtained 6 seconds of AF simulation data for DF analyses at 30-second intervals (T1–T9). The LA was divided into ten sections. Spatiotemporal changes and variations in the temporal consistency of DF were evaluated at each section of the LA. The high DF area was defined as the area with the highest 10% of DF. Results 1. There was no spatial consistency in the high DF distribution at each LA section during T1–T9 except in one patient (p = 0.027). 2. Coefficients of variation for the high DF area differed substantially among the ten LA sections (p < 0.001), and they were significantly higher in the four pulmonary vein (PV) areas, the LA appendage, and the peri-mitral area than in the other LA sections (p < 0.001). 3. When we conducted virtual ablation of 10%, 15%, and 20% of the highest DF areas (n = 270 cases), AF converted to atrial tachycardia (AT) or terminated at rates of 40%, 57%, and 76%, respectively. Conclusions Spatiotemporal consistency of the DF area was observed in 10% of AF patients, and high DF areas were temporally variable. Virtual ablation of DF areas is moderately effective in terminating AF or converting it into AT. PMID:27459377
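
    The dominant frequency in this context is the frequency of the largest spectral peak of a local signal; a minimal sketch on a synthetic 6-second trace (signal, rates, and band limits are invented for illustration):

```python
import numpy as np

fs = 500.0                               # sampling rate (Hz)
t = np.arange(0, 6.0, 1.0 / fs)          # 6 s analysis window, as in the study
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 7.5 * t) + 0.3 * rng.standard_normal(t.size)

def dominant_frequency(x, fs, fmin=3.0, fmax=15.0):
    """Frequency of the largest FFT magnitude within a physiologic band."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spec[band])]

print(f"DF = {dominant_frequency(signal, fs):.2f} Hz")   # expect ~7.5 Hz
```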

  13. Computer-Aided Geometry Modeling

    NASA Technical Reports Server (NTRS)

    Shoosmith, J. N. (Compiler); Fulton, R. E. (Compiler)

    1984-01-01

    Techniques in computer-aided geometry modeling and their application are addressed. Mathematical modeling, solid geometry models, management of geometric data, development of geometry standards, and interactive and graphic procedures are discussed. The applications include aeronautical and aerospace structures design, fluid flow modeling, and gas turbine design.

  14. Wear Particles Derived from Metal Hip Implants Induce the Generation of Multinucleated Giant Cells in a 3-Dimensional Peripheral Tissue-Equivalent Model

    PubMed Central

    Dutta, Debargh K.; Potnis, Pushya A.; Rhodes, Kelly; Wood, Steven C.

    2015-01-01

    Multinucleate giant cells (MGCs) are formed by the fusion of 5 to 15 monocytes or macrophages. MGCs can be generated by hip implants at the site where the metal surface of the device is in close contact with tissue. MGCs play a critical role in the inflammatory processes associated with adverse events such as aseptic loosening of prosthetic joints and the bone degeneration process called osteolysis. Upon interaction with metal wear particles, endothelial cells upregulate pro-inflammatory cytokines and other factors that enhance a localized immune response. However, the role of endothelial cells in the generation of MGCs has not been completely investigated. We developed a three-dimensional peripheral tissue-equivalent model (PTE) consisting of collagen gel supporting a monolayer of endothelial cells and human peripheral blood mononuclear cells (PBMCs) on top, which mimics peripheral tissue under normal physiological conditions. The cultures were incubated for 14 days with cobalt-chromium alloy (CoCr ASTM F75, 1–5 micron) wear particles. PBMCs were allowed to transit the endothelium, and harvested cells were analyzed for MGC generation via flow cytometry. An increase in forward scatter (cell size) and in propidium iodide (PI) uptake (a DNA-intercalating dye) was used to identify MGCs. Our results show that endothelial cells induce the generation of MGCs to a level 4-fold higher in the 3-dimensional PTE system than in traditional 2-dimensional culture plates. Further characterization of MGCs showed upregulated expression of tartrate-resistant acid phosphatase (TRAP) and dendritic cell-specific transmembrane protein (DC-STAMP), which are markers of bone-degrading cells called osteoclasts. In sum, we have established a robust and relevant model to examine MGC and osteoclast formation in a tissue-like environment using flow cytometry and RT-PCR. With the help of endothelial cells, we observed a consistent generation of metal wear particle-induced MGCs, which

  15. Adiabatic computation: A toy model

    NASA Astrophysics Data System (ADS)

    Ribeiro, Pedro; Mosseri, Rémy

    2006-10-01

    We discuss a toy model for adiabatic quantum computation which displays some phenomenological properties expected in more realistic implementations. This model has two free parameters: the adiabatic evolution parameter s and the α parameter, which emulates many-variable constraints in the classical computational problem. The proposed model presents, in the s-α plane, a line of first-order quantum phase transition that ends at a second-order point. The relation between computation complexity and the occurrence of quantum phase transitions is discussed. We analyze the behavior of the ground and first excited states near the quantum phase transition, the gap, and the entanglement content of the ground state.

  16. Adiabatic computation: A toy model

    SciTech Connect

    Ribeiro, Pedro; Mosseri, Remy

    2006-10-15

    We discuss a toy model for adiabatic quantum computation which displays some phenomenological properties expected in more realistic implementations. This model has two free parameters: the adiabatic evolution parameter s and the α parameter, which emulates many-variable constraints in the classical computational problem. The proposed model presents, in the s-α plane, a line of first-order quantum phase transition that ends at a second-order point. The relation between computation complexity and the occurrence of quantum phase transitions is discussed. We analyze the behavior of the ground and first excited states near the quantum phase transition, the gap, and the entanglement content of the ground state.

  17. Computational models of syntactic acquisition.

    PubMed

    Yang, Charles

    2012-03-01

    The computational approach to syntactic acquisition can be fruitfully pursued by integrating results and perspectives from computer science, linguistics, and developmental psychology. In this article, we first review some key results in computational learning theory and their implications for language acquisition. We then turn to examine specific learning models, some of which exploit distributional information in the input while others rely on a constrained space of hypotheses, yet both approaches share a common set of characteristics to overcome the learning problem. We conclude with a discussion of how computational models connect with the empirical study of child grammar, making the case for computationally tractable, psychologically plausible and developmentally realistic models of acquisition. WIREs Cogn Sci 2012, 3:205-213. doi: 10.1002/wcs.1154 For further resources related to this article, please visit the WIREs website.

  13. Sierra Toolkit computational mesh conceptual model.

    SciTech Connect

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  19. COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS

    SciTech Connect

    Mathur, M.P.; Freeman, Mark; Gera, Dinesh

    2001-11-06

    In the current fiscal year FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NOₓ inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors, and to explore the possibility of using biomass as a reburning fuel to reduce NOₓ. Some additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O₂/CO₂ environments. Most of the CFD models available in the literature treat particles as point masses with uniform temperature inside the particles. This isothermal condition may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict the carbon burnout of larger biomass particles. Anisotropy of heat transfer in the radial and axial directions will be explored using different conductivities in each direction. The above models will be validated/tested on various full-scale industrial boilers. The current NOₓ modules will be modified to account for local CH, CH₂, and CH₃ radical chemistry; currently they are based on global chemistry. It may also be worth exploring the effect of an enriched O₂/CO₂ environment on carbon burnout and NOₓ concentration. The research objective of this study is to develop a 3-Dimensional Combustor Model for Biomass Co-firing and reburning applications using the Fluent Computational Fluid Dynamics Code.

  20. 3-dimensional imaging at nanometer resolutions

    DOEpatents

    Werner, James H.; Goodwin, Peter M.; Shreve, Andrew P.

    2010-03-09

    An apparatus and method for enabling precise, 3-dimensional, photoactivation localization microscopy (PALM) using selective, two-photon activation of fluorophores in a single z-slice of a sample in cooperation with time-gated imaging for reducing the background radiation from other image planes to levels suitable for single-molecule detection and spatial location, are described.

  1. Computational modeling of properties

    NASA Technical Reports Server (NTRS)

    Franz, Judy R.

    1994-01-01

    A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strong-scattering regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as a function of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.
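
    For orientation, the dc conductivity and thermopower this record refers to are commonly evaluated from the energy-dependent conductivity σ(E) through the standard Kubo-Greenwood integrals below; these are textbook forms given for context, not equations quoted from the report.

        \sigma = \int dE \,\left(-\frac{\partial f}{\partial E}\right)\sigma(E),
        \qquad
        S = -\frac{1}{e T \sigma}\int dE \,(E-\mu)\left(-\frac{\partial f}{\partial E}\right)\sigma(E),

    where f(E; μ, T) is the Fermi-Dirac distribution and μ is the temperature-dependent chemical potential.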

  2. Computational modeling of properties

    NASA Technical Reports Server (NTRS)

    Franz, Judy R.

    1994-01-01

    A simple model was developed to calculate the electronic transport parameters in disordered semiconductors in the strong-scattering regime. The calculation is based on a Green function solution to the Kubo equation for the energy-dependent conductivity. This solution, together with a rigorous calculation of the temperature-dependent chemical potential, allows the determination of the dc conductivity and the thermopower. For wide-gap semiconductors with single defect bands, these transport properties are investigated as a function of defect concentration, defect energy, Fermi level, and temperature. Under certain conditions the calculated conductivity is quite similar to the measured conductivity in liquid II-VI semiconductors in that two distinct temperature regimes are found. Under different conditions the conductivity is found to decrease with temperature; this result agrees with measurements in amorphous Si. Finally, the calculated thermopower can be positive or negative and may change sign with temperature or defect concentration.

  3. Trust models in ubiquitous computing.

    PubMed

    Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro

    2008-10-28

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.

  4. Ch. 33 Modeling: Computational Thermodynamics

    SciTech Connect

    Besmann, Theodore M

    2012-01-01

    This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.

  5. Computational Modeling of Multiphase Reactors.

    PubMed

    Joshi, J B; Nandakumar, K

    2015-01-01

    Multiphase reactors are very common in chemical industry, and numerous review articles exist that are focused on types of reactors, such as bubble columns, trickle beds, fluid catalytic beds, etc. Currently, there is a high degree of empiricism in the design process of such reactors owing to the complexity of coupled flow and reaction mechanisms. Hence, we focus on synthesizing recent advances in computational and experimental techniques that will enable future designs of such reactors in a more rational manner by exploring a large design space with high-fidelity models (computational fluid dynamics and computational chemistry models) that are validated with high-fidelity measurements (tomography and other detailed spatial measurements) to provide a high degree of rigor. Understanding the spatial distributions of dispersed phases and their interaction during scale up are key challenges that were traditionally addressed through pilot scale experiments, but now can be addressed through advanced modeling.

  6. Computational Modeling of Multiphase Reactors.

    PubMed

    Joshi, J B; Nandakumar, K

    2015-01-01

    Multiphase reactors are very common in chemical industry, and numerous review articles exist that are focused on types of reactors, such as bubble columns, trickle beds, fluid catalytic beds, etc. Currently, there is a high degree of empiricism in the design process of such reactors owing to the complexity of coupled flow and reaction mechanisms. Hence, we focus on synthesizing recent advances in computational and experimental techniques that will enable future designs of such reactors in a more rational manner by exploring a large design space with high-fidelity models (computational fluid dynamics and computational chemistry models) that are validated with high-fidelity measurements (tomography and other detailed spatial measurements) to provide a high degree of rigor. Understanding the spatial distributions of dispersed phases and their interaction during scale up are key challenges that were traditionally addressed through pilot scale experiments, but now can be addressed through advanced modeling. PMID:26134737

  7. The 3-dimensional cellular automata for HIV infection

    NASA Astrophysics Data System (ADS)

    Mo, Youbin; Ren, Bin; Yang, Wencao; Shuai, Jianwei

    2014-04-01

    The HIV infection dynamics is discussed in detail with a 3-dimensional cellular automata model in this paper. The model can reproduce the three-phase development, i.e., the acute period, the asymptomatic period, and the AIDS period, observed in HIV-infected patients in the clinic. We show that the 3D HIV model is more robust to the model parameters than the 2D cellular automata. Furthermore, we reveal that the occurrence of a perpetual source that successively generates infectious waves spreading to the whole system drives the model from the asymptomatic state to the AIDS state.
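
    As a rough illustration of this modeling style, the following minimal 3-dimensional cellular automaton sketch (Python) uses a simplified three-state rule set; the states, neighborhood, and rates here are assumptions for illustration and do not reproduce the published rules.

        # Minimal 3D cellular automaton sketch (illustrative assumptions only).
        import numpy as np

        HEALTHY, INFECTED, DEAD = 0, 1, 2
        rng = np.random.default_rng(0)
        grid = np.zeros((40, 40, 40), dtype=np.int8)
        grid[rng.random(grid.shape) < 0.01] = INFECTED   # sparse initial infection

        def step(grid):
            # count infected cells among the 6 face neighbors (periodic boundaries)
            infected = (grid == INFECTED).astype(np.int8)
            neighbors = sum(np.roll(infected, shift, axis)
                            for axis in range(3) for shift in (-1, 1))
            new = grid.copy()
            new[(grid == HEALTHY) & (neighbors >= 1)] = INFECTED  # contact infection
            new[grid == INFECTED] = DEAD                          # infected cells die
            new[(grid == DEAD) & (rng.random(grid.shape) < 0.1)] = HEALTHY  # renewal
            return new

        for _ in range(50):
            grid = step(grid)
        print("infected fraction:", (grid == INFECTED).mean())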

  8. Computational models of adult neurogenesis

    NASA Astrophysics Data System (ADS)

    Cecchi, Guillermo A.; Magnasco, Marcelo O.

    2005-10-01

    Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here, we propose two models where new projection neurons are incorporated. We show that in both models, using incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas like the olfactory bulb and the dentate gyrus.

  9. 3-dimensional fabrication of soft energy harvesters

    NASA Astrophysics Data System (ADS)

    McKay, Thomas; Walters, Peter; Rossiter, Jonathan; O'Brien, Benjamin; Anderson, Iain

    2013-04-01

    Dielectric elastomer generators (DEG) provide an opportunity to harvest energy from low frequency and aperiodic sources. Because DEG are soft, deformable, high energy density generators, they can be coupled to complex structures such as the human body to harvest excess mechanical energy. However, DEG are typically constrained by a rigid frame and manufactured in a simple planar structure. This planar arrangement is unlikely to be optimal for harvesting from compliant and/or complex structures. In this paper we present a soft generator which is fabricated into a 3-dimensional geometry. This capability will enable the 3-dimensional structure of a dielectric elastomer to be customised to the energy source, allowing efficient and/or non-invasive coupling. This paper demonstrates our first 3-dimensional generator which includes a diaphragm with a soft elastomer frame. When the generator was connected to a self-priming circuit and cyclically inflated, energy was accumulated in the system, demonstrated by an increased voltage. Our 3D generator promises a bright future for dielectric elastomers that will be customised for integration with complex and soft structures. In addition to customisable geometries, the 3D printing process may lend itself to fabricating large arrays of small generator units and for fabricating truly soft generators with excellent impedance matching to biological tissue. Thus comfortable, wearable energy harvesters are one step closer to reality.

  10. Biochemical Applications Of 3-Dimensional Fluorescence Spectrometry

    NASA Astrophysics Data System (ADS)

    Leiner, Marc J.; Wolfbeis, Otto S.

    1988-06-01

    We investigated the 3-dimensional fluorescence of complex mixtures of bioliquids such as human serum, serum ultrafiltrate, human urine, and human plasma low-density lipoproteins. The total fluorescence of human serum can be divided into a few peaks. When comparing fluorescence topograms of sera from normal and cancerous subjects, we found significant differences in tryptophan fluorescence. Although the total fluorescence of human urine can be resolved into 3-5 distinct peaks, some of them do not result from single fluorescent urinary metabolites, but rather from several species having similar spectral properties. Human plasma low-density lipoproteins possess a native fluorescence that changes when submitted to in-vitro autoxidation. The 3-dimensional fluorescence demonstrated the presence of 7 fluorophores in the lipid domain, and 6 fluorophores in the protein domain. The above results demonstrated that 3-dimensional fluorescence can resolve the spectral properties of complex mixtures much better than other methods. Moreover, parameters other than excitation and emission wavelength and intensity (for instance fluorescence lifetime, polarization, or quenchability) may be exploited to give a multidimensional matrix that is unique for each sample. Consequently, 3-dimensional fluorescence as such, or in combination with separation techniques, is therefore considered to have the potential of becoming a useful new method in clinical chemistry and analytical biochemistry.

  11. Computational Modeling Method for Superalloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Gayda, John

    1997-01-01

    Computer modeling based on theoretical quantum techniques has been largely inefficient due to limitations on the methods or the computer needs associated with such calculations, thus perpetuating the notion that little help can be expected from computer simulations for the atomistic design of new materials. In a major effort to overcome these limitations and to provide a tool for efficiently assisting in the development of new alloys, we developed the BFS method for alloys, which, together with experimental results from previous and current research that validate its use for large-scale simulations, provides the ideal grounds for developing a computationally economical and physically sound procedure for supplementing the experimental work at great cost and time savings.

  12. Climate Modeling Computing Needs Assessment

    NASA Astrophysics Data System (ADS)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  13. Computational Modeling of Mitochondrial Function

    PubMed Central

    Cortassa, Sonia; Aon, Miguel A.

    2012-01-01

    The advent of techniques with the ability to scan massive changes in cellular makeup (genomics, proteomics, etc.) has revealed the compelling need for analytical methods to interpret and make sense of those changes. Computational models built on a sound physico-chemical mechanistic basis are unavoidable when integrating, interpreting, and simulating high-throughput experimental data. Another powerful role of computational models is predicting new behavior, provided they are adequately validated. Mitochondrial energy transduction has been traditionally studied with thermodynamic models. More recently, kinetic or thermo-kinetic models have been proposed, leading the path toward an understanding of the control and regulation of mitochondrial energy metabolism and its interaction with cytoplasmic and other compartments. In this work, we outline the methods, step-by-step, that should be followed to build a computational model of mitochondrial energetics in isolation or integrated into a network of cellular processes. Depending on the question addressed by the modeler, the methodology explained herein can be applied with different levels of detail, from the mitochondrial energy producing machinery in a network of cellular processes to the dynamics of a single enzyme during its catalytic cycle. PMID:22057575
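
    The kinetic models discussed in this record are typically systems of ordinary differential equations integrated numerically. As a hedged, minimal illustration (a toy model, not one from the review; the rate laws and parameter values are assumptions), the following Python sketch integrates a two-state ATP/ADP turnover model.

        # Toy kinetic ODE model of ATP turnover (illustrative assumptions only).
        from scipy.integrate import solve_ivp

        def rhs(t, y, v_syn=1.0, k_use=0.5, k_leak=0.1):
            adp, atp = y
            synthesis = v_syn * adp / (0.5 + adp)  # Michaelis-Menten ATP synthesis
            usage = k_use * atp                    # first-order ATP consumption
            leak = k_leak * atp                    # unproductive hydrolysis
            return [usage + leak - synthesis, synthesis - usage - leak]

        sol = solve_ivp(rhs, (0, 50), [1.0, 0.1])  # initial ADP = 1.0, ATP = 0.1
        print("steady-state ATP ~", sol.y[1, -1])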

  14. Computational tools for protein modeling.

    PubMed

    Xu, D; Xu, Y; Uberbacher, E C

    2000-07-01

    Protein modeling is playing a more and more important role in protein and peptide sciences due to improvements in modeling methods, advances in computer technology, and the huge amount of biological data becoming available. Modeling tools can often predict the structure and shed some light on the function and its underlying mechanism. They can also provide insight to design experiments and suggest possible leads for drug design. This review attempts to provide a comprehensive introduction to major computer programs, especially on-line servers, for protein modeling. The review covers the following aspects: (1) protein sequence comparison, including sequence alignment/search, sequence-based protein family classification, domain parsing, and phylogenetic classification; (2) sequence annotation, including annotation/prediction of hydrophobic profiles, transmembrane regions, active sites, signaling sites, and secondary structures; (3) protein structure analysis, including visualization, geometry analysis, structure comparison/classification, dynamics, and electrostatics; (4) three-dimensional structure prediction, including homology modeling, fold recognition using threading, ab initio prediction, and docking. We will address what a user can expect from the computer tools in terms of their strengths and limitations. We will also discuss the major challenges and the future trends in the field. A collection of the links of tools can be found at http://compbio.ornl.gov/structure/resource/.

  15. Visualizing ultrasound through computational modeling

    NASA Technical Reports Server (NTRS)

    Guo, Theresa W.

    2004-01-01

    The Doppler Ultrasound Hematocrit Project (DHP) hopes to find non-invasive methods of determining a person's blood characteristics. Because of the limits of microgravity and the space travel environment, it is important to find non-invasive methods of evaluating the health of persons in space. Presently, there is no well-developed method of determining blood composition non-invasively. This project hopes to use ultrasound and Doppler signals to evaluate hematocrit, the percentage by volume of red blood cells within whole blood. These non-invasive techniques may also be developed for use on earth for trauma patients where invasive measures might be detrimental. Computational modeling is a useful tool for collecting preliminary information and predictions for the laboratory research. We hope to find and develop a computer program that will be able to simulate the ultrasound signals the project will work with. Simulated models of test conditions will more easily show what might be expected from laboratory results, thus helping the research group make informed decisions before and during experimentation. There are several existing Matlab-based computer programs available, designed to interpret and simulate ultrasound signals. These programs will be evaluated to find which is best suited for the project needs. The criteria of evaluation that will be used are 1) the program must be able to specify transducer properties and specify transmitting and receiving signals, 2) the program must be able to simulate ultrasound signals through different attenuating mediums, 3) the program must be able to process moving targets in order to simulate the Doppler effects that are associated with blood flow, 4) the program should be user friendly and adaptable to various models. After a computer program is chosen, two simulation models will be constructed. These models will simulate and interpret an RF data signal and a Doppler signal.
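
    The basic physics underlying such simulations is the classical Doppler relation for backscatter from moving blood, f_d = 2·v·f0·cos(θ)/c. The sketch below (Python) evaluates it for assumed, illustrative values; none of the numbers are taken from the project.

        # Classical ultrasound Doppler shift (illustrative values assumed).
        import math

        f0 = 5.0e6               # transmit frequency, Hz (assumed)
        c = 1540.0               # speed of sound in soft tissue, m/s
        v = 0.5                  # blood flow speed, m/s (assumed)
        theta = math.radians(30) # beam-to-flow angle (assumed)

        f_doppler = 2 * v * f0 * math.cos(theta) / c
        print(f"Doppler shift: {f_doppler:.0f} Hz")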

  16. Parallel computing in enterprise modeling.

    SciTech Connect

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  17. Cosmic logic: a computational model

    NASA Astrophysics Data System (ADS)

    Vanchurin, Vitaly

    2016-02-01

    We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take a CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take a CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.

  18. Fabrication of 3-dimensional multicellular microvascular structures

    PubMed Central

    Barreto-Ortiz, Sebastian F.; Fradkin, Jamie; Eoh, Joon; Trivero, Jacqueline; Davenport, Matthew; Ginn, Brian; Mao, Hai-Quan; Gerecht, Sharon

    2015-01-01

    Despite current advances in engineering blood vessels over 1 mm in diameter and the existing wealth of knowledge regarding capillary bed formation, studies for the development of microvasculature, the connecting bridge between them, have been extremely limited so far. Here, we evaluate the use of 3-dimensional (3D) microfibers fabricated by hydrogel electrospinning as templates for microvascular structure formation. We hypothesize that 3D microfibers improve extracellular matrix (ECM) deposition from vascular cells, enabling the formation of freestanding luminal multicellular microvasculature. Compared to 2-dimensional cultures, we demonstrate with confocal microscopy and RT-PCR that fibrin microfibers induce an increased ECM protein deposition by vascular cells, specifically endothelial colony-forming cells, pericytes, and vascular smooth muscle cells. These ECM proteins comprise different layers of the vascular wall including collagen types I, III, and IV, as well as elastin, fibronectin, and laminin. We further demonstrate the achievement of multicellular microvascular structures with an organized endothelium and a robust multicellular perivascular tunica media. This, along with the increased ECM deposition, allowed for the creation of self-supporting multilayered microvasculature with a distinct circular lumen following fibrin microfiber core removal. This approach presents an advancement toward the development of human microvasculature for basic and translational studies.—Barreto-Ortiz, S. F., Fradkin, J., Eoh, J., Trivero, J., Davenport, M., Ginn, B., Mao, H.-Q., Gerecht, S. Fabrication of 3-dimensional multicellular microvascular structures. PMID:25900808

  19. Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Shabbir, A. (Compiler); Shih, T.-H. (Compiler); Povinelli, L. A. (Compiler)

    1994-01-01

    The purpose of this meeting was to discuss the current status and future development of turbulence modeling in computational fluid dynamics for aerospace propulsion systems. Various turbulence models have been developed and applied to different turbulent flows over the past several decades and it is becoming more and more urgent to assess their performance in various complex situations. In order to help users in selecting and implementing appropriate models in their engineering calculations, it is important to identify the capabilities as well as the deficiencies of these models. This also benefits turbulence modelers by permitting them to further improve upon the existing models. This workshop was designed for exchanging ideas and enhancing collaboration between different groups in the Lewis community who are using turbulence models in propulsion related CFD. In this respect this workshop will help the Lewis goal of excelling in propulsion related research. This meeting had seven sessions for presentations and one panel discussion over a period of two days. Each presentation session was assigned to one or two branches (or groups) to present their turbulence related research work. Each group was asked to address at least the following points: current status of turbulence model applications and developments in the research; progress and existing problems; and requests about turbulence modeling. The panel discussion session was designed for organizing committee members to answer management and technical questions from the audience and to make concluding remarks.

  20. MODEL IDENTIFICATION AND COMPUTER ALGEBRA

    PubMed Central

    Bollen, Kenneth A.; Bauldry, Shawn

    2011-01-01

    Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods. PMID:21769158
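
    As a small illustration of the symbolic approach described above (using sympy rather than the CAS in the paper; the one-equation toy model and its parameterization are assumptions for illustration), local identification can be checked via the rank of the Jacobian of the implied moments with respect to the parameters.

        # Hedged sketch of a symbolic local-identification check with sympy.
        import sympy as sp

        b, v_e = sp.symbols('b v_e', positive=True)  # path coefficient, error variance
        v_x = sp.Symbol('v_x', positive=True)        # exogenous variance
        # implied moments of a one-equation model y = b*x + e
        sigma = sp.Matrix([v_x,                 # var(x)
                           b * v_x,             # cov(x, y)
                           b**2 * v_x + v_e])   # var(y)
        J = sigma.jacobian(sp.Matrix([v_x, b, v_e]))
        print("Jacobian rank:", J.rank())  # full rank (3) => locally identified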

  1. MODEL IDENTIFICATION AND COMPUTER ALGEBRA.

    PubMed

    Bollen, Kenneth A; Bauldry, Shawn

    2010-10-01

    Multiequation models that contain observed or latent variables are common in the social sciences. To determine whether unique parameter values exist for such models, one needs to assess model identification. In practice analysts rely on empirical checks that evaluate the singularity of the information matrix evaluated at sample estimates of parameters. The discrepancy between estimates and population values, the limitations of numerical assessments of ranks, and the difference between local and global identification make this practice less than perfect. In this paper we outline how to use computer algebra systems (CAS) to determine the local and global identification of multiequation models with or without latent variables. We demonstrate a symbolic CAS approach to local identification and develop a CAS approach to obtain explicit algebraic solutions for each of the model parameters. We illustrate the procedures with several examples, including a new proof of the identification of a model for handling missing data using auxiliary variables. We present an identification procedure for Structural Equation Models that makes use of CAS and that is a useful complement to current methods.

  2. Los Alamos Center for Computer Security formal computer security model

    SciTech Connect

    Dreicer, J.S.; Hunteman, W.J.; Markin, J.T.

    1989-01-01

    This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The need to test and verify DOE computer security policy implementation first motivated this effort. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present formal mathematical models for computer security. The fundamental objective of computer security is to prevent the unauthorized and unaccountable access to a system. The inherent vulnerabilities of computer systems result in various threats from unauthorized access. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The model is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell and LaPadula abstract sets of objects and subjects. 6 refs.

  3. The method of geometrical comparison of 3-dimensional objects created from DICOM images.

    PubMed

    Gaweł, Dominik; Danielewicz, Kamil; Nowak, Michał

    2012-01-01

    This work presents a method of geometrical comparison of 3-dimensional objects created from DICOM images. The reconstruction of biological objects is realized with the use of commercial Simpleware software. The 3D geometries are then registered, and the recognized shape differences are visualized using a color map indicating the change of the 3D geometry. Then the last, but most important, step of the presented technology is performed. The model, including the information about changes in the compared geometries, is translated into the PDF format. Such an approach allows the final result to be presented on every desktop computer equipped with Adobe Reader. This PDF browser is free to use and gives the possibility to freely rotate, move and zoom the model. PMID:22744507

  4. Computational models and resource allocation for supercomputers

    NASA Technical Reports Server (NTRS)

    Mauney, Jon; Agrawal, Dharma P.; Harcourt, Edwin A.; Choe, Young K.; Kim, Sukil

    1989-01-01

    There are several different architectures used in supercomputers, with differing computational models. These different models present a variety of resource allocation problems that must be solved. The computational needs of a program must be cast in terms of the computational model supported by the supercomputer, and this must be done in a way that makes effective use of the machine's resources. This is the resource allocation problem. The computational models of available supercomputers and the associated resource allocation techniques are surveyed. It is shown that many problems and solutions appear repeatedly in very different computing environments. Some case studies are presented, showing concrete computational models and the allocation strategies used.

  5. A Model Computer Literacy Course.

    ERIC Educational Resources Information Center

    Orndorff, Joseph

    Designed to address the varied computer skill levels of college students, this proposed computer literacy course would be modular in format, with modules tailored to address various levels of expertise and permit individualized instruction. An introductory module would present both the history and future of computers and computing, followed by an…

  6. A Computational Theory of Modelling

    NASA Astrophysics Data System (ADS)

    Rossberg, Axel G.

    2003-04-01

    A metatheory is developed which characterizes the relationship between a modelled system, which complies with some ``basic theory'', and a model, which does not, and yet reproduces important aspects of the modelled system. A model is represented by an (in a certain sense, s.b.) optimal algorithm which generates data that describe the model's state or evolution complying with a ``reduced theory''. Theories are represented by classes of (in a similar sense, s.b.) optimal algorithms that test if their input data comply with the theory. The metatheory does not prescribe the formalisms (data structure, language) to be used for the description of states or evolutions. Transitions to other formalisms and loss of accuracy, common to theory reduction, are explicitly accounted for. The basic assumption of the theory is that resources such as the code length (~ programming time) and the computation time for modelling and testing are costly, but the relative cost of each resource is unknown. Thus, if there is an algorithm a for which there is no other algorithm b solving the same problem but using less of each resource, then a is considered optimal. For tests (theories), the set X of wrongly admitted inputs is treated as another resource. It is assumed that X1 is cheaper than X2 when X1 ⊂ X2 (X1 ≠ X2). Depending on the problem, the algorithmic complexity of a reduced theory can be smaller or larger than that of the basic theory. The theory might help to distinguish actual properties of complex systems from mere mental constructs. An application to complex spatio-temporal patterns is discussed.

  7. Computational model for chromosomal instability

    NASA Astrophysics Data System (ADS)

    Zapperi, Stefano; Bertalan, Zsolt; Budrikis, Zoe; La Porta, Caterina

    2015-03-01

    Faithful segregation of genetic material during cell division requires alignment of the chromosomes between the spindle poles and attachment of their kinetochores to each of the poles. Failure of these complex dynamical processes leads to chromosomal instability (CIN), a characteristic feature of several diseases including cancer. While a multitude of biological factors regulating chromosome congression and bi-orientation have been identified, it is still unclear how they are integrated into a coherent picture. Here we address this issue by a three dimensional computational model of motor-driven chromosome congression and bi-orientation. Our model reveals that successful cell division requires control of the total number of microtubules: if this number is too small bi-orientation fails, while if it is too large not all the chromosomes are able to congress. The optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles. Our results shed new light on the origin of several pathological conditions related to chromosomal instability.

  8. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.
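
    As a much-simplified sketch of the terrain/non-terrain labeling step described above (this is not the TEXAS algorithm; the grid-minimum heuristic, cell size, and tolerance are assumptions), a basic ground filter can be written as follows.

        # Crude grid-based ground filter for a point cloud (illustrative only).
        import numpy as np

        def label_ground(points, cell=1.0, tol=0.3):
            """points: (N, 3) array of x, y, z. Returns a boolean terrain mask."""
            ij = np.floor(points[:, :2] / cell).astype(int)
            ij -= ij.min(axis=0)                                # shift to nonnegative
            keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]   # flatten cell index
            zmin = np.full(keys.max() + 1, np.inf)
            np.minimum.at(zmin, keys, points[:, 2])             # per-cell min elevation
            return points[:, 2] <= zmin[keys] + tol             # near local minimum => terrain

        pts = np.random.rand(1000, 3) * [50, 50, 5]  # synthetic cloud, meters
        print("terrain points:", label_ground(pts).sum())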

  9. Computational modeling of membrane proteins

    PubMed Central

    Leman, Julia Koehler; Ulmschneider, Martin B.; Gray, Jeffrey J.

    2014-01-01

    The determination of membrane protein (MP) structures has always trailed that of soluble proteins due to difficulties in their overexpression, reconstitution into membrane mimetics, and subsequent structure determination. The percentage of MP structures in the protein databank (PDB) has been at a constant 1-2% for the last decade. In contrast, over half of all drugs target MPs, only highlighting how little we understand about drug-specific effects in the human body. To reduce this gap, researchers have attempted to predict structural features of MPs even before the first structure was experimentally elucidated. In this review, we present current computational methods to predict MP structure, starting with secondary structure prediction, prediction of trans-membrane spans, and topology. Even though these methods generate reliable predictions, challenges such as predicting kinks or precise beginnings and ends of secondary structure elements are still waiting to be addressed. We describe recent developments in the prediction of 3D structures of both α-helical MPs as well as β-barrels using comparative modeling techniques, de novo methods, and molecular dynamics (MD) simulations. The increase of MP structures has (1) facilitated comparative modeling due to availability of more and better templates, and (2) improved the statistics for knowledge-based scoring functions. Moreover, de novo methods have benefitted from the use of correlated mutations as restraints. Finally, we outline current advances that will likely shape the field in the forthcoming decade. PMID:25355688

  10. Cupola Furnace Computer Process Model

    SciTech Connect

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  11. Studies of Cosmic Ray Modulation and Energetic Particle Propagation in Time-Dependent 3-Dimensional Heliospheric Magnetic Fields

    NASA Technical Reports Server (NTRS)

    Zhang, Ming

    2005-01-01

    The primary goal of this project was to perform theoretical calculations of the propagation of cosmic rays and energetic particles in 3-dimensional heliospheric magnetic fields. We used Markov stochastic process simulation to achieve this goal. We developed computational software that can be used to study particle propagation in two examples of heliospheric magnetic fields that must be treated in 3 dimensions: the heliospheric magnetic field suggested by Fisk (1996) and a global heliosphere including the region beyond the termination shock. The results from our model calculations were compared with particle measurements from Ulysses, Earth-based spacecraft such as IMP-8, WIND, and ACE, and the Voyagers and Pioneers in the outer heliosphere to test the magnetic field models. We particularly looked for features of particle variations that would allow us to clearly distinguish the Fisk magnetic field from the conventional Parker spiral field. The computer code will eventually lead to a new generation of integrated software for solving complicated problems of particle acceleration, propagation, and modulation in a realistic 3-dimensional heliosphere with realistic magnetic fields and solar wind within a single computational approach.

  12. Disciplines, models, and computers: the path to computational quantum chemistry.

    PubMed

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market is much bigger than the community of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), arguably the pivotal theory in the turn to computational quantum chemistry around 1990.

  13. The Fermilab Central Computing Facility architectural model

    SciTech Connect

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs.

  14. Computer Modeling of a Fusion Plasma

    SciTech Connect

    Cohen, B I

    2000-12-15

    Progress in the study of plasma physics and controlled fusion has been profoundly influenced by dramatic increases in computing capability. Computational plasma physics has become an equal partner with experiment and traditional theory. This presentation illustrates some of the progress in computer modeling of plasma physics and controlled fusion.

  15. Model of computation for Fourier optical processors

    NASA Astrophysics Data System (ADS)

    Naughton, Thomas J.

    2000-05-01

    We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors.
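
    The filtering and correlation techniques such a model describes map naturally onto digital FFT simulation. The sketch below (Python/NumPy, with a synthetic scene and template as assumptions) performs the frequency-plane matched filtering that a Fourier-plane optical correlator implements; it is an illustration of the general technique, not code from the paper.

        # Frequency-plane matched filtering, simulated with FFTs (illustrative).
        import numpy as np

        scene = np.zeros((64, 64)); scene[20:28, 30:38] = 1.0     # object in scene
        template = np.zeros((64, 64)); template[:8, :8] = 1.0     # reference pattern

        # multiply scene spectrum by the conjugate template spectrum
        corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template)))
        peak = np.unravel_index(np.abs(corr).argmax(), corr.shape)
        print("correlation peak at", peak)  # locates the object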

  16. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  17. Predictive Models and Computational Toxicology

    EPA Science Inventory

    Understanding the potential health risks posed by environmental chemicals is a significant challenge elevated by the large number of diverse chemicals with generally uncharacterized exposures, mechanisms, and toxicities. The ToxCast computational toxicology research program was l...

  18. A model nursing computer resource center.

    PubMed

    Mueller, Sheryl S; Pullen, Richard L; McGee, K Sue

    2002-01-01

    Nursing graduates are required to demonstrate computer technology skills and critical reflective thinking skills in the workplace. The authors discuss a model computer resource center that enhances the acquisition of these requisite skills by students in both an associate degree and vocational nursing program. The computer resource center maximizes student learning and promotes faculty effectiveness and efficiency by a "full-service" approach to computerized testing, information technology instruction, online research, and interactive computer program practice. PMID:12023644

  19. Computational modeling of vascular anastomoses.

    PubMed

    Migliavacca, Francesco; Dubini, Gabriele

    2005-06-01

    Recent development of computational technology allows a level of knowledge of biomechanical factors in the healthy or pathological cardiovascular system that was unthinkable a few years ago. In particular, computational fluid dynamics (CFD) and computational structural (CS) analyses have been used to evaluate specific quantities, such as fluid and wall stresses and strains, which are very difficult to measure in vivo. Indeed, CFD and CS offer much more variability and resolution than in vitro and in vivo methods, yet computations must be validated by careful comparison with experimental and clinical data. The enormous parallel development of clinical imaging such as magnetic resonance or computed tomography opens a new way toward a detailed patient-specific description of the actual hemodynamics and structural behavior of living tissues. Coupling of CFD/CS and clinical images is becoming a standard evaluation that is expected to become part of the clinical practice in the diagnosis and in the surgical planning in advanced medical centers. This review focuses on computational studies of fluid and structural dynamics of a number of vascular anastomoses: the coronary bypass graft anastomoses, the arterial peripheral anastomoses, the arterio-venous graft anastomoses and the vascular anastomoses performed in the correction of congenital heart diseases. PMID:15772842

  20. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  1. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

    We will present our recent studies on computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also essential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the results of free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed the deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal

  2. Synthetic computational models of selective attention.

    PubMed

    Raffone, Antonino

    2006-11-01

    Computational modeling plays an important role in understanding the mechanisms of attention. In this framework, synthetic computational models can uniquely contribute to integrating different explanatory levels and neurocognitive findings, with special reference to the integration of attention and awareness processes. Novel combined experimental and computational investigations can lead to important insights, as in the revived domain of neural correlates of attention- and awareness-related meditation states and traits.

  3. COLD-SAT Dynamic Model Computer Code

    NASA Technical Reports Server (NTRS)

    Bollenbacher, G.; Adams, N. S.

    1995-01-01

    COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.

  4. Applications of computer modeling to fusion research

    SciTech Connect

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling.

  5. High heat program, thermal hydraulic computer models

    SciTech Connect

    Ogden, D.M.

    1998-03-05

    The purpose of this report is to describe the thermal hydraulic computer models, the computer model benchmarking, and the methodology to be used in performing the analysis necessary for the resolution of the high heat safety issue for Tank 241-C-106.

  6. Leverage points in a computer model

    NASA Astrophysics Data System (ADS)

    Janošek, Michal

    2016-06-01

    This article is focused on the analysis of leverage points (developed by D. Meadows) in a computer model. The goal is to find out whether these leverage points can be identified in a computer model (using a predator-prey model as an example) and to determine how the particular parameters, their ranges, and the monitored variables of the model are associated with the concept of leverage points.

  7. Model Railroading and Computer Fundamentals

    ERIC Educational Resources Information Center

    McCormick, John W.

    2007-01-01

    Less than one half of one percent of all processors manufactured today end up in computers. The rest are embedded in other devices such as automobiles, airplanes, trains, satellites, and nearly every modern electronic device. Developing software for embedded systems requires a greater knowledge of hardware than developing for a typical desktop…

  8. 3DIVS: 3-Dimensional Immersive Virtual Sculpting

    SciTech Connect

    Kuester, F; Duchaineau, M A; Hamann, B; Joy, K I; Uva, A E

    2001-10-03

    Virtual Environments (VEs) have the potential to revolutionize traditional product design by enabling the transition from conventional CAD to fully digital product development. The presented prototype system targets closing the "digital gap" introduced by the need for physical models such as clay models or mockups in the traditional product design and evaluation cycle. We describe a design environment that provides an intuitive human-machine interface for the creation and manipulation of three-dimensional (3D) models in a semi-immersive design space, focusing on ease of use and increased productivity for both designers and CAD engineers.

  9. A critical evaluation of secondary cancer risk models applied to Monte Carlo dose distributions of 2-dimensional, 3-dimensional conformal and hybrid intensity-modulated radiation therapy for breast cancer

    NASA Astrophysics Data System (ADS)

    Joosten, A.; Bochud, F.; Moeckli, R.

    2014-08-01

    The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and all solid organs. Then, different general non-linear risk models were applied to the whole body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%) underlying the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially considering the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which then translates into inconsistent results on the potential higher risk of one technique compared to another. We found however that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated even though the magnitude of this reduction varied substantially with the different approaches investigated. Based on the epidemiological data available, a reasonable

  10. A critical evaluation of secondary cancer risk models applied to Monte Carlo dose distributions of 2-dimensional, 3-dimensional conformal and hybrid intensity-modulated radiation therapy for breast cancer.

    PubMed

    Joosten, A; Bochud, F; Moeckli, R

    2014-08-21

    The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and all solid organs. Then, different general non-linear risk models were applied to the whole body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%) underlying the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially considering the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which then translates into inconsistent results on the potential higher risk of one technique compared to another. We found however that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated even though the magnitude of this reduction varied substantially with the different approaches investigated. Based on the epidemiological data available, a reasonable

  11. Percutaneous Transcatheter Mitral Valve Replacement: Patient-specific Three-dimensional Computer-based Heart Model and Prototyping.

    PubMed

    Vaquerizo, Beatriz; Theriault-Lauzier, Pascal; Piazza, Nicolo

    2015-12-01

    Mitral regurgitation is the most prevalent valvular heart disease worldwide. Despite the widespread availability of curative surgical intervention, a considerable proportion of patients with severe mitral regurgitation are not referred for treatment, largely due to the presence of left ventricular dysfunction, advanced age, and comorbid illnesses. Transcatheter mitral valve replacement is a promising therapeutic alternative to traditional surgical valve replacement. The complex anatomical and pathophysiological nature of the mitral valvular complex, however, presents significant challenges to the successful design and implementation of novel transcatheter mitral replacement devices. Patient-specific 3-dimensional computer-based models enable accurate assessment of the mitral valve anatomy and preprocedural simulations for transcatheter therapies. Such information may help refine the design features of novel transcatheter mitral devices and enhance procedural planning. Herein, we describe a novel medical image-based processing tool that facilitates accurate, noninvasive assessment of the mitral valvular complex, by creating precise three-dimensional heart models. The 3-dimensional computer reconstructions are then converted to a physical model using 3-dimensional printing technology, thereby enabling patient-specific assessment of the interaction between device and patient. It may provide new opportunities for a better understanding of the mitral anatomy-pathophysiology-device interaction, which is of critical importance for the advancement of transcatheter mitral valve replacement.

  12. Computational modeling of peripheral pain: a commentary.

    PubMed

    Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S

    2015-06-11

    This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications.

  13. Computational modeling of peripheral pain: a commentary.

    PubMed

    Argüello, Erick J; Silva, Ricardo J; Huerta, Mónica K; Avila, René S

    2015-01-01

    This commentary is intended to find possible explanations for the low impact of computational modeling on pain research. We discuss the main strategies that have been used in building computational models for the study of pain. The analysis suggests that traditional models lack biological plausibility at some levels, they do not provide clinically relevant results, and they cannot capture the stochastic character of neural dynamics. On this basis, we provide some suggestions that may be useful in building computational models of pain with a wider range of applications. PMID:26062616

  14. Hybrid modeling in computational neuropsychiatry.

    PubMed

    Marin-Sanguino, A; Mendoza, E R

    2008-09-01

    The aim of building mathematical models is to provide a formal structure to explain the behaviour of a whole in terms of its parts. In the particular case of neuropsychiatry, the available information upon which models are to be built is distributed over several fields of expertise. Molecular and cellular biologists, physiologists and clinicians all hold valuable information about the system, which has to be distilled into a unified view. Furthermore, modelling is not a sequential process in which the roles of field and modelling experts are separated. Model building is done through iterations in which all the parties have to keep an active role. This work presents some modelling techniques and guidelines on how they can be combined in order to simplify modelling efforts in neuropsychiatry. The proposed approach involves two well-known modelling techniques, Petri nets and Biochemical Systems Theory, which provide a general, well-proven, structured definition for biological models.
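
    A minimal sketch of the Petri net side of this approach is given below; the two-transition signalling chain, the place names, and the marking are invented for illustration and are not taken from the paper.

      # Petri net firing rule: a transition is enabled when its input places
      # hold enough tokens; firing moves tokens from inputs to outputs.
      def enabled(marking, pre):
          return all(marking.get(p, 0) >= n for p, n in pre.items())

      def fire(marking, pre, post):
          m = dict(marking)
          for p, n in pre.items():
              m[p] -= n
          for p, n in post.items():
              m[p] = m.get(p, 0) + n
          return m

      # invented example: t1 binds ligand + receptor; t2 produces a response
      t1 = ({"ligand": 1, "receptor": 1}, {"complex": 1})
      t2 = ({"complex": 1}, {"response": 1})
      m = {"ligand": 1, "receptor": 1}
      for pre, post in (t1, t2):
          if enabled(m, pre):
              m = fire(m, pre, post)
      print(m)  # {'ligand': 0, 'receptor': 0, 'complex': 0, 'response': 1}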

  15. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  16. Introducing Seismic Tomography with Computational Modeling

    NASA Astrophysics Data System (ADS)

    Neves, R.; Neves, M. L.; Teodoro, V.

    2011-12-01

    Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students the possibility to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on learning seismological concepts and processes, taking advantage of basic scientific computation methods and tools.

  17. Computational Model for Corneal Transplantation

    NASA Astrophysics Data System (ADS)

    Cabrera, Delia

    2003-10-01

    We evaluated the refractive consequences of corneal transplants using a biomechanical model with homogeneous and inhomogeneous Young's modulus distributions within the cornea, taking into account ablation of some stromal tissue. A FEM model was used to simulate corneal transplants in a diseased cornea. The diseased cornea was modeled as an axisymmetric structure using a nonlinearly elastic, isotropic formulation. The model simulating the penetrating keratoplasty procedure produces a larger change in postoperative corneal curvature than the models simulating the anterior and posterior lamellar graft procedures. When a lenticle-shaped tissue was ablated in the graft during anterior and posterior keratoplasty, the models provided an additional correction of about -3.85 and -4.45 diopters, respectively. Despite the controversy around treating corneal thinning disorders with volume-removal procedures, the results indicate that significant changes in corneal refractive power could be introduced by a corneal transplantation combined with myopic laser ablation.

  18. Teaching Environmental Systems Modelling Using Computer Simulation.

    ERIC Educational Resources Information Center

    Moffatt, Ian

    1986-01-01

    A computer modeling course in environmental systems and dynamics is presented. The course teaches senior undergraduates to analyze a system of interest, construct a system flow chart, and write computer programs to simulate real world environmental processes. An example is presented along with a course evaluation, figures, tables, and references.…

  19. Modeling communication in cluster computing

    SciTech Connect

    Stoica, I.; Sultan, F.; Keyes, D.

    1995-12-01

    We introduce a model for communication costs in parallel processing environments, called the "hyperbolic model," that generalizes two-parameter dedicated-link models in an analytically simple way. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes and internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. A CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining a good approximation for the service time. In [4] we demonstrate a tight fit of the estimates of the cost of communication based on our model with actual measurements of the communication and synchronization time between end processes. We also show the compatibility of our model (to within a factor of 3/4) with the recently proposed LogP model.
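
    The abstract does not spell out the hyperbolic function itself, so the sketch below assumes a generic two-parameter form s(m) = sqrt(a^2 + (b*m)^2), which approaches the latency a for small messages and the bandwidth term b*m for large ones, and reduces two CBs in series to an equivalent (a, b) pair by matching endpoints. Both the functional form and the reduction rule are assumptions for illustration, not the paper's definitions.

      import math

      def service_time(m, a, b):
          """Assumed hyperbolic service time of one CB for an m-byte message."""
          return math.sqrt(a**2 + (b * m) ** 2)

      def reduce_series(params, sizes):
          """Fit an equivalent (a, b) for CBs in series by matching the summed
          service time at a small and a large message size (illustrative rule)."""
          total = lambda m: sum(service_time(m, a, b) for a, b in params)
          m_small, m_large = sizes
          a_eq = total(m_small)              # latency-dominated point
          b_eq = total(m_large) / m_large    # bandwidth-dominated point
          return a_eq, b_eq

      # two CBs: 5 us / 1 GB/s and 2 us / 0.5 GB/s (invented numbers)
      print(reduce_series([(5e-6, 1e-9), (2e-6, 2e-9)], (1, 1e8)))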

  20. Computer modeling of human decision making

    NASA Technical Reports Server (NTRS)

    Gevarter, William B.

    1991-01-01

    Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included, as well as models which include motivation. Both models which have associated computer programs and those that do not are considered. Since flow diagrams that assist in constructing computer simulations of such models were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in the construction of more realistic future simulations of human decision making.

  1. Predictive Models and Computational Embryology

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  2. Computational modeling of ultraviolet disinfection.

    PubMed

    Younis, B A; Yang, T H

    2010-01-01

    The efficient design of ultraviolet light (UV) systems for water and wastewater treatment requires detailed knowledge of the patterns of fluid motion that occur in the disinfection channel. This knowledge is increasingly being obtained using Computational Fluid Dynamics (CFD) software packages that solve the equations governing turbulent fluid-flow motion. In this work, we present predictions of the patterns of flow and the extent of disinfection in a conventional reactor consisting of an open channel with an array of UV lamps placed with their axes perpendicular to the direction of flow. It is shown that the resulting flow is inherently unsteady due to the regular shedding of vortices from the submerged lamps. It is also shown that the accurate prediction of the hydraulic residence time and, consequently, the extent of disinfection is strongly dependent on the ability of the CFD method to capture the occurrence and strength of the vortex shedding, and its effects on the turbulent mixing processes.
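
    As a back-of-envelope complement to the CFD discussion, the sketch below assumes first-order (Chick-Watson) inactivation, in which the surviving fraction depends on the UV dose, i.e. fluence rate times residence time; the rate constant and operating numbers are invented. The exponential dependence is precisely why errors in the predicted hydraulic residence time map directly into errors in the predicted extent of disinfection.

      import math

      def survival(fluence_rate, residence_time, k=0.5):
          """Surviving fraction N/N0 under Chick-Watson kinetics, dose D = I*t."""
          return math.exp(-k * fluence_rate * residence_time)

      for t in (2.0, 4.0, 6.0):   # candidate residence times (s), invented
          print(t, survival(fluence_rate=5.0, residence_time=t))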

  3. Computer Model Locates Environmental Hazards

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Catherine Huybrechts Burton founded San Francisco-based Endpoint Environmental (2E) LLC in 2005 while she was a student intern and project manager at Ames Research Center with NASA's DEVELOP program. The 2E team created the Tire Identification from Reflectance model, which algorithmically processes satellite images using turnkey technology to retain only the darkest parts of an image. This model allows 2E to locate piles of rubber tires, which often are stockpiled illegally and cause hazardous environmental conditions and fires.

  4. Enhanced absorption cycle computer model

    NASA Astrophysics Data System (ADS)

    Grossman, G.; Wilk, M.

    1993-09-01

    Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
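
    The modular unit-subroutine architecture described above can be sketched as follows: each unit contributes the residuals of its governing equations, and the program solves the assembled system for the unknown state points. The two-unit toy cycle (a heater feeding a mixer) and its numbers are invented for illustration and are far simpler than the absorption units in the actual code.

      # Each "unit subroutine" returns the residual of its governing equation;
      # the assembled system is solved for the unknown state-point values.
      from scipy.optimize import fsolve

      def heater(T_in, T_out, Q, m_dot, cp=4.18):
          # energy balance residual: Q = m_dot * cp * (T_out - T_in)
          return Q - m_dot * cp * (T_out - T_in)

      def mixer(T1, T2, T_mix):
          # equal-flow adiabatic mixing residual
          return T_mix - 0.5 * (T1 + T2)

      def system(x):
          T_out, T_mix = x                          # unknown temperatures
          return [heater(20.0, T_out, 418.0, 2.0),  # 418 kW into a 2 kg/s stream
                  mixer(T_out, 30.0, T_mix)]

      T_out, T_mix = fsolve(system, [50.0, 40.0])
      print(round(T_out, 1), round(T_mix, 1))       # 70.0 50.0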

  5. Applications of Computational Modeling in Cardiac Surgery

    PubMed Central

    Lee, Lik Chuan; Genet, Martin; Dang, Alan B.; Ge, Liang; Guccione, Julius M.; Ratcliffe, Mark B.

    2014-01-01

    Although computational modeling is common in many areas of science and engineering, only recently have advances in experimental techniques and medical imaging allowed this tool to be applied in cardiac surgery. Despite its infancy in cardiac surgery, computational modeling has been useful in calculating the effects of clinical devices and surgical procedures. In this review, we present several examples that demonstrate the capabilities of computational cardiac modeling in cardiac surgery. Specifically, we demonstrate its ability to simulate surgery, predict myofiber stress and pump function, and quantify changes to regional myocardial material properties. In addition, issues that would need to be resolved in order for computational modeling to play a greater role in cardiac surgery are discussed. PMID:24708036

  6. A new epidemic model of computer viruses

    NASA Astrophysics Data System (ADS)

    Yang, Lu-Xing; Yang, Xiaofan

    2014-06-01

    This paper addresses the epidemiological modeling of computer viruses. By incorporating the effect of removable storage media, considering the possibility of connecting infected computers to the Internet, and removing the conservative restriction on the total number of computers connected to the Internet, a new epidemic model is proposed. Unlike most previous models, the proposed model has no virus-free equilibrium and has a unique endemic equilibrium. With the aid of the theory of asymptotically autonomous systems as well as the generalized Poincare-Bendixson theorem, the endemic equilibrium is shown to be globally asymptotically stable. By analyzing the influence of different system parameters on the steady number of infected computers, a collection of policies is recommended to prohibit the virus prevalence.
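
    The sketch below is a one-compartment caricature of the qualitative behaviour described above, not the paper's actual equations: a constant reinfection pressure from removable media removes the virus-free equilibrium, so every trajectory settles at a single endemic level.

      from scipy.integrate import solve_ivp

      beta, gamma, eps = 0.4, 0.2, 0.01   # infection, cure, media reinfection rates

      def rhs(t, y):
          i = y[0]                        # fraction of infected computers
          # eps*(1-i) keeps reinfecting clean machines, so i = 0 is not fixed
          return [beta * i * (1 - i) + eps * (1 - i) - gamma * i]

      sol = solve_ivp(rhs, (0, 200), [0.0])
      print(round(sol.y[0, -1], 3))       # endemic equilibrium near 0.52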

  7. Computer Model Buildings Contaminated with Radioactive Material

    1998-05-19

    The RESRAD-BUILD computer code is a pathway analysis model designed to evaluate the potential radiological dose incurred by an individual who works or lives in a building contaminated with radioactive material.

  8. Computational modeling and multilevel cancer control interventions.

    PubMed

    Morrissey, Joseph P; Lich, Kristen Hassmiller; Price, Rebecca Anhang; Mandelblatt, Jeanne

    2012-05-01

    This chapter presents an overview of computational modeling as a tool for multilevel cancer care and intervention research. Model-based analyses have been conducted at various "beneath the skin" or biological scales as well as at various "above the skin" or socioecological levels of cancer care delivery. We review the basic elements of computational modeling and illustrate its applications in four cancer control intervention areas: tobacco use, colorectal cancer screening, cervical cancer screening, and racial disparities in access to breast cancer care. Most of these models have examined cancer processes and outcomes at only one or two levels. We suggest ways these models can be expanded to consider interactions involving three or more levels. Looking forward, a number of methodological, structural, and communication barriers must be overcome to create useful computational models of multilevel cancer interventions and population health.

  9. Parallel computing in atmospheric chemistry models

    SciTech Connect

    Rotman, D.

    1996-02-01

    Studies of atmospheric chemistry are of high scientific interest, involve computations that are complex and intense, and require enormous amounts of I/O. Current supercomputer computational capabilities are limiting the studies of stratospheric and tropospheric chemistry and will certainly not be able to handle the upcoming coupled chemistry/climate models. To enable such calculations, the authors have developed a computing framework that allows computations on a wide range of computational platforms, including massively parallel machines. Because of the fast paced changes in this field, the modeling framework and scientific modules have been developed to be highly portable and efficient. Here, the authors present the important features of the framework and focus on the atmospheric chemistry module, named IMPACT, and its capabilities. Applications of IMPACT to aircraft studies will be presented.

  10. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas. PMID:27354192

  11. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  12. Economic Analysis. Computer Simulation Models.

    ERIC Educational Resources Information Center

    Sterling Inst., Washington, DC. Educational Technology Center.

    A multimedia course in economic analysis was developed and used in conjunction with the United States Naval Academy. (See ED 043 790 and ED 043 791 for final reports of the project evaluation and development model.) This volume of the text discusses the simulation of behavioral relationships among variable elements in an economy and presents…

  13. Computational study of lattice models

    NASA Astrophysics Data System (ADS)

    Zujev, Aleksander

    This dissertation is composed of the descriptions of a few projects undertook to complete my doctorate at the University of California, Davis. Different as they are, the common feature of them is that they all deal with simulations of lattice models, and physics which results from interparticle interactions. As an example, both the Feynman-Kikuchi model (Chapter 3) and Bose-Fermi mixture (Chapter 4) deal with the conditions under which superfluid transitions occur. The dissertation is divided into two parts. Part I (Chapters 1-2) is theoretical. It describes the systems we study - superfluidity and particularly superfluid helium, and optical lattices. The numerical methods of working with them are described. The use of Monte Carlo methods is another unifying theme of the different projects in this thesis. Part II (Chapters 3-6) deals with applications. It consists of 4 chapters describing different projects. Two of them, Feynman-Kikuchi model, and Bose-Fermi mixture are finished and published. The work done on t - J model, described in Chapter 5, is more preliminary, and the project is far from complete. A preliminary report on it was given on 2009 APS March meeting. The Isentropic project, described in the last chapter, is finished. A report on it was given on 2010 APS March meeting, and a paper is in preparation. The quantum simulation program used for Bose-Fermi mixture project was written by our collaborators Valery Rousseau and Peter Denteneer. I had written my own code for the other projects.

  14. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  15. Computer Modeling of Direct Metal Laser Sintering

    NASA Technical Reports Server (NTRS)

    Cross, Matthew

    2014-01-01

    A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is determining the temperature history of parts fabricated using DMLS, in order to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies that reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
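
    A toy stand-in for the temperature-history calculation is sketched below: explicit 1-D conduction with a Gaussian laser spot marching along the part. The material numbers, spot size and scan speed are placeholders rather than DMLS process parameters, and the real model's layer addition and SINDA specifics are omitted.

      import numpy as np

      nx, dx, dt, nsteps = 200, 1e-4, 1e-4, 1500   # grid points, spacing (m), step (s)
      alpha, v, w, q = 5e-6, 0.01, 3e-4, 1e5       # diffusivity, scan speed, spot width, source
      T = np.full(nx, 300.0)                       # initial temperature field (K)
      x = np.arange(nx) * dx

      for n in range(nsteps):
          lap = np.zeros_like(T)
          lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
          spot = q * np.exp(-((x - v * n * dt) / w) ** 2)   # moving laser source (K/s)
          T += dt * (alpha * lap + spot)   # explicit update (stable: alpha*dt/dx^2 = 0.05)

      print(round(T.max(), 1))             # peak temperature along the scanned track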

  16. Computational Modeling of Biological Development

    NASA Astrophysics Data System (ADS)

    Glazier, James

    2005-03-01

    The patterns of gene expression are only part of the complex set of processes that govern the formation of tissue structures during embryonic development. Cells need to differentiate and to migrate long distances through tissues. How do they know what to become and where to go? Cells secrete and follow gradients of diffusible chemicals (chemotaxis) and secrete non-diffusing extracellular matrix. In addition, variable adhesion molecules expressed on cells' surfaces help them to form coherent structures by differential adhesion. CompuCell is a public domain modeling environment which implements a simple, energy minimization framework to describe these and related morphogenetic processes. One attractive feature of this approach is that it can interface at small length scales with increasingly sophisticated models for genetic regulation and biochemistry inside individual cells and at large length scales with continuum Partial Differential Equation and Finite Element models. We provide examples of how this method applies to problems including the development of the bone structure in the avian wing, the life cycle of the simple organism, Dictyostelium discoideum and to vascular development and show how it ``postdicts'' the results of VE-cadherin knock-out experiments on in vitro vasculogenesis experiments.
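
    The energy-minimization framework mentioned above is, at its core, a cellular Potts model; the sketch below shows the basic Metropolis index-copy step with a contact-energy term only (no volume constraint, chemotaxis or extracellular matrix), and all numbers are invented.

      import random, math

      N, J, T = 20, 1.0, 1.5   # lattice size, contact energy, fluctuation "temperature"
      grid = [[random.choice([1, 2]) for _ in range(N)] for _ in range(N)]

      def local_energy(g, i, j):
          s = g[i][j]
          nbrs = [g[(i + di) % N][(j + dj) % N]
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
          return sum(J for n in nbrs if n != s)   # unlike neighbours cost energy

      for _ in range(50000):
          i, j = random.randrange(N), random.randrange(N)
          di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
          src = grid[(i + di) % N][(j + dj) % N]
          if src == grid[i][j]:
              continue
          old = grid[i][j]
          before = local_energy(grid, i, j)
          grid[i][j] = src
          dE = local_energy(grid, i, j) - before   # site-centred change (toy approximation)
          if dE > 0 and random.random() >= math.exp(-dE / T):
              grid[i][j] = old                     # reject energetically costly copies
      # after many attempts the two cell types coarsen into compact domains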

  17. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than those computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by Adaboost and Naïve Bayes.
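
    For reference, the four measures named above follow directly from the confusion matrix; in the toy check below, label 1 stands for spam and 0 for ham (our convention, not the paper's).

      def measures(y_true, y_pred):
          tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
          fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
          fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
          tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
          precision = tp / (tp + fp)
          recall = tp / (tp + fn)
          f1 = 2 * precision * recall / (precision + recall)
          accuracy = (tp + tn) / len(y_true)
          return precision, recall, f1, accuracy

      print(measures([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
      # (0.666..., 0.666..., 0.666..., 0.6)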

  18. Piping network model program for small computers

    SciTech Connect

    Kruckenberg, N.E.

    1986-07-01

    A model of fluid piping networks was developed to aid in solving problems in the recirculating water coolant system at the Portsmouth Gaseous Diffusion Plant. The piping network model can be used to solve steady-state problems in which water flow rates and temperatures are to be determined, or in which temperature is an important factor in determining pressure losses. The model can be implemented on desktop computers to perform these calculations as needed to track changing process conditions. The report includes a description of the coolant system, the mathematical development of the computer model, a case study utilizing the model, and a listing and sample run of the computer codes. 2 figs., 1 tab.
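
    The report's solution method is not named in the abstract; as a hedged illustration, the sketch below balances a single loop with a Hardy Cross-style correction, splitting a total flow between two parallel pipes whose head loss follows h = r*Q*|Q|.

      def hardy_cross(r1, r2, q_total, iters=50):
          q1 = q_total / 2                     # initial guess for pipe 1
          for _ in range(iters):
              q2 = q_total - q1
              h = r1 * q1 * abs(q1) - r2 * q2 * abs(q2)   # net head loss round loop
              dh = 2 * (r1 * abs(q1) + r2 * abs(q2))      # derivative w.r.t. q1
              q1 -= h / dh                                # loop correction
          return q1, q_total - q1

      q1, q2 = hardy_cross(r1=4.0, r2=1.0, q_total=3.0)
      print(round(q1, 3), round(q2, 3))   # 1.0 2.0 (balances 4*1^2 == 1*2^2)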

  19. Computational modeling of peptide-aptamer binding.

    PubMed

    Rhinehardt, Kristen L; Mohan, Ram V; Srinivas, Goundla

    2015-01-01

    Evolution is the progressive process that holds each living creature in its grasp. From strands of DNA, evolution shapes life in response to our ever-changing environment and time. It is the continued study of this most primitive process that has led to the advancement of modern biology. The success and failure in the reading, processing, replication, and expression of genetic code and its resulting biomolecules keep the delicate balance of life. Investigations into these fundamental processes continue to make headlines as science continues to explore smaller-scale interactions of increasing complexity. New applications and advanced understanding of DNA, RNA, peptides, and proteins are pushing technology and science forward and together. Today the addition of computers and advances in science have led to the fields of computational biology and chemistry. Through these computational advances it is now possible not only to quantify the end results but also to visualize, analyze, and fully understand mechanisms by gaining deeper insights. The biomolecular motion governing physical and chemical phenomena can now be analyzed with the advent of computational modeling. Ever-increasing computational power combined with efficient algorithms and components is further expanding the fidelity and scope of such modeling and simulations. This chapter discusses computational methods applied to biological processes, in particular computational modeling of peptide-aptamer binding.

  20. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A; Wickett, M E; Duffy, P B; Rotman, D A

    2005-03-03

    The Center for Applied Scientific Computing (CASC) and the LLNL Atmospheric Science Division (ASD) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. As part of LLNL's participation in DOE's Scientific Discovery through Advanced Computing (SciDAC) program, members of CASC and ASD are collaborating with other DOE labs and NCAR in the development of a comprehensive, next-generation global climate model. This model incorporates the most current physics and numerics and capably exploits the latest massively parallel computers. One of LLNL's roles in this collaboration is the scalable parallelization of NASA's finite-volume atmospheric dynamical core. We have implemented multiple two-dimensional domain decompositions, where the different decompositions are connected by high-speed transposes. Additional performance is obtained through shared memory parallelization constructs and one-sided interprocess communication. The finite-volume dynamical core is particularly important to atmospheric chemistry simulations, where LLNL has a leading role.
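
    The two-dimensional domain decomposition mentioned above amounts to mapping each process rank to a block of the global grid; the index arithmetic below is a generic near-even partition, not LLNL's actual implementation, and it runs without MPI installed.

      def block_bounds(rank, px, py, nx, ny):
          """Local index ranges of `rank` in a px-by-py process grid over nx*ny."""
          r, c = divmod(rank, py)                # process grid position
          def split(n, p, k):                    # near-even 1-D partition
              base, rem = divmod(n, p)
              lo = k * base + min(k, rem)
              return lo, lo + base + (1 if k < rem else 0)
          return split(nx, px, r), split(ny, py, c)

      print(block_bounds(rank=5, px=2, py=4, nx=10, ny=10))
      # ((5, 10), (3, 6)) -> rows 5..9, columns 3..5 belong to rank 5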

  1. A computational model of the cerebellum

    SciTech Connect

    Travis, B.J.

    1990-01-01

    The need for realistic computational models of neural microarchitecture is growing increasingly apparent. While traditional neural networks have made inroads on understanding cognitive functions, more realism (in the form of structural and connectivity constraints) is required to explain processes such as vision or motor control. A highly detailed computational model of mammalian cerebellum has been developed. It is being compared to physiological recordings for validation purposes. The model is also being used to study the relative contributions of each component to cerebellar processing. 28 refs., 4 figs.

  2. A School Finance Computer Simulation Model

    ERIC Educational Resources Information Center

    Boardman, Gerald R.

    1974-01-01

    Presents a description of the computer simulation model developed by the National Educational Finance Project for use by States in planning and evaluating alternative approaches for State support programs. Provides a general introduction to the model, a program operation overview, a sample run, and some conclusions. (Author/WM)

  3. Modeling User Behavior in Computer Learning Tasks.

    ERIC Educational Resources Information Center

    Mantei, Marilyn M.

    Model building techniques from Artificial Intelligence and Information-Processing Psychology are applied to human-computer interface tasks to evaluate existing interfaces and suggest new and better ones. The model is in the form of an augmented transition network (ATN) grammar which is built by applying grammar induction heuristics on a sequential…

  4. A Computational Model of Selection by Consequences

    ERIC Educational Resources Information Center

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  5. Computer model of an electromagnetic accelerator

    SciTech Connect

    D'yakov, B.B.; Reznikov, B.I.

    1987-07-01

    The authors examine a computer model of an electromagnetic accelerator (rail gun) with a projectile accelerated by a plasma. They determine the effective length of the accelerator, the electrical efficiency of the equipment, and the plasma parameters. Numerical results are obtained for different parameters of the model electrical circuit. An example of a multisection rail gun is presented.

  6. Computer Modeling and Visualization in Design Technology: An Instructional Model.

    ERIC Educational Resources Information Center

    Guidera, Stan

    2002-01-01

    Design visualization can increase awareness of issues related to perceptual and psychological aspects of design that computer-assisted design and computer modeling may not allow. A pilot university course developed core skills in modeling and simulation using visualization. Students were consistently able to meet course objectives. (Contains 16…

  7. Concepts to accelerate water balance model computation

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Casper, Markus; Gemmar, Peter

    2010-05-01

    Computation time of water balance models has decreased with the increasing performance of CPUs within the last decades. Often, these advantages have been used to enhance the models, e.g. by enlarging spatial resolution or by using smaller simulation time steps. During the last few years, CPU development tended to focus on strong multi-core concepts rather than on 'simply being generally faster'. Additionally, computer clusters or even computer clouds have become much more commonly available. All these facts again extend our degrees of freedom in simulating water balance models - if the models are able to efficiently use the computer infrastructure. In the following, we present concepts to optimize especially repeated runs, and we generally discuss concepts of parallel computing opportunities.

    Surveyed model: In our examinations, we focused on the water balance model LARSIM. In this model, the catchment is subdivided into elements, each of which represents a certain section of a river and its contributory area. Each element is again subdivided into single compartments of homogeneous land use. During the simulation, the relevant hydrological processes are simulated individually for each compartment. The simulated runoff of all compartments leads into the river channel of the corresponding element. Finally, channel routing is simulated for all elements.

    Optimizing repeated runs: During a typical simulation, several input files have to be read before the simulation starts: the model structure, the initial model state and meteorological input files. Furthermore, some calculations have to be solved, like interpolating meteorological values. Thus, e.g. the application of Monte Carlo methods will typically use the following algorithm: 1) choose parameters, 2) set parameters in control files, 3) run model, 4) save result, 5) repeat from step 1. Obviously, the third step always includes the previously mentioned steps of reading and preprocessing (see the sketch below). Consequently, the model can be
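
    A minimal sketch of the optimization follows: hoist the file reading and interpolation out of the Monte Carlo loop so that step 3 no longer repeats work the parameters do not affect. All function names are illustrative, not LARSIM's API.

      import random

      def load_static_inputs():
          """Read model structure, initial state and meteorology once."""
          return {"structure": ..., "state0": ..., "meteo": ...}

      def run_model(static, params):
          """Stand-in for one water balance run with a given parameter set."""
          return sum(params) + random.random()   # placeholder result

      static = load_static_inputs()              # done once, not once per run
      results = []
      for _ in range(1000):                      # Monte Carlo loop
          params = [random.uniform(0, 1) for _ in range(3)]
          results.append((params, run_model(static, params)))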

  8. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  9. CDF computing and event data models

    SciTech Connect

    Snider, F.D.; /Fermilab

    2005-12-01

    The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.

  10. Do's and Don'ts of Computer Models for Planning

    ERIC Educational Resources Information Center

    Hammond, John S., III

    1974-01-01

    Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)

  11. Computational disease modeling – fact or fiction?

    PubMed Central

    Tegnér, Jesper N; Compte, Albert; Auffray, Charles; An, Gary; Cedersund, Gunnar; Clermont, Gilles; Gutkin, Boris; Oltvai, Zoltán N; Stephan, Klaas Enno; Thomas, Randy; Villoslada, Pablo

    2009-01-01

    Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. Life Sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational-modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems. PMID:19497118

  12. Aeroelastic Model Structure Computation for Envelope Expansion

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2007-01-01

    Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l(sub 1) penalty term on the parameter vector of the traditional l(sub 2) minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
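
    The structure-detection effect is easy to reproduce on synthetic data (not F/A-18 flight data): with an l(sub 1) penalty, the coefficients of inactive candidate terms are driven exactly to zero, leaving a parsimonious description. The sketch below assumes scikit-learn's Lasso and an invented six-term candidate set.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 6))               # six candidate regressors
      y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)

      model = Lasso(alpha=0.1).fit(X, y)
      print(np.round(model.coef_, 2))             # only terms 0 and 3 survive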

  14. Solving stochastic epidemiological models using computer algebra

    NASA Astrophysics Data System (ADS)

    Hincapie, Doracelly; Ospina, Juan

    2011-06-01

    Mathematical modeling in epidemiology is an important tool for understanding the ways in which diseases are transmitted and controlled. Mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, whereas stochastic models are based on very large systems of linear differential equations. Deterministic models admit a complete, rigorous, and automatic analysis of stability, both local and global, from which it is possible to derive algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat, and the analysis of their properties requires complicated considerations in mathematical statistics. In this work we propose to use computer algebra software to solve stochastic epidemic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in recent books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, which shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results and estimate basic parameters such as the basic reproductive rate, illustrating the stochastic threshold theorem. We claim that our algorithms and results are important tools for controlling disease in a globalized world.
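
    To make the computer algebra idea concrete, here is a minimal sketch in Python's sympy (standing in for the Maple used in the paper) that derives the deterministic SIR threshold algebraically; the stochastic calculations described above are more involved but follow the same symbolic workflow:

    ```python
    # Symbolic derivation of the SIR epidemic threshold / R0.
    import sympy as sp

    beta, gamma, S, I, N = sp.symbols('beta gamma S I N', positive=True)

    # Infected-compartment dynamics of the standard SIR model.
    dI = beta * S * I - gamma * I

    # Linear growth rate of I near the disease-free state (S = N).
    growth = sp.simplify(sp.diff(dI, I).subs(S, N))   # beta*N - gamma

    # growth > 0, rewritten as beta*N/gamma > 1, identifies R0.
    R0 = sp.simplify((growth + gamma) / gamma)        # beta*N/gamma
    print(R0)
    ```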

  15. A Novel Method of Orbital Floor Reconstruction Using Virtual Planning, 3-Dimensional Printing, and Autologous Bone.

    PubMed

    Vehmeijer, Maarten; van Eijnatten, Maureen; Liberton, Niels; Wolff, Jan

    2016-08-01

    Fractures of the orbital floor are often a result of traffic accidents or interpersonal violence. To date, numerous materials and methods have been used to reconstruct the orbital floor. However, simple and cost-effective 3-dimensional (3D) printing technologies for the treatment of orbital floor fractures are still sought. This study describes a simple, precise, cost-effective method of treating orbital fractures using 3D printing technologies in combination with autologous bone. Enophthalmos and diplopia developed in a 64-year-old female patient with an orbital floor fracture. A virtual 3D model of the fracture site was generated from computed tomography images of the patient. The fracture was virtually closed using spline interpolation. Furthermore, a virtual individualized mold of the defect site was created and manufactured using an inkjet printer. The tangible mold was subsequently used during surgery to sculpt an individualized autologous orbital floor implant. Virtual reconstruction of the orbital floor and the resulting mold enhanced the overall accuracy and efficiency of the surgical procedure. The sculpted autologous orbital floor implant showed an excellent fit in vivo. The combination of virtual planning and 3D printing offers an accurate and cost-effective treatment method for orbital floor fractures. PMID:27137437
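
    The "virtually closed using spline interpolation" step can be sketched as follows; this is an illustrative reconstruction under the assumption that the floor is treated as a heightfield z(x, y) sampled from CT, with entirely synthetic geometry rather than the patient data:

    ```python
    # Filling a defect in a surface by smooth interpolation from the
    # surrounding intact region (synthetic stand-in for CT data).
    import numpy as np
    from scipy.interpolate import griddata

    x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
    z = 0.3 * (x**2 + 0.5 * y**2)        # synthetic "intact" floor

    defect = (x**2 + y**2) < 0.15        # knock out a central region
    known = ~defect

    z_filled = griddata(
        np.column_stack([x[known], y[known]]),  # intact sample points
        z[known],                               # known heights
        (x, y),                                 # evaluate on full grid
        method='cubic',                         # smooth, spline-like fill
    )
    print("max fill error:", float(np.nanmax(np.abs(z_filled - z))))
    ```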

  17. Computational modeling of laser-tissue interaction

    SciTech Connect

    London, R.A.; Amendt, P.; Bailey, D.S.; Eder, D.C.; Maitland, D.J.; Glinsky, M.E.; Strauss, M.; Zimmerman, G.B.

    1996-05-01

    Computational modeling can play an important role both in designing laser-tissue interaction experiments and in understanding the underlying mechanisms. This can lead to more rapid and less expensive development of new procedures and instruments, and a better understanding of their operation. We have recently directed computer programs and associated expertise, developed over many years to model high-intensity laser-matter interactions for fusion research, towards laser-tissue interaction problems. A program called LATIS is being developed to specifically treat laser-tissue interaction phenomena, such as highly scattering light transport, thermal coagulation, and hydrodynamic motion.
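
    As a flavor of the "highly scattering light transport" ingredient, here is a toy Monte Carlo sketch in Python; the slab geometry, isotropic rescattering, and optical coefficients are illustrative assumptions, not LATIS internals or tissue data:

    ```python
    # Toy photon random walk in a scattering/absorbing slab.
    import numpy as np

    rng = np.random.default_rng(1)
    mu_s, mu_a = 10.0, 0.1          # scattering/absorption coeffs, 1/mm
    mu_t = mu_s + mu_a
    n_photons, depth = 10_000, 2.0  # photons launched; slab depth (mm)

    absorbed = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0    # depth, direction cosine, weight
        while w > 1e-4:
            z += uz * rng.exponential(1.0 / mu_t)  # free flight
            if not (0.0 <= z <= depth):
                break                              # photon escapes slab
            absorbed += w * mu_a / mu_t            # deposit some weight
            w *= mu_s / mu_t                       # the rest scatters on
            uz = rng.uniform(-1.0, 1.0)            # isotropic redirect
    print("fraction of launched energy absorbed:", absorbed / n_photons)
    ```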

  18. Computational algebraic geometry of epidemic models

    NASA Astrophysics Data System (ADS)

    Rodríguez Vega, Martín.

    2014-06-01

    Computational algebraic geometry is applied to the analysis of various epidemic models for Schistosomiasis and Dengue, both for the case without control measures and for the case where control measures are applied. The models were analyzed using the mathematical software Maple. Explicitly, the analysis is performed using Groebner bases, Hilbert dimension, and Hilbert polynomials; these computational tools are included automatically in Maple. Each of these models is represented by a system of ordinary differential equations, and for each model the basic reproductive number (R0) is calculated. The effects of the control measures are observed through the changes in the algebraic structure of R0, the Groebner bases, the Hilbert dimension, and the Hilbert polynomials. It is hoped that the results obtained in this paper will be of importance for designing control measures against the epidemic diseases described. For future research, the use of algebraic epidemiology to analyze models for airborne and waterborne diseases is proposed.
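
    A minimal sketch of the same machinery in Python's sympy (in place of Maple), applied to a toy SIS system; the paper's Schistosomiasis and Dengue models are larger but are handled the same way:

    ```python
    # Groebner basis of the steady-state ideal of a toy SIS model.
    import sympy as sp

    S, I = sp.symbols('S I')
    beta, gamma, N = sp.symbols('beta gamma N')

    eqs = [
        -beta * S * I + gamma * I,   # dS/dt = 0
        beta * S * I - gamma * I,    # dI/dt = 0
        S + I - N,                   # population conservation
    ]
    G = sp.groebner(eqs, S, I, order='lex')
    # The basis factors the equilibria: I = 0 (disease-free) or
    # S = gamma/beta (endemic), from which R0 = beta*N/gamma follows.
    print(G)
    ```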

  19. Computer modeling of commercial refrigerated warehouse facilities

    SciTech Connect

    Nicoulin, C.V.; Jacobs, P.C.; Tory, S.

    1997-07-01

    The use of computer models to simulate the energy performance of large commercial refrigeration systems typically found in food processing facilities is an area of engineering practice that has seen little development to date. Current techniques employed in predicting energy consumption by such systems have focused on temperature bin methods of analysis. Existing simulation tools such as DOE2 are designed to model commercial buildings and grocery store refrigeration systems. The HVAC and refrigeration system performance models in these simulation tools represent equipment common to commercial buildings and groceries, and respond to energy-efficiency measures likely to be applied to these building types. The applicability of traditional building energy simulation tools to model refrigerated warehouse performance and analyze energy-saving options is therefore limited. The paper will present the results of modeling work undertaken to evaluate energy savings resulting from incentives offered by a California utility to its Refrigerated Warehouse Program participants. The TRNSYS general-purpose transient simulation model was used to predict facility performance and estimate program savings. Custom TRNSYS components were developed to address modeling issues specific to refrigerated warehouse systems, including warehouse loading door infiltration calculations, an evaporator model, single-stage and multi-stage compressor models, evaporative condenser models, and defrost energy requirements. The main focus of the paper will be on the modeling approach. The results from the computer simulations, along with overall program impact evaluation results, will also be presented.
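
    One of the custom components mentioned, the loading-door infiltration calculation, can be sketched with the widely published Gosney-Olama doorway air-exchange relation (as tabulated in ASHRAE refrigeration handbooks); the choice of this particular correlation, and the input values, are assumptions for illustration rather than the paper's TRNSYS implementation:

    ```python
    # Doorway infiltration heat gain via the Gosney-Olama relation.
    import math

    def doorway_infiltration_load(A, H, h_i, h_r, rho_i, rho_r):
        """Heat gain (kW) through a fully open doorway.

        A, H         : door area (m^2) and height (m)
        h_i, h_r     : enthalpy of infiltrating / refrigerated air (kJ/kg)
        rho_i, rho_r : density of infiltrating / refrigerated air (kg/m^3)
        """
        Fm = (2.0 / (1.0 + (rho_r / rho_i) ** (1.0 / 3.0))) ** 1.5
        return (0.221 * A * (h_i - h_r) * rho_r
                * math.sqrt(1.0 - rho_i / rho_r)
                * math.sqrt(9.81 * H) * Fm)

    # Illustrative: 3 m x 3 m door, warm dock air entering a cold room.
    print(doorway_infiltration_load(A=9.0, H=3.0, h_i=50.0, h_r=-15.0,
                                    rho_i=1.15, rho_r=1.40), "kW")
    ```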

  20. Computational Spectrum of Agent Model Simulation

    SciTech Connect

    Perumalla, Kalyan S

    2010-01-01

    The study of human social behavioral systems is finding renewed interest in military, homeland security and other applications. Simulation is the most generally applied approach to studying complex scenarios in such systems. Here, we outline some of the important considerations that underlie the computational aspects of simulation-based study of human social systems. The fundamental imprecision underlying questions and answers in social science makes it necessary to carefully distinguish among different simulation problem classes and to identify the most pertinent set of computational dimensions associated with those classes. We identify a few such classes and present their computational implications. The focus is then shifted to the most challenging combinations in the computational spectrum, namely, large-scale entity counts at moderate to high levels of fidelity. Recent developments in furthering the state-of-the-art in these challenging cases are outlined. A case study of large-scale agent simulation is provided in simulating large numbers (millions) of social entities at real-time speeds on inexpensive hardware. Recent computational results are identified that highlight the potential of modern high-end computing platforms to push the envelope with respect to speed, scale and fidelity of social system simulations. Finally, the problem of shielding the modeler or domain expert from the complex computational aspects is discussed and a few potential solution approaches are identified.
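
    To fix ideas about the "large entity count, modest fidelity" corner of the spectrum, the following sketch steps one million trivially simple agents with vectorized updates; the behavioral rule is a placeholder assumption, chosen only to show that scale is cheap when fidelity is low:

    ```python
    # Vectorized update of 10^6 simple agents.
    import numpy as np

    rng = np.random.default_rng(42)
    n_agents = 1_000_000
    pos = rng.uniform(0.0, 100.0, size=(n_agents, 2))  # planar positions
    state = rng.integers(0, 2, size=n_agents)          # binary attitude

    for step in range(10):
        pos += rng.normal(0.0, 0.1, size=pos.shape)    # random mobility
        # Crude stand-in for social influence: a small fraction of
        # agents adopts the current global majority state.
        majority = int(state.mean() > 0.5)
        flip = rng.random(n_agents) < 0.01
        state[flip] = majority
    print("share in majority state:", state.mean())
    ```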

  1. Use of computational modeling approaches in studying the binding interactions of compounds with human estrogen receptors.

    PubMed

    Wang, Pan; Dang, Li; Zhu, Bao-Ting

    2016-01-01

    Estrogens have a whole host of physiological functions in many human organs and systems, including the reproductive, cardiovascular, and central nervous systems. Many naturally-occurring compounds with estrogenic or antiestrogenic activity are present in our environment and food sources. Synthetic estrogens and antiestrogens are also important therapeutic agents. At the molecular level, estrogen receptors (ERs) mediate most of the well-known actions of estrogens. Given recent advances in computational modeling tools, it is now highly practical to use these tools to study the interaction of human ERs with various types of ligands. There are two common categories of modeling techniques: one is quantitative structure-activity relationship (QSAR) analysis, which uses the structural information of the interacting ligands to predict the binding site properties of a macromolecule, and the other is molecular docking-based computational analysis, which uses the 3-dimensional structural information of both the ligands and the receptor to predict the binding interaction. In this review, we discuss recent results that employed these and other related computational modeling approaches to characterize the binding interaction of various estrogens and antiestrogens with the human ERs. These examples clearly demonstrate that computational modeling approaches, when used in combination with other experimental methods, are powerful tools that can precisely predict the binding interaction of various estrogenic ligands and their derivatives with the human ERs.
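
    As a toy illustration of the QSAR category (entirely synthetic descriptors and affinities, not ER data), a cross-validated linear model over ligand descriptors looks like this:

    ```python
    # Minimal QSAR-style regression on synthetic ligand descriptors.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n_ligands = 80
    # Pretend columns: logP, scaled MW, H-bond donors, ring count.
    X = rng.standard_normal((n_ligands, 4))
    affinity = 1.2 * X[:, 0] - 0.4 * X[:, 2] \
        + 0.1 * rng.standard_normal(n_ligands)

    scores = cross_val_score(Ridge(alpha=1.0), X, affinity,
                             cv=5, scoring='r2')
    print("cross-validated R^2:", scores.mean())
    ```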

  2. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational process and material modeling of powder-bed additive manufacturing of IN 718. Goals: optimize material build parameters with reduced time and cost through modeling, increase understanding of build properties, increase reliability of builds, decrease time to adoption of the process for critical hardware, and potentially decrease post-build heat treatments. Approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using these data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; and validate the modeling with an additional build. Findings to date: photodiode intensity measurements are highly linear with power input; melt-pool intensity is highly correlated with melt-pool size; and melt-pool size and intensity increase with power. Applied Optimization will use the data to develop a powder-bed additive manufacturing process model.

  3. The 3-dimensional construction of the Rae craton, central Canada

    NASA Astrophysics Data System (ADS)

    Snyder, David B.; Craven, James A.; Pilkington, Mark; Hillier, Michael J.

    2015-10-01

    Reconstruction of the 3-dimensional tectonic assembly of early continents, first as Archean cratons and then Proterozoic shields, remains poorly understood. In this paper, all readily available geophysical and geochemical data are assembled in a 3-D model with the most accurate bedrock geology in order to understand better the geometry of major structures within the Rae craton of central Canada. Analysis of geophysical observations of gravity and seismic wave speed variations revealed several lithospheric-scale discontinuities in physical properties. Where these discontinuities project upward to correlate with mapped upper crustal geological structures, the discontinuities can be interpreted as shear zones. Radiometric dating of xenoliths provides estimates of rock types and ages at depth beneath sparse kimberlite occurrences. These ages can also be correlated to surface rocks. The 3.6-2.6 Ga Rae craton comprises at least three smaller continental terranes, which "cratonized" during a granitic bloom. Cratonization probably represents final differentiation of early crust into a relatively homogeneous, uniformly thin (35-42 km), tonalite-trondhjemite-granodiorite crust with pyroxenite layers near the Moho. The peak thermotectonic event at 1.86-1.7 Ga was associated with the Hudsonian orogeny that assembled several cratons and lesser continental blocks into the Canadian Shield using a number of southeast-dipping megathrusts. This orogeny metasomatized, mineralized, and recrystallized mantle and lower crustal rocks, apparently making them more conductive by introducing or concentrating sulfides or graphite. Little evidence exists of thin slabs similar to modern oceanic lithosphere in this Precambrian construction history whereas underthrusting and wedging of continental lithosphere is inferred from multiple dipping discontinuities.

  4. A 3-Dimensional Anatomic Study of the Distal Biceps Tendon

    PubMed Central

    Walton, Christine; Li, Zhi; Pennings, Amanda; Agur, Anne; Elmaraghy, Amr

    2015-01-01

    Background Complete rupture of the distal biceps tendon from its osseous attachment is most often treated with operative intervention. Knowledge of the overall tendon morphology as well as the orientation of the collagenous fibers throughout the musculotendinous junction are key to intraoperative decision making and surgical technique in both the acute and chronic setting. Unfortunately, there is little information available in the literature. Purpose To comprehensively describe the morphology of the distal biceps tendon. Study Design Descriptive laboratory study. Methods The distal biceps terminal musculature, musculotendinous junction, and tendon were digitized in 10 cadaveric specimens and data reconstructed using 3-dimensional modeling. Results The average length, width, and thickness of the external distal biceps tendon were found to be 63.0, 6.0, and 3.0 mm, respectively. A unique expansion of the tendon fibers within the distal muscle was characterized, creating a thick collagenous network along the central component between the long and short heads. Conclusion This study documents the morphologic parameters of the native distal biceps tendon. Reconstruction may be necessary, especially in chronic distal biceps tendon ruptures, if the remaining tendon morphology is significantly compromised compared with the native distal biceps tendon. Knowledge of normal anatomical distal biceps tendon parameters may also guide the selection of a substitute graft with similar morphological characteristics. Clinical Relevance A thorough description of distal biceps tendon morphology is important to guide intraoperative decision making between primary repair and reconstruction and to better select the most appropriate graft. The detailed description of the tendinous expansion into the muscle may provide insight into better graft-weaving and suture-grasping techniques to maximize proximal graft incorporation. PMID:26665092

  5. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology. PMID:25790483

  7. Differential Cross Section Kinematics for 3-dimensional Transport Codes

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Dick, Frank

    2008-01-01

    In support of the development of 3-dimensional transport codes, this paper derives the relevant relativistic particle kinematic theory. Formulas are given for invariant, spectral, and angular distributions in both the lab (spacecraft) and center-of-momentum frames, for collisions involving 2-, 3- and n-body final states.
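
    For orientation, the standard two-body relations underlying such derivations (textbook kinematics, not formulas quoted from the paper) read, for a projectile of energy E_1 and momentum p_1 on a target of mass m_2 at rest in the lab frame:

    ```latex
    \begin{aligned}
      s &= (E_1 + m_2 c^2)^2 - (p_1 c)^2, \qquad \sqrt{s} = \text{total CM energy},\\
      \gamma_{\mathrm{cm}} &= \frac{E_1 + m_2 c^2}{\sqrt{s}}, \qquad
      \beta_{\mathrm{cm}} = \frac{p_1 c}{E_1 + m_2 c^2},\\
      E_{\mathrm{lab}} &= \gamma_{\mathrm{cm}}\left(E^{*} + \beta_{\mathrm{cm}}\, p^{*} c \cos\theta^{*}\right)
      \quad \text{for a final-state particle with CM energy } E^{*}.
    \end{aligned}
    ```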

  8. Controlled teleportation of a 3-dimensional bipartite quantum state

    NASA Astrophysics Data System (ADS)

    Cao, Hai-Jing; Chen, Zhong-Hua; Song, He-Shan

    2008-07-01

    A controlled teleportation scheme of an unknown 3-dimensional (3D) two-particle quantum state is proposed, where a 3D Bell state and 3D GHZ state function as the quantum channel. This teleportation scheme can be directly generalized to teleport an unknown d-dimensional bipartite quantum state.
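
    For reference, the standard 3-dimensional generalizations of the Bell and GHZ states that serve as such channels are (up to phase conventions, which the paper may choose differently):

    ```latex
    |\Phi\rangle_{\mathrm{Bell}} = \frac{1}{\sqrt{3}} \sum_{j=0}^{2} |j\rangle|j\rangle,
    \qquad
    |\Phi\rangle_{\mathrm{GHZ}} = \frac{1}{\sqrt{3}} \sum_{j=0}^{2} |j\rangle|j\rangle|j\rangle.
    ```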

  9. Computing Linear Mathematical Models Of Aircraft

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1991-01-01

    The Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides the user with a powerful, flexible, standard, documented, and verified software tool for the linearization of mathematical models of aircraft aerodynamics. It is intended for use as a software tool to drive linear analyses of stability and the design of control laws for aircraft. LINEAR is capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in the linear model of the system. It is designed to provide easy selection of the state, control, and observation variables used in a particular model, and it also provides the flexibility of allowing alternate formulations of both the state and observation equations. LINEAR is written in FORTRAN.
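
    The core operation LINEAR automates can be sketched in a few lines of Python (a hedged stand-in, not the FORTRAN program itself): extract state and control matrices by numerically differentiating a nonlinear model x_dot = f(x, u) about a trim point. The toy dynamics below are invented for illustration.

    ```python
    # Central-difference linearization: A = df/dx, B = df/du at a trim point.
    import numpy as np

    def linearize(f, x0, u0, eps=1e-6):
        n, m = len(x0), len(u0)
        A, B = np.zeros((n, n)), np.zeros((n, m))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
        return A, B

    def f(x, u):
        """Toy longitudinal model: airspeed and flight-path angle."""
        v, gamma = x
        thrust, = u
        return np.array([thrust - 0.02 * v**2 - 9.81 * np.sin(gamma),
                         (9.81 / max(v, 1.0)) * (v / 100.0 - np.cos(gamma))])

    A, B = linearize(f, x0=np.array([100.0, 0.0]), u0=np.array([200.0]))
    print(A); print(B)
    ```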

  10. Images as a basis for computer modelling

    NASA Astrophysics Data System (ADS)

    Beaufils, D.; LeTouzé, J.-C.; Blondel, F.-M.

    1994-03-01

    New computer technologies such as the graphics data tablet, video digitization and numerical methods, can be used for measurement and mathematical modelling in physics. Two programs dealing with newtonian mechanics and some of related scientific activities for A-level students are described.

  11. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  12. Computer Modelling of Photochemical Smog Formation

    ERIC Educational Resources Information Center

    Huebert, Barry J.

    1974-01-01

    Discusses a computer program that has been used in environmental chemistry courses as an example of modelling as a vehicle for teaching chemical dynamics, and as a demonstration of some of the factors which affect the production of smog. (Author/GS)

  13. Informing Mechanistic Toxicology with Computational Molecular Models

    EPA Science Inventory

    Computational molecular models of chemicals interacting with biomolecular targets provides toxicologists a valuable, affordable, and sustainable source of in silico molecular level information that augments, enriches, and complements in vitro and in vivo effo...

  14. A Dualistic Model To Describe Computer Architectures

    NASA Astrophysics Data System (ADS)

    Nitezki, Peter; Engel, Michael

    1985-07-01

    The Dualistic Model for Computer Architecture Description uses a hierarchy of abstraction levels to describe a computer in arbitrary steps of refinement from the top of the user interface to the bottom of the gate level. In our Dualistic Model the description of an architecture may be divided into two major parts called "Concept" and "Realization". The Concept of an architecture on each level of the hierarchy is an Abstract Data Type that describes the functionality of the computer and an implementation of that data type relative to the data type of the next lower level of abstraction. The Realization on each level comprises a language describing the means of user interaction with the machine, and a processor interpreting this language in terms of the language of the lower level. The surface of each hierarchical level, the data type and the language, expresses the behaviour of a machine at this level, whereas the implementation and the processor describe the structure of the algorithms and the system. In this model the Principle of Operation maps the object and computational structure of the Concept onto the structures of the Realization. Describing a system in terms of the Dualistic Model is therefore a process of refinement starting at a mere description of behaviour and ending at a description of structure. This model has proven to be a very valuable tool in exploiting the parallelism in a problem and it is very transparent in discovering the points where parallelism is lost in a special architecture. It has successfully been used in a project on a survey of Computer Architecture for Image Processing and Pattern Analysis in Germany.

  15. Dealing with Diversity in Computational Cancer Modeling

    PubMed Central

    Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.

    2013-01-01

    This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360

  16. Computer Model Of Fragmentation Of Atomic Nuclei

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.

    1995-01-01

    The High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) is a computer program developed to be a computationally efficient, user-friendly, physics-based tool for generating databases on the fragmentation of atomic nuclei. The databases generated are used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. The program provides cross sections for the production of individual elements and isotopes in breakups of high-energy heavy ions by the combined nuclear and Coulomb fields of interacting nuclei. It is written in ANSI FORTRAN 77.

  17. Processor core model for quantum computing.

    PubMed

    Yung, Man-Hong; Benjamin, Simon C; Bose, Sougato

    2006-06-01

    We describe an architecture based on a processing "core," where multiple qubits interact perpetually, and a separate "store," where qubits exist in isolation. Computation consists of single qubit operations, swaps between the store and the core, and free evolution of the core. This enables computation using physical systems where the entangling interactions are "always on." Alternatively, for switchable systems, our model constitutes a prescription for optimizing many-qubit gates. We discuss implementations of the quantum Fourier transform, Hamiltonian simulation, and quantum error correction.

  18. A 3-dimensional Analysis of the Cassiopeia A Supernova Remnant

    NASA Astrophysics Data System (ADS)

    Isensee, Karl

    We present a multi-wavelength study of the nearby supernova remnant Cassiopeia A (Cas A). Easily resolvable supernova remnants such as Cas A provide a unique opportunity to test supernova explosion models. Additionally, we can observe key processes in the interstellar medium as the ejecta from the initial explosion encounter Cas A's powerful shocks. In order to accomplish these science goals, we used the Spitzer Space Telescope's Infrared Spectrograph to create a high resolution spectral map of select regions of Cas A, allowing us to make a Doppler reconstruction of its 3-dimensional structure. In the center of the remnant, we find relatively pristine ejecta that have not yet reached Cas A's reverse shock or interacted with the circumstellar environment. We observe O, Si, and S emission. These ejecta can form both sheet-like structures and filaments. Si and O, which come from different nucleosynthetic layers of the star, are observed to be coincident in some regions and separated by >500 km s-1 in others. Observed ejecta traveling toward us are, on average, ~800 km s-1 slower than the material traveling away from us. We compare our observations to recent supernova explosion models and find that no single model can simultaneously reproduce all the observed features. However, models of different supernova explosions can collectively produce the observed geometries and structures of the emission interior to Cas A's reverse shock. We use the results from the models to address the conditions during the supernova explosion, concentrating on asymmetries in the shock structure. We also predict that the back surface of Cassiopeia A will begin brightening in ~30 years, and the front surface in ~100 years. We then used similar observations from 3 regions on Cas A's reverse shock in order to create more 3-dimensional maps. In these regions, we observe supernova ejecta both immediately before and during the shock-ejecta interaction. We determine that the

  19. Computer Model Predicts the Movement of Dust

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A new computer model of the atmosphere can now actually pinpoint where global dust events come from and can project where they are going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short-term day-to-day variations and long-term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by the computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center.

  20. Computational Modeling of Vortex Generators for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, R. V.

    2002-01-01

    In this work computational models were developed and used to investigate applications of vortex generators (VGs) to turbomachinery. The work was aimed at increasing the efficiency of compressor components designed for the NASA Ultra Efficient Engine Technology (UEET) program. Initial calculations were used to investigate the physical behavior of VGs. A parametric study of the effects of VG height was done using 3-D calculations of isolated VGs. A body force model was developed to simulate the effects of VGs without requiring complicated grids. The model was calibrated using 2-D calculations of the VG vanes and was validated using the 3-D results. Then three applications of VGs to a compressor rotor and stator were investigated: 1) The results of the 3-D calculations were used to simulate the use of small casing VGs used to generate rotor preswirl or counterswirl. Computed performance maps were used to evaluate the effects of VGs. 2) The body force model was used to simulate large part-span splitters on the casing ahead of the stator. Computed loss buckets showed the effects of the VGs. 3) The body force model was also used to investigate the use of tiny VGs on the stator suction surface for controlling secondary flows. Near-surface particle traces and exit loss profiles were used to evaluate the effects of the VGs.

  1. Queuing theory models for computer networks

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    A set of simple queuing theory models which can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because the models forgo fine detail about network traffic rates, traffic patterns, and the hardware used to implement the networks, they can readily assess the impact of variations in traffic patterns and intensities, channel capacities, and message protocols. A sample use of the models applied to a realistic problem is included in appendix A, and appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
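
    The flavor of these spreadsheet models is captured by the closed-form M/M/1 result for mean time in system, T = 1/(mu - lambda); the sketch below applies it to an illustrative link (parameters assumed, not taken from the Ames network):

    ```python
    # Mean response time of an M/M/1 queue.
    def mm1_response_time(arrival_rate, service_rate):
        """Mean time in system (s); requires utilization < 1."""
        if arrival_rate >= service_rate:
            raise ValueError("queue is unstable (utilization >= 1)")
        return 1.0 / (service_rate - arrival_rate)

    # A channel serving 1000 messages/s at increasing offered loads:
    for load in (100, 500, 900):
        print(load, "msg/s ->",
              1e3 * mm1_response_time(load, 1000.0), "ms")
    ```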

  2. Computational models of natural language processing

    SciTech Connect

    Bara, B.G.; Guida, G.

    1984-01-01

    The main concern in this work is the illustration of models for natural language processing, and the discussion of their role in the development of computational studies of language. Topics covered include the following: competence and performance in the design of natural language systems; planning and understanding speech acts by interpersonal games; a framework for integrating syntax and semantics; knowledge representation and natural language: extending the expressive power of proposition nodes; viewing parsing as word sense discrimination: a connectionist approach; a propositional language for text representation; from topic and focus of a sentence to linking in a text; language generation by computer; understanding the Chinese language; semantic primitives or meaning postulates: mental models of propositional representations; narrative complexity based on summarization algorithms; using focus to constrain language generation; and towards an integral model of language competence.

  3. A computational model of bleb formation

    PubMed Central

    Strychalski, Wanda; Guy, Robert D.

    2013-01-01

    Blebbing occurs when the cytoskeleton detaches from the cell membrane, resulting in the pressure-driven flow of cytosol towards the area of detachment and the local expansion of the cell membrane. Recent interest has focused on cells that use blebbing for migrating through 3D fibrous matrices. In particular, metastatic cancer cells have been shown to use blebs for motility. A dynamic computational model of the cell is presented that includes mechanics of and the interactions between the intracellular fluid, the actin cortex and the cell membrane. The computational model is used to explore the relative roles in bleb formation time of cytoplasmic viscosity and drag between the cortex and the cytosol. A regime of values for the drag coefficient and cytoplasmic viscosity values that match bleb formation timescales is presented. The model results are then used to predict the Darcy permeability and the volume fraction of the cortex. PMID:22294562

  4. Neural network models for optical computing

    SciTech Connect

    Athale, R.A.; Davis, J.

    1988-01-01

    This volume comprises the record of the conference on neural network models for optical computing. In keeping with the interdisciplinary nature of the field, the invited papers are from diverse research areas, such as neuroscience, parallel architectures, neural modeling, and perception. The papers consist of three major classes: applications of optical neural nets for pattern classification, analysis, and image formation; development and analysis of neural net models that are particularly suited for optical implementation; experimental demonstrations of optical neural nets, particularly with adaptive interconnects.

  5. Computing the complexity for Schelling segregation models

    NASA Astrophysics Data System (ADS)

    Gerhold, Stefan; Glebsky, Lev; Schneider, Carsten; Weiss, Howard; Zimmermann, Burkhard

    2008-12-01

    The Schelling segregation models are "agent based" population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy.
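
    The object being counted can be made concrete with a toy enumeration (a simplified happiness rule on a ring, assumed for illustration; the paper's model and threshold differ in detail): count configurations in which no agent wants to move, and watch how that count grows with system size.

    ```python
    # Count "limit states" of a toy 1-D, two-type Schelling model on a
    # ring: an agent is content if at least one nearest neighbour
    # shares its type. The growth rate of the count defines an entropy.
    from itertools import product

    def is_limit_state(config):
        n = len(config)
        return all(
            any(config[(i + d) % n] == c for d in (-1, 1))
            for i, c in enumerate(config)
        )

    for n in range(4, 13):
        count = sum(is_limit_state(c) for c in product((0, 1), repeat=n))
        print(n, count)
    ```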

  6. Contribution of seismic processing to put up the scaffolding for the 3-dimensional study of deep sedimentary basins: the fundaments of trans-national 3D modelling in the project GeoMol

    NASA Astrophysics Data System (ADS)

    Capar, Laure

    2013-04-01

    Within the framework of the transnational project GeoMol, geophysical and geological information on the entire Molasse Basin and on the Po Basin is gathered to build consistent cross-border 3D geological models based on borehole evidence and seismic data. Benefiting from important progress in seismic processing, these new models will provide some answers to various questions regarding the usage of subsurface resources, such as geothermal energy, CO2 and gas storage, and oil and gas production, and will support decision-making by national and local administrations as well as by industries. More than 28,000 km of 2D seismic lines are compiled, reprocessed, and harmonized. This work faces various problems, such as the vertical drop of more than 700 meters between the west and east of the Molasse Basin (and, to a lesser extent, in the Po Plain), the heterogeneities of the substratum, the large disparities in the period and parameters of seismic acquisition, and, depending on availability, the use of two types of seismic data: raw and processed. The main challenge is to harmonize all lines to the same reference level, amplitude, and stage of signal processing from France to Austria, spanning more than 1000 km, to avoid misfits at crossing points between seismic lines and artifacts at the country borders, facilitating the interpretation of the various geological layers in the Molasse Basin and Po Basin. A generalized stratigraphic column for the two basins is set up, representing all geological layers relevant to subsurface usage. This stratigraphy constitutes the harmonized framework for seismic reprocessing. In general, processed seismic data are available on paper at the stack stage, and the information required to take these seismic lines to the final stage of processing, the migration step, is the datum plane and the replacement velocity. However, several datum planes and replacement velocities were used during previous processing projects. Our processing sequence is to
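
    The datum-plane bookkeeping described above amounts to simple statics arithmetic; a hedged sketch follows (sign conventions vary between processing systems, and the numbers are illustrative, not GeoMol parameters):

    ```python
    # Re-reference a two-way travel time from one (datum, replacement
    # velocity) pair to another.
    def restate_twt(twt_s, elev_m, datum_old_m, v_old, datum_new_m, v_new):
        """Two-way time (s) moved from datum_old to datum_new."""
        t_old = 2.0 * (datum_old_m - elev_m) / v_old  # old static shift
        t_new = 2.0 * (datum_new_m - elev_m) / v_new  # new static shift
        return twt_s - t_old + t_new

    # Line shot at 450 m elevation, processed to a 500 m datum with
    # 2000 m/s, restated to a common 400 m datum with 1800 m/s:
    print(restate_twt(1.200, 450.0, 500.0, 2000.0, 400.0, 1800.0))
    ```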

  8. Molecular signatures in the prevention of radiation damage by the synergistic effect of N-acetyl cysteine and Qingre Liyan decoction, a traditional Chinese medicine, using a 3-dimensional cell culture model of oral mucositis.

    PubMed

    Lambros, Maria P; Kondapalli, Lavanya; Parsa, Cyrus; Mulamalla, Hari Chandana; Orlando, Robert; Pon, Doreen; Huang, Ying; Chow, Moses S S

    2015-01-01

    Qingre Liyan decoction (QYD), a traditional Chinese medicine, and N-acetyl cysteine (NAC) have been used to prevent radiation-induced mucositis. This work evaluates the protective mechanisms of QYD, NAC, and their combination (NAC-QYD) at the cellular and transcriptional level. A validated organotypic model of the oral mucosa, consisting of a three-dimensional (3D) tissue culture of primary human keratinocytes exposed to X-ray irradiation, was used. Six hours after the irradiation, the tissues were evaluated by hematoxylin and eosin (H and E) staining and a TUNEL assay to assess histopathology and apoptosis, respectively. Total RNA was extracted and used for microarray gene expression profiling. The tissue cultures treated with NAC-QYD preserved their integrity and showed no apoptosis. Microarray results revealed that NAC-QYD caused the upregulation of genes encoding metallothioneins, HMOX1, and other components of the Nrf2 pathway, which protects against oxidative stress. DNA repair genes (XCP, GADD45G, RAD9, and XRCC1), protective genes (EGFR and PPARD), and genes of the NFκB pathway were upregulated. Finally, tissue cultures treated prophylactically with NAC-QYD showed significant downregulation of apoptosis, cytokine, and chemokine genes and constrained damage-associated molecular patterns (DAMPs). NAC-QYD treatment involves the protective effect of Nrf2, NFκB, and DNA repair factors.

  9. Molecular Sieve Bench Testing and Computer Modeling

    NASA Technical Reports Server (NTRS)

    Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.

    1995-01-01

    The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, and fluid interface properties. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to changes in the parameters which influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain the test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to the sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with a linear driving force for the 5X sorbent and pore diffusion for silica gel is then applied to test data. A more complex, non-Darcian (two-dimensional) model has also been developed for simulation of the test data; this model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
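
    The one-dimensional flow-adsorption idea can be sketched compactly; the sketch below assumes an isothermal column, a linear isotherm, and a linear-driving-force (LDF) uptake with made-up parameters, which is far simpler than the nonisothermal, dispersed models described above:

    ```python
    # Explicit finite-difference breakthrough of an LDF adsorption column.
    import numpy as np

    nz, nt = 100, 20_000
    L, v = 0.5, 0.1          # bed length (m), interstitial velocity (m/s)
    k, K = 0.1, 10.0         # LDF coefficient (1/s), linear isotherm slope
    dz, dt = L / nz, 0.005
    c = np.zeros(nz)         # normalized gas-phase concentration
    q = np.zeros(nz)         # normalized adsorbed-phase loading

    for step in range(nt):
        uptake = k * (K * c - q)                   # LDF driving force
        q += dt * uptake
        c_in = np.concatenate(([1.0], c[:-1]))     # feed c = 1 at inlet
        c += dt * (-v * (c - c_in) / dz - uptake)  # upwind advection
        c = np.clip(c, 0.0, None)
    print("outlet breakthrough fraction:", c[-1])
    ```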

  10. Computational Modeling of Tissue Self-Assembly

    NASA Astrophysics Data System (ADS)

    Neagu, Adrian; Kosztin, Ioan; Jakab, Karoly; Barz, Bogdan; Neagu, Monica; Jamison, Richard; Forgacs, Gabor

    As a theoretical framework for understanding the self-assembly of living cells into tissues, Steinberg proposed the differential adhesion hypothesis (DAH) according to which a specific cell type possesses a specific adhesion apparatus that combined with cell motility leads to cell assemblies of various cell types in the lowest adhesive energy state. Experimental and theoretical efforts of four decades turned the DAH into a fundamental principle of developmental biology that has been validated both in vitro and in vivo. Based on computational models of cell sorting, we have developed a DAH-based lattice model for tissues in interaction with their environment and simulated biological self-assembly using the Monte Carlo method. The present brief review highlights results on specific morphogenetic processes with relevance to tissue engineering applications. Our own work is presented on the background of several decades of theoretical efforts aimed to model morphogenesis in living tissues. Simulations of systems involving about 10^5 cells have been performed on high-end personal computers with CPU times of the order of days. Studied processes include cell sorting, cell sheet formation, and the development of endothelialized tubes from rings made of spheroids of two randomly intermixed cell types, when the medium in the interior of the tube was different from the external one. We conclude by noting that computer simulations based on mathematical models of living tissues yield useful guidelines for laboratory work and can catalyze the emergence of innovative technologies in tissue engineering.
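
    A compact sketch of the DAH ingredient (illustrative contact energies and a small lattice, nothing like the 10^5-cell simulations described above): Metropolis swaps that lower heterotypic contact drive the two cell types to sort.

    ```python
    # Metropolis Monte Carlo cell sorting on a periodic square lattice.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 40
    grid = rng.integers(0, 2, size=(n, n))   # two cell types
    J = np.array([[0.0, 1.0],                # heterotypic contacts cost
                  [1.0, 0.0]])               # more than homotypic ones
    T = 0.5                                  # fluctuation "temperature"

    def site_energy(g, i, j):
        c = g[i, j]
        return sum(J[c, g[(i + di) % n, (j + dj) % n]]
                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for attempt in range(100_000):
        i1, j1, i2, j2 = rng.integers(0, n, size=4)
        if grid[i1, j1] == grid[i2, j2]:
            continue
        before = site_energy(grid, i1, j1) + site_energy(grid, i2, j2)
        grid[i1, j1], grid[i2, j2] = grid[i2, j2], grid[i1, j1]
        after = site_energy(grid, i1, j1) + site_energy(grid, i2, j2)
        if after > before and rng.random() >= np.exp((before - after) / T):
            grid[i1, j1], grid[i2, j2] = grid[i2, j2], grid[i1, j1]
    boundary = int((grid != np.roll(grid, 1, 0)).sum()
                   + (grid != np.roll(grid, 1, 1)).sum())
    print("type-type boundary length:", boundary)   # drops as cells sort
    ```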

  11. A computational model of spatial visualization capacity.

    PubMed

    Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A

    2008-09-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.

  12. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  13. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1993-01-01

    Over the past several years, it has been the primary goal of this grant to design and implement software to be used in the conceptual design of aerospace vehicles. The work carried out under this grant was performed jointly with members of the Vehicle Analysis Branch (VAB) of NASA LaRC, Computer Sciences Corp., and Vigyan Corp. This has resulted in the development of several packages and design studies. Primary among these are the interactive geometric modeling tool, the Solid Modeling Aerospace Research Tool (SMART), and the integration and execution tools provided by the Environment for Application Software Integration and Execution (EASIE). In addition, it is the purpose of the personnel of this grant to provide consultation in the areas of structural design, algorithm development, and software development and implementation, particularly in the areas of computer aided design, geometric surface representation, and parallel algorithms.

  14. Computational model of retinal photocoagulation and rupture

    NASA Astrophysics Data System (ADS)

    Sramek, Christopher; Paulus, Yannis M.; Nomoto, Hiroyuki; Huie, Phil; Palanker, Daniel

    2009-02-01

    In patterned scanning laser photocoagulation, shorter duration (< 20 ms) pulses help reduce thermal damage beyond the photoreceptor layer, decrease treatment time, and minimize pain. However, the safe therapeutic window (defined as the ratio of the rupture threshold power to that of light coagulation) decreases for shorter exposures. To quantify the extent of thermal damage in the retina, and to maximize the therapeutic window, we developed a computational model of retinal photocoagulation and rupture. Model parameters were adjusted to match measured thresholds of vaporization, coagulation, and retinal pigment epithelial (RPE) damage. Computed lesion width agreed with histological measurements over a wide range of pulse durations and powers. Application of a ring-shaped beam profile was predicted to double the width of the therapeutic window for exposures in the range of 1-10 ms.
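
    Thermal damage in such models is conventionally scored with an Arrhenius integral; the sketch below uses the oft-quoted Henriques rate constants as placeholders (the paper's calibrated retinal coefficients are not reproduced here):

    ```python
    # Arrhenius damage integral: Omega = int A * exp(-Ea/(R*T)) dt.
    import numpy as np

    def arrhenius_damage(T, t, A=3.1e98, Ea=6.28e5):
        """Omega for temperatures T (K) sampled at times t (s).

        A (1/s) and Ea (J/mol) default to the classic Henriques skin
        constants, used only as placeholders; Omega >= 1 is usually
        taken to mark coagulation.
        """
        R = 8.314
        rate = A * np.exp(-Ea / (R * np.asarray(T, dtype=float)))
        return float(np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t)))

    # A 10 ms exposure holding tissue near 70 C:
    t = np.linspace(0.0, 0.01, 200)
    T = np.full_like(t, 343.0)
    print("Omega =", arrhenius_damage(T, t))
    ```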

  15. Computational fluid dynamics modelling in cardiovascular medicine.

    PubMed

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
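
    As a back-of-envelope companion to the wall-shear-stress point (an idealization CFD exists to improve upon, with illustrative coronary-scale numbers): under steady laminar Poiseuille flow, tau = 4*mu*Q/(pi*r^3).

    ```python
    # Poiseuille wall shear stress estimate.
    import math

    def poiseuille_wss(mu, Q, r):
        """Wall shear stress (Pa): viscosity mu (Pa*s), flow Q (m^3/s),
        lumen radius r (m), assuming steady laminar Poiseuille flow."""
        return 4.0 * mu * Q / (math.pi * r**3)

    # mu ~ 3.5 mPa*s, Q ~ 1 mL/s, r ~ 1.5 mm  ->  roughly 1.3 Pa.
    print(poiseuille_wss(3.5e-3, 1.0e-6, 1.5e-3), "Pa")
    ```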

  16. Computational fluid dynamics modelling in cardiovascular medicine

    PubMed Central

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, and it is increasingly being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of methodological, regulatory, education- and service-related challenges remain, which academic and commercial groups are addressing. PMID:26512019
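
    As a concrete example of a quantity CFD supplies that imaging alone cannot, wall shear stress in the idealized Poiseuille limit can be written down directly; a patient-specific simulation computes the same quantity on real geometry. A sketch with illustrative coronary-scale numbers (not taken from the paper):

    ```python
    import math

    def poiseuille_wall_shear_stress(Q, r, mu=3.5e-3):
        """Wall shear stress (Pa) for steady laminar flow in a straight tube.

        Q  : volumetric flow rate, m^3/s
        r  : vessel radius, m
        mu : dynamic viscosity, Pa*s (3.5 mPa*s is a common blood approximation)
        """
        return 4.0 * mu * Q / (math.pi * r ** 3)

    # Illustrative values: 1 mL/s through a vessel of 1.5 mm radius.
    tau = poiseuille_wall_shear_stress(Q=1e-6, r=1.5e-3)
    print(f"wall shear stress ~ {tau:.2f} Pa")
    ```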

  18. Computational Biology: Modeling Chronic Renal Allograft Injury.

    PubMed

    Stegall, Mark D; Borrows, Richard

    2015-01-01

    New approaches are needed to develop more effective interventions to prevent long-term rejection of organ allografts. Computational biology provides a powerful tool to assess the large amount of complex data that is generated in longitudinal studies in this area. This manuscript outlines how our two groups are using mathematical modeling to analyze predictors of graft loss using both clinical and experimental data and how we plan to expand this approach to investigate specific mechanisms of chronic renal allograft injury.

  17. Computed structures of polyimide model compounds

    NASA Technical Reports Server (NTRS)

    Tai, H.; Phillips, D. H.

    1990-01-01

    Using a semi-empirical approach, a computer study was made of 8 model compounds of polyimides. The compounds represent subunits from which NASA Langley Research Center has successfully synthesized polymers for high-performance aerospace material applications, including one of the most promising, the LARC-TPI polymer. Three-dimensional graphic displays as well as important molecular structure data pertaining to these 8 compounds were obtained.

  20. Wild Fire Computer Model Helps Firefighters

    SciTech Connect

    Canfield, Jesse

    2012-09-04

    A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

  2. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool for image processing, and it is suggested that it could be used in signal processing applications, e.g., image processing.

  3. Noncommutative 3 Dimensional Soliton from Multi-instantons

    NASA Astrophysics Data System (ADS)

    Correa, D. H.; Forgacs, P.; Moreno, E. F.; Schaposnik, F. A.; Silva, G. A.

    2004-07-01

    We extend the relation between instanton and monopole solutions of the self-duality equations in SU(2) gauge theory to noncommutative space-times. Using this approach and starting from a noncommutative multi-instanton solution, we construct a U(2) monopole configuration which lives in ordinary 3-dimensional space. This configuration resembles the Wu-Yang monopole and satisfies the self-duality (Bogomol'nyi) equations for a U(2) Yang-Mills-Higgs system.

  4. Multimodality 3-Dimensional Image Integration for Congenital Cardiac Catheterization

    PubMed Central

    2014-01-01

    Cardiac catheterization procedures for patients with congenital and structural heart disease are becoming more complex. New imaging strategies involving integration of 3-dimensional images from rotational angiography, magnetic resonance imaging (MRI), computerized tomography (CT), and transesophageal echocardiography (TEE) are employed to facilitate these procedures. We discuss the current use of these new 3D imaging technologies and their advantages and challenges when used to guide complex diagnostic and interventional catheterization procedures in patients with congenital heart disease. PMID:25114757

  5. A 3-Dimensional Absorbed Dose Calculation Method Based on Quantitative SPECT for Radionuclide Therapy: Evaluation for 131I Using Monte Carlo Simulation

    PubMed Central

    Ljungberg, Michael; Sjögreen, Katarina; Liu, Xiaowei; Frey, Eric; Dewaraja, Yuni; Strand, Sven-Erik

    2009-01-01

    A general method is presented for patient-specific 3-dimensional absorbed dose calculations based on quantitative SPECT activity measurements. Methods: The computational scheme includes a method for registration of the CT image to the SPECT image and position-dependent compensation for attenuation, scatter, and collimator detector response performed as part of an iterative reconstruction method. A method for conversion of the measured activity distribution to a 3-dimensional absorbed dose distribution, based on the EGS4 (electron-gamma shower, version 4) Monte Carlo code, is also included. The accuracy of the activity quantification and the absorbed dose calculation is evaluated on the basis of realistic Monte Carlo–simulated SPECT data, using the SIMIND (simulation of imaging nuclear detectors) program and a voxel-based computer phantom. CT images are obtained from the computer phantom, and realistic patient movements are added relative to the SPECT image. The SPECT-based activity concentration and absorbed dose distributions are compared with the true ones. Results: Correction could be made for object scatter, photon attenuation, and scatter penetration in the collimator. However, inaccuracies were imposed by the limited spatial resolution of the SPECT system, for which the collimator response correction did not fully compensate. Conclusion: The presented method includes compensation for most parameters degrading the quantitative image information. The compensation methods are based on physical models and therefore are generally applicable to other radionuclides. The proposed evaluation methodology may be used as a basis for future intercomparison of different methods. PMID:12163637
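
    A full EGS4 Monte Carlo transport is outside the scope of a snippet, but the overall shape of the dose step, convolving a reconstructed 3D activity map with a dose point kernel, can be sketched as below. The Gaussian kernel is a placeholder for a real 131I kernel, the activity phantom is invented, and NumPy/SciPy are assumed available:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Toy 3D activity distribution: a uniform hot sphere in a 32^3 voxel grid.
    zz, yy, xx = np.indices((32, 32, 32))
    activity = (((xx - 16)**2 + (yy - 16)**2 + (zz - 16)**2) < 6**2).astype(float)

    # Stand-in dose point kernel: an isotropic Gaussian. A real 131I kernel would
    # come from Monte Carlo transport of the beta and gamma emissions in tissue.
    relative_dose = gaussian_filter(activity, sigma=1.5)

    print(f"peak activity = {activity.max():.0f}, "
          f"peak relative dose = {relative_dose.max():.3f}")
    ```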

  6. COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS

    SciTech Connect

    Ibrahim, Essam A

    2013-01-09

    Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, an exit, connecting elbows, and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe heights to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are used in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of the solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD software FLUENT is employed to obtain steady-state and transient solutions for the flow in the riser. The physical dimensions, the types and numbers of computational meshes, and the solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions, are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.

  7. Mathematical and computational models of plasma flows

    NASA Astrophysics Data System (ADS)

    Brushlinsky, K. V.

    Investigations of plasma flows are of interest, first, because of their numerous applications and, second, because of their general principles, which form a special branch of physics: plasma dynamics. Numerical simulation and computation, together with theoretical and experimental methods, play an important part in these investigations. The flows considered involve relatively dense plasma, so the mathematical models belong to fluid mechanics, i.e., they are based on the magnetohydrodynamic description of plasma. Time-dependent, two-dimensional models of plasma flows of two widespread types are considered: flows across the magnetic field and flows in the magnetic field plane.

  8. Computational fire modeling for aircraft fire research

    SciTech Connect

    Nicolette, V.F.

    1996-11-01

    This report summarizes work performed by Sandia National Laboratories for the Federal Aviation Administration. The technical issues involved in fire modeling for aircraft fire research are identified, as are computational fire tools for addressing those issues and the research needed to advance those tools in order to address long-range needs. Fire field models are briefly reviewed, and the VULCAN model is selected for further evaluation. Calculations are performed with VULCAN to demonstrate its applicability to aircraft fire problems and to gain insight into the complex problem of fires involving aircraft. Simulations are conducted to investigate the influence of fire on an aircraft in a cross-wind. The interaction of the fuselage, wind, fire, and ground plane is investigated. Calculations are also performed utilizing a large eddy simulation (LES) capability to describe the large-scale turbulence instead of the more common k-epsilon turbulence model. Additional simulations are performed to investigate the static pressure and velocity distributions around a fuselage in a cross-wind, with and without fire. The results of these simulations provide qualitative insight into the complex interaction of a fuselage, fire, wind, and ground plane. Reasonable quantitative agreement is obtained in the few cases for which data or other modeling results exist. Finally, VULCAN is used to quantify the impact of simplifying assumptions inherent in a risk-assessment-compatible fire model developed for open pool fire environments. The assumptions are seen to be of minor importance for the particular problem analyzed. This work demonstrates the utility of using a fire field model for assessing the limitations of simplified fire models. In conclusion, the application of computational fire modeling tools herein provides both qualitative and quantitative insights into the complex problem of aircraft in fires.

  9. ADGEN: ADjoint GENerator for computer models

    SciTech Connect

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct-access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest; determining the same derivatives by parameter perturbations would instead require roughly 3000 model runs per response. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
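
    ADGEN's economics (one adjoint evaluation versus one model run per perturbed parameter) can be illustrated without any FORTRAN instrumentation on a toy scalar-response model; the model and parameter count below are invented for illustration:

    ```python
    import numpy as np

    # Toy model with n parameters and a single response: y = sum_i c_i * sin(p_i).
    n = 3000
    rng = np.random.default_rng(0)
    p = rng.normal(size=n)
    c = rng.normal(size=n)

    def model(params):
        return float(np.sum(c * np.sin(params)))

    # Perturbation approach: n + 1 model evaluations for the full gradient.
    eps = 1e-6
    y0 = model(p)
    g_fd = np.empty(n)
    for i in range(n):
        dp = p.copy()
        dp[i] += eps
        g_fd[i] = (model(dp) - y0) / eps

    # Adjoint (reverse-mode) approach: one forward pass plus one backward sweep;
    # for this toy model the backward sweep is a single chain-rule application.
    g_adjoint = c * np.cos(p)

    print("max |finite-difference - adjoint| =", float(np.max(np.abs(g_fd - g_adjoint))))
    print(f"model runs: perturbation = {n + 1}, adjoint = 1 forward + 1 backward")
    ```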

  10. Computational Modeling in Structural Materials Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1997-01-01

    High-temperature materials such as silicon carbide, a variety of nitrides, and ceramic matrix composites find use in the aerospace, automotive, and machine tool industries and in high-speed civil transport applications. Chemical vapor deposition (CVD) is widely used in processing such structural materials. Variations of CVD include deposition on substrates, coating of fibers, deposition inside cavities and on complex objects, and infiltration within preforms, called chemical vapor infiltration (CVI). Our current knowledge of the process mechanisms, ability to optimize processes, and capability to scale up for large-scale manufacturing is limited. In this regard, computational modeling of the processes is valuable, since a validated model can be used as a design tool. The effort is similar to traditional chemically reacting flow modeling, with emphasis on multicomponent diffusion, thermal diffusion, large sets of homogeneous reactions, and surface chemistry. In the case of CVI, models for pore infiltration are needed. In the present talk, examples of SiC, nitride, and boron deposition from the author's past work will be used to illustrate the utility of computational process modeling.

  11. Ferrofluids: Modeling, numerical analysis, and scientific computation

    NASA Astrophysics Data System (ADS)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  12. Successful Parenchyma-Sparing Anatomical Surgery by 3-Dimensional Reconstruction of Hilar Cholangiocarcinoma Combined with Anatomic Variation.

    PubMed

    Ni, Qihong; Wang, Haolu; Liang, Xiaowen; Zhang, Yunhe; Chen, Wei; Wang, Jian

    2016-06-01

    The combination of hilar cholangiocarcinoma and anatomic variation constitutes a rare and complicated condition. A precise understanding of the 3-dimensional position of the tumor within the intrahepatic structures in such cases is important for operation planning and navigation. We report the case of a 61-year-old woman presenting with hilar cholangiocarcinoma. The anatomic variation and tumor location were well depicted on preoperative multidetector computed tomography (MDCT) combined with 3-dimensional reconstruction: the right posterior segmental duct drained into the left hepatic duct. The common hepatic duct, biliary confluence, right anterior segmental duct, and right anterior branch of the portal vein were involved by the tumor (Bismuth IIIa). After careful operation planning, we successfully performed radical parenchyma-sparing anatomical surgery for the hilar cholangiocarcinoma: liver segmentectomy (segments 5 and 8) and caudate lobectomy. MDCT combined with 3-dimensional reconstruction is a reliable non-invasive modality for preoperative evaluation of hilar cholangiocarcinoma. PMID:27376205

  13. 3-Dimensional modeling of protein structures distinguishes closely related phytoplasmas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phytoplasmas (formerly mycoplasmalike organisms, MLOs) are cell wall-less bacteria that inhabit phloem tissue of plants and are transmitted from plant-to-plant by phloem-feeding insects. Numerous diseases affecting hundreds of plant species in many botanical families are attributed to infections by...

  14. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720
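
    In the spirit of the review's "code where possible", here is a self-contained sketch of the two sufficient statistics that appear in the simplest ERGMs, edge and triangle counts, computed on an invented toy graph (the review's own examples presumably use R network packages rather than this hand-rolled Python):

    ```python
    from itertools import combinations

    # Undirected toy network stored as a set of frozenset edges.
    edges = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5)]}
    nodes = {v for e in edges for v in e}

    def edge_count(edges):
        return len(edges)

    def triangle_count(nodes, edges):
        # A triangle is a node triple whose three pairs are all edges.
        return sum(
            1 for a, b, c in combinations(sorted(nodes), 3)
            if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= edges
        )

    # In an ERGM, P(G) is proportional to
    # exp(theta1 * edges(G) + theta2 * triangles(G) + ...).
    print("edges:", edge_count(edges), "triangles:", triangle_count(nodes, edges))
    ```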

  15. Stochastic Computations in Cortical Microcircuit Models

    PubMed Central

    Maass, Wolfgang

    2013-01-01

    Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
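
    The paper's convergence claim, a unique stationary distribution over network states approached exponentially fast from any initial state, can be illustrated on a toy Markov chain standing in for the circuit's state dynamics; the transition matrix below is invented:

    ```python
    import numpy as np

    # Toy 3-state Markov chain standing in for a network-state distribution.
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])

    # Stationary distribution: the left eigenvector of P for eigenvalue 1.
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmax(np.real(w))])
    pi /= pi.sum()

    # Convergence from an arbitrary start is geometric in the spectral gap.
    mu = np.array([1.0, 0.0, 0.0])
    for step in range(1, 6):
        mu = mu @ P
        tv = 0.5 * np.abs(mu - pi).sum()
        print(f"step {step}: total-variation distance = {tv:.5f}")
    ```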

  16. Computer modeling of thoracic response to blast.

    PubMed

    Stuhmiller, J H; Chuong, C J; Phillips, Y Y; Dodd, K T

    1988-01-01

    Primary blast injury affects the gas-containing structures of the body. Damage to the lungs, with resultant respiratory insufficiency and arterial embolization of air from alveolar pulmonary venous fistulae, is the predominant cause of morbidity and mortality following high-level blast exposure. In an effort to generate a widely applicable damage-risk criterion for thoracic injury from blast, we are developing a complex computer finite element model (FEM) of the thorax. Taking an engineering approach, a horizontal cross-section of the thorax is divided into small discrete units (finite elements) of homogeneous structure. The necessary physical properties (density, bulk modulus, etc.) are then determined for each element. Given the material constants and geometry of the elements, the computer can load the surface of the structure with a force-time function (blast pressure-time history) and calculate the resultant physical events such as displacement, compression, stress, and strain. Computer predictions of pressure wave phenomena in the lung parenchyma are compared with trans-bronchially measured pressures in blast-exposed animals. The model should prove useful in assessing the risk of blast injury in diverse overpressure environments and may give insight into pathophysiologic mechanisms and strategies for protection.

  17. Computer Generated Cardiac Model For Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    Hills, John F.; Miller, Tom R.

    1981-07-01

    A computer generated mathematical model of a thallium-201 myocardial image is described which is based on realistic geometric and physiological assumptions. The left ventricle is represented by an ellipsoid truncated by aortic and mitral valve planes. Initially, an image of a motionless left ventricle is calculated with the location, size, and relative activity of perfusion defects selected by the designer. The calculation includes corrections for photon attenuation by overlying structures and the relative distribution of activity within the tissues. Motion of the ventricular walls is simulated either by a weighted sum of images at different stages in the cardiac cycle or by a blurring function whose width varies with position. Camera and collimator blurring are estimated by the MTF of the system measured at a representative depth in a phantom. Statistical noise is added using a Poisson random number generator. The usefulness of this model is due to two factors: the a priori characterization of location and extent of perfusion defects and the strong visual similarity of the images to actual clinical studies. These properties should permit systematic evaluation of image processing algorithms using this model. The principles employed in developing this cardiac image model can readily be applied to the simulation of other nuclear medicine studies and to other medical imaging modalities including computed tomography, ultrasound, and digital radiography.
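
    The last two ingredients the abstract lists, system blur and Poisson counting noise, are simple to sketch on an invented annular "myocardial" image; the geometric projection and attenuation steps of the actual model are omitted, and NumPy/SciPy are assumed available:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(42)

    # Noiseless annulus standing in for ventricular wall activity (counts/pixel).
    y, x = np.indices((64, 64))
    r = np.hypot(x - 32, y - 32)
    wall = (r > 10) & (r < 16)
    ideal = np.where(wall, 100.0, 2.0)

    # Designer-specified perfusion defect: reduced activity in the upper sector.
    ideal[wall & (y < 32)] *= 0.4

    # Camera/collimator blur (stand-in for the measured MTF), then Poisson noise.
    noisy = rng.poisson(gaussian_filter(ideal, sigma=2.0))

    print(f"mean counts, normal wall: {noisy[wall & (y >= 32)].mean():.1f}")
    print(f"mean counts, defect wall: {noisy[wall & (y < 32)].mean():.1f}")
    ```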

  18. Computer modeling for optimal placement of gloveboxes

    SciTech Connect

    Hench, K.W.; Olivas, J.D.; Finch, P.R.

    1997-08-01

    Reduction of the nuclear weapons stockpile and the general downsizing of the nuclear weapons complex has presented challenges for Los Alamos. One is to design an optimized fabrication facility to manufacture nuclear weapon primary components (pits) in an environment of intense regulation and shrinking budgets. Historically, the location of gloveboxes in a processing area has been determined without benefit of industrial engineering studies to ascertain the optimal arrangement. The opportunity exists for substantial cost savings and increased process efficiency through careful study and optimization of the proposed layout by constructing a computer model of the fabrication process. This paper presents an integrative two-stage approach to modeling the casting operation for pit fabrication. The first stage uses a mathematical technique for the formulation of the facility layout problem; the solution procedure uses an evolutionary heuristic technique. The best solutions to the layout problem are used as input to the second stage - a computer simulation model that assesses the impact of competing layouts on operational performance. The focus of the simulation model is to determine the layout that minimizes personnel radiation exposures and nuclear material movement, and maximizes the utilization of capacity for finished units.
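
    The first stage is a facility layout problem of the classic flow-times-distance kind. A bare-bones mutate-and-select loop, standing in for the paper's evolutionary heuristic, is sketched below; the flows, distances, and problem size are all invented:

    ```python
    import random

    random.seed(1)

    # Toy layout problem: assign 5 gloveboxes to 5 sites, minimizing the sum of
    # (material flow between boxes) x (distance between their assigned sites).
    flow = [[0, 8, 1, 0, 2], [8, 0, 5, 1, 0], [1, 5, 0, 6, 1],
            [0, 1, 6, 0, 7], [2, 0, 1, 7, 0]]
    dist = [[0, 1, 2, 3, 4], [1, 0, 1, 2, 3], [2, 1, 0, 1, 2],
            [3, 2, 1, 0, 1], [4, 3, 2, 1, 0]]

    def cost(perm):
        return sum(flow[i][j] * dist[perm[i]][perm[j]]
                   for i in range(5) for j in range(5))

    # Mutate by swapping two assignments; keep any improvement.
    best = list(range(5))
    for _ in range(2000):
        cand = best[:]
        i, j = random.sample(range(5), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if cost(cand) < cost(best):
            best = cand

    print("layout (box -> site):", best, "cost:", cost(best))
    ```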

  19. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability to incorporate the thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  20. A computational sensorimotor model of bat echolocation.

    PubMed

    Erwin, H R; Wilson, W W; Moss, C F

    2001-08-01

    A computational sensorimotor model of target capture behavior by the echolocating bat, Eptesicus fuscus, was developed to understand the detection, localization, tracking, and interception of insect prey in a biological sonar system. This model incorporated acoustics, target localization processes, flight aerodynamics, and target capture planning to produce model trajectories replicating those observed in behavioral insect capture trials. Estimates of target range were based on echo delay, azimuth on the relative intensity of the echo at the two ears, and elevation on the spectral pattern of the sonar return in a match/mismatch process. Flapping flight aerodynamics was used to produce realistic model trajectories. Localization in all three spatial dimensions proved necessary to control target tracking and interception for an adequate model of insect capture behavior by echolocating bats. Target capture using maneuvering flight was generally successful when the model's path was controlled by a planning process that made use of an anticipatory internal simulation, while simple homing was successful only for targets directly ahead of the model bat.
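
    Two of the localization cues are simple enough to state exactly: range follows from echo delay, and azimuth is read from the interaural intensity difference. A sketch, where the IID-to-azimuth slope is an invented placeholder and elevation from spectral cues is omitted:

    ```python
    SPEED_OF_SOUND = 343.0  # m/s in air

    def target_range(echo_delay_s):
        """Range from round-trip echo delay: r = c * t / 2."""
        return SPEED_OF_SOUND * echo_delay_s / 2.0

    def target_azimuth(iid_db, db_per_degree=0.1):
        """Azimuth (degrees) from interaural intensity difference (IID).

        The linear IID-to-angle mapping and its slope are illustrative
        assumptions; the paper's model uses the bat's measured acoustics.
        """
        return iid_db / db_per_degree

    print(f"range  : {target_range(5.8e-3):.2f} m")       # ~1 m target
    print(f"azimuth: {target_azimuth(1.5):.0f} deg toward the louder ear")
    ```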

  1. Computational models of neurophysiological correlates of tinnitus

    PubMed Central

    Schaette, Roland; Kempter, Richard

    2012-01-01

    The understanding of tinnitus has progressed considerably in the past decade, but the details of the mechanisms that give rise to this phantom perception of sound without a corresponding acoustic stimulus have not yet been pinpointed. It is now clear that tinnitus is generated in the brain, not in the ear, and that it is correlated with pathologically altered spontaneous activity of neurons in the central auditory system. Both increased spontaneous firing rates and increased neuronal synchrony have been identified as putative neuronal correlates of phantom sounds in animal models, and both phenomena can be triggered by damage to the cochlea. Various mechanisms could underlie the generation of such aberrant activity. At the cellular level, decreased synaptic inhibition and increased neuronal excitability, which may be related to homeostatic plasticity, could lead to an over-amplification of natural spontaneous activity. At the network level, lateral inhibition could amplify differences in spontaneous activity, and structural changes such as reorganization of tonotopic maps could lead to self-sustained activity in recurrently connected neurons. However, it is difficult to disentangle the contributions of different mechanisms in experiments, especially since not all changes observed in animal models of tinnitus are necessarily related to tinnitus. Computational modeling presents an opportunity of evaluating these mechanisms and their relation to tinnitus. Here we review the computational models for the generation of neurophysiological correlates of tinnitus that have been proposed so far, and evaluate predictions and compare them to available data. We also assess the limits of their explanatory power, thus demonstrating where an understanding is still lacking and where further research may be needed. Identifying appropriate models is important for finding therapies, and we therefore, also summarize the implications of the models for approaches to treat tinnitus.

  2. Realization of masticatory movement by 3-dimensional simulation of the temporomandibular joint and the masticatory muscles.

    PubMed

    Park, Jong-Tae; Lee, Jae-Gi; Won, Sung-Yoon; Lee, Sang-Hee; Cha, Jung-Yul; Kim, Hee-Jin

    2013-07-01

    Masticatory muscles are closely involved in mastication, pronunciation, and swallowing, and it is therefore important to study the specific functions and dynamics of the mandibular and masticatory muscles. However, the shortness of muscle fibers and the diversity of movement directions make it difficult to study and simplify the dynamics of mastication. The purpose of this study was to use 3-dimensional (3D) simulation to observe the functions and movements of each of the masticatory muscles and the mandible while chewing. To simulate the masticatory movement, computed tomographic images were taken from a single Korean volunteer (30-year-old man), and skull image data were reconstructed in 3D (Mimics; Materialise, Leuven, Belgium). The 3D-reconstructed masticatory muscles were then attached to the 3D skull model. The masticatory movements were animated using Maya (Autodesk, San Rafael, CA) based on the mandibular motion path. During unilateral chewing, the mandible was found to move laterally toward the functional side by contracting the contralateral lateral pterygoid and ipsilateral temporalis muscles. During the initial mouth opening, only hinge movement was observed at the temporomandibular joint. During this period, the entire mandible rotated approximately 13 degrees toward the bicondylar horizontal plane. Continued movement of the mandible to full mouth opening occurred simultaneously with sliding and hinge movements, and the mandible rotated approximately 17 degrees toward the center of the mandibular ramus. The described approach can yield data for use in face animation and other simulation systems and for elucidating the functional components related to contraction and relaxation of muscles during mastication.

  3. Modeling Reality - How Computers Mirror Life

    NASA Astrophysics Data System (ADS)

    Bialynicki-Birula, Iwo; Bialynicka-Birula, Iwona

    2005-01-01

    The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems but does not possess specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well-known simple concepts: cellular automata via the Game of Life, Shannon's formula via the game of twenty questions, game theory via a television quiz, and so on. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning even the more complex topics a pleasure.
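
    Among the book's advanced-topics-made-simple, cellular automata are introduced through Conway's Game of Life; the rule is compact enough to state as code. This sketch is ours, not one of the book's CD programs:

    ```python
    from collections import Counter

    def life_step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        neighbours = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    # A glider: after four generations the same shape reappears shifted by (1, 1).
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = life_step(cells)
    print(sorted(cells))
    ```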

  4. Computer Model Used to Help Customize Medicine

    NASA Technical Reports Server (NTRS)

    Stauber, Laurel J.; Veris, Jenise

    2001-01-01

    Dr. Radhakrishnan, a researcher at the NASA Glenn Research Center, in collaboration with biomedical researchers at the Case Western Reserve University School of Medicine and Rainbow Babies and Children's Hospital, is developing computational models of human physiology that quantitate metabolism and its regulation, in both healthy and pathological states. These models can help predict the effects of stresses or interventions, such as drug therapies, and contribute to the development of customized medicine. Customized medical treatment protocols can give more comprehensive evaluations and lead to more specific and effective treatments for patients, reducing treatment time and cost. Commercial applications of this research may help the pharmaceutical industry identify therapeutic needs and predict drug-drug interactions. Researchers will be able to study human metabolic reactions to particular treatments while in different environments as well as establish more definite blood metabolite concentration ranges in normal and pathological states. These computational models may help NASA provide the background for developing strategies to monitor and safeguard the health of astronauts and civilians in space stations and colonies. They may also help to develop countermeasures that ameliorate the effects of both acute and chronic space exposure.

  5. Computational modeling of Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Grazioli, D.; Magri, M.; Salvadori, A.

    2016-08-01

    This review focuses on energy storage materials modeling, with particular emphasis on Li-ion batteries. Theoretical and computational analyses not only provide a better understanding of the intimate behavior of actual batteries under operational and extreme conditions, but they may tailor new materials and shape new architectures in a complementary way to experimental approaches. Modeling can therefore play a very valuable role in the design and lifetime prediction of energy storage materials and devices. Batteries are inherently multi-scale, in space and time. The macro-structural characteristic lengths (the thickness of a single cell, for instance) are order of magnitudes larger than the particles that form the microstructure of the porous electrodes, which in turn are scale-separated from interface layers at which atomistic intercalations occur. Multi-physics modeling concepts, methodologies, and simulations at different scales, as well as scale transition strategies proposed in the recent literature are here revised. Finally, computational challenges toward the next generation of Li-ion batteries are discussed.

  6. Morphological analysis and preoperative simulation of a double-chambered right ventricle using 3-dimensional printing technology.

    PubMed

    Shirakawa, Takashi; Koyama, Yasushi; Mizoguchi, Hiroki; Yoshitatsu, Masao

    2016-05-01

    We present a case of a double-chambered right ventricle in adulthood, in which we tried a detailed morphological assessment and preoperative simulation using 3-dimensional (3D) heart models for improved surgical planning. Polygonal object data for the heart were constructed from computed tomography images of this patient, and transferred to a desktop 3D printer to print out models in actual size. Medical staff completed all of the work processes. Because the 3D heart models were examined by hand, observed from various viewpoints and measured by callipers with ease, we were able to create an image of the complete form of the heart. The anatomical structure of an anomalous bundle was clearly observed, and surgical approaches to the lesion were simulated accurately. During surgery, we used an incision on the pulmonary infundibulum and resected three muscular components of the stenosis. The similarity between the models and the actual heart was excellent. As a result, the operation for this rare defect was performed safely and successfully. We concluded that the custom-made model was useful for morphological analysis and preoperative simulation. PMID:26860990

  8. Some queuing network models of computer systems

    NASA Technical Reports Server (NTRS)

    Herndon, E. S.

    1980-01-01

    Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load-dependent server and for studying interactive systems with fixed multiprogramming limits.
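
    Closed networks with a fixed terminal population of the kind described are classically solved by exact Mean Value Analysis (MVA), which fits in a dozen lines of any language. A sketch follows; the service demands are invented, and no attempt is made to mirror the SR-52 G/H-matrix formulation:

    ```python
    def mva(demands, think_time, n_users):
        """Exact Mean Value Analysis for a closed network of queueing centers.

        demands    : service demand at each center, seconds
        think_time : user think time, seconds
        n_users    : closed population size
        Returns (throughput, response_time).
        """
        q = [0.0] * len(demands)           # mean queue lengths, empty system
        for n in range(1, n_users + 1):
            # Residence time at each center with population n.
            r = [d * (1 + qk) for d, qk in zip(demands, q)]
            x = n / (think_time + sum(r))  # system throughput
            q = [x * rk for rk in r]       # Little's law per center
        return x, sum(r)

    # Example: CPU + two disks, 10 interactive terminals, 5 s think time.
    throughput, resp = mva([0.05, 0.03, 0.025], think_time=5.0, n_users=10)
    print(f"throughput = {throughput:.3f} jobs/s, response time = {resp:.3f} s")
    ```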

  9. Model-based neuroimaging for cognitive computing.

    PubMed

    Poznanski, Roman R

    2009-09-01

    The continuity of the mind is suggested to mean the continuous spatiotemporal dynamics arising from the electrochemical signature of the neocortex: (i) globally through volume transmission in the gray matter as fields of neural activity, and (ii) locally through extrasynaptic signaling between fine distal dendrites of cortical neurons. If the continuity of dynamical systems across spatiotemporal scales defines a stream of consciousness, then intentional metarepresentations, as templates of dynamic continuity, allow qualia to be semantically mapped during neuroimaging of specific cognitive tasks. When interfaced with a computer, such model-based neuroimaging, requiring new mathematics of the brain, will begin to decipher higher cognitive operations not possible with existing brain-machine interfaces.

  10. Modelling the penumbra in Computed Tomography

    PubMed Central

    Kueh, Audrey; Warnett, Jason M.; Gibbons, Gregory J.; Brettschneider, Julia; Nichols, Thomas E.; Williams, Mark A.; Kendall, Wilfrid S.

    2016-01-01

    BACKGROUND: In computed tomography (CT), the spot geometry is one of the main sources of error in CT images. Since X-rays do not arise from a point source, artefacts are produced. In particular there is a penumbra effect, leading to poorly defined edges within a reconstructed volume. Penumbra models can be simulated given a fixed spot geometry and the known experimental setup. OBJECTIVE: This paper proposes to use a penumbra model, derived from Beer’s law, both to confirm spot geometry from penumbra data, and to quantify blurring in the image. METHODS: Two models for the spot geometry are considered; one consists of a single Gaussian spot, the other is a mixture model consisting of a Gaussian spot together with a larger uniform spot. RESULTS: The model consisting of a single Gaussian spot has a poor fit at the boundary. The mixture model (which adds a larger uniform spot) exhibits a much improved fit. The parameters corresponding to the uniform spot are similar across all powers, and further experiments suggest that the uniform spot produces only soft X-rays of relatively low-energy. CONCLUSIONS: Thus, the precision of radiographs can be estimated from the penumbra effect in the image. The use of a thin copper filter reduces the size of the effective penumbra. PMID:27232198
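
    The mixture spot model the paper favours is easy to prototype in 1D: blur an ideal step edge with a Gaussian-plus-uniform spot and read off the penumbra width. All weights and widths below are illustrative stand-ins for the fitted values:

    ```python
    import numpy as np

    x = np.linspace(-2.0, 2.0, 801)
    dx = x[1] - x[0]

    # Two-component spot model as in the paper: a narrow Gaussian plus a wider
    # uniform disc (weights and widths here are illustrative, not fitted).
    gauss = np.exp(-0.5 * (x / 0.05) ** 2)
    uniform = (np.abs(x) < 0.4).astype(float)
    spot = 0.8 * gauss / (gauss.sum() * dx) + 0.2 * uniform / (uniform.sum() * dx)

    # Penumbra: an ideal step edge blurred by the effective spot profile.
    edge = (x > 0.0).astype(float)
    penumbra = np.convolve(edge, spot, mode="same") * dx

    # Edge blur summarized as the 10%-90% rise distance.
    i10 = int(np.argmax(penumbra >= 0.1))
    i90 = int(np.argmax(penumbra >= 0.9))
    print(f"10-90% penumbra width ~ {x[i90] - x[i10]:.3f} (units of x)")
    ```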

  11. Computer retina that models the primate retina

    NASA Astrophysics Data System (ADS)

    Shah, Samir; Levine, Martin D.

    1994-06-01

    At the retinal level, the strategies utilized by biological visual systems allow them to outperform machine vision systems, serving to motivate the design of electronic or 'smart' sensors based on similar principles. Design of such sensors in silicon first requires a model of retinal information processing which captures the essential features exhibited by biological retinas. In this paper, a simple retinal model is presented, which qualitatively accounts for the achromatic information processing in the primate cone system. The model exhibits many of the properties found in biological retinas, such as data reduction through nonuniform sampling, adaptation to a large dynamic range of illumination levels, variation of visual acuity with illumination level, and enhancement of spatiotemporal contrast information. The model is validated by replicating experiments commonly performed by electrophysiologists on biological retinas and comparing the response of the computer retina to data from experiments in monkeys. In addition, the response of the model to synthetic images is shown. The experiments demonstrate that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an 'artificial retina'.

  12. Computational models of cortical visual processing.

    PubMed Central

    Heeger, D J; Simoncelli, E P; Movshon, J A

    1996-01-01

    The visual responses of neurons in the cerebral cortex were first adequately characterized in the 1960s by D. H. Hubel and T. N. Wiesel [(1962) J. Physiol. (London) 160, 106-154; (1968) J. Physiol. (London) 195, 215-243] using qualitative analyses based on simple geometric visual targets. Over the past 30 years, it has become common to consider the properties of these neurons by attempting to make formal descriptions of the transformations they execute on the visual image. Most such models have their roots in linear-systems approaches pioneered in the retina by C. Enroth-Cugell and J. R. Robson [(1966) J. Physiol. (London) 187, 517-552], but it is clear that purely linear models of cortical neurons are inadequate. We present two related models: one designed to account for the responses of simple cells in primary visual cortex (V1) and one designed to account for the responses of pattern direction selective cells in MT (or V5), an extrastriate visual area thought to be involved in the analysis of visual motion. These models share a common structure that operates in the same way on different kinds of input, and instantiate the widely held view that computational strategies are similar throughout the cerebral cortex. Implementations of these models for Macintosh microcomputers are available and can be used to explore the models' properties. PMID:8570605
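
    The simple-cell model family discussed here is built from a linear oriented filter, a squaring nonlinearity, and divisive normalization by the pooled activity of neighbouring units. A compact sketch of that pipeline on a random test patch, with illustrative filter parameters and normalization constant:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Linear stage: an oriented Gabor-like filter applied to an image patch.
    def gabor(size=15, theta=0.0, freq=0.2, sigma=3.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

    image = rng.normal(size=(15, 15))
    filters = [gabor(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
    linear = np.array([np.sum(f * image) for f in filters])

    # Static nonlinearity plus divisive normalization across the population:
    # R_i = L_i^2 / (sigma^2 + sum_j L_j^2), the heart of this model class.
    sigma_sq = 1.0
    responses = linear**2 / (sigma_sq + np.sum(linear**2))
    print("normalized responses:", np.round(responses, 3))
    ```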

  13. Computational social dynamic modeling of group recruitment.

    SciTech Connect

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (a gang) or institutional concepts (a school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to the development of different scenarios for inner-city gang recruitment.

  14. Prenatal diagnosis of holoprosencephaly with ethmocephaly via 3-dimensional sonography.

    PubMed

    Lee, Gui-Se-Ra; Hur, Soo Young; Shin, Jong-Chul; Kim, Soo-Pyung; Kim, Sa Jin

    2006-01-01

    We present the prenatal 3-dimensional (3D) sonographic findings in a case of holoprosencephaly with ethmocephaly at 32 weeks' gestation. The sonographic diagnosis was based on the intracranial findings of a single ventricle and a bulb-shaped appearance of the thalami, together with facial abnormalities including hypotelorism with proboscis. A chromosome study of the fetus revealed a normal female karyotype (46,XX). Postmortem examination confirmed the 3D sonographic findings. This case demonstrates that the use of 3D sonography improves the imaging and understanding of both the intracranial abnormalities and the facial anomalies. PMID:16788963

  15. Teaching 1H NMR Spectrometry Using Computer Modeling.

    ERIC Educational Resources Information Center

    Habata, Yoichi; Akabori, Sadatoshi

    2001-01-01

    Molecular modeling by computer is used to display stereochemistry, molecular orbitals, structure of transition states, and progress of reactions. Describes new ideas for teaching 1H NMR spectroscopy using computer modeling. (Contains 12 references.) (ASK)

  16. Computational models of intergroup competition and warfare.

    SciTech Connect

    Letendre, Kenneth; Abbott, Robert G.

    2011-11-01

    This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

  17. A computational model of motor neuron degeneration.

    PubMed

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L F

    2014-08-20

    To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations.

  18. Computer modeling of thermoelectric generator performance

    NASA Technical Reports Server (NTRS)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operation of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified, and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating; the cold junction can be adjusted for solar radiation; and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with respect to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
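
    The heat-balance terms such a code evaluates (Seebeck output, Peltier and conduction draw at the hot junction, and Joule heating split between junctions) combine in a standard constant-property couple calculation, sketched below with invented material constants. DEGRA 2 itself handles temperature-dependent properties, segmented legs, and sublimation, none of which is attempted here:

    ```python
    # One thermoelectric couple, constant-property approximation.
    S = 400e-6      # Seebeck coefficient, V/K (illustrative)
    R_INT = 0.01    # internal electrical resistance, ohm (illustrative)
    K = 0.05        # thermal conductance of the legs, W/K (illustrative)
    T_HOT, T_COLD = 900.0, 400.0
    dT = T_HOT - T_COLD

    # Matched-load operating point (R_load = R_INT maximizes power output).
    v_oc = S * dT                        # open-circuit (Seebeck) voltage
    current = v_oc / (2 * R_INT)         # current at matched load
    p_out = current**2 * R_INT           # power delivered to the load

    # Heat drawn from the hot junction: Peltier + conduction - half of Joule.
    q_hot = S * T_HOT * current + K * dT - 0.5 * current**2 * R_INT
    print(f"P = {p_out:.2f} W, efficiency = {p_out / q_hot:.1%}")
    ```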

  19. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  20. Computer model evaluates heavy oil pumping units

    SciTech Connect

    Brunings, C.; Moya, J.; Morales, J.

    1989-04-10

    The need for Corpoven, S.A., affiliate of Petroleos de Venezuela, S.A., to obtain a model for use in the evaluation of pumping units and downhole equipment in heavy oil wells resulted in the development of an applicable design and optimization technique. All existing models are based on parameters related to wells and equipment for light and medium crudes. Because Venezuela continues to produce large quantities of nonconventional heavy oil, a new computer model was developed. The automation of the artificial lift operations, developed as a pilot project, permitted the monitoring of four wells in a cluster within the Orinoco heavy oil field by a telemetry system and comparison of the new model with existing models. In addition, remote control of sucker rod systems appears to have many advantages such as permanent supervision of a pumping unit, monitoring of preventive maintenance requirements, and close observation of the well behavior. The results of this pilot project are very encouraging, and a study is under way to expand the telemetry system to include more wells from the Orinoco heavy oil field.

  1. Computer Modeling of Non-Isothermal Crystallization

    NASA Technical Reports Server (NTRS)

    Kelton, K. F.; Narayan, K. Lakshmi; Levine, L. E.; Cull, T. C.; Ray, C. S.

    1996-01-01

    A realistic computer model for simulating isothermal and non-isothermal phase transformations proceeding by homogeneous and heterogeneous nucleation and interface-limited growth is presented. A new treatment for particle size effects on the crystallization kinetics is developed and is incorporated into the numerical model. Time-dependent nucleation rates, size-dependent growth rates, and surface crystallization are also included. Model predictions are compared with experimental measurements of DSC/DTA peak parameters for the crystallization of lithium disilicate glass as a function of particle size, Pt doping levels, and water content. The quantitative agreement that is demonstrated indicates that the numerical model can be used to extract key kinetic data from easily obtained calorimetric data. The model can also be used to probe nucleation and growth behavior in regimes that are otherwise inaccessible. Based on a fit to data, an earlier prediction that the time-dependent nucleation rate in a DSC/DTA scan can rise above the steady-state value at a temperature higher than the peak in the steady-state rate is demonstrated.
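
    A drastically simplified version of such a calculation is sketched below (an illustration with hypothetical Arrhenius parameters, far simpler than the paper's model): nucleation and growth rates are integrated through a constant-rate temperature scan, and the Avrami correction converts the extended volume into a crystallized fraction whose derivative mimics a DSC exotherm:

      import numpy as np

      def arrhenius(A, Ea, T):
          return A * np.exp(-Ea / (8.314 * T))

      beta, T0 = 0.33, 700.0                    # heating rate (K/s), start T (K)
      t = np.linspace(0.0, 900.0, 3000)
      T = T0 + beta * t
      I = arrhenius(2.4e25, 2.5e5, T)           # nucleation rate (m^-3 s^-1), illustrative
      u = arrhenius(2.9e11, 3.0e5, T)           # growth rate (m/s), illustrative

      dt = t[1] - t[0]
      G = np.cumsum(u) * dt                     # G[j] ~ integral of u from 0 to t_j
      X_ext = np.empty_like(t)
      for k in range(len(t)):
          R = G[k] - G[:k + 1]                  # radius of a nucleus born at t_i
          X_ext[k] = (4.0 * np.pi / 3.0) * np.sum(I[:k + 1] * R**3) * dt
      X = 1.0 - np.exp(-X_ext)                  # Avrami (impingement) correction
      dXdT = np.gradient(X, T)                  # proxy for a DSC exotherm
      print(f"peak crystallization temperature ~ {T[np.argmax(dXdT)]:.0f} K")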

  2. Incorporating a 3-dimensional printer into the management of early-stage cervical cancer.

    PubMed

    Baek, Min-Hyun; Kim, Dae-Yeon; Kim, Namkug; Rhim, Chae Chun; Kim, Jong-Hyeok; Nam, Joo-Hyun

    2016-08-01

    We used a 3-dimensional (3D) printer to create anatomical replicas of real lesions and tested its application in cervical cancer. Our study patient decided to undergo radical hysterectomy after seeing her 3D model, which was then used to plan and simulate the surgery. Using 3D printers to create patient-specific 3D tumor models may help cervical cancer patients make treatment decisions. This technology will lead to better surgical and oncological outcomes for cervical cancer patients. J. Surg. Oncol. 2016;114:150-152. © 2016 Wiley Periodicals, Inc. PMID:27222318

  4. Computational modeling of intraocular gas dynamics

    NASA Astrophysics Data System (ADS)

    Noohi, P.; Abdekhodaie, M. J.; Cheng, Y. L.

    2015-12-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The model predicted the intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the retinal tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. Both pure SF6 and SF6 diluted with air were considered as the injected gas. The results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, whereas no significant expansion was observed when diluted SF6 was used. Head positioning for the treatment of the retinal tear also influenced the rate of gas absorption. Moreover, the tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear resulted in a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm, the tolerance angle with pure SF6 was 1.4 times that with SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.
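
    The tolerance-angle idea can be pictured with simple geometry: if the gas forms a spherical cap at the top of a spherical vitreous cavity, the half-angle of the gas-retina contact zone bounds how far the head may tilt before a tear is uncovered. The sketch below is a geometric illustration with an assumed eye radius, not the paper's CFD model:

      import numpy as np
      from scipy.optimize import brentq

      R_EYE = 0.012    # vitreous cavity radius (m), ~12 mm, illustrative

      def cap_half_angle(V, R=R_EYE):
          """Half-angle (deg) of the gas-retina contact cap for gas volume V."""
          def excess(h):                       # spherical-cap volume minus V
              return np.pi * h**2 * (3.0 * R - h) / 3.0 - V
          h = brentq(excess, 0.0, 2.0 * R)     # solve for the cap height
          return np.degrees(np.arccos((R - h) / R))

      for ml in (0.3, 0.6, 1.0):               # gas volumes after expansion
          print(f"{ml:.1f} ml bubble -> contact half-angle "
                f"{cap_half_angle(ml * 1e-6):.0f} deg")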

  6. Preliminary Phase Field Computational Model Development

    SciTech Connect

    Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and then by more complicated and representative alloys. In addition, the modeling incrementally addresses the inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution of the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof of concept to investigate major-loop effects of single versus polycrystalline bulk iron and the effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
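
    At the core of such micromagnetic phase-field models is the Landau-Lifshitz-Gilbert equation; the toy integrator below (illustrative constants, not the report's code) relaxes a single magnetization vector toward an applied field:

      import numpy as np

      GAMMA, ALPHA = 2.211e5, 0.02    # gyromagnetic ratio (m/(A*s)), Gilbert damping

      def llg_step(m, h_eff, dt):
          """One explicit step of the Landau-Lifshitz-Gilbert equation."""
          pre = -GAMMA / (1.0 + ALPHA**2)
          mxh = np.cross(m, h_eff)
          dmdt = pre * (mxh + ALPHA * np.cross(m, mxh))
          m = m + dt * dmdt
          return m / np.linalg.norm(m)          # |m| is conserved physically

      m = np.array([1.0, 0.0, 0.0])             # magnetization along +x
      h = np.array([0.0, 0.0, 8.0e4])           # applied field along +z (A/m)
      for _ in range(50000):
          m = llg_step(m, h, dt=1e-13)
      print("relaxed magnetization:", np.round(m, 3))   # precesses toward +z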

  7. Methodical Approaches to Teaching of Computer Modeling in Computer Science Course

    ERIC Educational Resources Information Center

    Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina

    2015-01-01

    The purpose of this study was to justify a technique for presenting modeling methodology in computer science lessons. The need to study computer modeling arises because current trends toward strengthening the general-education and worldview functions of computer science call for additional research on the…

  8. Continuum and computational modeling of flexoelectricity

    NASA Astrophysics Data System (ADS)

    Mao, Sheng

    Flexoelectricity refers to the linear coupling of strain gradient and electric polarization. Early studies of this subject mostly looked at liquid crystals and biomembranes. Recently, the advent of nanotechnology has revealed its importance also in solid structures, such as flexible electronics, thin films, energy harvesters, etc. The energy storage function of a flexoelectric solid depends not only on polarization and strain, but also on the strain gradient. On this basis we formulate a consistent model of flexoelectric solids under small deformation. We derive a higher-order Navier equation for linear isotropic flexoelectric materials which resembles that of Mindlin in gradient elasticity. Closed-form solutions can be obtained for problems such as beam bending, a pressurized tube, etc. Flexoelectric coupling can be enhanced in the vicinity of defects due to strong gradients and decays away in the far field. We quantify this expectation by computing elastic and electric fields near different types of defects in flexoelectric solids. For point defects, we recover some well-known results of non-local theories. For dislocations, we make connections with experimental results on NaCl, ice, etc. For cracks, we perform a crack-tip asymptotic analysis, and the results share features from gradient elasticity and piezoelectricity. We compute the J integral and use it for determining fracture criteria. Conventional finite element methods formulated solely on displacement are inadequate to treat flexoelectric solids due to the higher-order governing equations. Therefore, we introduce a mixed formulation which uses displacement and displacement gradient as separate variables. Their known relation is constrained in a weighted integral sense. We derive a variational formulation for boundary value problems for piezo- and/or flexoelectric solids. We validate this computational framework against exact solutions. With this method more complex problems, including a plate with an elliptical hole

  9. PROMALS3D: multiple protein sequence alignment enhanced with evolutionary and 3-dimensional structural information

    PubMed Central

    Pei, Jimin; Grishin, Nick V.

    2015-01-01

    Multiple sequence alignment (MSA) is an essential tool with many applications in bioinformatics and computational biology. Accurate MSA construction for divergent proteins remains a difficult computational task. The constantly growing number of protein sequences and structures in public databases can be used to improve alignment quality. PROMALS3D is a tool for protein MSA construction enhanced with additional evolutionary and structural information from database searches. PROMALS3D automatically identifies homologs from sequence and structure databases for input proteins, derives structure-based constraints from alignments of 3-dimensional structures, and combines them with sequence-based constraints of profile-profile alignments in a consistency-based framework to construct high-quality multiple sequence alignments. PROMALS3D output is a consensus alignment enriched with sequence and structural information about input proteins and their homologs. The PROMALS3D web server and package are available at http://prodata.swmed.edu/PROMALS3D. PMID:24170408

  10. Scene-of-crime analysis by a 3-dimensional optical digitizer: a useful perspective for forensic science.

    PubMed

    Sansoni, Giovanna; Cattaneo, Cristina; Trebeschi, Marco; Gibelli, Daniele; Poppa, Pasquale; Porta, Davide; Maldarella, Monica; Picozzi, Massimo

    2011-09-01

    Analysis and detailed registration of the crime scene are of the utmost importance during investigations. However, this phase of activity is often affected by the risk of loss of evidence due to the limits of traditional scene of crime registration methods (ie, photos and videos). This technical note shows the utility of the application of a 3-dimensional optical digitizer on different crime scenes. This study aims in fact at verifying the importance and feasibility of contactless 3-dimensional reconstruction and modeling by optical digitization to achieve an optimal registration of the crime scene. PMID:21811148

  12. Modeling groundwater flow on massively parallel computers

    SciTech Connect

    Ashby, S.F.; Falgout, R.D.; Fogwell, T.W.; Tompson, A.F.B.

    1994-12-31

    The authors will explore the numerical simulation of groundwater flow in three-dimensional heterogeneous porous media. An interdisciplinary team of mathematicians, computer scientists, hydrologists, and environmental engineers is developing a sophisticated simulation code for use on workstation clusters and MPPs. To date, they have concentrated on modeling flow in the saturated zone (single phase), which requires the solution of a large linear system. They will discuss their implementation of preconditioned conjugate gradient solvers. The preconditioners under consideration include simple diagonal scaling, s-step Jacobi, adaptive Chebyshev polynomial preconditioning, and multigrid. They will present some preliminary numerical results, including simulations of groundwater flow at the LLNL site. They also will demonstrate the code's scalability.
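
    As an illustration of the simplest preconditioner on that list, the sketch below applies diagonally scaled (Jacobi) conjugate gradients to a 1-D heterogeneous Darcy-type system; it is a generic implementation, not the code discussed above:

      import numpy as np

      def pcg(A, b, tol=1e-8, max_iter=500):
          """Conjugate gradients with Jacobi (diagonal) preconditioning."""
          m_inv = 1.0 / np.diag(A)              # inverse of the diagonal
          x = np.zeros_like(b)
          r = b - A @ x
          z = m_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = m_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # 1-D heterogeneous test: -(k u')' = f with random conductivity k
      n = 200
      k = np.exp(np.random.default_rng(0).normal(size=n + 1))
      A = np.diag(k[:-1] + k[1:]) - np.diag(k[1:-1], 1) - np.diag(k[1:-1], -1)
      b = np.ones(n)
      x = pcg(A, b)
      print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))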

  13. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.

    1992-01-01

    The goal was the design and implementation of software to be used in the conceptual design of aerospace vehicles. Several packages and design studies were completed, including two software tools currently used in the conceptual level design of aerospace vehicles. These tools are the Solid Modeling Aerospace Research Tool (SMART) and the Environment for Software Integration and Execution (EASIE). SMART provides conceptual designers with a rapid prototyping capability and additionally provides initial mass property analysis. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes; this streamlines the exchange of data between programs, reducing errors and improving efficiency.

  14. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
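
    A toy version of such a performance model (an assumed form, not the paper's) predicts the per-step time of a 1-D partition as the slowest processor's compute time plus a fixed boundary-exchange cost, which is enough to expose the penalty of load imbalance:

      def predict_step_time(cells_per_proc, t_cell, t_msg, t_word):
          """cells_per_proc: cell counts per processor; costs in seconds (illustrative)."""
          compute = max(n * t_cell for n in cells_per_proc)
          comm = t_msg + 2 * t_word        # latency plus one boundary cell per side
          return compute + comm

      balanced = [2500] * 4
      skewed = [4000, 2000, 2000, 2000]
      for name, part in (("balanced", balanced), ("skewed", skewed)):
          t = predict_step_time(part, t_cell=2e-6, t_msg=1e-4, t_word=1e-6)
          print(f"{name}: predicted {t * 1e3:.2f} ms/step")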

  15. Computational modeling of diffusion in the cerebellum.

    PubMed

    Marinov, Toma M; Santamaria, Fidel

    2014-01-01

    Diffusion is a major transport mechanism in living organisms. In the cerebellum, diffusion is responsible for the propagation of molecular signaling involved in synaptic plasticity and metabolism, both intracellularly and extracellularly. In this chapter, we present an overview of the cerebellar structure and function. We then discuss the types of diffusion processes present in the cerebellum and their biological importance. We particularly emphasize the differences between extracellular and intracellular diffusion and the presence of tortuosity and anomalous diffusion in different parts of the cerebellar cortex. We provide a mathematical introduction to diffusion and a conceptual overview of various computational modeling techniques. We discuss their scope and their limits of application. Although our focus is the cerebellum, we have aimed to present the biological and mathematical foundations as generally as possible, so that they are applicable to any other area in biology in which diffusion is of importance.
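
    As a minimal numerical illustration of two quantities emphasized in the chapter (standard definitions assumed; all values illustrative), tortuosity rescales the effective diffusion coefficient, while anomalous subdiffusion changes the power law of the mean-squared displacement:

      D_FREE = 0.76e-9      # free diffusion coefficient (m^2/s), illustrative
      LAMBDA = 1.6          # extracellular tortuosity, a typical brain value
      D_EFF = D_FREE / LAMBDA**2       # effective coefficient in tissue

      ALPHA = 0.7                      # subdiffusive exponent (< 1)
      D_ALPHA = D_EFF                  # generalized coefficient (m^2/s^alpha)

      for t in (0.001, 0.01, 0.1, 1.0):            # seconds
          msd_normal = 6.0 * D_EFF * t             # <r^2> = 6 D t in 3-D
          msd_anom = 6.0 * D_ALPHA * t**ALPHA      # <r^2> ~ t^alpha
          print(f"t={t:g}s: normal {msd_normal:.2e} m^2, "
                f"subdiffusive {msd_anom:.2e} m^2")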

  16. Innovative modelling techniques in computer vision

    NASA Astrophysics Data System (ADS)

    Ardizzone, Edoardo; Chella, Antonio

    The paper is concerned with two of the main research activities currently carried out at the Computer Science and Artificial Intelligence lab of DIE. The first part deals with hybrid artificial vision models, intended to provide object recognition and classification capabilities to an autonomous intelligent system. In this framework, a system is described that recovers 3-D shape information from grey-level images of a scene, builds a geometric representation of the scene in terms of superquadrics at the geometric level, and reasons about the scene at the symbolic level. In the second part, attention is focused on automatic indexing of image databases. JACOB, a prototype system allowing for the automatic extraction of salient features such as colour and texture from images, and for content-based browsing and querying in image and video databases, is briefly described.

  17. Subsite mapping of enzymes. Depolymerase computer modelling.

    PubMed Central

    Allen, J D; Thoma, J A

    1976-01-01

    We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
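
    The sketch below is a generic reimplementation of the idea, not the authors' program: binding-mode populations follow a Boltzmann weighting of summed subsite energies, predicted bond-cleavage frequencies come from the productive complexes, and the subsite energies are recovered by minimizing squared residuals against (here, synthetic) data:

      import numpy as np
      from scipy.optimize import minimize

      N_SUBSITES = 5
      CATALYTIC = 2     # cleavage occurs between subsites 2 and 3 (0-based)

      def cleavage_freqs(energies, chain_len):
          """Predicted bond-cleavage frequencies for an oligomer."""
          weights, bonds = [], []
          for r in range(-chain_len + 1, N_SUBSITES):   # binding-mode offsets
              bond = CATALYTIC - r + 1       # substrate bond over the catalytic site
              if not 1 <= bond < chain_len:  # skip non-productive complexes
                  continue
              occ = [s for s in range(N_SUBSITES) if 0 <= s - r < chain_len]
              weights.append(np.exp(-sum(energies[s] for s in occ)))
              bonds.append(bond)
          w = np.array(weights) / sum(weights)
          freq = np.zeros(chain_len)
          for wi, b in zip(w, bonds):
              freq[b] += wi
          return freq

      def loss(energies, data):
          return sum(np.sum((cleavage_freqs(energies, n) - f) ** 2)
                     for n, f in data.items())

      true = np.array([-1.0, -2.0, 0.5, -1.5, -0.5])   # synthetic subsite map
      data = {n: cleavage_freqs(true, n) for n in (3, 4, 5, 6, 7)}
      fit = minimize(loss, np.zeros(N_SUBSITES), args=(data,), method="Powell")
      print("true energies:     ", np.round(true, 2))
      print("recovered energies:", np.round(fit.x, 2))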

  18. A computational model of cuneothalamic projection neurons.

    PubMed

    Sánchez, Eduardo; Barro, Senén; Mariño, Jorge; Canedo, Antonio

    2003-05-01

    The dorsal column nuclei, cuneatus and gracilis, play a fundamental role in the processing and integration of somesthetic ascending information. Intracellular and patch-clamp recordings obtained in cat in vivo have shown that cuneothalamic projection neurons present two modes of activity: oscillatory and tonic (Canedo et al 1998 Neuroscience 84 603-17). The former is the basis of generating, in sleep and anaesthetized states, slow, delta and spindle rhythms under the control of the cerebral cortex (Mariño et al 2000 Neuroscience 95 657-73). The latter is needed, during wakefulness, to process somesthetic information in real time. To study this behaviour we have developed the first realistic computational model of the cuneothalamic projection neurons. The modelling was guided by experimental recordings, which suggest the existence of hyperpolarization-activated inward currents, transient low- and high-threshold calcium currents, and calcium-activated potassium currents. The neuronal responses were simulated during (1) sleep, (2) transition from sleep to wakefulness and (3) wakefulness under both excitatory and inhibitory synaptic input. In wakefulness the model predicts a set of synaptically driven firing modes that could be associated with information processing strategies in the middle cuneate nucleus.

  19. Computational model of heterogeneous heating in melanin

    NASA Astrophysics Data System (ADS)

    Kellicker, Jason; DiMarzio, Charles A.; Kowalski, Gregory J.

    2015-03-01

    Melanin particles often present as an aggregate of smaller melanin pigment granules and have a heterogeneous surface morphology. When irradiated with light within the absorption spectrum of melanin, these heterogeneities produce measurable concentrations of the electric field that result in temperature gradients from thermal effects that are not seen with spherical or ellipsoidal modeling of melanin. Modeling melanin without taking into consideration the heterogeneous surface morphology yields results that underestimate the strongest signals or overestimate their spatial extent. We present a new technique to image phase changes induced by heating using a computational model of melanin that exhibits these surface heterogeneities. From this analysis, we demonstrate the heterogeneous energy absorption and resulting heating that occurs at the surface of the melanin granule, consistent with three-photon absorption. Using the three-photon fluorescence as a beacon, we propose a method for detecting the extent of the melanin granule using photothermal microscopy to measure the phase changes resulting from the heating of the melanin.

  20. Computational modelling of microfluidic capillary breakup phenomena

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Sprittles, James; Oliver, Jim

    2013-11-01

    Capillary breakup phenomena occur in microfluidic flows when liquid volumes divide. The fundamental process of breakup is a key factor in the functioning of a number of microfluidic devices such as 3D printers or lab-on-chip biomedical technologies. It is well known that the conventional model of breakup is singular as pinch-off is approached, but, despite this, theoretical predictions of the global flow on the millimetre scale appear to agree well with experimental data, at least until the topological change. However, as one approaches smaller scales, where interfacial effects become more dominant, it is likely that such unphysical singularities will influence the global dynamics of the drop formation process. In this talk we develop a computational framework based on the finite element method capable of resolving diverse spatio-temporal scales for the axisymmetric breakup of a liquid jet, so that the pinch-off dynamics can be accurately captured. As well as the conventional model, we discuss the application of the interface formation model to this problem, which allows the pinch-off to be resolved singularity-free, and has already been shown to produce improved flow predictions for related "singular" capillary flows.

  1. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  2. Final technical report for DOE Computational Nanoscience Project: Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Cummings, P. T.

    2010-02-08

    This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.

  3. A computer model of engineered cardiac monolayers.

    PubMed

    Kim, Jong M; Bursac, Nenad; Henriquez, Craig S

    2010-05-19

    Engineered monolayers created using microabrasion and micropatterning methods have provided a simplified in vitro system to study the effects of anisotropy and fiber direction on electrical propagation. Interpreting the behavior in these culture systems has often been performed using classical computer models with continuous properties. However, such models do not account for the effects of random cell shapes, cell orientations, and cleft spaces inherent in these monolayers on the resulting wavefront conduction. This work presents a novel methodology for modeling a monolayer of cardiac tissue in which the factors governing cell shape, cell-to-cell coupling, and degree of cleft space are not constant but rather are treated as spatially random with assigned distributions. This modeling approach makes it possible to simulate wavefront propagation in a manner analogous to performing experiments on engineered monolayer tissues. Simulated results are compared to previously published measured data from monolayers used to investigate the role of cellular architecture on conduction velocities and anisotropy ratios. We also present an estimate for obtaining the electrical properties from these networks and demonstrate how variations in the discrete cellular architecture affect the macroscopic conductivities. The simulations support the common assumption that under normal ranges of coupling strength, tissues with relatively uniform distributions of cell shapes and connectivity can be represented using continuous models with conductivities derived from random discrete cellular architecture using either global or local estimates. The results also reveal that in the presence of abrupt changes in cell orientation, local estimates of tissue properties predict smoother changes in conductivity that may not adequately predict the discrete nature of propagation at the transition sites. PMID:20441739
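
    The global-versus-local estimation issue can be illustrated with a toy 1-D strand (an illustration, not the paper's monolayer model): with log-normally distributed junctional couplings, the series (harmonic-mean) value that governs end-to-end conduction is systematically lower than a naive arithmetic average of local couplings:

      import numpy as np

      rng = np.random.default_rng(7)
      g = rng.lognormal(mean=0.0, sigma=0.75, size=1000)  # random junctional couplings
      harmonic = 1.0 / np.mean(1.0 / g)   # series (end-to-end) effective value
      arithmetic = np.mean(g)             # naive local average
      print(f"harmonic (global) {harmonic:.2f} vs arithmetic (local) {arithmetic:.2f}")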

  4. Gravothermal Star Clusters - Theory and Computer Modelling

    NASA Astrophysics Data System (ADS)

    Spurzem, Rainer

    2010-11-01

    In the George Darwin lecture delivered to the British Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote on the evolution of stellar systems that it can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk will show how this physical concept has inspired theoretical modeling of star clusters in the following decades up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable state of equilibrium of a gravitating star cluster. The trend to local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was detected later. Here the state of the art in modeling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (Fokker-Planck approximation) will be reviewed. Recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also very successfully been used to study dense star clusters around massive black holes in galactic nuclei and even (in a few cases) relativistic supermassive dense objects in the centres of galaxies (here again briefly touching one of the many research fields of V.A. Ambartsumian). For the modern era of high-speed supercomputing, where we are tackling direct N-body simulations of star clusters, we will show that such direct modeling supports and proves the concept of the statistical models based on the Fokker-Planck theory, and that both theoretical concepts and direct computer simulations are necessary to support each other and make scientific progress in the study of star cluster evolution.

  5. Improvement performance of secondary clarifiers by a computational fluid dynamics model

    NASA Astrophysics Data System (ADS)

    Ghawi, Ali G.; Kriš, J.

    2011-12-01

    The secondary clarifier is one of the most commonly used unit operations in wastewater treatment plants. It is customarily designed to separate solids from biologically treated effluents through the clarification of biological solids and the thickening of sludge. As treatment plants receive increasingly high wastewater flows, conventional sedimentation tanks suffer from overloading problems, which result in poor performance. Modification of the inlet baffles through the use of an energy dissipating inlet (EDI) was proposed to enhance the performance of the circular clarifiers at the Al-Dewanyia wastewater treatment plant. A 3-dimensional, fully mass-conservative clarifier model, based on modern computational fluid dynamics theory, was applied to evaluate the proposed tank modification and to estimate the maximum capacity of the existing and modified clarifiers. A computational fluid dynamics (CFD) model was formulated to describe the tank's performance, and design parameters were obtained from the experimental results. The study revealed that velocity and suspended solids (SS) are better parameters than total solids (TS), biochemical oxygen demand (BOD), and chemical oxygen demand (COD) for evaluating the performance of sedimentation tanks, and that the removal efficiencies of suspended solids, biochemical oxygen demand, and chemical oxygen demand were higher in the baffled configuration.

  6. Three-dimensional Computational Fluid Dynamics (CFD) modeling of dry spent nuclear fuel storage canisters

    SciTech Connect

    Lee, S.Y.

    1997-06-01

    One of the interim storage configurations being considered for aluminum-clad foreign research reactor fuel, such as the Material and Testing Reactor (MTR) design, is in a dry storage facility. To support design studies of storage options, a computational and experimental program was conducted at the Savannah River Site (SRS). The objective was to develop computational fluid dynamics (CFD) models which would be benchmarked using data obtained from a full-scale heat transfer experiment conducted in the SRS Experimental Thermal Fluids Laboratory. The current work documents the CFD approach and presents a comparison of results with experimental data. The CFDS-FLOW3D (version 3.3) CFD code was used to model the 3-dimensional convective velocity and temperature distributions within a single dry storage canister of MTR fuel elements. For the present analysis, the Boussinesq approximation was used to treat buoyancy-driven natural convection. The comparison indicates that the CFD code can predict, with reasonable accuracy, the flow and thermal behavior of typical foreign research reactor fuel stored in a dry storage facility.

  7. Water uptake by a maize root system - An explicit numerical 3-dimensional simulation.

    NASA Astrophysics Data System (ADS)

    Leitner, Daniel; Schnepf, Andrea; Klepsch, Sabine; Roose, Tiina

    2010-05-01

    Water is one of the most important resources for plant growth and function. Accurate modelling of unsaturated flow is essential not only for predicting water uptake but also for describing nutrient movement, which depends on water saturation and transport. In this work we present a model for water uptake. The model includes the simultaneous flow of water inside the soil and inside the root network. Water saturation in the soil volume is described by the Richards equation. Water flow inside the roots' xylem is calculated using the Poiseuille law for water flow in a cylindrical tube. The water saturation in the soil as well as the water uptake of the root system are calculated numerically in three dimensions. We study water uptake of a maize plant in a confined pot under different supply scenarios. The main improvement of our approach is that the root surfaces act as spatial boundaries of the soil volume. Therefore water influx into the root is described by a surface flux instead of a volume flux, which is commonly given by an effective sink term. For the numerical computation we use the following software: the 3-dimensional maize root architecture is created by a root growth model based on L-Systems (Leitner et al 2009); a mesh of the surrounding soil volume is created using the meshing software DistMesh (Persson & Strang 2004); using this mesh, the partial differential equations are solved with the finite element method using Comsol Multiphysics 3.5a. Modelling results are related to accepted water uptake models from the literature (Clausnitzer & Hopmans 1994, Roose & Fowler 2004, Javaux et al 2007). This new approach has several advantages. By considering the individual roots it is possible to analyse the influence of overlapping depletion zones due to inter-root competition. Furthermore, such simulations can be used to estimate the influence of simplifying assumptions that are made in the development of effective models. The model can be easily combined with a nutrient
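
    The xylem flow law used above is the Poiseuille relation for a cylindrical tube; a back-of-envelope sketch with illustrative (not measured maize) dimensions:

      import math

      def poiseuille_flow(r, L, dp, mu=1.0e-3):
          """Volumetric flow (m^3/s) through a tube: pi r^4 dp / (8 mu L)."""
          return math.pi * r**4 * dp / (8.0 * mu * L)

      q = poiseuille_flow(r=25e-6, L=0.10, dp=0.1e6)  # 25 um vessel, 10 cm, 0.1 MPa
      print(f"xylem vessel flow ~ {q * 3.6e12:.0f} uL/h")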

  8. Computer modeling of complete IC fabrication process

    NASA Astrophysics Data System (ADS)

    Dutton, Robert W.

    1987-05-01

    The development of fundamental algorithms for process and device modeling as well as novel integration of the tools for advanced Integrated Circuit (IC) technology design is discussed. The development of the first complete 2D process simulator, SUPREM 4, is reported. The algorithms are discussed as well as their application to local-oxidation and extrinsic diffusion conditions which occur in CMOS and BiCMOS technologies. The evolution of 1D (SEDAN) and 2D (PISCES) device analysis is discussed. The application of SEDAN to a variety of non-silicon technologies (GaAs and HgCdTe) is considered. A new multi-window analysis capability for PISCES which exploits Monte Carlo analysis of hot carriers has been demonstrated and used to characterize a variety of silicon MOSFET and GaAs MESFET effects. A parallel computer implementation of PISCES has been achieved using a Hypercube architecture. The PISCES program has been used for a range of important device studies including: latchup, analog switch analysis, MOSFET capacitance studies, and transient bipolar device analysis for ECL gates. The program is broadly applicable to RAM and BiCMOS technology analysis and design. In the analog switch technology area this research effort has produced a variety of important modeling advances.

  9. Random matrix model of adiabatic quantum computing

    SciTech Connect

    Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.

    2005-05-15

    We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size.
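
    The spectral diagnostic itself is easy to sketch (a generic version with crude unfolding, not the paper's procedure): compute normalized nearest-neighbor eigenvalue spacings and fit the Brody parameter q by maximum likelihood, where q near 0 indicates Poissonian (regular) statistics and q near 1 indicates Wigner-type (chaotic) statistics:

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.special import gamma

      def brody_neg_loglik(q, s):
          a = gamma((q + 2.0) / (q + 1.0)) ** (q + 1.0)   # fixes unit mean spacing
          return -np.sum(np.log(a * (q + 1.0) * s**q) - a * s**(q + 1.0))

      def brody_fit(energies):
          e = np.sort(energies)
          e = e[len(e) // 4: 3 * len(e) // 4]   # keep the flatter central spectrum
          s = np.diff(e)
          s = s[s > 0]
          s = s / s.mean()                      # crude unfolding: unit mean spacing
          res = minimize_scalar(brody_neg_loglik, bounds=(1e-3, 1.0),
                                args=(s,), method="bounded")
          return res.x

      rng = np.random.default_rng(1)
      H = rng.normal(size=(400, 400))
      H = (H + H.T) / 2.0                       # GOE-like random matrix
      print("GOE-like   q ~", round(brody_fit(np.linalg.eigvalsh(H)), 2))
      print("Poissonian q ~", round(brody_fit(np.cumsum(rng.exponential(size=400))), 2))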

  10. Computational modeling of acute myocardial infarction.

    PubMed

    Sáez, P; Kuhl, E

    2016-01-01

    Myocardial infarction, commonly known as heart attack, is caused by reduced blood supply and damages the heart muscle because of a lack of oxygen. Myocardial infarction initiates a cascade of biochemical and mechanical events. In the early stages, cardiomyocyte death, wall thinning, collagen degradation, and ventricular dilation are the immediate consequences of myocardial infarction. In the later stages, collagenous scar formation in the infarcted zone and hypertrophy of the non-infarcted zone are auto-regulatory mechanisms to partly correct for these events. Here we propose a computational model for the short-term adaptation after myocardial infarction using the continuum theory of multiplicative growth. Our model captures the effects of cell death initiating wall thinning, and collagen degradation initiating ventricular dilation. Our simulations agree well with clinical observations in early myocardial infarction. They represent a first step toward simulating the progression of myocardial infarction with the ultimate goal to predict the propensity toward heart failure as a function of infarct intensity, location, and size. PMID:26583449

  11. An integrative computational model of multiciliary beating.

    PubMed

    Yang, Xingzhou; Dillon, Robert H; Fauci, Lisa J

    2008-05-01

    The coordinated beating of motile cilia is responsible for ovum transport in the oviduct, transport of mucus in the respiratory tract, and is the basis of motility in many single-celled organisms. The beating of a single motile cilium is achieved by the ATP-driven activation cycles of thousands of dynein molecular motors that cause neighboring microtubule doublets within the ciliary axoneme to slide relative to each other. The precise nature of the spatial and temporal coordination of these individual motors is still not completely understood. The emergent geometry and dynamics of ciliary beating is a consequence of the coupling of these internal force-generating motors, the passive elastic properties of the axonemal structure, and the external viscous, incompressible fluid. Here, we extend our integrative model of a single cilium that couples internal force generation with the surrounding fluid to the investigation of multiciliary interaction. This computational model allows us to predict the geometry of beating, along with the detailed description of the time-dependent flow field both near and away from the cilia. We show that synchrony and metachrony can, indeed, arise from hydrodynamic coupling. We also investigate the effects of viscosity and neighboring cilia on ciliary beat frequency. Moreover, since we have precise flow information, we also measure the dependence of the total flow pumped per cilium per beat upon parameters such as viscosity and ciliary spacing.

  12. Computer-generated animal model stimuli.

    PubMed

    Woo, Kevin L

    2007-01-01

    Communication between animals is diverse and complex. Animals may communicate using auditory, seismic, chemosensory, electrical, or visual signals. In particular, understanding the constraints on visual signal design for communication has been of great interest. Traditional methods for investigating animal interactions have used basic observational techniques, staged encounters, or physical manipulation of morphology. Less intrusive methods have tried to simulate conspecifics using crude playback tools, such as mirrors, still images, or models. As technology has become more advanced, video playback has emerged as another tool with which to examine visual communication (Rosenthal, 2000). However, to move one step further, the application of computer animation now allows researchers to specifically isolate critical components necessary to elicit social responses from conspecifics, and to manipulate these features to control interactions. Here, I provide detail on how to create an animation using the Jacky dragon as a model, but this process may be adaptable for other species. In building the animation, I elected to use Lightwave 3D to alter object morphology, add texture, install bones, and provide comparable weight shading that prevents exaggerated movement. The animation is then matched to select motor patterns to replicate critical movement features. Finally, the sequence must be rendered into an individual clip for presentation. Although there are other adaptable techniques, this particular method has been demonstrated to be effective in eliciting both conspicuous and social responses in staged interactions.

  13. Computational modeling of composite material fires.

    SciTech Connect

    Brown, Alexander L.; Erickson, Kenneth L.; Hubbard, Joshua Allen; Dodd, Amanda B.

    2010-10-01

    condition is examined to study the propagation of decomposition fronts of the epoxy and carbon fiber and their dependence on ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small-scale experimental work to obtain adequate data for validating model predictions is ongoing. The goal of this paper is to demonstrate the progress of the capability for a typical composite material and to emphasize the path forward.

  14. Computational and Modeling Strategies for Cell Motility

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Yang, Xiaofeng; Adalsteinsson, David; Elston, Timothy C.; Jacobson, Ken; Kapustina, Maryna; Forest, M. Gregory

    A predictive simulation of the dynamics of a living cell remains a fundamental modeling and computational challenge. The challenge does not even make sense unless one specifies the level of detail and the phenomena of interest, whether the focus is on near-equilibrium or strongly nonequilibrium behavior, and on localized, subcellular, or global cell behavior. Therefore, choices have to be made clear at the outset, ranging from distinguishing between prokaryotic and eukaryotic cells, specificity within each of these types, whether the cell is "normal," whether one wants to model mitosis, blebs, migration, division, deformation due to confined flow as with red blood cells, and the level of microscopic detail for any of these processes. The review article by Hoffman and Crocker [48] is both an excellent overview of cell mechanics and an inspiration for our approach. One might be interested, for example, in duplicating the intricate experimental details reported in [43]: "actin polymerization periodically builds a mechanical link, the lamellipodium, connecting myosin motors with the initiation of adhesion sites, suggesting that the major functions driving motility are coordinated by a biomechanical process," or to duplicate experimental evidence of traveling waves in cells recovering from actin depolymerization [42, 35]. Modeling studies of lamellipodial structure, protrusion, and retraction behavior range from early mechanistic models [84] to more recent deterministic [112, 97] and stochastic [51] approaches with significant biochemical and structural detail. Recent microscopic-macroscopic models and algorithms for cell blebbing have been developed by Young and Mitran [116], which update cytoskeletal microstructure via statistical sampling techniques together with fluid variables. Alternatively, whole cell compartment models (without spatial details) of oscillations in spreading cells have been proposed [35, 92, 109] which show positive and negative feedback

  15. Computational modeling of solid oxide fuel cell

    NASA Astrophysics Data System (ADS)

    Penmetsa, Satish Kumar

    In the ongoing search for alternative and environmentally friendly power generation facilities, the solid oxide fuel cell (SOFC) is considered one of the prime candidates for the next generation of energy conversion devices due to its capability to provide environmentally friendly and highly efficient power generation. Moreover, SOFCs are less sensitive to fuel composition than other types of fuel cells, and internal reforming of the hydrocarbon fuel can be performed because of the high operating temperature range of 700°C-1000°C. This allows different types of hydrocarbon fuels to be used in SOFCs. The objective of this study is to develop a three-dimensional computational model for the simulation of a solid oxide fuel cell unit, to analyze the complex internal transport mechanisms and the sensitivity of the cell to different operating conditions, and to develop an SOFC with higher operating current density, more uniform gas distributions in the electrodes, and lower ohmic losses. This model includes mass transfer processes due to convection and diffusion in the gas flow channels based on the Navier-Stokes equations, as well as combined diffusion and advection in the electrodes using Brinkman's hydrodynamic equation, and the associated electrochemical reactions in the trilayer of the SOFC. Gas transport characteristics, in terms of three-dimensional spatial distributions of reactant gases and their effects on electrochemical reactions at the electrode-electrolyte interface and on the resulting polarizations, are evaluated for varying pressure conditions. Results show the significance of Brinkman's hydrodynamic model in the electrodes for achieving more uniform gas concentration distributions while using a higher operating pressure and over a higher range of operating current densities.
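
    The cell-level electrochemistry being analyzed can be illustrated with the standard polarization decomposition; the sketch below uses textbook loss terms with illustrative values, not the dissertation's 3-D model:

      import numpy as np

      F, R = 96485.0, 8.314
      T = 1073.0                     # operating temperature (K), ~800 C
      E0 = 0.98                      # ideal potential at T (V), illustrative
      P_H2, P_H2O, P_O2 = 0.97, 0.03, 0.21   # partial pressures (atm)
      R_OHM = 0.15e-4                # area-specific resistance (ohm*m^2)
      I0 = 2000.0                    # exchange current density (A/m^2)
      I_LIM = 30000.0                # limiting current density (A/m^2)

      def cell_voltage(i):
          e_nernst = E0 + (R * T / (2 * F)) * np.log(P_H2 * np.sqrt(P_O2) / P_H2O)
          v_act = (R * T / F) * np.arcsinh(i / (2.0 * I0))   # Butler-Volmer, alpha=0.5
          v_conc = -(R * T / (2 * F)) * np.log(1.0 - i / I_LIM)
          return e_nernst - i * R_OHM - v_act - v_conc

      for i in (1000.0, 5000.0, 10000.0, 20000.0):
          print(f"i = {i / 1e4:.1f} A/cm^2 -> V = {cell_voltage(i):.3f} V")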

  16. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    SciTech Connect

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data is a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.

  17. ACTORS: A model of concurrent computation in distributed systems

    SciTech Connect

    Agha, G.

    1986-01-01

    The transition from sequential to parallel computation is an area of critical concern in today's computer technology, particularly in architecture, programming languages, systems, and artificial intelligence. This book addresses issues in concurrency, and by producing both a syntactic definition and a denotational model of Hewitt's actor paradigm - a model of computation specifically aimed at constructing and analyzing distributed large-scale parallel systems - it advances the understanding of parallel computation.

  18. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  19. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  20. Precise orbit computation and sea surface modeling

    NASA Technical Reports Server (NTRS)

    Wakker, Karel F.; Ambrosius, B. A. C.; Rummel, R.; Vermaat, E.; Deruijter, W. P. M.; Vandermade, J. W.; Zimmerman, J. T. F.

    1991-01-01

    The research project described below is part of a long-term program at Delft University of Technology aiming at the application of European Remote Sensing satellite (ERS-1) and TOPEX/POSEIDON altimeter measurements for geophysical purposes. This program started in 1980 with the processing of Seasat laser range and altimeter height measurements and concentrates today on the analysis of Geosat altimeter data. The objectives of the TOPEX/POSEIDON research project are the tracking of the satellite by the Dutch mobile laser tracking system MTLRS-2, the computation of precise TOPEX/POSEIDON orbits, the analysis of the spatial and temporal distribution of the orbit errors, the improvement of ERS-1 orbits through the information obtained from the altimeter crossover difference residuals for crossing ERS-1 and TOPEX/POSEIDON tracks, the combination of ERS-1 and TOPEX/POSEIDON altimeter data into a single high-precision data set, and the application of this data set to model the sea surface. The latter application will focus on the determination of detailed regional mean sea surfaces, sea surface variability, ocean topography, and ocean currents in the North Atlantic, the North Sea, the seas around Indonesia, the West Pacific, and the oceans around South Africa.

  1. Geometric modeling for computer aided design

    NASA Technical Reports Server (NTRS)

    Schwing, James L.; Olariu, Stephen

    1995-01-01

    The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front-end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes, resulting in streamlined data exchange between programs, fewer errors, and improved efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.

  2. Computational modeling of epidural cortical stimulation

    NASA Astrophysics Data System (ADS)

    Wongsarnpigoon, Amorn; Grill, Warren M.

    2008-12-01

    Epidural cortical stimulation (ECS) is a developing therapy to treat neurological disorders. However, it is not clear how the cortical anatomy or the polarity and position of the electrode affects current flow and neural activation in the cortex. We developed a 3D computational model simulating ECS over the precentral gyrus. With the electrode placed directly above the gyrus, about half of the stimulus current flowed through the crown of the gyrus while current density was low along the banks deep in the sulci. Beneath the electrode, neurons oriented perpendicular to the cortical surface were depolarized by anodic stimulation, and neurons oriented parallel to the boundary were depolarized by cathodic stimulation. Activation was localized to the crown of the gyrus, and neurons on the banks deep in the sulci were not polarized. During regulated voltage stimulation, the magnitude of the activating function was inversely proportional to the thickness of the CSF and dura. During regulated current stimulation, the activating function was not sensitive to the thickness of the dura but was slightly more sensitive than during regulated voltage stimulation to the thickness of the CSF. Varying the width of the gyrus and the position of the electrode altered the distribution of the activating function due to changes in the orientation of the neurons beneath the electrode. Bipolar stimulation, although often used in clinical practice, reduced spatial selectivity as well as selectivity for neuron orientation.
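
    For intuition, the "activating function" invoked above is commonly approximated as the second spatial difference of the extracellular potential along a fiber (Rattay's formulation). The sketch below is a minimal illustration rather than the paper's 3D finite element model: it evaluates that quantity for a hypothetical point-source electrode over a straight fiber in a uniform medium, and all parameter values are assumptions.

    ```python
    import numpy as np

    # Minimal sketch of Rattay's activating function: the second spatial
    # difference of extracellular potential sampled along a straight fiber.
    # The potential profile below is a hypothetical point-source field, not
    # the paper's finite element solution.

    def activating_function(v_ext, dx):
        """Second spatial difference of extracellular potential."""
        f = np.zeros_like(v_ext)
        f[1:-1] = (v_ext[2:] - 2.0 * v_ext[1:-1] + v_ext[:-2]) / dx**2
        return f

    # Hypothetical monopole electrode 1 mm from a fiber in a uniform medium.
    sigma = 0.2          # tissue conductivity, S/m (assumed)
    I = 1e-3             # stimulus current, A (assumed)
    x = np.linspace(-0.01, 0.01, 201)   # fiber axis, m
    dx = x[1] - x[0]
    h = 1e-3             # electrode-to-fiber distance, m
    v_ext = I / (4.0 * np.pi * sigma * np.sqrt(x**2 + h**2))

    f = activating_function(v_ext, dx)
    # Positive lobes of f predict depolarization for this geometry;
    # flipping the sign of I (anodic vs cathodic) flips the pattern.
    print("peak |f|: %.3g V/m^2 at x = %.2f mm"
          % (np.abs(f).max(), x[np.argmax(np.abs(f))] * 1e3))
    ```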

  3. Structural computational modeling of RNA aptamers

    PubMed Central

    Xu, Xiaojun; Dickey, David D.; Chen, Shi-Jie; Giangrande, Paloma H.

    2016-01-01

    RNA aptamers represent an emerging class of biologics that can be easily adapted for personalized and precision medicine. Several therapeutic aptamers with desirable binding and functional properties have been developed and evaluated in preclinical studies over the past 25 years. However, for the majority of these aptamers, their clinical potential has yet to be realized. A significant hurdle to the clinical adoption of this novel class of biologicals is the limited information on their secondary and tertiary structure. Knowledge of the RNA’s structure would greatly facilitate and expedite the post-selection optimization steps required for translation, including truncation (to reduce costs of manufacturing), chemical modification (to enhance stability and improve safety) and chemical conjugation (to improve drug properties for combinatorial therapy). Here we describe a structural computational modeling methodology that when coupled to a standard functional assay, can be used to determine key sequence and structural motifs of an RNA aptamer. We applied this methodology to enable the truncation of an aptamer to prostate specific membrane antigen (PSMA) with great potential for targeted therapy that had failed previous truncation attempts. This methodology can be easily applied to optimize other aptamers with therapeutic potential. PMID:26972787

  4. Computational modeling for eco engineering: Making the connections between engineering and ecology (Invited)

    NASA Astrophysics Data System (ADS)

    Bowles, C.

    2013-12-01

    Ecological engineering, or eco engineering, is an emerging field in the study of integrating ecology and engineering, concerned with the design, monitoring, and construction of ecosystems. According to Mitsch (1996), 'the design of sustainable ecosystems intends to integrate human society with its natural environment for the benefit of both'. Eco engineering emerged as a new idea in the early 1960s, and the concept has seen refinement since then. As a commonly practiced field of engineering it is relatively novel. Howard Odum (1963) and others first introduced it as 'utilizing natural energy sources as the predominant input to manipulate and control environmental systems'. Mitsch and Jorgensen (1989) were the first to define eco engineering, to provide eco engineering principles, and to present conceptual eco engineering models. Later they refined the definition and increased the number of principles. They suggested that the goals of eco engineering are: a) the restoration of ecosystems that have been substantially disturbed by human activities such as environmental pollution or land disturbance, and b) the development of new sustainable ecosystems that have both human and ecological values. Here a more detailed overview of eco engineering is provided, particularly with regard to how engineers and ecologists are utilizing multi-dimensional computational models to link ecology and engineering, resulting in increasingly successful project implementation. Descriptions are provided pertaining to 1-, 2- and 3-dimensional hydrodynamic models and their use in small- and large-scale applications. A range of conceptual models that have been developed to aid in the creation of linkages between ecology and engineering are discussed. Finally, several case studies that link ecology and engineering via computational modeling are provided. These studies include localized stream rehabilitation, spawning gravel enhancement on a large river system, and watershed-wide floodplain modeling of

  5. Modelling, abstraction, and computation in systems biology: A view from computer science.

    PubMed

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology.

  6. Predictive Capability Maturity Model for computational modeling and simulation.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
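
    As a sketch of how such an assessment might be recorded in software, the snippet below scores each of the six PCMM elements named in the abstract at one of four maturity levels and reports the weakest element. The 0-3 numeric convention and the summary choice are assumptions for illustration; the PCMM itself prescribes attribute tables, not code.

    ```python
    # Hedged sketch of a PCMM assessment record: the six contributing
    # elements from the abstract, each scored at one of four maturity
    # levels (0-3 here; the numeric convention is an assumption).

    PCMM_ELEMENTS = (
        "representation and geometric fidelity",
        "physics and material model fidelity",
        "code verification",
        "solution verification",
        "model validation",
        "uncertainty quantification and sensitivity analysis",
    )

    def assess(scores):
        """Validate a {element: level} map and summarize the maturity profile."""
        assert set(scores) == set(PCMM_ELEMENTS), "score every element exactly once"
        assert all(s in (0, 1, 2, 3) for s in scores.values()), "levels are 0-3"
        weakest = min(scores, key=scores.get)
        return {"profile": scores, "weakest element": weakest,
                "minimum level": scores[weakest]}

    example = assess({e: lvl for e, lvl in zip(PCMM_ELEMENTS, (2, 1, 3, 2, 1, 0))})
    print(example["weakest element"], "->", example["minimum level"])
    ```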

  7. Learning Anatomy: Do New Computer Models Improve Spatial Understanding?

    ERIC Educational Resources Information Center

    Garg, Amit; Norman, Geoff; Spero, Lawrence; Taylor, Ian

    1999-01-01

    Assesses desktop-computer models that rotate in virtual three-dimensional space. Compares spatial learning with a computer carpal-bone model horizontally rotating at 10-degree views with the same model rotating at 90-degree views. (Author/CCM)

  8. Thermal crosstalk in 3-dimensional RRAM crossbar array.

    PubMed

    Sun, Pengxiao; Lu, Nianduan; Li, Ling; Li, Yingtao; Wang, Hong; Lv, Hangbing; Liu, Qi; Long, Shibing; Liu, Su; Liu, Ming

    2015-01-01

    High-density 3-dimensional (3D) crossbar resistive random access memory (RRAM) is a major focus of emerging memory technologies. To compete with ultra-high-density NAND and NOR memories, an understanding of the reliability mechanisms and scaling potential of 3D RRAM crossbar arrays is needed. Thermal crosstalk is one of the most critical effects that should be considered in 3D crossbar array applications. The Joule heat generated inside an RRAM device determines its own switching behavior, and in dense memory arrays the surrounding temperature rise may degrade the resistance of neighboring devices. In this work, the thermal crosstalk effect and the scaling potential under thermal effects in 3D RRAM crossbar arrays are systematically investigated. It is revealed that the reset process in a 3D RRAM array is dominated by the transient thermal effect. More importantly, thermal crosstalk can deteriorate retention performance and even cause the stored state of a disturbed RRAM cell to fail from LRS (low resistance state) to HRS (high resistance state). In addition, the resistance state degradation becomes more serious as the feature size is scaled down. Possible methods for alleviating the thermal crosstalk effect while further advancing the scaling potential are also provided and verified by numerical simulation. PMID:26310537
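
    As a rough illustration of the transient thermal coupling described above, the sketch below integrates the 2D heat equation with an explicit finite-difference scheme and reports the temperature rise at a cell neighboring a heated one. Geometry, material constants, and the heating rate are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Hedged sketch: explicit finite-difference solution of the 2D heat
    # equation, illustrating how Joule heat in one RRAM cell transiently
    # raises the temperature of its neighbors. Grid spacing, diffusivity,
    # and the heat source are illustrative assumptions, not the paper's model.

    nx = ny = 61
    dx = 5e-9                      # grid spacing, m (assumed ~5 nm)
    alpha = 1e-6                   # thermal diffusivity, m^2/s (assumed)
    dt = 0.2 * dx**2 / alpha       # stable time step for explicit FTCS
    T = np.full((nx, ny), 300.0)   # ambient temperature, K

    src = (nx // 2, ny // 2)            # switching cell at the array center
    neighbor = (nx // 2 + 4, ny // 2)   # a cell ~20 nm away
    q = 2e11                            # heating rate proxy, K/s (assumed)

    for step in range(400):             # rough stand-in for a reset pulse
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
        T += dt * alpha * lap
        T[src] += dt * q                # Joule heating in the source cell only
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 300.0  # heat-sunk edges

    print("source cell: %.0f K, neighbor: %.0f K" % (T[src], T[neighbor]))
    ```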

  9. A Novel 3-Dimensional Approach for Cardiac Regeneration

    PubMed Central

    Munarin, F.; Coulombe, K.L.K.

    2016-01-01

    Ischemic heart diseases, such as coronary artery disease and microvascular disease, are cardiovascular pathologies that cause reduced blood supply to the heart muscle. Acute and chronic ischemia cause cardiomyocytes to die, and these cells are not naturally replaced as part of the wound healing process in the heart. To promote neovascularization in the wound bed and in implanted engineered tissues, we have developed a collagen–alginate microspheres scaffold intended for local release of drugs and growth factors in order to recruit host endothelial cells to the area and provide them with geometrical cues to form new vessels. Optimization of alginate microspheres included modulation of nitrogen pressure, alginate and CaCl2 concentrations, nozzle size, and velocity of extrusion to achieve monodisperse populations of 100 μm diameter microspheres with protein release over 3 days. In vitro incorporation of fibroblasts in the bulk collagen demonstrated cellular compatibility with embedded alginate microspheres. An in vitro vessel formation assay, performed with human umbilical vein endothelial cells (HUVECs) immobilized in the collagen phase of the collagen–alginate microspheres scaffolds, showed that HUVECs formed networks following the 3-dimensional pattern of the microspheres even in the absence of growth factor. Implantation of acellular collagen–alginate microspheres scaffolds onto healthy rat hearts confirmed the invasion of host cells at one week. Together, these results suggest that the collagen–alginate microspheres scaffold is a viable, tunable therapeutic approach for directing neovascularization in engineered tissues and in the heart after ischemic events. PMID:26736614

  10. Chromosome Conformation of Human Fibroblasts Grown in 3-Dimensional Spheroids

    PubMed Central

    Chen, Haiming; Comment, Nicholas; Chen, Jie; Ronquist, Scott; Hero, Alfred; Ried, Thomas; Rajapakse, Indika

    2015-01-01

    In the study of interphase chromosome organization, genome-wide chromosome conformation capture (Hi-C) maps are often generated using 2-dimensional (2D) monolayer cultures. These 2D cells have morphological deviations from cells that exist in 3-dimensional (3D) tissues in vivo, and may not maintain the same chromosome conformation. We used Hi-C maps to test the extent of differences in chromosome conformation between human fibroblasts grown in 2D cultures and those grown in 3D spheroids. Significant differences in chromosome conformation were found between 2D cells and those grown in spheroids. Intra-chromosomal interactions were generally increased in spheroid cells, with a few exceptions, while inter-chromosomal interactions were generally decreased. Overall, chromosomes located closer to the nuclear periphery had increased intra-chromosomal contacts in spheroid cells, while those located more centrally had decreased interactions. This study highlights the necessity to conduct studies on the topography of the interphase nucleus under conditions that mimic an in vivo environment. PMID:25738643

  11. Computational Modeling of Magnetically Actuated Propellant Orientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.

    1996-01-01

    sufficient performance to support cryogenic propellant management tasks. In late 1992, NASA MSFC began a new investigation in this technology commencing with the design of the Magnetically-Actuated Propellant Orientation (MAPO) experiment. A mixture of ferrofluid and water is used to simulate the paramagnetic properties of LOX and the experiment is being flown on the KC-135 aircraft to provide a reduced gravity environment. The influence of a 0.4 Tesla ring magnet on flow into and out of a subscale Plexiglas tank is being recorded on video tape. The most efficient approach to evaluating the feasibility of MAPO is to complement the experimental program with the development of a computational tool to model the process of interest. The goal of the present research is to develop such a tool. Once confidence in its fidelity is established by comparison to data from the MAPO experiment, it can be used to assist in the design of future experiments and to study the parameter space of the process. Ultimately, it is hoped that the computational model can serve as a design tool for full-scale spacecraft applications.

  12. Idealized computational models for auditory receptive fields.

    PubMed

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second-layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the logspectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
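
    For reference, the classical Gammatone impulse response that the framework re-derives has the closed form g(t) = t^(n-1) e^(-2*pi*b*t) cos(2*pi*f*t). The sketch below generates one such filter and applies it to noise; the order, bandwidth, and center frequency are illustrative choices, and the paper's generalized Gammatone family is not reproduced here.

    ```python
    import numpy as np

    # Sketch of the classical Gammatone impulse response
    #   g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t),
    # one of the filter families the axiomatic framework re-derives.
    # Order, bandwidth, and center frequency below are illustrative choices.

    fs = 16000                      # sample rate, Hz
    t = np.arange(0, 0.064, 1.0 / fs)
    n = 4                           # filter order (classical choice)
    f = 1000.0                      # center frequency, Hz
    b = 125.0                       # bandwidth parameter, Hz (assumed)

    g = t**(n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f * t)
    g /= np.sqrt(np.sum(g**2))      # unit-energy normalization

    # Apply the filter to white noise by direct convolution.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(fs)     # 1 s of noise
    y = np.convolve(x, g, mode="same")
    print("output RMS for the %g Hz channel: %.3f" % (f, y.std()))
    ```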

  13. Computer modeling of a convective steam superheater

    NASA Astrophysics Data System (ADS)

    Trojan, Marcin

    2015-03-01

    The superheater generates superheated steam from the saturated steam leaving the evaporator. In a pulverized-coal-fired boiler, even a relatively small amount of ash causes fouling problems on the heating surfaces, including the superheaters. In the convection pass of the boiler, where the flue gas temperature is lower, ash deposits can be loose or sintered. Ash fouling not only reduces heat transfer from the flue gas to the steam but also increases the pressure drop along the flue gas flow path; when the pressure drop is greater, the power consumed by the fan increases. If the superheater surfaces are covered with ash, the steam temperature at the outlet of the superheater stages falls and the flow rate of water injected into the attemperator has to be reduced. The flue gas temperature after the individual superheater stages also rises. Consequently, this leads to a reduction in boiler efficiency. The paper presents the results of computational fluid dynamics simulations of the first-stage superheater of the OP-210M boiler using commercial software. The temperature distributions of the steam and flue gas along their flow paths are determined, together with the temperatures of the tube walls and of the ash deposits. The calculated steam temperature is compared with measurement results. Knowledge of these temperatures is of great practical importance because it allows the grade of steel to be chosen for a given superheater stage. By using the developed superheater model to determine the degree of ash fouling on-line, the activation frequency of the steam sootblowers can be controlled.

  14. Performance Models for Split-execution Computing Systems

    SciTech Connect

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan; Seddiqi, Hadayat; Britt, Keith A; Imam, Neena

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.

  15. Using Virtual Reality Computer Models to Support Student Understanding of Astronomical Concepts

    ERIC Educational Resources Information Center

    Barnett, Michael; Yamagata-Lynch, Lisa; Keating, Tom; Barab, Sasha A.; Hay, Kenneth E.

    2005-01-01

    The purpose of this study was to examine how 3-dimensional (3-D) models of the Solar System supported student development of conceptual understandings of various astronomical phenomena that required a change in frame of reference. In the course described in this study, students worked in teams to design and construct 3-D virtual reality computer…

  16. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared the results with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  17. The emerging role of cloud computing in molecular modelling.

    PubMed

    Ebejer, Jean-Paul; Fulle, Simone; Morris, Garrett M; Finn, Paul W

    2013-07-01

    There is a growing recognition of the importance of cloud computing for large-scale and data-intensive applications. The distinguishing features of cloud computing and their relationship to other distributed computing paradigms are described, as are the strengths and weaknesses of the approach. We review the use made to date of cloud computing for molecular modelling projects and the availability of front ends for molecular modelling applications. Although the use of cloud computing technologies for molecular modelling is still in its infancy, we demonstrate its potential by presenting several case studies. Rapid growth can be expected as more applications become available and costs continue to fall; cloud computing can make a major contribution not just in terms of the availability of on-demand computing power, but could also spur innovation in the development of novel approaches that utilize that capacity in more effective ways.

  1. Evaluation of aerothermal modeling computer programs

    NASA Technical Reports Server (NTRS)

    Hsieh, K. C.; Yu, S. T.

    1987-01-01

    Various computer programs based upon the SIMPLE or SIMPLER algorithm were studied and compared for numerical accuracy, efficiency, and grid dependency. Four two-dimensional codes and one three-dimensional code, originally developed by a number of research groups, were considered. In general, the accuracy and computational efficiency of these TEACH-type programs were improved by modifying the differencing schemes and their solvers. A brief description of each program is given. Error reduction, spline flux, and second upwind differencing programs are covered.

  2. Los Alamos CCS (Center for Computer Security) formal computer security model

    SciTech Connect

    Dreicer, J.S.; Hunteman, W.J.

    1989-01-01

    This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The initial motivation for this effort was the need to provide a method by which DOE computer security policy implementation could be tested and verified. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present models. Formal mathematical models for computer security have been designed and developed in conjunction with attempts to build secure computer systems since the early 1970s. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The mathematical basis appears to be justified and is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell-LaPadula abstract sets of objects and subjects. 5 refs.

  3. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Jerzy Bernholc

    2011-02-03

    will some day reach a miniaturization limit, forcing designers of Si-based electronics to pursue increased performance by other means. Any other alternative approach would have the unenviable task of matching the ability of Si technology to pack more than a billion interconnected and addressable devices on a chip the size of a thumbnail. Nevertheless, the prospects of developing alternative approaches to fabricate electronic devices have spurred an ever-increasing pace of fundamental research. One of the promising possibilities is molecular electronics (ME), self-assembled molecular-based electronic systems composed of single-molecule devices in ultra dense, ultra fast molecular-sized components. This project focused on developing accurate, reliable theoretical modeling capabilities for describing molecular electronics devices. The participants in the project are given in Table 1. The primary outcomes of this fundamental computational science grant are publications in the open scientific literature. As listed below, 62 papers have been published from this project. In addition, the research has also been the subject of more than 100 invited talks at conferences, including several plenary or keynote lectures. Many of the goals of the original proposal were completed. Specifically, the multi-disciplinary group developed a unique set of capabilities and tools for investigating electron transport in fabricated and self-assembled nanostructures at multiple length and time scales.

  4. Crossover from 2-dimensional to 3-dimensional aggregations of clusters on square lattice substrates

    NASA Astrophysics Data System (ADS)

    Cheng, Yi; Zhu, Yu-Hong; Pan, Qi-Fa; Yang, Bo; Tao, Xiang-Ming; Ye, Gao-Xiang

    2015-11-01

    A Monte Carlo study on the crossover from 2-dimensional to 3-dimensional aggregations of clusters is presented. Based on the traditional cluster-cluster aggregation (CCA) simulation, a modified growth model is proposed. The clusters (including single particles and their aggregates) diffuse with diffusion step length l (1 ≤ l ≤ 7) and aggregate on a square lattice substrate. If the number of particles contained in a cluster is larger than a critical size sc, the particles at the edge of the cluster may jump onto the upper layer, which results in the crossover from 2-dimensional to 3-dimensional aggregation. Our simulation results are in good agreement with experimental findings. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374082 and 11074215), the Science Foundation of Zhejiang Province Department of Education, China (Grant No. Y201018280), the Fundamental Research Funds for Central Universities, China (Grant No. 2012QNA3010), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20100101110005).
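
    A minimal sketch of this growth rule follows: clusters diffuse rigidly on a periodic square lattice with step length l and merge on contact, and clusters larger than a critical size sc may promote a particle to the second layer. The lattice size, particle count, and promotion probability are illustrative assumptions, and the promoted particle is chosen arbitrarily rather than from the cluster edge, so this is a simplification of the published model.

    ```python
    import random

    # Simplified sketch of the modified CCA rule: rigid cluster diffusion
    # with step length l on a periodic square lattice, merging on contact,
    # and promotion of a particle to layer 2 once a cluster exceeds sc.
    # All numeric parameters below are illustrative assumptions.

    L, N, l, sc, p_up, steps = 64, 300, 2, 8, 0.1, 2000
    rng = random.Random(1)

    clusters = [{(rng.randrange(L), rng.randrange(L))} for _ in range(N)]
    upper = set()   # layer-2 sites

    def touches(a, b):
        """True if any site of cluster a is lattice-adjacent to cluster b."""
        for (x, y) in a:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if ((x + dx) % L, (y + dy) % L) in b:
                    return True
        return False

    for _ in range(steps):
        c = rng.choice(clusters)
        dx, dy = rng.choice(((l, 0), (-l, 0), (0, l), (0, -l)))
        moved = {((x + dx) % L, (y + dy) % L) for (x, y) in c}
        clusters.remove(c)
        # single-pass merge with clusters the moved one overlaps or touches
        for other in [o for o in clusters if moved & o or touches(moved, o)]:
            moved |= other
            clusters.remove(other)
        # crossover rule: large clusters push a particle to layer 2
        if len(moved) > sc and rng.random() < p_up:
            site = next(iter(moved))        # arbitrary pick (simplification)
            if site not in upper:
                moved.discard(site)
                upper.add(site)
        clusters.append(moved)

    print("clusters left:", len(clusters), "| layer-2 particles:", len(upper))
    ```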

  5. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is to be applied on fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on any particular computer is a non-trivial task. Here, comparative results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) are presented. The main idea in the parallel version of DEM is a domain-partitioning approach. The effective use of the caches and hierarchical memories of modern computers, as well as the performance, speed-ups, and efficiency achieved, are discussed. The parallel code of DEM, created by using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
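
    To make the domain-partitioning idea concrete, the sketch below advances a 1D upwind advection step on one slab of the domain per MPI rank, with one-cell halo exchanges between neighbors (using mpi4py). The transport equation, sizes, and periodic ring topology are placeholders for illustration, not DEM itself.

    ```python
    import numpy as np
    from mpi4py import MPI

    # Minimal sketch of domain partitioning: each rank owns a slab of a
    # periodic 1D domain, exchanges one-cell halos with its neighbors, and
    # advances a first-order upwind advection step locally. All sizes and
    # the transport problem are placeholder assumptions.

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_global = 1200
    n_local = n_global // size          # assume size divides n_global
    u = np.zeros(n_local + 2)           # one ghost cell on each side
    if rank == 0:
        u[1:11] = 1.0                   # a pollutant puff near the left edge

    c, dx, dt = 1.0, 1.0, 0.5           # CFL = c*dt/dx = 0.5
    left = (rank - 1) % size
    right = (rank + 1) % size

    for _ in range(800):
        # halo exchange: edges go to the neighbors' ghost cells
        comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
        u[1:-1] -= c * dt / dx * (u[1:-1] - u[:-2])   # upwind update

    total = comm.reduce(u[1:-1].sum(), op=MPI.SUM, root=0)
    if rank == 0:
        print("total mass on the periodic ring: %.3f" % total)
    ```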

  6. Computational modeling of Krypton Gas Puffs on Z

    NASA Astrophysics Data System (ADS)

    Jennings, C. A.; Ampleford, D. J.; Harvey-Thompson, A. J.; Jones, B.; Hansen, S. B.; Lamppa, D. C.; Jobe, M. R. L.; Strizic, T.; Cuneo, M. E.

    2013-10-01

    Large diameter multi-shell gas puffs rapidly imploded by high current (~20 MA, ~100 ns) on the Z generator are able to produce high-intensity K-shell radiation. Experiments are underway to produce Krypton K-shell emission at ~13 keV, although efficiently radiating at these high photon energies represents a significant challenge. This necessitates the careful design and optimization of the distribution of gas in these loads. To facilitate this, we hydrodynamically model the flow of gas out of the nozzle before imploding that mass distribution using a 3-dimensional resistive, radiative MHD code (GORGON). Modeled gas profiles have been validated against 2-dimensional interferometric measurements of the gas distribution from these nozzles, and MHD calculations are validated against power, yield, spectral and imaging diagnostics of previous gas puff implosions on Z. This approach enables us to iterate between modeling the implosion and modeling gas flow from the nozzle to optimize radiative output from this combined system. Guided by our implosion calculations, we have redesigned the gas nozzle to better optimize Krypton K-shell output, and the evaluation of these designs is the subject of ongoing experiments. This work was supported by Sandia National Laboratories, a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's NNSA under contract DE-AC04-94AL85000.

  7. Computer modeling of plasma: Past, present, and future

    SciTech Connect

    Dawson, J.M.

    1995-06-01

    Computer modeling has become a powerful tool for exploring the physics of plasmas. Early computers could handle only relatively simple models but nevertheless showed that these devices could shed a lot of light on the complex physics of plasmas. This capability has proved not only valuable to research but also is becoming an important teaching tool; modeling allows students to experience in concrete ways plasma phenomena which are otherwise presented only abstractly. Present-day plasma models combined with parallel computing provide sufficient power that numerical modeling of laboratory experiments on complex devices has become possible. Two examples of simulations are discussed in some detail: the "Beat Wave Accelerator" and the "Numerical Tokamak."

  8. Modification of Simple Computer Models as a Laboratory Exercise.

    ERIC Educational Resources Information Center

    Stratton, Lewis P.

    1983-01-01

    Describes an exercise (using Apple microcomputer) which provides an introduction to computer simulation as a biological research tool. Includes examples of students' modeling programs and partial program listings for programs developed by students from an original model. (JN)

  9. Overview of ASC Capability Computing System Governance Model

    SciTech Connect

    Doebling, Scott W.

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  10. A Model for Guiding Undergraduates to Success in Computational Science

    ERIC Educational Resources Information Center

    Olagunju, Amos O.; Fisher, Paul; Adeyeye, John

    2007-01-01

    This paper presents a model for guiding undergraduates to success in computational science. A set of integrated, interdisciplinary training and research activities is outlined for use as a vehicle to increase and produce graduates with research experiences in computational and mathematical sciences. The model is responsive to the development of…

  11. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Compliance Certification and Re-certification, General Requirements, under COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS. § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) … (Title 40, Protection of Environment, vol. 25, revised as of 2014-07-01.)

  12. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Compliance Certification and Re-certification, General Requirements, under COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS. § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) … (Title 40, Protection of Environment, vol. 25, revised as of 2011-07-01.)

  13. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Compliance Certification and Re-certification, General Requirements, under COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS. § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) … (Title 40, Protection of Environment, vol. 26, revised as of 2013-07-01.)

  14. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Compliance Certification and Re-certification, General Requirements, under COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS. § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) … (Title 40, Protection of Environment, vol. 26, revised as of 2012-07-01.)

  15. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Compliance Certification and Re-certification, General Requirements, under COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS. § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) … (Title 40, Protection of Environment, vol. 24, revised as of 2010-07-01.)

  16. Computational Modeling for Language Acquisition: A Tutorial with Syntactic Islands

    ERIC Educational Resources Information Center

    Pearl, Lisa S.; Sprouse, Jon

    2015-01-01

    Purpose: Given the growing prominence of computational modeling in the acquisition research community, we present a tutorial on how to use computational modeling to investigate learning strategies that underlie the acquisition process. This is useful for understanding both typical and atypical linguistic development. Method: We provide a general…

  17. World Knowledge in Computational Models of Discourse Comprehension

    ERIC Educational Resources Information Center

    Frank, Stefan L.; Koppen, Mathieu; Noordman, Leo G. M.; Vonk, Wietske

    2008-01-01

    Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, "ad hoc" selection, extraction from…

  18. Using Computational Simulations to Confront Students' Mental Models

    ERIC Educational Resources Information Center

    Rodrigues, R.; Carvalho, P. Simeão

    2014-01-01

    In this paper we show an example of how to use a computational simulation to obtain visual feedback for students' mental models, and compare their predictions with the simulated system's behaviour. Additionally, we use the computational simulation to incrementally modify the students' mental models in order to accommodate new data,…

  19. Generate rigorous pyrolysis models for olefins production by computer

    SciTech Connect

    Klein, M.T.; Broadbelt, L.J.; Grittman, D.H.

    1997-04-01

    With recent advances in the automation of the model-building process for large networks of kinetic equations, it may become feasible to generate computer pyrolysis models for naphthas and gas oil feedstocks. The potential benefit of a rigorous mechanistic model for these relatively complex liquid feedstocks is great, due to their diverse characterizations and yield spectra. An ethane pyrolysis example is used to illustrate the computer generation of reaction mechanism models.
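
    The core of such automated model building is mechanical: a list of elementary reactions is compiled into mass-action rate equations. The sketch below does this for a toy free-radical network loosely patterned on ethane pyrolysis; the species list, steps, and rate constants are illustrative assumptions, not a validated mechanism.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hedged sketch of automated kinetic model building: a reaction list is
    # compiled mechanically into mass-action ODEs. The toy network and rate
    # constants below are illustrative, not a real ethane mechanism.

    species = ["C2H6", "CH3", "C2H5", "H", "C2H4", "CH4", "H2", "C4H10"]
    idx = {s: i for i, s in enumerate(species)}

    # (reactants, products, rate constant) for each elementary step
    reactions = [
        (["C2H6"], ["CH3", "CH3"], 1e-2),           # initiation (toy value)
        (["CH3", "C2H6"], ["CH4", "C2H5"], 1e2),    # H-abstraction (toy value)
        (["C2H5"], ["C2H4", "H"], 1e1),             # beta-scission (toy value)
        (["H", "C2H6"], ["H2", "C2H5"], 1e3),       # propagation (toy value)
        (["C2H5", "C2H5"], ["C4H10"], 1e2),         # termination (toy value)
    ]

    def rhs(t, c):
        """Assemble mass-action rates from the reaction list."""
        dc = np.zeros_like(c)
        for reactants, products, k in reactions:
            rate = k * np.prod([c[idx[s]] for s in reactants])
            for s in reactants:
                dc[idx[s]] -= rate
            for s in products:
                dc[idx[s]] += rate
        return dc

    c0 = np.zeros(len(species)); c0[idx["C2H6"]] = 1.0   # mol/L, assumed
    sol = solve_ivp(rhs, (0.0, 10.0), c0, method="LSODA")
    print({s: round(float(sol.y[idx[s], -1]), 4) for s in ("C2H6", "C2H4", "CH4")})
    ```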

  20. Ambient temperature modelling with soft computing techniques

    SciTech Connect

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni; De Felice, Matteo

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
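
    A minimal sketch of such a hybrid scheme follows: a small network is pre-trained by gradient descent and its weights seed part of a genetic algorithm population. The toy regression target, network size, and all hyperparameters are assumptions, and the gradient is computed by finite differences purely to keep the sketch short.

    ```python
    import numpy as np

    # Hedged sketch of the BP+GA hybrid described above. The synthetic
    # "temperature" target and all hyperparameters are illustrative.

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (200, 3))                # stand-in inputs
    y = np.sin(X @ np.array([2.0, -1.0, 0.5]))      # synthetic target

    H = 8                                           # hidden units
    def predict(w, X):
        W1 = w[:3 * H].reshape(3, H); b1 = w[3 * H:4 * H]
        W2 = w[4 * H:5 * H];          b2 = w[5 * H]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def mse(w):
        return float(np.mean((predict(w, X) - y) ** 2))

    # --- gradient-descent phase (finite differences keep the sketch short) ---
    w = rng.normal(0, 0.3, 5 * H + 1)
    for _ in range(150):
        g = np.zeros_like(w); eps = 1e-4
        for i in range(len(w)):
            d = np.zeros_like(w); d[i] = eps
            g[i] = (mse(w + d) - mse(w - d)) / (2 * eps)
        w -= 0.1 * g

    # --- GA phase: the pre-trained result initialises a few individuals ---
    pop = [w + rng.normal(0, 0.05, w.shape) for _ in range(4)]      # seeded
    pop += [rng.normal(0, 0.3, w.shape) for _ in range(16)]         # random
    for gen in range(40):
        pop.sort(key=mse)
        parents = pop[:6]
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(w.shape) < 0.5                        # crossover
            child = np.where(mask, parents[a], parents[b])
            children.append(child + rng.normal(0, 0.02, w.shape))   # mutation
        pop = parents + children

    print("best MSE after hybrid BP+GA: %.4f" % mse(min(pop, key=mse)))
    ```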

  1. Computer Model Of Focal Plane Array

    NASA Astrophysics Data System (ADS)

    Thvedt, Tom A.; Willoughby, Charles T.; Salcido, Michael M.; Dereniak, Eustace L.

    1987-11-01

    This paper presents a computer program for simulation of an infrared focal plane array. Standard equations are used to support a menu driven program developed for an IBM personal computer. The terms and equations for each section are presented and samples of actual screen displays of a currently available device are also included. The program is intended to provide the user with a better capability to understand and to study the tradeoffs of fabrication parameters versus the focal plane array performance (i.e. CTE, both spatial and temporal dynamic range, MTF, and noise) used for an optical sensor system analysis. Only surface channel devices are considered in the simulation.

  2. A computational model of the human hand 93-ERI-053

    SciTech Connect

    Hollerbach, K.; Axelrod, T.

    1996-03-01

    The objectives of the Computational Hand Modeling project were to demonstrate the applicability of the Laboratory's NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopedic modeling.

  3. Use of 3-Dimensional Printing for Preoperative Planning in the Treatment of Recurrent Anterior Shoulder Instability

    PubMed Central

    Sheth, Ujash; Theodoropoulos, John; Abouali, Jihad

    2015-01-01

    Recurrent anterior shoulder instability often results from large bony Bankart or Hill-Sachs lesions. Preoperative imaging is essential in guiding our surgical management of patients with these conditions. However, we are often limited to interpreting a 3-dimensional (3D) structure from conventional 2-dimensional imaging. In cases in which complex anatomy or bony defects are encountered, this type of imaging is often inadequate. We used 3D printing to produce a solid 3D model of a glenohumeral joint from a young patient with recurrent anterior shoulder instability and complex Bankart and Hill-Sachs lesions. The 3D model from our patient was used in the preoperative planning stages of an arthroscopic Bankart repair and remplissage to determine the depth of the Hill-Sachs lesion and the degree of abduction and external rotation at which the Hill-Sachs lesion engaged. PMID:26759768

  4. Bringing computational models of bone regeneration to the clinic.

    PubMed

    Carlier, Aurélie; Geris, Liesbet; Lammens, Johan; Van Oosterwyck, Hans

    2015-01-01

    Although the field of bone regeneration has experienced great advancements in the last decades, integrating all the relevant, patient-specific information into a personalized diagnosis and optimal treatment remains a challenging task due to the large number of variables that affect bone regeneration. Computational models have the potential to cope with this complexity and to improve the fundamental understanding of the bone regeneration processes as well as to predict and optimize the patient-specific treatment strategies. However, the current use of computational models in daily orthopedic practice is very limited or nonexistent. We have identified three key hurdles that limit the translation of computational models of bone regeneration from bench to bedside. First, there exists a clear mismatch between the scope of the existing and the clinically required models. Second, most computational models are confronted with limited quantitative information of insufficient quality, thereby hampering the determination of patient-specific parameter values. Third, current computational models are only corroborated with animal models, whereas a thorough (retrospective and prospective) assessment of the computational model will be crucial to convince the health care providers of the capabilities thereof. These challenges must be addressed so that computational models of bone regeneration can reach their true potential, resulting in the advancement of individualized care and reduction of the associated health care costs. PMID:25903383

  5. Computer Modeling and Research in the Classroom

    ERIC Educational Resources Information Center

    Ramos, Maria Joao; Fernandes, Pedro Alexandrino

    2005-01-01

    We report on a computational chemistry course for undergraduate students that successfully incorporated a research project on the design of new contrast agents for magnetic resonance imaging and shift reagents for in vivo NMR. Course outcomes were positive: students were quite motivated during the whole year--they learned what was required of…

  6. CHOREO: An Interactive Computer Model for Dance.

    ERIC Educational Resources Information Center

    Savage, G. J.; Officer, J. M.

    1978-01-01

    Establishes the need for literacy in dance; and describes two dance notation systems: the Massine notation method, and the Labanotation method. The use of interactive computer graphics as a tool for both learning and interpreting dance notation is introduced. (Author/VT)

  7. Structure, function, and behaviour of computational models in systems biology

    PubMed Central

    2013-01-01

    Background Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such “bio-models” necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions their full meaning is not yet formally specified and only described in natural language. Results We present a conceptual framework – the meaning facets – which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model’s components (structure), the meaning of the model’s intended use (function), and the meaning of the model’s dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. Conclusions The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research

  8. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  9. Computer modeling of ORNL storage tank sludge mobilization and mixing

    SciTech Connect

    Terrones, G.; Eyler, L.L.

    1993-09-01

    This report presents and analyzes the results of the computer modeling of mixing and mobilization of sludge in horizontal, cylindrical storage tanks using submerged liquid jets. The computer modeling uses the TEMPEST computational fluid dynamics computer program. The horizontal, cylindrical storage tank configuration is similar to the Melton Valley Storage Tanks (MVST) at Oak Ridge National Laboratory (ORNL). The MVST tank contents exhibit non-homogeneous, non-Newtonian rheology characteristics. The eventual goals of the simulations are to determine under what conditions sludge mobilization using submerged liquid jets is feasible in tanks of this configuration, and to estimate mixing times required to approach homogeneity of the contents of the tanks.

  10. Implementing and assessing computational modeling in introductory mechanics

    NASA Astrophysics Data System (ADS)

    Caballero, Marcos D.; Kohlmyer, Matthew A.; Schatz, Michael F.

    2012-12-01

    Students taking introductory physics are rarely exposed to computational modeling. In a one-semester large lecture introductory calculus-based mechanics course at Georgia Tech, students learned to solve physics problems using the VPython programming environment. During the term, 1357 students in this course solved a suite of 14 computational modeling homework questions delivered using an online commercial course management system. Their proficiency with computational modeling was evaluated with a proctored assignment involving a novel central force problem. The majority of students (60.4%) successfully completed the evaluation. Analysis of erroneous student-submitted programs indicated that a small set of student errors explained why most programs failed. We discuss the design and implementation of the computational modeling homework and evaluation, the results from the evaluation, and the implications for computational instruction in introductory science, technology, engineering, and mathematics (STEM) courses.
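
    In the spirit of the course's VPython exercises, the sketch below integrates a satellite's motion under a central inverse-square force with the Euler-Cromer method, written in plain NumPy so it runs without VPython. The orbit parameters are illustrative assumptions and this is not the course's actual evaluation problem.

    ```python
    import numpy as np

    # Sketch of a central-force problem of the kind assigned in such
    # courses: Euler-Cromer integration of a satellite around Earth.
    # Initial conditions and the step size are illustrative assumptions.

    G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24                  # central mass (Earth), kg
    m = 1000.0                    # satellite mass, kg

    r = np.array([7.0e6, 0.0, 0.0])      # initial position, m
    v = np.array([0.0, 7500.0, 0.0])     # initial velocity, m/s
    dt = 1.0                             # time step, s

    for step in range(6000):
        F = -G * M * m * r / np.linalg.norm(r) ** 3   # central force on m
        v = v + (F / m) * dt              # update velocity first...
        r = r + v * dt                    # ...then position (Euler-Cromer)

    energy = 0.5 * m * v @ v - G * M * m / np.linalg.norm(r)
    print("r = %.0f km, total energy = %.3e J" % (np.linalg.norm(r) / 1e3, energy))
    ```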

  11. Operation of the computer model for microenvironment atomic oxygen exposure

    NASA Technical Reports Server (NTRS)

    Bourassa, R. J.; Gillis, J. R.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironment atomic oxygen exposure has been developed to extend atomic oxygen modeling capability to include shadowing and reflections. The model uses average exposure conditions established by the direct exposure model and extends the application of these conditions to treat surfaces of arbitrary shape and orientation.

  12. Stress analysis in platform-switching implants: a 3-dimensional finite element study.

    PubMed

    Pellizzer, Eduardo Piza; Verri, Fellippo Ramos; Falcón-Antenucci, Rosse Mary; Júnior, Joel Ferreira Santiago; de Carvalho, Paulo Sérgio Perri; de Moraes, Sandra Lúcia Dantas; Noritomi, Pedro Yoshito

    2012-10-01

    The aim of this study was to evaluate the influence of the platform-switching technique on stress distribution in the implant, abutment, and peri-implant tissues through a 3-dimensional finite element study. Three 3-dimensional mandibular models were fabricated using the SolidWorks 2006 and InVesalius software. Each model was composed of a bone block with one implant 10 mm long and of different diameters (3.75 and 5.00 mm). The UCLA abutments ranged in diameter from 4.1 mm to 5.00 mm. After obtaining the geometries, the models were transferred to the software FEMAP 10.0 for pre- and postprocessing of finite elements to generate the mesh, loading, and boundary conditions. A total load of 200 N was applied in axial (0°), oblique (45°), and lateral (90°) directions. The models were solved by the software NeiNastran 9.0 and transferred to the software FEMAP 10.0 to obtain the results, which were visualized through von Mises and maximum principal stress maps. Model A (3.75-mm implant with 4.1-mm abutment) exhibited the largest area of stress concentration under all loadings (axial, oblique, and lateral) for the implant and the abutment. All models presented stress areas at the abutment level and at the implant/abutment interface. Models B (5.0-mm implant with 5.0-mm abutment) and C (5.0-mm implant with 4.1-mm abutment) presented smaller areas of stress concentration and a similar distribution pattern. For the cortical bone, low stress concentration was observed in the peri-implant region for models B and C in comparison with model A. The trabecular bone exhibited low stress that was well distributed in models B and C. Model A presented the highest stress concentration. Model B exhibited better stress distribution. There was no significant difference between the large-diameter implants (models B and C).

  13. Effect of mandibular advancement on the natural position of the head: a preliminary study of 3-dimensional cephalometric analysis.

    PubMed

    Lin, Xiaozhen; Liu, Yanpu; Edwards, Sean P

    2013-10-01

    Our aim was to investigate the potential effect of advancement by bilateral sagittal split osteotomy (BSSO) on the natural position of the head by using 3-dimensional cephalometric analysis. Seven consecutive patients who had undergone BSSO advancement alone, and who had preoperative and 6-week postoperative cone beam computed tomography (CT) scans, were recruited to this retrospective study. Two variables, SNB and SNC2, were used to indicate craniomandibular alignment and craniocervical inclination, respectively, in the midsagittal plane. Using 3-dimensional cephalometric analysis software, SNB and SNC2 were recorded in the volume and measured in the midsagittal plane at 3 independent time points. Reliability was measured, and a paired t test was used to assess the significance of differences between the means of SNB and SNC2 before and after the operation. The 3-dimensional cephalometric measurements showed good reliability. SNB increased as planned in all the mandibles that were advanced, the cervical vertebrae were brought forward after BSSO, and SNC2 was significantly increased in 6 of the 7 patients. Three-dimensional cephalometric analysis may provide an alternative way of assessing cephalometrics. After BSSO advancement, the natural position of the head changed by increasing the craniocervical inclination in an anteroposterior direction.
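
    A cephalometric angle such as SNB is, geometrically, the angle at nasion (N) between the directions to sella (S) and B point. A hedged sketch of that computation from 3D landmark coordinates (the coordinates are hypothetical, not patient data from the study):

      import numpy as np

      def angle_at_vertex(a, vertex, b):
          """Angle in degrees at `vertex` between 3D points a and b."""
          u = np.asarray(a) - np.asarray(vertex)
          v = np.asarray(b) - np.asarray(vertex)
          c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

      S = [10.0,  95.0, 60.0]   # sella (hypothetical mm coordinates)
      N = [10.0, 120.0, 75.0]   # nasion
      B = [10.0, 112.0, 10.0]   # B point
      print(angle_at_vertex(S, N, B))   # an SNB-style angle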

  14. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    PubMed

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-01-01

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity. PMID:27130577
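
    One simple form such a model can take is a trial-by-trial error-based learning rule with a retention factor, the kind of building block the review describes. A hedged sketch (parameters are illustrative, not drawn from the paper):

      # Trial-by-trial motor learning with retention (illustrative parameters).
      retention = 0.99      # fraction of the learned state kept between trials
      learning_rate = 0.1   # how strongly each error updates the state
      target = 1.0          # desired motor output
      state = 0.0           # internal estimate driving the movement

      for trial in range(200):
          error = target - state
          state = retention * state + learning_rate * error

      print(state)          # approaches the target as training accumulates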

  16. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescope.

  17. Computational Modeling of NEXT 2000-Hour Wear Test Results

    NASA Technical Reports Server (NTRS)

    Malone, Shane P.

    2004-01-01

    Ion optics computational models are invaluable tools for the design of ion optics systems. In this study, a new computational model developed by an outside vendor for NASA Glenn Research Center (GRC) is presented. This model is a gun code which has been modified to model the plasma sheaths both upstream and downstream of the ion optics. The model handles multiple species (e.g. singly and doubly-charged ions) and includes a charge-exchange model for erosion estimates. The model uses commercially available solid design and meshing software, allowing high flexibility in ion optics geometric configurations. This computational model is compared to experimental results from the NASA Evolutionary Xenon Thruster (NEXT) 2000-hour wear test, including over-focusing along the edge apertures, pit-and-groove erosion due to charge exchange, and beamlet distortion at the edge of the hole pattern.

  18. Computational Modeling of NEXT 2000-hour Wear Test Results

    NASA Astrophysics Data System (ADS)

    Malone, Shane; Soulas, George

    2004-11-01

    Ion optics computational models are invaluable tools for the design of ion optics systems. In this study, a new computational model developed by an outside vendor for NASA Glenn Research Center (GRC) is presented. This model is a gun code which has been modified to model the plasma sheaths both upstream and downstream of the ion optics. The model handles multiple species (e.g. singly and doubly-charged ions) and includes a charge-exchange model for erosion estimates. The model uses commercially available solid design and meshing software, allowing high flexibility in ion optics geometric configurations. This computational model is compared to experimental results from the NASA Evolutionary Xenon Thruster (NEXT) 2000-hour wear test, including over-focusing along the edge apertures, pit-and-groove erosion due to charge exchange, and beamlet distortion at the edge of the hole pattern.

  19. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    NASA Astrophysics Data System (ADS)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data, or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium, and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data, and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by integrating the different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area covers the Federal District of Brazil (~6000 km²), with undulating relief and heights varying from 600 m to 1340 m, located between 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical example shows the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m, minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid height was determined by geometric leveling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m, minimum = -0.040 m).
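
    The remove-compute-restore decomposition itself is compact enough to state. A hedged summary in conventional notation (these are the standard symbols, not necessarily GRAVTool's internal ones):

      % Remove-Compute-Restore: the geoid height N is assembled from three bands
      % N_GGM : long wavelengths from the global geopotential model (here EIGEN-6C4)
      % N_res : medium wavelengths from residual terrestrial gravity (Stokes integration)
      % N_RTM : short wavelengths restored from the digital terrain model
      N = N_{\mathrm{GGM}} + N_{\mathrm{res}} + N_{\mathrm{RTM}}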

  20. Computational Electromagnetic Modeling of SansEC(Trade Mark) Sensors

    NASA Technical Reports Server (NTRS)

    Smith, Laura J.; Dudley, Kenneth L.; Szatkowski, George N.

    2011-01-01

    This paper describes the preliminary effort to apply computational design tools to aid in the development of an electromagnetic SansEC resonant sensor composite materials damage detection system. The computational methods and models employed on this research problem will evolve in complexity over time and will lead to the development of new computational methods and experimental sensor systems that demonstrate the capability to detect, diagnose, and monitor the damage of composite materials and structures on aerospace vehicles.

  1. A qualitative model for computer-assisted instruction in cardiology.

    PubMed

    Julen, N; Siregar, P; Sinteff, J P; Le Beux, P

    1998-01-01

    CARDIOLAB is an interactive computational framework dedicated to teaching and computer-aided diagnosis in cardiology. The framework embodies models that simulate the heart's electrical activity. They constitute the core of a Computer-Assisted Instruction (CAI) program intended to teach, in a multimedia environment, the concepts underlying rhythmic disorders and cardiac diseases. The framework includes a qualitative model (QM) which is described in this paper. During simulation using QM, dynamic sequences representing impulse formation and conduction processes are produced along with the corresponding qualitative descriptions. The corresponding electrocardiogram (ECG) and ladder diagram are also produced, and thus, both qualitative notions and quantitative facts can be taught via the model. We discuss how qualitative models in particular, and computational models in general can enhance the teaching capability of CAI programs.

  2. ORPHEE 3D: Static and dynamic tridimensional BHA computer models

    SciTech Connect

    Birades, M.

    1986-01-01

    Elf Aquitaine, within an ARTEP research project granted by the EEC, has developed two three-dimensional mathematical models to predict the directional behavior of bottom hole assemblies (BHAs). Both models simulate BHAs by finite element methods. The first model dynamically describes their transient behavior step by step, over short time intervals that are continuously adjusted to attain the required precision. The displacements and lateral forces computed for each step incorporate friction against the borehole wall through a sophisticated shock algorithm. The second model computes a static equilibrium of the BHA while assuming simplified friction forces at the contact points between the wellbore and the BHA. The lateral forces and displacements from the static model are found to be an average of the highly varying ones computed by the dynamic model, and the static computer run is much faster.

  3. A qualitative model for computer-assisted instruction in cardiology.

    PubMed Central

    Julen, N.; Siregar, P.; Sinteff, J. P.; Le Beux, P.

    1998-01-01

    CARDIOLAB is an interactive computational framework dedicated to teaching and computer-aided diagnosis in cardiology. The framework embodies models that simulate the heart's electrical activity. They constitute the core of a Computer-Assisted Instruction (CAI) program intended to teach, in a multimedia environment, the concepts underlying rhythmic disorders and cardiac diseases. The framework includes a qualitative model (QM) which is described in this paper. During simulation using QM, dynamic sequences representing impulse formation and conduction processes are produced along with the corresponding qualitative descriptions. The corresponding electrocardiogram (ECG) and ladder diagram are also produced, and thus, both qualitative notions and quantitative facts can be taught via the model. We discuss how qualitative models in particular, and computational models in general can enhance the teaching capability of CAI programs. PMID:9929258

  4. Transient upset models in computer systems

    NASA Technical Reports Server (NTRS)

    Mason, G. M.

    1983-01-01

    Essential factors for the design of transient upset monitors for computers are discussed. The upset is a system-level event that is software dependent. It can occur in the program flow, the opcode set, the opcode address domain, the read address domain, and the write address domain. Most upsets are in the program flow. It is shown that simple external monitors functioning transparently relative to the system operations can be built if a detailed accounting is made of the characteristics of the faults that can happen. Sample applications are provided for Z-80- and 8085-based systems.

  5. Multiscale Modeling in Computational Biomechanics: Determining Computational Priorities and Addressing Current Challenges

    SciTech Connect

    Tawhai, Merryn; Bischoff, Jeff; Einstein, Daniel R.; Erdemir, Ahmet; Guess, Trent; Reinbolt, Jeff

    2009-05-01

    In this article, we describe some current multiscale modeling issues in computational biomechanics from the perspective of the musculoskeletal and respiratory systems and mechanotransduction. First, we outline the necessity of multiscale simulations in these biological systems. Then we summarize challenges inherent to multiscale biomechanics modeling, regardless of the subdiscipline, followed by computational challenges that are system-specific. We discuss some of the current tools that have been utilized to aid research in multiscale mechanics simulations, and the priorities to further the field of multiscale biomechanics computation.

  6. Three-dimensional cardiac computational modelling: methods, features and applications.

    PubMed

    Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M

    2015-01-01

    The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given living subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the necessary components to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing, among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases. PMID:25928297

  7. Cancer evolution: mathematical models and computational inference.

    PubMed

    Beerenwinkel, Niko; Schwarz, Roland F; Gerstung, Moritz; Markowetz, Florian

    2015-01-01

    Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy.
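
    As a minimal illustration of the population-dynamics models the review covers, the following sketch simulates a birth-death process in which a division can create a faster-growing mutant subclone. All rates are illustrative, not taken from the paper:

      import random

      random.seed(0)
      birth, death, mu = 1.0, 0.9, 1e-3   # per-cell rates; mu = driver probability
      wildtype, mutant = 100, 0

      for _ in range(20000):
          total = wildtype + mutant
          if total == 0:
              break                        # tumor went extinct
          is_mutant = random.random() < mutant / total
          rate_b = birth * (1.2 if is_mutant else 1.0)  # mutants divide faster
          if random.random() < rate_b / (rate_b + death):   # division event
              if is_mutant:
                  mutant += 1
              elif random.random() < mu:
                  mutant += 1              # division introduced a driver mutation
              else:
                  wildtype += 1
          else:                            # death event
              if is_mutant:
                  mutant -= 1
              else:
                  wildtype -= 1

      print(wildtype, mutant)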

  8. Cancer Evolution: Mathematical Models and Computational Inference

    PubMed Central

    Beerenwinkel, Niko; Schwarz, Roland F.; Gerstung, Moritz; Markowetz, Florian

    2015-01-01

    Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy. PMID:25293804

  9. Computer-Based Simulation Models for Community College Business Students.

    ERIC Educational Resources Information Center

    Kahl, James

    Instructors at Lower Columbia College in Longview, Washington, use computer-based simulation models in lower-level business administration courses. Prior to use, teachers must select and obtain a simulation, discuss it with campus computer personnel, set an operations schedule, obtain the necessary supplementary material, and test run the program.…

  10. A model for computing at the SSC (Superconducting Super Collider)

    SciTech Connect

    Baden, D. . Dept. of Physics); Grossman, R. . Lab. for Advanced Computing)

    1990-06-01

    High energy physics experiments at the Superconducting Super Collider (SSC) will show a substantial increase in complexity and cost over existing forefront experiments, and computing needs may no longer be met via simple extrapolations from the previous experiments. We propose a model for computing at the SSC based on technologies common in private industry involving both hardware and software. 11 refs., 1 fig.

  11. Computational Morphodynamics: A modeling framework to understand plant growth

    PubMed Central

    Chickarmane, Vijay; Roeder, Adrienne H.K.; Tarr, Paul T.; Cunha, Alexandre; Tobin, Cory; Meyerowitz, Elliot M.

    2014-01-01

    Computational morphodynamics utilizes computer modeling to understand the development of living organisms over space and time. Results from biological experiments are used to construct accurate and predictive models of growth. These models are then used to make novel predictions that provide further insight into the processes in question, which can be tested experimentally to either confirm or rule out the validity of the computational models. This review highlights two fundamental issues: (1) the need for models to span and integrate single-cell behavior with tissue development, and (2) the necessity of understanding the feedback between the mechanics of growth and chemical or molecular signaling. We review different approaches to modeling plant growth and discuss a variety of model types that can be implemented, with the aim of demonstrating how this methodology can be used to explore the morphodynamics of plant development. PMID:20192756

  12. A computationally tractable version of the collective model

    NASA Astrophysics Data System (ADS)

    Rowe, D. J.

    2004-05-01

    A computationally tractable version of the Bohr-Mottelson collective model is presented which makes it possible to diagonalize realistic collective models and obtain convergent results in relatively small appropriately chosen subspaces of the collective model Hilbert space. Special features of the proposed model are that it makes use of the beta wave functions given analytically by the softened-beta version of the Wilets-Jean model, proposed by Elliott et al., and a simple algorithm for computing SO(5)⊃SO(3) spherical harmonics. The latter has much in common with the methods of Chacon, Moshinsky, and Sharp but is conceptually and computationally simpler. Results are presented for collective models ranging from the spherical vibrator to the Wilets-Jean and axially symmetric rotor-vibrator models.

  13. Computational model for Halorhodopsin photocurrent kinetics

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Stefanescu, Roxana; Talathi, Sachin

    2013-03-01

    Optogenetics is a rapidly developing novel optical stimulation technique that employs light-activated ion channels to excite (using channelrhodopsin (ChR)) or suppress (using halorhodopsin (HR)) impulse activity in neurons with high temporal and spatial resolution. This technique holds enormous potential to externally control activity states in neuronal networks. The channel kinetics of ChR and HR are well understood and amenable to mathematical modeling. Significant progress has been made in recent years to develop models for ChR channel kinetics. To date, however, there is no model to mimic photocurrents produced by HR. Here, we report the first model developed for HR photocurrents, based on a four-state model of the HR photocurrent kinetics. The model provides an excellent fit (root-mean-square error of 3.1862×10⁻⁴) to an empirical profile of experimentally measured HR photocurrents. In combination, mathematical models for ChR and HR photocurrents can provide effective means to design and test light-based control systems to regulate neural activity, which in turn may have implications for the development of novel light-based stimulation paradigms for brain disease control. I would like to thank the University of Florida and the Physics Research Experience for Undergraduates (REU) program, funded through NSF DMR-1156737. This research was also supported through start-up funds provided to Dr. Sachin Talathi.
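
    The abstract does not give the fitted rate constants, but the structure of a four-state kinetic model can be sketched generically: two closed and two conducting states, with a light-driven transition. A hedged sketch with hypothetical rates (not the paper's fitted values):

      import numpy as np

      def simulate(light_on, k, dt=1e-4, t_end=0.5):
          p = np.array([1.0, 0.0, 0.0, 0.0])   # populations [C1, O1, O2, C2]
          trace = []
          for step in range(int(t_end / dt)):
              a = k["Ga1"] if light_on(step * dt) else 0.0   # light: C1 -> O1
              dp = np.array([
                  -a * p[0] + k["Gr"] * p[3],                          # C1
                  a * p[0] - (k["Gd1"] + k["e12"]) * p[1],             # O1
                  k["e12"] * p[1] - k["Gd2"] * p[2],                   # O2
                  k["Gd1"] * p[1] + k["Gd2"] * p[2] - k["Gr"] * p[3],  # C2
              ])
              p = p + dt * dp                   # forward Euler step
              trace.append(p[1] + p[2])         # conducting fraction ~ current
          return np.array(trace)

      k = {"Ga1": 50.0, "Gd1": 20.0, "e12": 10.0, "Gd2": 5.0, "Gr": 1.0}
      current = simulate(lambda t: 0.05 < t < 0.3, k)   # light pulse 50-300 ms
      print(current.max())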

  14. Method and apparatus for imaging through 3-dimensional tracking of protons

    NASA Technical Reports Server (NTRS)

    Ryan, James M. (Inventor); Macri, John R. (Inventor); McConnell, Mark L. (Inventor)

    2001-01-01

    A method and apparatus for creating density images of an object through the 3-dimensional tracking of protons that have passed through the object are provided. More specifically, the 3-dimensional tracking of the protons is accomplished by gathering and analyzing images of the ionization tracks of the protons in a closely packed stack of scintillating fibers.

  15. Utilizing Cloud Computing to Improve Climate Modeling and Studies

    NASA Astrophysics Data System (ADS)

    Li, Z.; Yang, C.; Liu, K.; Sun, M.; XIA, J.; Huang, Q.

    2013-12-01

    Climate studies have become increasingly important due to global climate change, one of the biggest challenges for humanity in the 21st century. Climate data, not only observational data collected from various sensors but also simulated data generated from diverse climate models, are essential for scientists to explore potential climate change patterns and analyze the complex climate dynamics. Climate modeling and simulation, a critical methodology for simulating past and predicting future climate conditions, can produce huge amounts of data that contain potentially valuable information for climate studies. However, using modeling methods in climate studies poses at least two challenges for scientists. First, running climate models is a computing-intensive process that requires large amounts of computational resources. Second, running climate models is also a data-intensive process that generates big geospatial data (model output), which demands large storage for managing the data and large computing power to process and analyze it. This presentation introduces a novel framework that tackles the two challenges by (1) running climate models in a cloud environment in an automated fashion, and (2) managing and parallel-processing the big model output data by leveraging cloud computing technologies. A prototype system was developed based on the framework using ModelE as the climate model. Experimental results show that this framework can improve the climate modeling research cycle by accelerating big data generation (model simulation), big data management (storage and processing), and on-demand big data analytics.

  16. A computational model of cardiovascular physiology and heart sound generation.

    PubMed

    Watrous, Raymond L

    2009-01-01

    A computational model of the cardiovascular system is described which provides a framework for implementing and testing quantitative physiological models of heart sound generation. The lumped-parameter cardiovascular model can be solved for the hemodynamic variables on which the heart sound generation process is built. Parameters of the cardiovascular model can be adjusted to represent various normal and pathological conditions, and the acoustic consequences of those adjustments can be explored. The combined model of the physiology of cardiovascular circulation and heart sound generation has promise for application in teaching, training and algorithm development in computer-aided auscultation of the heart.
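
    The simplest lumped-parameter element such a framework builds on is the two-element Windkessel, which relates arterial pressure to inflow through a compliance and a resistance. A hedged sketch with textbook-style values (not the paper's parameters):

      import numpy as np

      R, C, dt = 1.0, 1.5, 1e-3   # resistance (mmHg*s/mL), compliance (mL/mmHg)
      P = 80.0                    # initial arterial pressure (mmHg)

      def inflow(t):
          """Pulsatile aortic inflow: systolic half-sine, zero in diastole."""
          phase = t % 0.8                   # 0.8 s cardiac cycle
          return 300.0 * np.sin(np.pi * phase / 0.3) if phase < 0.3 else 0.0

      pressures = []
      for step in range(int(5.0 / dt)):      # simulate 5 seconds
          t = step * dt
          P += dt * (inflow(t) - P / R) / C  # C dP/dt = Q_in - P/R
          pressures.append(P)

      print(min(pressures[-800:]), max(pressures[-800:]))  # ~diastolic/systolic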

  17. Advances in Computationally Modeling Human Oral Bioavailability

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2015-01-01

    Although significant progress has been made in experimental high-throughput screening (HTS) of ADME (absorption, distribution, metabolism, excretion) and pharmacokinetic properties, ADME and Toxicity (ADME-Tox) in silico modeling is still indispensable in drug discovery, as it can guide us to wisely select drug candidates prior to expensive ADME screenings and clinical trials. Compared to other ADME-Tox properties, human oral bioavailability (HOBA) is particularly important but extremely difficult to predict. In this paper, the advances in human oral bioavailability modeling will be reviewed. Moreover, our insights into how to construct more accurate and reliable HOBA QSAR and classification models will also be discussed. PMID:25582307

  18. Predictive Computational Modeling of Chromatin Folding

    NASA Astrophysics Data System (ADS)

    di Pierro, Michele; Zhang, Bin; Wolynes, Peter J.; Onuchic, Jose N.

    In vivo, the human genome folds into well-determined and conserved three-dimensional structures. The mechanism driving the folding process remains unknown. We report a theoretical model (MiChroM) for chromatin derived by using the maximum entropy principle. The proposed model allows Molecular Dynamics simulations of the genome using as input the classification of loci into chromatin types and the presence of binding sites of the loop-forming protein CTCF. The model was trained to reproduce the Hi-C map of chromosome 10 of human lymphoblastoid cells. With no additional tuning, the model was able to accurately predict the Hi-C maps of chromosomes 1-22 for the same cell line. Simulations show unknotted chromosomes, phase separation of chromatin types, and a preference of type A chromatin to sit at the periphery of the chromosomes.

  19. Predictive Models and Computational Toxicology (II IBAMTOX)

    EPA Science Inventory

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  20. Computational modeling and engineering in pediatric and congenital heart disease

    PubMed Central

    Marsden, Alison L.; Feinstein, Jeffrey A.

    2015-01-01

    Purpose of review Recent methodological advances in computational simulations are enabling increasingly realistic simulations of hemodynamics and physiology, driving increased clinical utility. We review recent developments in the use of computational simulations in pediatric and congenital heart disease, describe the clinical impact of modeling in single-ventricle patients, and provide an overview of emerging areas. Recent Findings Multiscale modeling combining patient-specific hemodynamics with reduced-order (i.e., mathematically and computationally simplified) circulatory models has become the de facto standard for modeling local hemodynamics and "global" circulatory physiology. We review recent advances that have enabled faster solutions, discuss new methods (e.g., fluid-structure interaction and uncertainty quantification) which lend realism both computationally and clinically to results, highlight novel computationally derived surgical methods for single-ventricle patients, and discuss areas in which modeling has begun to exert its influence, including Kawasaki disease, fetal circulation, tetralogy of Fallot (and the pulmonary tree), and circulatory support. Summary Computational modeling is emerging as a crucial tool for clinical decision-making and evaluation of novel surgical methods and interventions in pediatric cardiology and beyond. Continued development of modeling methods, with an eye toward clinical needs, will enable clinical adoption in a wide range of pediatric and congenital heart diseases. PMID:26262579

  1. Mechanical characterization and computational modeling of gels

    NASA Astrophysics Data System (ADS)

    Santos, Paulo Henrique da Silva

    Soft materials like gels have arisen as key components in a wide range of applications, from rocket propellants to complex materials for biomedical devices and drug delivery. Experimental studies have focused on the characterization of a number of gels involving macromolecules such as proteins and polysaccharides; however, the link between the microstructure of these systems and their resulting macroscopic properties is still lacking. From the experimental point of view, this research describes the rheological behavior of some complex systems using the appropriate rheological constitutive equations. Non-conventional rheological techniques are also considered to describe some fragile systems that are significantly disturbed during testing with conventional instruments. From the computational perspective, this research provides insights on how molecular conformation and interactions affect the rheological properties of colloidal and polymeric gels. Molecular and Brownian Dynamics simulations were performed to gain a better understanding of gelation processes and to explore new applications for gelled materials.

  2. Enhanced absorption cycle computer model. Final report

    SciTech Connect

    Grossman, G.; Wilk, M.

    1993-09-01

    Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H₂O triple-effect cycles, LiCl-H₂O solar-powered open absorption cycles, and NH₃-H₂O single-effect and generator-absorber heat exchange cycles. An appendix contains the User's Manual.

  3. Supersonic jet and crossflow interaction: Computational modeling

    NASA Astrophysics Data System (ADS)

    Hassan, Ez; Boles, John; Aono, Hikaru; Davis, Douglas; Shyy, Wei

    2013-02-01

    The supersonic jet-in-crossflow problem, which involves shocks, turbulent mixing, and large-scale vortical structures, requires special treatment of turbulence to obtain accurate solutions. Different turbulence modeling techniques are reviewed and compared in terms of their performance in predicting results consistent with the experimental data. Reynolds-averaged Navier-Stokes (RANS) models are limited in their prediction of fuel structure due to their inability to accurately capture unsteadiness in the flow. Large eddy simulation (LES) is not yet practical due to the prohibitively large grid requirement near the wall. Hybrid RANS/LES can offer a reasonable compromise between accuracy and efficiency. The hybrid models are based on various approaches such as explicit blending of RANS and LES, detached eddy simulation (DES), and filter-based multi-scale models. In particular, they can be used to evaluate the turbulent Schmidt number modeling techniques used in jet-in-crossflow simulations. Specifically, an adaptive approach can be devised by utilizing the information obtained from the resolved field to help assign the value of the turbulent Schmidt number in the sub-filter field. The adaptive approach combined with the multi-scale model improves the results, especially when highly refined grids are needed to resolve the small structures involved in the mixing process.

  4. An analysis of symbolic linguistic computing models in decision making

    NASA Astrophysics Data System (ADS)

    Rodríguez, Rosa M.; Martínez, Luis

    2013-01-01

    It is common for experts involved in complex real-world decision problems to use natural language to express their knowledge in uncertain frameworks. Language is inherently vague, so probabilistic decision models are not very suitable in such cases. Therefore, other tools such as fuzzy logic and fuzzy linguistic approaches have been successfully used to model and manage such vagueness. The use of linguistic information implies operating on that type of information, i.e., processes of computing with words (CWW). Different schemes have been proposed to deal with those processes, and diverse symbolic linguistic computing models have been introduced to accomplish the linguistic computations. In this paper, we overview the relationship between decision making and CWW, and focus on symbolic linguistic computing models that have been widely used in linguistic decision making, analysing whether all of them can be considered within the CWW paradigm.
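
    One widely cited symbolic model of this kind is the 2-tuple linguistic representation, in which a numeric aggregation result is encoded as the nearest label plus a symbolic translation. A hedged sketch (the label set and data are illustrative; this is one member of the family the paper analyses, not necessarily its formulation):

      LABELS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

      def to_two_tuple(beta):
          """Delta operator: beta in [0, g] -> (label, translation in [-0.5, 0.5))."""
          i = int(round(beta))
          return LABELS[i], beta - i

      def aggregate(opinions):
          """Mean of label indices, re-encoded without losing information."""
          beta = sum(LABELS.index(s) for s in opinions) / len(opinions)
          return to_two_tuple(beta)

      print(aggregate(["high", "medium", "medium"]))   # ('medium', 0.333...)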

  5. Computational Modeling, Formal Analysis, and Tools for Systems Biology

    PubMed Central

    Bartocci, Ezio; Lió, Pietro

    2016-01-01

    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest in systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science. PMID:26795950

  6. COMPUTATION MODELING OF TCDD DISRUPTION OF B CELL TERMINAL DIFFERENTIATION

    EPA Science Inventory

    In this study, we established a computational model describing the molecular circuit underlying B cell terminal differentiation and how TCDD may affect this process by impinging upon various molecular targets.

  7. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
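
    The proper orthogonal decomposition step the paper emphasizes amounts to an SVD of a snapshot matrix, keeping the leading modes as the reduced basis. A hedged sketch on synthetic data (not the paper's aeroelastic cases):

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 200)
      # synthetic snapshots: 200 spatial points x 50 time instants, plus noise
      snaps = np.column_stack([
          np.sin(np.pi * x) * np.cos(0.1 * k) + 0.01 * rng.standard_normal(200)
          for k in range(50)
      ])

      U, s, Vt = np.linalg.svd(snaps, full_matrices=False)
      r = 3                               # retained POD modes
      basis = U[:, :r]                    # reduced basis
      recon = basis @ (basis.T @ snaps)   # project and reconstruct

      rel_err = np.linalg.norm(snaps - recon) / np.linalg.norm(snaps)
      print(rel_err)                      # small: a few modes carry most energy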

  8. Computational Modeling, Formal Analysis, and Tools for Systems Biology.

    PubMed

    Bartocci, Ezio; Lió, Pietro

    2016-01-01

    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest in systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science.

  9. Computer models and output, Spartan REM: Appendix B

    NASA Technical Reports Server (NTRS)

    Marlowe, D. S.; West, E. J.

    1984-01-01

    A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.

  10. An Instructional Model for Computer Assisted Instruction. Technical Report.

    ERIC Educational Resources Information Center

    Mizenko, Albert J; Evans, Allyn A.

    An instructional model suitable for the implementation of the tutorial mode of a computer-assisted instruction program is described in this report. The general guidelines for the design of the model are presented. Course organization, instructional strategies, and learning paths are discussed. The model provided for the accommodation of high,…

  11. Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.

    ERIC Educational Resources Information Center

    Baker, Richard

    A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…

  12. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed in order to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code that will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  13. Model Of Orbital Density Of Air For Computing Drag

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1990-01-01

    The Simple Orbital Density Model for Drag Equations program is useful for computing the effect of drag over one or more orbits. The mathematical model embodied in the program incorporates the major changes in density due to solar activity and the magnetic activity of the Earth. Diurnal (day/night) effects are averaged over the orbit. The model is based on the Jacchia daily-average density, evaluated at the average time of year. The advantages are that the right ascension and declination of the Sun are not needed and that computation time is much reduced. Written in FORTRAN 77.
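
    For illustration, the way an orbit-averaged density feeds the drag equations can be sketched with a simple exponential atmosphere standing in for the Jacchia-based model (all numbers are illustrative):

      import math

      def density(alt_km, rho0=3.7e-12, h0=400.0, H=60.0):
          """Exponential atmosphere (kg/m^3); H is an assumed scale height in km."""
          return rho0 * math.exp(-(alt_km - h0) / H)

      def drag_accel(alt_km, v, cd=2.2, area=4.0, mass=1000.0):
          """Drag deceleration (m/s^2): a = 0.5 * rho * v^2 * Cd * A / m."""
          return 0.5 * density(alt_km) * v**2 * cd * area / mass

      v_circ = 7670.0               # ~circular orbital speed at 400 km (m/s)
      print(drag_accel(400.0, v_circ))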

  14. Computational model of miniature pulsating heat pipes.

    SciTech Connect

    Martinez, Mario J.; Givler, Richard C.

    2013-01-01

    The modeling work described herein represents the Sandia National Laboratories (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground-plane (TGP), a device of planar configuration that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g. dynamic bubble nucleation, evaporation and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, as demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio and orientation.

  15. Efficiently modeling neural networks on massively parallel computers

    SciTech Connect

    Farber, R.M.

    1992-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real-world systems. Applying neural network simulations to real-world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors. We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can be extended to arbitrarily large networks by merging the memory space of separate processors with fast adjacent-processor inter-processor communications. This paper considers the simulation of only feed-forward neural networks, although the method is extendible to recurrent networks.

  17. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real-world systems. Applying neural network simulations to real-world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth, on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
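
    The data-parallel idea behind this mapping can be illustrated with a vectorized feed-forward/backpropagation step in which every column of the batch could, in principle, live on a separate processor. A hedged sketch on a toy task (sizes and data are illustrative, not the paper's benchmarks):

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((4, 256))    # 4 inputs x 256 patterns in parallel
      Y = (X.sum(axis=0, keepdims=True) > 0).astype(float)   # toy target

      W1 = 0.1 * rng.standard_normal((8, 4))   # input -> hidden weights
      W2 = 0.1 * rng.standard_normal((1, 8))   # hidden -> output weights
      lr = 0.5
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for epoch in range(500):
          H = sigmoid(W1 @ X)                  # forward pass, all patterns at once
          O = sigmoid(W2 @ H)
          dO = (O - Y) * O * (1 - O)           # output delta
          dH = (W2.T @ dO) * H * (1 - H)       # backpropagated hidden delta
          W2 -= lr * dO @ H.T / X.shape[1]     # batched gradient updates
          W1 -= lr * dH @ X.T / X.shape[1]

      print(np.mean((O > 0.5) == Y))           # training accuracy on the toy task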

  18. Computational Models for Mechanics of Morphogenesis

    PubMed Central

    Wyczalkowski, Matthew A.; Chen, Zi; Filas, Benjamen A.; Varner, Victor D.; Taber, Larry A.

    2012-01-01

    In the developing embryo, tissues differentiate, deform, and move in an orchestrated manner to generate various biological shapes driven by the complex interplay between genetic, epigenetic, and environmental factors. Mechanics plays a key role in regulating and controlling morphogenesis, and quantitative models help us understand how various mechanical forces combine to shape the embryo. Models allow for the quantitative, unbiased testing of physical mechanisms, and when used appropriately, can motivate new experimental directions. This knowledge benefits biomedical researchers who aim to prevent and treat congenital malformations, as well as engineers working to create replacement tissues in the laboratory. In this review, we first give an overview of fundamental mechanical theories for morphogenesis, and then focus on models for specific processes, including pattern formation, gastrulation, neurulation, organogenesis, and wound healing. The role of mechanical feedback in development is also discussed. Finally, some perspectives are given on the emerging challenges in morphomechanics and mechanobiology. PMID:22692887

  19. Computer modeling of electrical performance of detonators

    SciTech Connect

    Furnberg, C.M.; Peevy, G.R.; Brigham, W.P.; Lyons, G.R.

    1995-05-01

    An empirical model of detonator electrical performance which describes the resistance of the exploding bridgewire (EBW) or exploding foil initiator (EFI or slapper) as a function of energy deposition will be described. This model features many parameters that can be adjusted to obtain a close fit to experimental data. This has been demonstrated using recent experimental data taken with the cable discharge system located at Sandia National Laboratories. This paper is a continuation of the paper entitled "Cable Discharge System for Fundamental Detonator Studies" presented at the 2nd NASA/DOD/DOE Pyrotechnic Workshop.

  20. Computational social network modeling of terrorist recruitment.

    SciTech Connect

    Berry, Nina M.; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-10-01

    The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.

  1. Probabilistic computer model of optimal runway turnoffs

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Preston, O. W.; Summers, L. G.; Nelson, B. A.; Vanderlinden, L.; Mcreynolds, M. C.

    1985-01-01

    Landing delays are currently a problem at major air carrier airports, and many forecasters agree that airport congestion will get worse by the end of the century. It is anticipated that some types of delays can be reduced by an efficient optimal runway exit system allowing the increased approach volumes necessary at congested airports. A computerized Probabilistic Runway Turnoff Model which locates exits and defines path geometry for a selected maximum occupancy time appropriate for each TERPS aircraft category is defined. The model includes an algorithm for lateral ride comfort limits.

  2. Computer Models Simulate Fine Particle Dispersion

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Through a NASA Seed Fund partnership with DEM Solutions Inc., of Lebanon, New Hampshire, scientists at Kennedy Space Center refined existing software to study the electrostatic phenomena of granular and bulk materials as they apply to planetary surfaces. The software, EDEM, allows users to import particles and obtain accurate representations of their shapes for modeling purposes, such as simulating bulk solids behavior, and was enhanced to be able to more accurately model fine, abrasive, cohesive particles. These new EDEM capabilities can be applied in many industries unrelated to space exploration and have been adopted by several prominent U.S. companies, including John Deere, Pfizer, and Procter & Gamble.

  3. Implementation of CCNUGrid-based Computational Environment for Molecular Modeling

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Luo, Changhua; Ren, Yanliang; Wan, Jian; Xu, Xin

    2007-12-01

    Grid computing technology has been regarded as one of the most promising solutions to the tremendous requirement for computing resources in the field of molecular modeling. In contrast to building an ever more powerful supercomputer with novel hardware in a local network, grid technology enables us, in principle, to integrate various past and present computing resources located in different places into a single computing platform. As a case demonstration, we report herein that a campus grid entitled CCNUGrid was implemented with grid middleware, consisting of four local computing networks distributed in the College of Chemistry, the College of Physics, the Center for Network, and the Center for Education Information Technology and Engineering, respectively, at Central China Normal University. Visualization functions for monitoring computer machines in each local network, monitoring job processing flow, and monitoring computational results were realized in this campus grid-based computational environment, in addition to the conventional components of grid architecture: universal portal, task management, computing nodes, and security. In the last section of this paper, a molecular docking-based virtual screening study performed on the CCNUGrid is presented as one example of CCNUGrid applications.

  4. Computational Modeling Develops Ultra-Hard Steel

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute, and enables engineers to investigate the effects of materials, heat treatment, shot peening, lubricants, and other factors on gear performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.

  5. Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization

    NASA Astrophysics Data System (ADS)

    Kamali, M.; Ponnambalam, K.; Soulis, E. D.

    2007-07-01

    In this approach, exploration of the cost function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate, an approximate model that uses a correlation function to represent the error, was employed. The results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model were compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study was the calibration of the WATCLASS hydrologic model on the Smokey River watershed.
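
    A DACE surrogate is essentially a kriging (Gaussian-process) model, so the workflow can be sketched with a generic GP in place of the original DACE code; the cost function below is a cheap stand-in for a full WATCLASS run scored against observations:

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_cost(x):                  # stand-in for WATCLASS
            return (x[0] - 0.3)**2 + (x[1] - 0.7)**2

        # Sample the parameter space; plain random sampling here for
        # brevity (the study also used Latin hypercube designs).
        rng = np.random.default_rng(0)
        X = rng.random((30, 2))
        y = np.array([expensive_cost(x) for x in X])

        # Kriging-style surrogate whose correlation function (RBF kernel)
        # absorbs the approximation error.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

        # Explore the cheap surrogate instead of the expensive model.
        res = minimize(lambda x: gp.predict(x.reshape(1, -1))[0],
                       x0=[0.5, 0.5], bounds=[(0.0, 1.0), (0.0, 1.0)])
        print("surrogate minimum near:", res.x)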

  6. A Computer Model for Direct Carbonate Fuel Cells

    SciTech Connect

    Ding, J.; Patel, P.S.; Farooque, M.; Maru, H.C.

    1997-04-01

    A 3-D computer model, describing fluid flow, heat and mass transfer, and chemical and electrochemical reaction processes, has been developed to guide direct carbonate fuel cell (DFC) stack design. This model is able to analyze the direct internal reforming (DIR) design as well as the integrated IIR (indirect internal reforming)-DIR design. Reasonable agreement between computed results and fuel cell test data, including flow variations, temperature distributions, cell potentials, exhaust gas compositions, and methane conversions, was obtained. Details of the model and comparisons of the modeling results with experimental DFC stack data are presented in the paper.

  7. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology have established the need for unified methods to evaluate computing system performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis, and evaluation of degradable computing systems is considered. Within this framework, several user-oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time-varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  8. Scratch as a computational modelling tool for teaching physics

    NASA Astrophysics Data System (ADS)

    Lopez, Victor; Hernandez, Maria Isabel

    2015-05-01

    The Scratch online authoring tool, which features a simple programming language adapted to primary and secondary students, is increasingly used in schools, as it offers students and teachers the opportunity to build scientific models and evaluate their behaviour, just as can be done with computational modelling programs. In this article, we briefly discuss why Scratch could be a useful tool for computational modelling in the primary or secondary physics classroom, and we present practical examples of how it can be used to build a model.
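
    A Scratch model is, at heart, a "forever" loop that updates variables by simple rules. The sketch below is the Python equivalent of a falling-ball model a student might assemble from Scratch blocks; the numbers are chosen arbitrarily:

        dt = 0.02          # time step (s)
        g = -9.8           # gravity (m/s^2)
        y, vy = 10.0, 0.0  # initial height (m) and vertical speed (m/s)

        t = 0.0
        while y > 0.0:     # Scratch: "repeat until touching ground"
            vy += g * dt   # Scratch: "change vy by g*dt"
            y += vy * dt   # Scratch: "change y by vy*dt"
            t += dt
        print(f"the ball reaches the ground after about {t:.2f} s")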

  9. Computational modeling of nuclear thermal rockets

    NASA Technical Reports Server (NTRS)

    Peery, Steven D.

    1993-01-01

    The topics are presented in viewgraph form and include the following: rocket engine transient simulation (ROCETS) system; ROCETS performance simulations composed of integrated component models; ROCETS system architecture significant features; ROCETS engineering nuclear thermal rocket (NTR) modules; ROCETS system easily adapts Fortran engineering modules; ROCETS NTR reactor module; ROCETS NTR turbomachinery module; detailed reactor analysis; predicted reactor power profiles; turbine bypass impact on system; and ROCETS NTR engine simulation summary.

  10. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model, designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy, and charge throughout the battery has been developed. The computation is coupled with Faraday's law, and solutions for the species concentrations, electrical potential, and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational and electrochemical models used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
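
    The time-marching idea can be reduced to a few lines for a single control volume: Faraday's law converts the cell current into a species consumption rate at each step. Real sodium sulfur models track many species plus energy and charge; the electron count and inventory below are assumptions made for illustration:

        F = 96485.0     # Faraday constant (C/mol)
        n_e = 2.0       # electrons per mole of reacting sulfur (assumed)
        current = 5.0   # discharge current (A)
        moles_S = 0.5   # initial sulfur inventory (mol)

        dt, t = 1.0, 0.0
        while moles_S > 0.0 and t < 3600.0:
            moles_S -= current / (n_e * F) * dt   # Faraday's law
            t += dt                               # march forward in time
        print(f"after {t/60:.0f} min, {moles_S:.4f} mol of sulfur remains")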

  11. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Gregory Beylkin

    2012-03-23

    Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

  12. Computational Modeling of T Cell Receptor Complexes.

    PubMed

    Riley, Timothy P; Singh, Nishant K; Pierce, Brian G; Weng, Zhiping; Baker, Brian M

    2016-01-01

    T-cell receptor (TCR) binding to peptide/MHC determines specificity and initiates signaling in antigen-specific cellular immune responses. Structures of TCR-pMHC complexes have provided enormous insight to cellular immune functions, permitted a rational understanding of processes such as pathogen escape, and led to the development of novel approaches for the design of vaccines and other therapeutics. As production, crystallization, and structure determination of TCR-pMHC complexes can be challenging, there is considerable interest in modeling new complexes. Here we describe a rapid approach to TCR-pMHC modeling that takes advantage of structural features conserved in known complexes, such as the restricted TCR binding site and the generally conserved diagonal docking mode. The approach relies on the powerful Rosetta suite and is implemented using the PyRosetta scripting environment. We show how the approach can recapitulate changes in TCR binding angles and other structural details, and highlight areas where careful evaluation of parameters is needed and alternative choices might be made. As TCRs are highly sensitive to subtle structural perturbations, there is room for improvement. Our method nonetheless generates high-quality models that can be foundational for structure-based hypotheses regarding TCR recognition. PMID:27094300

  14. Computational Modeling of Lipid Metabolism in Yeast

    PubMed Central

    Schützhold, Vera; Hahn, Jens; Tummler, Katja; Klipp, Edda

    2016-01-01

    Lipid metabolism is essential for all major cell functions and has recently gained increasing attention in research and health studies. However, mathematical modeling by means of classical approaches such as stoichiometric networks and ordinary differential equation systems has not yet provided satisfactory insights, due to the complexity of lipid metabolism, which is characterized by many different species with only slight differences and by promiscuous multifunctional enzymes. Here, we present an object-oriented stochastic model approach as a way to cope with the complex lipid metabolic network. While all lipid species are treated as objects in the model, they can be modified by the respective converting reactions based on reaction rules, a hybrid method that integrates benefits of agent-based and classical stochastic simulation. This approach makes it possible to follow the dynamics of all lipid species with different fatty acids, different degrees of saturation, and different headgroups over time, and to analyze the effects of parameter changes, potential mutations in the catalyzing enzymes, or provision of different precursors. Applied to yeast metabolism during one cell cycle period, we analyzed the distribution of all lipids to the various membranes in a time-dependent manner. The presented approach allows the complexity of cellular lipid metabolism to be treated efficiently and supports conclusions on the time- and location-dependent distributions of lipid species and their properties, such as saturation. It is widely applicable, easily extendable, and will provide further insights into healthy and diseased states of cell metabolism. PMID:27730126
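
    A toy version of the rule-based, object-oriented idea is sketched below: every lipid is an object, and reaction rules stochastically modify whichever objects they match. The species names, rules, and probabilities are invented for illustration only:

        import random
        from dataclasses import dataclass

        @dataclass
        class Lipid:
            chain_length: int
            double_bonds: int
            headgroup: str

        random.seed(1)
        pool = [Lipid(16, 0, "PA") for _ in range(1000)]

        for _ in range(5000):
            lipid = random.choice(pool)   # pick a lipid object at random
            r = random.random()
            if r < 0.30 and lipid.chain_length < 18:
                lipid.chain_length += 2        # elongase rule
            elif r < 0.50 and lipid.double_bonds == 0:
                lipid.double_bonds += 1        # desaturase rule
            elif r < 0.55 and lipid.headgroup == "PA":
                lipid.headgroup = "PC"         # headgroup-conversion rule

        unsaturated = sum(l.double_bonds > 0 for l in pool)
        print(f"{unsaturated} of {len(pool)} lipids are unsaturated")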

  15. Enabling Grid Computing resources within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different tiers, with several computing centres providing a specific set of services for the different steps of data processing, such as detector calibration, simulation and data filtering, reconstruction, and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method by which KM3NeT users can utilize the EGI computing resources in a simulation-driven use case.

  16. The role of computer modelling in participatory integrated assessments

    SciTech Connect

    Siebenhuener, Bernd . E-mail: bernd.siebenhuener@uni-oldenburg.de; Barth, Volker . E-mail: volker.barth@uni-oldenburg.de

    2005-05-15

    In a number of recent research projects, computer models have been included in participatory procedures to assess global environmental change. The intention was to support knowledge production and to help the non-scientists involved to develop a deeper understanding of the interactions between natural and social systems. This paper analyses the experiences gained in three projects with the use of computer models, from a participatory and a risk management perspective. Our cross-cutting analysis of the objectives, the project designs and moderation schemes employed, and the learning processes observed in participatory processes with model use shows that models play a mixed role in informing participants and stimulating discussions. However, no deeper reflection on values and belief systems could be achieved. In terms of the risk management phases, computer models best serve the purposes of problem definition and option assessment within participatory integrated assessment (PIA) processes.

  17. Breakthroughs in computational modeling of cartilage regeneration in perfused bioreactors.

    PubMed

    Raimondi, Manuela T; Causin, Paola; Mara, Andrea; Nava, Michele; Laganà, Matteo; Sacco, Riccardo

    2011-12-01

    We report on two specific breakthroughs relevant to the mathematical modeling and numerical simulation of tissue growth in the context of cartilage tissue engineering in vitro. The proposed models are intended to form the building blocks of a bottom-up multiscale analysis of tissue growth, the idea being that a full microscale analysis of the construct, a 3-D partial differential equation (PDE) problem with internal moving boundaries, is computationally unaffordable. We propose to couple a PDE microscale model of a single functional tissue subunit with the information computed at the macroscale by 2D-0D models of reduced computational cost. Preliminary results demonstrate the effectiveness of the proposed models in describing the interplay among interstitial perfusion flow, nutrient delivery and consumption, and tissue growth in realistic scaffold geometries.

  18. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
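
    The client-server half of such a hybrid can be reduced to a publish/subscribe hub that fans processed telemetry out to interested consumers. The in-process sketch below is only an analogy for the information-sharing protocol, which in the real system runs over a network and also supports peer-to-peer exchange; the class and parameter names are hypothetical:

        from collections import defaultdict

        class TelemetryHub:
            """Server-side hub: fans parameter updates out to subscribers."""
            def __init__(self):
                self.subscribers = defaultdict(list)

            def subscribe(self, parameter, callback):
                self.subscribers[parameter].append(callback)

            def publish(self, parameter, value):
                for callback in self.subscribers[parameter]:
                    callback(parameter, value)

        hub = TelemetryHub()
        hub.subscribe("cabin_pressure", lambda p, v: print(f"{p} = {v} kPa"))
        hub.publish("cabin_pressure", 101.3)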

  19. A cognitive model for problem solving in computer science

    NASA Astrophysics Data System (ADS)

    Parham, Jennifer R.

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in solving them; approaching assessment from this perspective would reveal potential errors leading to incorrect solutions. This dissertation proposes a model describing how people solve computational problems by storing, retrieving, and manipulating information and knowledge. It describes how metacognition interacts with schemata representing conceptual and procedural knowledge, as well as with the external sources of information that might be needed to arrive at a solution. Metacognition includes higher-order, executive processes responsible for controlling and monitoring schemata, which in turn represent the algorithmic knowledge needed for organizing and adapting concepts to a specific domain. The model illustrates how metacognitive processes interact with the knowledge represented by schemata as well as with information from external sources. This research investigates the differences in the way computer science novices use their metacognition and schemata to solve a computer programming problem. After J. Parham and L. Gugerty reached an 85% reliability for six metacognitive processes and six domain-specific schemata for writing a computer program, the resulting vocabulary provided the foundation for supporting the existence of, and the interaction between, metacognition, schemata, and external sources of information in computer programming. Overall, the participants in this research used their schemata 6% more than their metacognition, and used their metacognitive processes to control and monitor the schemata employed to write a computer program. This research has potential implications in computer science education and software

  20. Computer model of cardiovascular control system responses to exercise

    NASA Technical Reports Server (NTRS)

    Croston, R. C.; Rummel, J. A.; Kay, F. J.

    1973-01-01

    Approaches of systems analysis and mathematical modeling together with computer simulation techniques are applied to the cardiovascular system in order to simulate dynamic responses of the system to a range of exercise work loads. A block diagram of the circulatory model is presented, taking into account arterial segments, venous segments, arterio-venous circulation branches, and the heart. A cardiovascular control system model is also discussed together with model test results.

  1. Computer assisted modeling of ethyl sulfate pharmacokinetics.

    PubMed

    Schmitt, Georg; Halter, Claudia C; Aderjan, Rolf; Auwaerter, Volker; Weinmann, Wolfgang

    2010-01-30

    For 12 volunteers in a drinking experiment, the concentration-time courses of ethyl sulfate (EtS) and ethanol were simulated and fitted to the experimental data. The concentration-time courses were described with the same mathematical model as previously used for ethyl glucuronide (EtG). The kinetic model is based on the following assumptions and simplifications: a velocity constant k(form) for the first-order formation of ethyl sulfate from ethanol, and an exponential elimination constant k(el). The mean values (and standard deviations) obtained for k(form) and k(el) were 0.00052 h(-1) (0.00014) and 0.561 h(-1) (0.131), respectively. Using the ranges of these parameters, it is possible to calculate minimum and maximum serum concentrations of EtS based on stated ethanol doses and drinking times. The comparison of calculated and measured concentrations can prove the plausibility of alleged ethanol consumption and add evidence to the retrospective calculation of ethanol concentrations based on EtG concentrations. PMID:19913378
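
    With the reported mean constants, the scheme can be integrated directly. In the sketch below, the ethanol time-course is approximated by zero-order (Widmark) elimination, which is an added assumption, as are the dose numbers:

        import numpy as np
        from scipy.integrate import odeint

        k_form = 0.00052   # h^-1, first-order EtS formation from ethanol
        k_el = 0.561       # h^-1, first-order EtS elimination
        beta = 0.15        # g/L/h ethanol elimination rate (assumed)
        c0 = 0.8           # g/L peak ethanol concentration (assumed)

        def ethanol(t):                       # zero-order decline (assumed)
            return max(c0 - beta * t, 0.0)

        def dEtS_dt(c, t):                    # formation minus elimination
            return k_form * ethanol(t) - k_el * c

        t = np.linspace(0.0, 12.0, 200)       # hours
        ets = odeint(dEtS_dt, 0.0, t).ravel()
        print(f"peak EtS ~ {ets.max():.6f} g/L at t = {t[ets.argmax()]:.1f} h")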

  2. A computational model of craving and obsession.

    PubMed

    Redish, A David; Johnson, Adam

    2007-05-01

    If addictions and problematic behaviors arise from interactions between drugs, reward sequences, and natural learning systems, then an explanation of clinically problematic conditions (such as the self-administration of drugs or problem gambling) requires an understanding of the neural systems that have evolved to allow an agent to make decisions. We hypothesize a unified decision-making system consisting of three components: a situation recognition system, a flexible, planning-capable system, and an inflexible, habit-like system. In this article, we present a model of the planning-capable system based on a planning process arising from experimentally observed look-ahead dynamics in the hippocampus, enabling a forward search of possibilities, and an evaluation process in the nucleus accumbens. Based on evidence that opioid signaling can provide hedonic evaluation of an achieved outcome, we hypothesize that similar opioid-signaling processes evaluate the value of expected outcomes. This leads to a model of craving, based on the recognition of a path to a high-value outcome, and obsession, based on a value-induced limitation of the search process. This theory can explain why opioid antagonists reduce both hedonic responses and craving.

  3. Advances in parallel computer technology for desktop atmospheric dispersion models

    SciTech Connect

    Bian, X.; Ionescu-Niscov, S.; Fast, J.D.; Allwine, K.J.

    1996-12-31

    Desktop models are those used by analysts with varied backgrounds for performing, for example, air quality assessments and emergency response activities. These models must be robust and well documented, have minimal and well-controlled user inputs, and have clear outputs. Existing coarse-grained parallel computers can provide significant increases in computation speed in desktop atmospheric dispersion modeling without considerable increases in hardware cost. This increased speed will allow significant improvements to be made in the scientific foundations of these applied models, in the form of more advanced diffusion schemes and better representation of the wind and turbulence fields. This is especially attractive for emergency response applications, where speed and accuracy are of utmost importance. This paper describes one particular application of coarse-grained parallel computer technology to a desktop complex-terrain atmospheric dispersion modeling system. By comparing performance characteristics of the coarse-grained parallel version of the model with the single-processor version, we demonstrate that applying coarse-grained parallel computer technology to desktop atmospheric dispersion modeling systems will allow us to address critical issues facing future requirements of this class of dispersion models.

  4. A 3-dimensional theory of free electron lasers

    SciTech Connect

    Webb, S.D.; Wang, G.; Litvinenko, V.N.

    2010-08-23

    In this paper, we present an analytical three-dimensional theory of free electron lasers. Under several assumptions, we arrive at an integral equation similar to earlier work carried out by Ching, Kim and Xie, but using a formulation better suited for the initial value problem of Coherent Electron Cooling. We use this model in later papers to obtain analytical results for gain guiding, as well as to develop a complete model of Coherent Electron Cooling.

  5. Revisions to the hydrogen gas generation computer model

    SciTech Connect

    Jerrell, J.W.

    1992-08-31

    Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by these data is used to improve the accuracy of the model's predictions. Also, several modifications have been made to the model to enlarge the scope of problems which can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.

  6. Biomechanical 3-Dimensional Finite Element Analysis of Obturator Protheses Retained with Zygomatic and Dental Implants in Maxillary Defects

    PubMed Central

    Akay, Canan; Yaluğ, Suat

    2015-01-01

    Background: The objective of this study was to investigate the stress distribution in the bone around zygomatic and dental implants for 3 different implant-retained obturator prosthesis designs in an Aramany class IV maxillary defect using 3-dimensional finite element analysis (FEA). Material/Methods: A 3-dimensional finite element model of an Aramany class IV defect was created. Three different implant-retained obturator prostheses were modeled: model 1 with 1 zygomatic implant and 1 dental implant, model 2 with 1 zygomatic implant and 2 dental implants, and model 3 with 2 zygomatic implants. Locator attachments were used as a superstructure. A 150-N load was applied in 3 different ways. Qualitative analysis was based on the scale of maximum principal stress; values obtained through quantitative analysis are expressed in MPa. Results: Under all loading conditions, model 3 showed the lowest maximum principal stress value compared with models 1 and 2, making it the most appropriate reconstruction for Aramany class IV maxillary defects. Two zygomatic implants reduced the stresses in model 3, and the distribution of stresses on the prostheses was more rational with the help of the zygoma implants, which distribute the stresses over each part of the maxilla. Conclusions: For Aramany class IV obturator prostheses, placement of 2 zygomatic implants, one on each side of the maxilla, is more advantageous than placement of dental implants. On the non-defective side, increasing the number of dental implants is not as suitable as zygomatic implants. PMID:25714086

  7. Automated image analysis reveals the dynamic 3-dimensional organization of multi-ciliary arrays

    PubMed Central

    Galati, Domenico F.; Abuin, David S.; Tauber, Gabriel A.; Pham, Andrew T.; Pearson, Chad G.

    2016-01-01

    Multi-ciliated cells (MCCs) use polarized fields of undulating cilia (ciliary array) to produce fluid flow that is essential for many biological processes. Cilia are positioned by microtubule scaffolds called basal bodies (BBs) that are arranged within a spatially complex 3-dimensional (3D) geometry. Here, we develop a robust and automated computational image analysis routine to quantify 3D BB organization in the ciliate Tetrahymena thermophila. Using this routine, we generate the first morphologically constrained 3D reconstructions of Tetrahymena cells and elucidate rules that govern the kinetics of MCC organization. We demonstrate the interplay between BB duplication and cell size expansion through the cell cycle. In mutant cells, we identify a potential BB surveillance mechanism that balances large gaps in BB spacing by increasing the frequency of closely spaced BBs in other regions of the cell. Finally, by taking advantage of a mutant predisposed to BB disorganization, we locate the spatial domains that are most prone to disorganization by environmental stimuli. Collectively, our analyses reveal the importance of quantitative image analysis for understanding the principles that guide the 3D organization of MCCs. PMID:26700722

  8. 3-Dimensional analysis for class III malocclusion patients with facial asymmetry

    PubMed Central

    Ki, Eun-Jung; Cheon, Hae-Myung; Choi, Eun-Joo; Kwon, Kyung-Hwan

    2013-01-01

    Objectives: The aim of this study was to investigate the correlation between 2-dimensional (2D) cephalometric measurements and 3-dimensional (3D) cone beam computed tomography (CBCT) measurements, and to evaluate the usefulness of 3D analysis for asymmetry patients. Materials and Methods: A total of 27 patients were evaluated for facial asymmetry by photograph, cephalometric radiograph, and CBCT. Fourteen measurement values were evaluated, and the 2D and 3D values were compared. The patients were classified into two groups: patients in group 1 showed symmetry in the middle third of the face and asymmetry in the lower third, and those in group 2 showed asymmetry of both the middle and lower thirds of the face. Results: In group 1, significant differences were observed in nine of the 14 values, including three from anteroposterior cephalometric radiograph measurements (cant and both body heights) and six from lateral cephalometric radiographs (both ramus lengths, both lateral ramal inclinations, and both gonial angles). In group 2, the comparison between 2D and 3D showed significant differences in 10 values, including four from anteroposterior cephalometric radiograph measurements (both maxillary heights, both body heights) and six from lateral cephalometric radiographs (both ramus lengths, both lateral ramal inclinations, and both gonial angles). Conclusion: Information from 2D analysis was inaccurate in several measurements; therefore, in asymmetry patients, 3D analysis is useful in the diagnosis of asymmetry. PMID:24471038

  9. Computational Neuroscience: Modeling the Systems Biology of Synaptic Plasticity

    PubMed Central

    Kotaleski, Jeanette Hellgren; Blackwell, Kim T.

    2016-01-01

    Synaptic plasticity is a mechanism proposed to underlie learning and memory. The complexity of the interactions between ion channels, enzymes, and genes involved in synaptic plasticity impedes a deep understanding of this phenomenon. Computer modeling is an approach to investigate the information processing that is performed by signaling pathways underlying synaptic plasticity. In the past few years, new software developments that blend computational neuroscience techniques with systems biology techniques have allowed large-scale, quantitative modeling of synaptic plasticity in neurons. We highlight significant advancements produced by these modeling efforts and introduce promising approaches that utilize advancements in live cell imaging. PMID:20300102

  10. Computations of instability and turbulent mixing by Nikiforov's model

    NASA Astrophysics Data System (ADS)

    Razin, A. N.; Bolshakov, I. V.

    2014-08-01

    The results of modeling several laboratory experiments, including a large class of advanced experimental studies of turbulent flows, are presented. Computations of Meshkov's "cylindrical" and "planar" experiments on the confluence of two turbulent mixing zones, and of the experiments of Poggi, Barre, and Uberoi, have been carried out using Nikiforov's model. The presented results attest that Nikiforov's model qualitatively describes the considered class of flows, provided that the mean gas-dynamic quantities are computed with high accuracy and the width of the finite-difference shock wave front does not depend on the size of the computational grid cell.

  11. A Perspective on Computational Human Performance Models as Design Tools

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  12. A propagation model of computer virus with nonlinear vaccination probability

    NASA Astrophysics Data System (ADS)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi

    2014-01-01

    This paper is intended to examine the effect of vaccination on the spread of computer viruses. For that purpose, a novel computer virus propagation model, which incorporates a nonlinear vaccination probability, is proposed. A qualitative analysis of this model reveals that, depending on the value of the basic reproduction number, either the virus-free equilibrium or the viral equilibrium is globally asymptotically stable. The results of simulation experiments not only demonstrate the validity of our model, but also show the effectiveness of nonlinear vaccination strategies. Through parameter analysis, some effective strategies for eradicating viruses are suggested.
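
    The abstract does not give the equations, so the sketch below uses a generic SIS-type computer-virus model with a saturating (nonlinear) vaccination probability, purely for illustration; vaccinated nodes simply leave the susceptible pool:

        import numpy as np
        from scipy.integrate import odeint

        beta, gamma = 0.4, 0.1      # infection and cure rates (assumed)
        p_max, k = 0.3, 5.0         # vaccination saturates as I grows

        def rhs(y, t):
            s, i = y
            vacc = p_max * i / (1.0 + k * i)   # nonlinear vaccination term
            return [-beta * s * i + gamma * i - vacc * s,
                    beta * s * i - gamma * i]

        t = np.linspace(0.0, 200.0, 1000)
        s, i = odeint(rhs, [0.99, 0.01], t).T
        print(f"infected fraction at t=200: {i[-1]:.3f}")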

  13. Computer Simulation of Small Group Decisions: Model Three.

    ERIC Educational Resources Information Center

    Hare, A.P.; Scheiblechner, Hartmann

    In a test of three computer models to simulate group decisions, data were used from 31 American and Austrian groups on a total of 307 trials. The task for each group was to predict a series of answers of an unknown subject on a value-orientation questionnaire, after being given a sample of his typical responses. The first model used the mean of…

  14. Constructing Scientific Arguments Using Evidence from Dynamic Computational Climate Models

    ERIC Educational Resources Information Center

    Pallant, Amy; Lee, Hee-Sun

    2015-01-01

    Modeling and argumentation are two important scientific practices students need to develop throughout school years. In this paper, we investigated how middle and high school students (N = 512) construct a scientific argument based on evidence from computational models with which they simulated climate change. We designed scientific argumentation…

  15. Operation of the computer model for microenvironment solar exposure

    NASA Technical Reports Server (NTRS)

    Gillis, J. R.; Bourassa, R. J.; Gruenbaum, P. E.

    1995-01-01

    A computer model for microenvironmental solar exposure was developed to predict solar exposure to satellite surfaces which may shadow or reflect on one another. This document describes the technical features of the model as well as instructions for the installation and use of the program.

  16. Mental Models and Transfer of Learning in Computer Programming.

    ERIC Educational Resources Information Center

    Shih, Yu-Fen; Alessi, Stephen M.

    1994-01-01

    Reports on a study investigating the effects of conceptual models (computer graphics and animation) on learning and transfer of code evaluation and generation skills of novice programmers; changes in declarative knowledge during skill acquisition; relationships between the quality of subjects' mental models and performance in skill learning and…

  17. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  18. Industry-Wide Workshop on Computational Turbulence Modeling

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir (Compiler)

    1995-01-01

    This publication contains the presentations made at the Industry-Wide Workshop on Computational Turbulence Modeling which took place on October 6-7, 1994. The purpose of the workshop was to initiate the transfer of technology developed at Lewis Research Center to industry and to discuss the current status and the future needs of turbulence models in industrial CFD.

  19. Modeling and Computer Simulation of AN Insurance Policy:

    NASA Astrophysics Data System (ADS)

    Acharyya, Muktish; Acharyya, Ajanta Bhowal

    We have developed a model for a life-insurance policy. In this model, the net gain is calculated by computer simulation for a particular type of lifetime distribution function. We observed that the net gain is maximized for a particular value of the upper age at which the last premium is paid.
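
    The simulation loop can be re-created in a few lines; the lifetime law, premium, and payout below are invented, so unlike the paper's model this toy version need not show an interior maximum:

        import random

        random.seed(42)
        premium, payout = 500.0, 30000.0   # per year / on death (assumed)

        def net_gain(last_premium_age, n=100_000):
            """Mean insurer gain: premiums collected until death or the
            upper age for the last premium, minus the death payout."""
            total = 0.0
            for _ in range(n):
                lifetime = random.gauss(72.0, 10.0)   # assumed lifetime law
                total += premium * max(min(lifetime, last_premium_age), 0.0)
                total -= payout
            return total / n

        # Scan the upper age for the last premium.
        for age in (50, 60, 70, 80, 90):
            print(age, round(net_gain(age), 2))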

  20. Computer Modelling of Biological Molecules: Free Resources on the Internet.

    ERIC Educational Resources Information Center

    Millar, Neil

    1996-01-01

    Describes a three-dimensional computer modeling system for biological molecules which is suitable for sixth-form teaching. Consists of the modeling program "RasMol" together with structure files of proteins, DNA, and small biological molecules. Describes how the whole system can be downloaded from various sites on the Internet. (Author/JRH)

  1. Computational science: shifting the focus from tools to models

    PubMed Central

    Hinsen, Konrad

    2014-01-01

    Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728

  2. Computational modeling of brain tumors: discrete, continuum or hybrid?

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Deisboeck, Thomas S.

    In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas) continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of, and challenges with, developing such in silico brain tumor models by outlining two distinct computational approaches, discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling, are discussed.

  3. Computational fluid dynamics modeling for emergency preparedness & response

    SciTech Connect

    Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.

    1995-07-01

    Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly-coupled, nonlinear equations whose solution requires a significant level of computational power. Until recently, such computer power could be found only in CRAY-class supercomputers. Recent advances in computer hardware and software have enabled modestly-priced, high-performance workstations to exhibit the equivalent computational power of some mainframes. Thus desktop-class machines that were limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release and Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust, and require minimal time for execution. Such models have been the cornerstones of the ARAC operational system for the past ten years. Kamada (1992) provides a review of diagnostic models and their applications to dispersion problems. However, because these models typically contain little physics beyond mass conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, they can be applied with confidence to only modestly complex flows.

  5. 3-dimensional wells and tunnels for finite element grids

    SciTech Connect

    Cherry, T.A.; Gable, C.W.; Trease, H.

    1996-12-31

    Modeling fluid, vapor, and air injection and extraction from wells poses a number of problems. The length scale of well bores is centimeters, the region of high pressure gradient may be tens of meters and the reservoir may be tens of kilometers. Furthermore, accurate representation of the path of a deviated well can be difficult. Incorporating the physics of injection and extraction can be made easier and more accurate with automated grid generation tools that incorporate wells as part of a background mesh that represents the reservoir. GEOMESH is a modeling tool developed for automating finite element grid generation. This tool maintains the geometric integrity of the geologic framework and produces optimal (Delaunay) tetrahedral grids. GEOMESH creates a 3D well as hexagonal segments formed along the path of the well. This well structure is tetrahedralized into a Delaunay mesh and then embedded into a background mesh. The well structure can be radially or vertically refined and each well layer is assigned a material property or can take on the material properties of the surrounding stratigraphy. The resulting embedded well can then be used by unstructured finite element models for gas and fluid flow in the vicinity of wells or tunnels. This 3D well representation allows the study of the free-surface of the well and surrounding stratigraphy. It reduces possible grid orientation effects, and allows better correlation between well sample data and the geologic model. The well grids also allow improved visualization for well and tunnel model analysis. 3D observation of the grids helps qualitative interpretation and can reveal features not apparent in fewer dimensions.
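
    Only the core tetrahedralization step is easy to show compactly. The sketch below Delaunay-meshes finely spaced points along a deviated well path together with a coarse background cloud, leaving out the hexagonal well segments, material properties, and geologic-framework constraints that GEOMESH itself handles; the geometry is invented:

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)
        background = rng.uniform(0.0, 1000.0, size=(200, 3))  # reservoir box

        # Finely spaced points along a deviated well path.
        s = np.linspace(0.0, 1.0, 100)
        well = np.column_stack([500.0 + 100.0 * s,      # x drifts with depth
                                np.full_like(s, 500.0),
                                1000.0 * (1.0 - s)])    # z: surface downward

        # Embed the refined well points in the coarse background mesh.
        mesh = Delaunay(np.vstack([background, well]))
        print(f"{mesh.points.shape[0]} nodes, {mesh.simplices.shape[0]} tets")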

  7. Integrated Multiscale Modeling of Molecular Computing Devices

    SciTech Connect

    Weinan E

    2012-03-29

    The main bottleneck in modeling transport in molecular devices is developing the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, time-dependent density functional theory. We have divided this task into several steps. The first step is developing the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.

  8. Implementation of 2D computational models for NDE on GPU

    NASA Astrophysics Data System (ADS)

    Bardel, Charles; Lei, Naiguang; Udpa, Lalita

    2012-05-01

    This paper presents an attempt to implement a simulation model for electromagnetic NDE on a GPU. A sample electromagnetic NDE problem is examined and the solution is computed on both the CPU and the GPU. Different matrix storage formats and matrix-vector computational strategies are investigated. The storage requirements for the matrix on the GPU are tabulated, and a full timing breakdown of the process is presented and discussed.
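
    Compressed sparse row (CSR) is one storage format such a comparison would cover. A GPU kernel typically assigns one thread or warp per row; the NumPy version below shows the same row-wise traversal on the CPU:

        import numpy as np

        def csr_matvec(data, indices, indptr, x):
            """y = A @ x for A stored in CSR form."""
            y = np.zeros(len(indptr) - 1)
            for row in range(len(y)):
                start, end = indptr[row], indptr[row + 1]
                y[row] = np.dot(data[start:end], x[indices[start:end]])
            return y

        # 3x3 example: [[4,0,1],[0,3,0],[2,0,5]]
        data = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
        indices = np.array([0, 2, 1, 0, 2])
        indptr = np.array([0, 2, 3, 5])
        print(csr_matvec(data, indices, indptr, np.array([1.0, 2.0, 3.0])))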

  9. Special Issue: Big data and predictive computational modeling

    NASA Astrophysics Data System (ADS)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

    The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  10. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    SciTech Connect

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed toward developing methodologies that treat model-form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: development of a framework for the sources of uncertainty and error in the modeling and simulation process that impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
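
    As a hedged sketch of the second area (this is a textbook Bayesian device, not the report's actual machinery, and the data and model forms are fabricated for illustration): model-form uncertainty can be propagated by weighting competing model structures by their marginal likelihood and averaging predictions over them.

      # Hedged sketch: weight two candidate model forms by Bayesian evidence,
      # the model-structure ingredient of total prediction uncertainty.
      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 15)
      sigma = 0.05                                   # known noise level
      y = 1.0 * x + 0.3 * x**2 + rng.normal(0, sigma, x.size)

      def log_evidence(design, prior_sd=2.0, ngrid=201):
          # brute-force grid marginal likelihood for a linear-in-parameters
          # model whose regressors are the columns of `design`
          d = design.shape[1]
          g = np.linspace(-3 * prior_sd, 3 * prior_sd, ngrid)
          theta = np.stack([m.ravel() for m in np.meshgrid(*([g] * d))], 1)
          resid = y - theta @ design.T
          logint = (-0.5 * np.sum(resid**2, 1) / sigma**2
                    - 0.5 * np.sum(theta**2, 1) / prior_sd**2)
          m = logint.max()
          return (m + np.log(np.exp(logint - m).sum()) + d * np.log(g[1] - g[0])
                  - 0.5 * d * np.log(2 * np.pi * prior_sd**2)
                  - 0.5 * y.size * np.log(2 * np.pi * sigma**2))

      D1 = x[:, None]                          # model form 1: y = a*x
      D2 = np.stack([x, x**2], 1)              # model form 2: y = a*x + b*x**2
      logZ = np.array([log_evidence(D) for D in (D1, D2)])
      w = np.exp(logZ - logZ.max()); w /= w.sum()
      print("posterior model weights:", w)     # predictions are then averaged
                                               # over both forms with weights w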

  11. Computational fluid dynamics modeling for emergency preparedness and response

    SciTech Connect

    Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.

    1995-02-01

    Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling, because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly coupled equations whose solution requires a significant level of computational power. Recent advances in computer hardware and software have enabled modestly priced, high-performance workstations to exhibit the computational power of some mainframes. Thus, desktop-class machines that were once limited to performing dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of these improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust, and quick to execute. Because these models typically contain little physics beyond mass conservation, their performance is extremely sensitive to the quantity and quality of the input meteorological data and, in spite of their utility, they can be applied with confidence only to modestly complex flows. We are now embarking on a development program to incorporate prognostic models that generate, in real time, the meteorological fields for the dispersion models. In contrast to diagnostic models, prognostic models are physically based and are capable of incorporating many physical processes to treat highly complex flow scenarios.
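
    As a minimal, hedged sketch of the division of labor described here (generic textbook numerics, not ARAC's code): the dispersion model below is a plain scalar advection-diffusion update, and the wind field u, v that drives it is simply handed in, whether it came from a diagnostic (mass-consistent) or a prognostic (CFD) meteorological model. All grid sizes and coefficients are illustrative.

      # Hedged illustration, not ARAC/LLNL code: an explicit upwind
      # advection-diffusion dispersion step driven by an externally
      # supplied wind field.
      import numpy as np

      nx = ny = 100
      dx = 100.0                      # grid spacing [m]
      dt = 1.0                        # time step [s]
      K = 10.0                        # eddy diffusivity [m^2/s]
      u = np.full((ny, nx), 5.0)      # wind components handed in by the
      v = np.full((ny, nx), 1.0)      # meteorological driver [m/s]

      c = np.zeros((ny, nx))
      c[ny // 2, 10] = 1.0            # instantaneous release

      def step(c):
          # first-order upwind advection (u, v >= 0 and periodic
          # boundaries assumed purely for brevity)
          adv = (u * (c - np.roll(c, 1, axis=1)) +
                 v * (c - np.roll(c, 1, axis=0))) / dx
          lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
                 np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
          return c + dt * (K * lap - adv)

      for _ in range(600):            # ten minutes of transport
          c = step(c)
      print("peak concentration after 10 min:", c.max())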

  12. Practical Use of Computationally Frugal Model Analysis Methods

    SciTech Connect

    Hill, Mary C.; Kavetski, Dmitri; Clark, Martyn; Ye, Ming; Arabi, Mazdak; Lu, Dan; Foglia, Laura; Mehl, Steffen

    2015-03-21

    Computationally frugal methods of model analysis can provide substantial benefits when developing models of groundwater and other environmental systems. Model analysis includes ways to evaluate model adequacy and to perform sensitivity and uncertainty analysis. Frugal methods typically require tens of parallelizable model runs; their convenience allows for other uses of the computational effort. We suggest that model analysis be posed as a set of questions used to organize methods that range from frugal to expensive (requiring 10,000 model runs or more). This encourages focus on method utility, even when methods have starkly different theoretical backgrounds. We note that many frugal methods are more useful when unrealistic process-model nonlinearities are reduced. Inexpensive diagnostics are identified for determining when frugal methods are advantageous. Examples from the literature are used to demonstrate local methods and the diagnostics. We suggest that greater use of computationally frugal model analysis methods would allow questions such as those posed in this work to be addressed more routinely, allowing the environmental sciences community to obtain greater scientific insight from the many ongoing and future modeling efforts.
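
    As a hedged sketch of what "tens of parallelizable model runs" buys (the model function and numbers below are stand-ins, not from the paper): one-sided finite differences yield dimensionless scaled sensitivities, and a composite score per parameter, from only p + 1 runs for p parameters.

      # Hedged sketch with a toy stand-in model: scaled sensitivities from
      # one-sided finite differences, costing p + 1 parallelizable runs.
      import numpy as np

      def model(theta):
          # stand-in for an expensive environmental model run
          k, s = theta
          t = np.linspace(1.0, 10.0, 20)
          return s * (1.0 - np.exp(-k * t))   # simulated observations

      theta0 = np.array([0.4, 2.0])           # current parameter estimates
      y0 = model(theta0)                      # run 1 of p + 1
      rel = 0.01                              # 1% relative perturbation
      sens = np.empty((y0.size, theta0.size))
      for j in range(theta0.size):            # runs 2 .. p + 1
          th = theta0.copy()
          th[j] *= 1.0 + rel
          sens[:, j] = (model(th) - y0) / rel # equals (dy/dtheta_j)*theta_j

      css = np.sqrt((sens**2).mean(axis=0))   # composite scaled sensitivity
      print("composite scaled sensitivities:", css)  # flags uninformative
                                                     # parameters cheaply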

  14. Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation

    NASA Astrophysics Data System (ADS)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation.
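
    WECSAM itself is a CDC FORTRAN V code tied to a regional database; purely as a hedged sketch of how a technical-performance module feeds an economic-evaluation module, the Python below integrates an idealized turbine power curve over a Weibull wind-speed distribution to obtain annual energy, then applies a one-line cost-of-energy figure of merit. Every number and the power curve are illustrative, not from WECSAM's database.

      # Hedged sketch, not WECSAM: technical module (annual energy from a
      # Weibull wind resource) feeding a simple economic module.
      import numpy as np

      k_shape, c_scale = 2.0, 7.0          # assumed Weibull site parameters
      v = np.linspace(0.0, 30.0, 601)      # wind-speed bins [m/s]
      pdf = (k_shape / c_scale) * (v / c_scale) ** (k_shape - 1) \
            * np.exp(-(v / c_scale) ** k_shape)

      def power_kw(v):                     # idealized 100 kW turbine
          ramp = 100.0 * ((v - 4.0) / 8.0) ** 3          # cubic ramp-up
          p = np.where((v >= 4.0) & (v < 12.0), ramp, 0.0)
          return np.where((v >= 12.0) & (v < 25.0), 100.0, p)

      f = power_kw(v) * pdf                # expected power at each speed
      annual_kwh = (0.5 * (f[1:] + f[:-1]) * np.diff(v)).sum() * 8760.0

      capital, fcr, om = 150_000.0, 0.10, 3_000.0   # $, charge rate, $/yr
      coe = (capital * fcr + om) / annual_kwh       # cost of energy
      print(f"annual energy {annual_kwh:,.0f} kWh, COE ${coe:.3f}/kWh")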

  15. The GOURD model of human-computer interaction

    SciTech Connect

    Goldbogen, G.

    1996-12-31

    This paper presents a model, the GOURD model, that can be used to measure the "goodness" of the interactivity of an interface design and to indicate how the design can be improved. The GOURD model describes what happens to the computer and to the human during a human-computer interaction. Since the interaction is generally repeated, repeated traversal of the model resembles a loop programming structure. Because the model measures interaction over part or all of an application, it can also be used to classify the part or the whole application. But primarily, the model is used as a design guide and a predictor of effectiveness.

  16. 3-dimensional simulation of the tangential YORP effect

    NASA Astrophysics Data System (ADS)

    Golubov, O.; Scheeres, D. J.; Krugly, Y. N.

    2013-12-01

    The YORP effect is a torque created by the recoil forces of light reflected or re-emitted by the surface of an asteroid. This torque has been demonstrated to be a major factor in the evolution of kilometer-sized asteroids, largely responsible for their distribution over rotation rates and obliquities. YORP used to be considered predominantly in a locally flat surface model, in which the recoil force is normal to the surface and the overall torque is non-zero only because of slight asymmetries in the asteroid's shape. Recently, however, it has been shown that the presence of decimeter-sized stones on the surface of an asteroid can substantially change this picture (Golubov & Krugly 2012, ApJL 752: L11). Under certain conditions, the western sides of stones are on average slightly warmer than their eastern sides, thus experiencing a stronger recoil force and dragging the surface of the asteroid eastward. This force, parallel to the overall surface of the asteroid, is called the tangential YORP, or TYORP. It operates in concert with the normal YORP force, or NYORP, which acts normal to the overall surface. Even though the TYORP force is much smaller than the NYORP force, it has a bigger lever arm with respect to the rotation axis of the asteroid, and its torques tend to add up for opposite points on the asteroid's surface rather than to cancel; therefore, the effects of TYORP and NYORP on the rotation rate of the asteroid can be comparable, and the standard treatment of the effect, which considers NYORP only, is insufficient. In the talk we review the results of Golubov & Krugly (2012), which give a rough estimate of the effect in a 1-dimensional model approximating the stones by high long walls. We then go beyond this model and present simulations of the effect for a surface covered with spherical stones. Our model incorporates 3-dimensional heat conductivity in the stones and ray tracing of incoming and emitted light. We shall present the strength of

  17. Evaluation of a Computational Model of Situational Awareness

    NASA Technical Reports Server (NTRS)

    Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)

    2000-01-01

    Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.

  18. Virtual Cell: computational tools for modeling in cell biology

    PubMed Central

    Resasco, Diana C.; Gao, Fei; Morgan, Frank; Novak, Igor L.; Schaff, James C.; Slepchenko, Boris M.

    2011-01-01

    The Virtual Cell (VCell) is a general computational framework for modeling physico-chemical and electrophysiological processes in living cells. Developed by the National Resource for Cell Analysis and Modeling at the University of Connecticut Health Center, it provides automated tools for simulating a wide range of cellular phenomena in space and time, both deterministically and stochastically. These computational tools allow one to couple electrophysiology and reaction kinetics with transport mechanisms, such as diffusion and directed transport, and map them onto spatial domains of various shapes, including irregular three-dimensional geometries derived from experimental images. In this article, we review new robust computational tools recently deployed in VCell for treating spatially resolved models. PMID:22139996
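
    As a hedged illustration of the kind of spatially resolved model VCell treats (this is generic explicit finite-difference code, not VCell's API or solvers), the sketch below couples bimolecular reaction kinetics A + B -> C to diffusion on a 2D domain; all species, rates, and geometry are invented for illustration.

      # Hedged illustration of a reaction-diffusion model of the kind VCell
      # simulates; not VCell code.
      import numpy as np

      n, dx, dt = 64, 1.0, 0.1            # grid points, spacing [um], step [s]
      D = {"A": 1.0, "B": 0.5, "C": 0.1}  # diffusion coefficients [um^2/s]
      kon = 0.05                          # reaction rate [1/(uM*s)]

      A = np.zeros((n, n)); A[:, : n // 2] = 1.0   # A on the left half [uM]
      B = np.zeros((n, n)); B[:, n // 2 :] = 1.0   # B on the right half
      C = np.zeros((n, n))

      def lap(f):                         # 5-point Laplacian, no-flux edges
          g = np.pad(f, 1, mode="edge")
          return (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
                  - 4 * f) / dx**2

      for _ in range(1000):
          r = kon * A * B                 # bimolecular reaction flux
          A = A + dt * (D["A"] * lap(A) - r)
          B = B + dt * (D["B"] * lap(B) - r)
          C = C + dt * (D["C"] * lap(C) + r)
      print("product formed:", C.sum() * dx**2)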

  19. Computer modeling and computational toxicology in new chemical and pharmaceutical product development.

    PubMed

    Hall, A H

    1998-12-28

    A theoretical basis is described for the use of computer modeling and bioinformatics resources, including the internet, in decisions about whether to attempt synthesis and toxicology testing of new chemical or pharmaceutical products. Steps in the process include: (1) identification of a potentially efficacious chemical or pharmaceutical product; (2) structure-activity relationship (SAR) modeling; (3) synthesis methods and cost screening; (4) market screening for potential revenues; (5) regulatory impacts screening; (6) toxicology modeling and screening; and (7) decision making about whether to attempt synthesis and testing. Some such computer modeling and screening processes are already in use; others may reasonably be expected to be adopted in the near future. More development remains to be done on structure-activity and structure-toxicity databases and on computerized libraries of therapeutic and toxicity molecular endpoints. The internet is a rapidly developing source of information, but there are major problems with time-effectiveness, quality control, 'junk information' (misinformation), and deliberate 'disinformation'. PMID:10022324
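
    As a hedged toy example of step (2), SAR modeling: a linear structure-activity model fit by least squares to molecular descriptors. The descriptors and activity values below are fabricated purely for illustration; real SAR work uses curated databases of the kind the article calls for.

      # Hedged toy SAR model on fabricated descriptors; illustration only.
      import numpy as np

      rng = np.random.default_rng(7)
      # columns: logP, molecular weight / 100, H-bond donors (toy descriptors)
      X = rng.uniform([0, 1, 0], [5, 6, 5], size=(40, 3))
      true_w = np.array([0.8, -0.3, 0.2])
      activity = X @ true_w + rng.normal(0, 0.1, 40)   # synthetic assay data

      # least-squares SAR model with an intercept term
      Xd = np.column_stack([np.ones(40), X])
      w, *_ = np.linalg.lstsq(Xd, activity, rcond=None)

      candidate = np.array([1.0, 2.5, 3.0, 1.0])       # a proposed compound
      print("predicted activity:", candidate @ w)      # screen before synthesis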

  20. Computational Motion Phantoms and Statistical Models of Respiratory Motion

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Klinder, Tobias; Lorenz, Cristian

    Breathing motion is not a robust and 100% reproducible process, and inter- and intra-fractional motion variations pose an important problem in radiotherapy of the thorax and upper abdomen. A widespread consensus nowadays exists that it would be useful to use prior knowledge about respiratory organ motion and its variability to improve radiotherapy planning and treatment delivery. This chapter discusses two different approaches to modeling the variability of respiratory motion. In the first part, we review computational motion phantoms, i.e., computerized anatomical and physiological models. Computational phantoms are excellent tools to simulate and investigate the effects of organ motion in radiation therapy and to gain insight into methods for motion management. The second part of this chapter discusses statistical modeling techniques to describe breathing motion and its variability in a population of 4D images. Population-based models can be generated from repeatedly acquired 4D images of the same patient (intra-patient models) and from 4D images of different patients (inter-patient models). The generation of these models is explained, and possible applications of these models for motion prediction in radiotherapy are exemplified. Computational models of respiratory motion and motion variability have numerous applications in radiation therapy, e.g., to understand motion effects in simulation studies, to develop and evaluate treatment strategies, or to introduce prior knowledge into patient-specific treatment planning.
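
    As a hedged sketch of the statistical-modeling idea (the chapter's own methods are more involved): a population motion model can be built by applying PCA to vectorized displacement fields, here replaced by synthetic stand-ins for fields that would come from registering the phases of 4D images.

      # Hedged PCA sketch of a population-based breathing-motion model;
      # the displacement fields are synthetic stand-ins, not patient data.
      import numpy as np

      rng = np.random.default_rng(3)
      n_subjects, n_voxels = 20, 3000
      # synthetic fields: a shared mean breathing pattern plus two
      # population modes of variability (e.g. depth and baseline shift)
      basis = rng.standard_normal((2, n_voxels))
      mean_motion = rng.standard_normal(n_voxels)
      fields = (mean_motion
                + rng.standard_normal((n_subjects, 2)) @ basis
                + 0.05 * rng.standard_normal((n_subjects, n_voxels)))

      mu = fields.mean(axis=0)
      U, s, Vt = np.linalg.svd(fields - mu, full_matrices=False)
      var = s**2 / (n_subjects - 1)
      print("variance captured by 2 modes:", var[:2].sum() / var.sum())

      # sample a new, statistically plausible motion field from the model
      coeffs = rng.standard_normal(2) * np.sqrt(var[:2])
      sample = mu + coeffs @ Vt[:2]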