Creating Digital Elevation Model Using a Mobile Device
NASA Astrophysics Data System (ADS)
Durmaz, A. İ.
2017-11-01
Digital Elevation Models (DEMs) are the primary means of interpreting topography on the ground. In recent years, lidar technology has made it possible to create more accurate elevation models. However, this technology is not available everywhere, and where lidar point clouds are not provided freely by government agencies they can be expensive to obtain. In this article, we discuss how a digital elevation model can be created from the less accurate GPS data of a mobile device. Moreover, we evaluate these data on the same mobile device used to collect them, reducing the cost of the modeling.
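As a rough illustration of the kind of processing the abstract describes, the sketch below grids scattered GPS fixes into a DEM by inverse-distance weighting; the function name, grid spacing, and power parameter are illustrative assumptions, not details from the paper.

```python
# Minimal sketch, assuming GPS fixes are given as arrays of x, y, elevation in metres;
# the cell size and IDW power are illustrative choices, not values from the paper.
import numpy as np

def idw_dem(x, y, z, cell=10.0, power=2.0):
    """Grid scattered GPS elevations into a DEM by inverse-distance weighting."""
    xi = np.arange(x.min(), x.max() + cell, cell)
    yi = np.arange(y.min(), y.max() + cell, cell)
    gx, gy = np.meshgrid(xi, yi)
    dem = np.empty(gx.shape)
    for i in np.ndindex(gx.shape):
        d = np.hypot(x - gx[i], y - gy[i])
        if d.min() < 1e-6:            # grid node coincides with a measurement
            dem[i] = z[d.argmin()]
        else:
            w = 1.0 / d**power        # closer fixes dominate the estimate
            dem[i] = np.sum(w * z) / np.sum(w)
    return xi, yi, dem
```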
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
Prediction of morbidity and mortality in patients with type 2 diabetes.
Wells, Brian J; Roth, Rachel; Nowacki, Amy S; Arrigain, Susana; Yu, Changhong; Rosenkrans, Wayne A; Kattan, Michael W
2013-01-01
Introduction. The objective of this study was to create a tool that accurately predicts the risk of morbidity and mortality in patients with type 2 diabetes conditional on the oral hypoglycemic agent prescribed. Materials and Methods. The model was based on a cohort of 33,067 patients with type 2 diabetes who were prescribed a single oral hypoglycemic agent at the Cleveland Clinic between 1998 and 2006. Competing risk regression models were created for coronary heart disease (CHD), heart failure, and stroke, while a Cox regression model was created for mortality. Propensity scores were used to account for possible treatment bias. A prediction tool was created and internally validated using tenfold cross-validation. The results were compared to a Framingham model and a model based on the United Kingdom Prospective Diabetes Study (UKPDS) for CHD and stroke, respectively. Results and Discussion. Median follow-up for the mortality outcome was 769 days. The numbers of patients experiencing events were as follows: CHD (3062), heart failure (1408), stroke (1451), and mortality (3661). The prediction tools demonstrated the following concordance indices (c-statistics) for the specific outcomes: CHD (0.730), heart failure (0.753), stroke (0.688), and mortality (0.719). The prediction tool was superior to the Framingham model at predicting CHD and was at least as accurate as the UKPDS model at predicting stroke. Conclusions. We created an accurate tool for predicting the risk of stroke, coronary heart disease, heart failure, and death in patients with type 2 diabetes. The calculator is available online at http://rcalc.ccf.org under the heading "Type 2 Diabetes" and entitled "Predicting 5-Year Morbidity and Mortality." This may be a valuable tool to aid the clinician's choice of an oral hypoglycemic agent, to better inform patients, and to motivate dialogue between physician and patient.
Adaptive System Modeling for Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Thomas, Justin
2011-01-01
This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: Creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior. Automatically updates/calibrates system models using the latest streaming sensor data. Creates device specific models that capture the exact behavior of devices of the same type. Adapts to evolving systems. Can reduce computational complexity (faster simulations).
Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter
NASA Technical Reports Server (NTRS)
Belknap, Shannon; Zhang, Michael
2013-01-01
The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles, and on the structures beneath the damaged locations, on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes only a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage, the correct tile type, and the tile, structure, and SIP thicknesses at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.
Southern Ocean Bottom Water Characteristics in CMIP5 Models
NASA Astrophysics Data System (ADS)
Heuzé, Céline; Heywood, Karen; Stevens, David; Ridley, Jeff
2013-04-01
The depiction of Southern Ocean deep water properties and formation processes in climate models is an indicator of their capability to simulate future climate, heat and carbon uptake, and sea level rise. Southern Ocean potential temperature and density averaged over 1986-2005 from fifteen CMIP5 climate models are compared with an observed climatology, focusing on bottom water properties. The mean bottom properties are reasonably accurate for half of the models, but the other half may not yet have approached an equilibrium state. Eleven models create dense water on the Antarctic shelf, but it does not spill off the shelf and propagate northwards; instead it mixes rapidly with less dense water. Most models instead create deep water by open ocean deep convection. Models with large deep convection areas are those with a strong seasonal cycle in sea ice. The most accurate bottom properties occur in models hosting deep convection in the Weddell and Ross gyres.
Waran, V; Pancharatnam, Devaraj; Thambinayagam, Hari Chandran; Raman, Rajagopal; Rathinam, Alwin Kumar; Balakrishnan, Yuwaraj Kumar; Tung, Tan Su; Rahman, Z A
2014-01-01
Navigation in neurosurgery has expanded rapidly; however, suitable models to train end users in the myriad software and hardware that come with these systems are lacking. Utilizing three-dimensional (3D) industrial rapid prototyping processes, we have been able to create models using actual computed tomography (CT) data from patients with pathology and use these models to simulate a variety of commonly performed neurosurgical procedures with navigation systems. To assess the possibility of utilizing models created from CT scan datasets obtained from patients with cranial pathology to simulate common neurosurgical procedures using navigation systems. Three patients with pathology were selected (hydrocephalus, right frontal cortical lesion, and midline clival meningioma). CT scan data acquired following an image-guidance surgery protocol were exported in DICOM format and used with a rapid prototyping machine to create the printed models with the corresponding pathology embedded. The registration, planning, and navigation capabilities of two navigation systems, using a variety of software and hardware provided by these platforms, were assessed. We were able to register all models accurately using both navigation systems and perform the necessary simulations as planned. Models with pathology created using 3D rapid prototyping techniques accurately reflect the data of actual patients and can be used in the simulation of neurosurgical operations using navigation systems. Georg Thieme Verlag KG Stuttgart · New York.
Building generic anatomical models using virtual model cutting and iterative registration.
Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W
2010-02-08
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting a sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is flexible and easy to use, so that anyone can use image stacks to create a model and retrieve a sub-region from it with ease. A Java-based implementation allows our method to be used on various visualization systems, including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of the structures of interest quickly and accurately.
Identification of flexible structures by frequency-domain observability range context
NASA Astrophysics Data System (ADS)
Hopkins, M. A.
2013-04-01
The well-known frequency-domain observability range space extraction (FORSE) algorithm provides a powerful, inherently flexible multivariable system-identification tool for creating state-space models from frequency-response data (FRD). This paper presents a method of using FORSE to create "context models" of a lightly damped system, from which models of individual resonant modes can be extracted. Further, it shows how to combine the extracted models of many individual modes into one large state-space model. Using this method, the author has created very high-order state-space models that accurately match measured FRD over very broad bandwidths, i.e., resonant peaks spread across five orders of magnitude of frequency.
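A minimal sketch of the final assembly step the abstract mentions, combining per-mode state-space models into one large parallel model; the modal matrices are assumed inputs, and the FORSE extraction itself is not shown.

```python
# Hedged sketch: parallel combination of per-mode state-space models (A_k, B_k, C_k, D_k)
# into one large model, the way many identified modes could be assembled.
import numpy as np
from scipy.linalg import block_diag

def combine_modes(modes):
    """modes: list of (A, B, C, D) tuples with consistent input/output dimensions."""
    A = block_diag(*[m[0] for m in modes])      # states of all modes stacked
    B = np.vstack([m[1] for m in modes])        # shared inputs drive every mode
    C = np.hstack([m[2] for m in modes])        # outputs sum the modal contributions
    D = sum(m[3] for m in modes)                # direct feedthrough terms add
    return A, B, C, D
```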
User's Guide for ENSAERO_FE Parallel Finite Element Solver
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.; Guruswamy, Guru P.
1999-01-01
A high fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower fidelity simple finite element and modal modules. Full aircraft structures may be modeled more accurately using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. COSMIC NASTRAN is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
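A minimal sketch of the iterative scheme named above: a Jacobi-preconditioned conjugate gradient solve of assembled equations K u = f. The substructure assembly and the NASTRAN front end are assumed and not reproduced here.

```python
# Minimal sketch, assuming K is a symmetric positive-definite stiffness matrix and f a
# load vector; this is a generic PCG solver, not the ENSAERO_FE code itself.
import numpy as np

def pcg(K, f, tol=1e-8, max_iter=1000):
    u = np.zeros_like(f)
    r = f - K @ u
    Minv = 1.0 / np.diag(K)          # Jacobi (diagonal) preconditioner
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol * np.linalg.norm(f):
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u
```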
2018-01-01
Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there have been very few modeling efforts that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include: Census data as source data for the existing population; Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space; and gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and with spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
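A hedged sketch of density-weighted areal interpolation as described: a source-zone count is redistributed to grid cells in proportion to an ancillary density surface. The input names (tract populations, tweet counts per cell) are hypothetical placeholders.

```python
# Illustrative sketch: redistribute each tract's population to its grid cells in
# proportion to an ancillary weight (here a tweet count per cell). Not the study's code.
import numpy as np

def disaggregate(tract_population, tweet_density, cell_in_tract):
    """
    tract_population: dict tract_id -> population count (source data)
    tweet_density:    1-D array, ancillary weight per grid cell
    cell_in_tract:    1-D array, tract_id of each grid cell
    returns:          1-D array, estimated population per grid cell
    """
    pop = np.zeros_like(tweet_density, dtype=float)
    for tract_id, count in tract_population.items():
        cells = np.where(cell_in_tract == tract_id)[0]
        w = tweet_density[cells]
        if w.sum() == 0:                        # no ancillary signal: spread evenly
            pop[cells] = count / len(cells)
        else:
            pop[cells] = count * w / w.sum()    # cell shares sum back to the tract count
    return pop
```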
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were used: the OSM Bundler, VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed using these applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Peppytides: Interactive Models of Polypeptide Chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuckermann, Ron; Chakraborty, Promita; Derisi, Joe
2014-01-21
Peppytides are scaled, 3D-printed models of polypeptide chains that can be folded into accurate protein structures. Designed and created by Berkeley Lab researcher Promita Chakraborty and Berkeley Lab senior scientist Dr. Ron Zuckermann, Peppytides are accurate physical models of polypeptide chains that anyone can interact with and fold into various protein structures, proving to be a great educational tool and resulting in a deeper understanding of these fascinating structures and how they function. Build your own Peppytide model and learn how nature's machines fold into their intricate architectures!
Francis, P; Eastwood, K W; Bodani, V; Looi, T; Drake, J M
2018-05-07
This work explores the feasibility of creating and accurately controlling an instrument for robotic surgery with a 2 mm diameter and a three degree-of-freedom (DoF) wrist which is compatible with the da Vinci platform. The instrument's wrist is composed of a two DoF bending notched-nitinol tube pattern, for which a kinematic model has been developed. A base mechanism for controlling the wrist is designed for integration with the da Vinci Research Kit. A basic teleoperation task is successfully performed using two of the miniature instruments. The performance and accuracy of the instrument suggest that creating and accurately controlling a 2 mm diameter instrument is feasible and the design and modelling proposed in this work provide a basis for future miniature instrument development.
Comparison Between Surf and Multi-Shock Forest Fire High Explosive Burn Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenfield, Nicholas Alexander
PAGOSA has several different burn models used to model high explosive detonation. Two of these, Multi-Shock Forest Fire and Surf, are capable of modeling shock initiation. Accurately calculating shock initiation of a high explosive is important because it is a mechanism for detonation in many accident scenarios (e.g., fragment impact). Comparing the models to pop-plot data gives confidence that the models are accurately calculating detonation, or the lack thereof. To compare the performance of these models, pop-plots were created from simulations in which one 2 cm block of PBX 9502 collides with another block of PBX 9502.
Rhode Island Model Evaluation & Support System: Teacher. Edition III
ERIC Educational Resources Information Center
Rhode Island Department of Education, 2015
2015-01-01
Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…
Accuracy of open-source software segmentation and paper-based printed three-dimensional models.
Szymor, Piotr; Kozakiewicz, Marcin; Olszewski, Raphael
2016-02-01
In this study, we aimed to verify the accuracy of models created with the help of open-source Slicer 3.6.3 software (Surgical Planning Lab, Harvard Medical School, Harvard University, Boston, MA, USA) and the Mcor Matrix 300 paper-based 3D printer. Our study focused on the accuracy of recreating the walls of the right orbit of a cadaveric skull. Cone beam computed tomography (CBCT) of the skull was performed (0.25-mm pixel size, 0.5-mm slice thickness). Acquired DICOM data were imported into Slicer 3.6.3 software, where segmentation was performed. A virtual model was created and saved as an .STL file and imported into Netfabb Studio professional 4.9.5 software. Three different virtual models were created by cutting the original file along three different planes (coronal, sagittal, and axial). All models were printed with a Selective Deposition Lamination Technology Matrix 300 3D printer using 80 gsm A4 paper. The models were printed so that their cutting plane was parallel to the paper sheets creating the model. Each model (coronal, sagittal, and axial) consisted of three separate parts (∼200 sheets of paper each) that were glued together to form a final model. The skull and created models were scanned with a three-dimensional (3D) optical scanner (Breuckmann smart SCAN) and were saved as .STL files. Comparisons of the orbital walls of the skull, the virtual model, and each of the three paper models were carried out with GOM Inspect 7.5SR1 software. Deviations measured between the models analysed were presented in the form of a colour-labelled map and covered with an evenly distributed network of points automatically generated by the software. An average of 804.43 ± 19.39 points for each measurement was created. Differences measured in each point were exported as a .csv file. The results were statistically analysed using Statistica 10, with statistical significance set at p < 0.05. The average number of points created on models for each measurement was 804.43 ± 19.39; however, deviation in some of the generated points could not be calculated, and those points were excluded from further calculations. From 94% to 99% of the measured absolute deviations were <1 mm. The mean absolute deviation between the skull and virtual model was 0.15 ± 0.11 mm, between the virtual and printed models was 0.15 ± 0.12 mm, and between the skull and printed models was 0.24 ± 0.21 mm. Using the optical scanner and specialized inspection software for measurements of accuracy of the created parts is recommended, as it allows one not only to measure 2-dimensional distances between anatomical points but also to perform more clinically suitable comparisons of whole surfaces. However, it requires specialized software and a very accurate scanner in order to be useful. Threshold-based, manually corrected segmentation of orbital walls performed with 3D Slicer software is accurate enough to be used for creating a virtual model of the orbit. The accuracy of the paper-based Mcor Matrix 300 3D printer is comparable to those of other commonly used 3-dimensional printers and allows one to create precise anatomical models for clinical use. The method of dividing the model into smaller parts and sticking them together seems to be quite accurate, although we recommend it only for creating small, solid models with as few parts as possible to minimize shift associated with gluing. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
2015-10-30
accurately follow the development of the Black Hawk helicopters, a single main rotor model in NDARC that accurately represented the UH-60A is required. NDARC... Weight changes were based on results from Nixon's paper, which focused on modeling the structure of a composite rotor blade and using optimization to... conclude that improved composite design to further reduce weight needs to be achieved. An additionally interesting effect is how the rotor technology
NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.
Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N
2016-11-01
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
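A simplified sketch of the maximum-entropy idea (not the NullSeq package itself): within each amino acid's synonymous codons, codons are sampled with probability proportional to exp(lambda × GC count), and lambda would be tuned so the expected GC content of the output matches the target. The codon table below is truncated for brevity.

```python
# Hedged sketch under the stated assumption; lam = 0 gives uniform synonymous usage,
# lam > 0 biases towards GC-rich codons. In practice lam would be solved (e.g. by
# bisection) so the expected GC content matches the target.
import numpy as np

CODONS = {  # hypothetical, truncated synonymous-codon table
    'K': ['AAA', 'AAG'],
    'F': ['TTT', 'TTC'],
    'G': ['GGT', 'GGC', 'GGA', 'GGG'],
}

def sample_codon(aa, lam, rng):
    codons = CODONS[aa]
    gc = np.array([c.count('G') + c.count('C') for c in codons])
    p = np.exp(lam * gc)
    p /= p.sum()
    return rng.choice(codons, p=p)

def random_cds(protein, lam, seed=0):
    rng = np.random.default_rng(seed)
    return ''.join(sample_codon(aa, lam, rng) for aa in protein)

print(random_cds('KFGGK', lam=1.0))
```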
Maneuver Estimation Model for Geostationary Orbit Determination
2006-06-01
create a more robust model which would reduce the amount of data needed to make accurate maneuver estimations. The Clohessy-Wiltshire equations were...
Scalable nanohelices for predictive studies and enhanced 3D visualization.
Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P
2014-11-12
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
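A hedged sketch of the carving step described above: atoms of a bulk model are kept if they lie within a given thickness of a parametric helix. The radius, pitch, and thickness values are illustrative, not the paper's.

```python
# Minimal sketch, assuming atom coordinates come from a bulk silica MD model; an atom
# is kept if it lies within `thickness` of the helix curve swept through the block.
import numpy as np

def carve_helix(atoms, r=20.0, pitch=15.0, turns=3.0, thickness=4.0):
    """atoms: (N, 3) array of bulk coordinates; returns the subset forming the helix."""
    s = np.linspace(0.0, 2.0 * np.pi * turns, 2000)       # parameter along the helix
    helix = np.column_stack((r * np.cos(s),                # x = r cos(s)
                             r * np.sin(s),                # y = r sin(s)
                             pitch * s / (2.0 * np.pi)))   # z rises by one pitch per turn
    keep = []
    for i, a in enumerate(atoms):
        d = np.min(np.linalg.norm(helix - a, axis=1))      # distance to the helix curve
        if d <= thickness:
            keep.append(i)
    return atoms[keep]
```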
Energy modelling in sensor networks
NASA Astrophysics Data System (ADS)
Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.
2007-06-01
Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model driven design process. We propose a novel methodology to create models for sensor nodes based on few simple measurements. In a case study the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in a SDL runtime framework of a model driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.
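One plausible form of such a measurement-based model is a state-based energy model, sketched below under the assumption that each component draws a constant current per state; the component names and current values are placeholders rather than the fitted MICAz figures.

```python
# Hedged sketch: energy is the sum over trace entries of per-state power times duration.
CURRENT_mA = {                     # per-component current draw per state (placeholders)
    'cpu':   {'active': 8.0, 'sleep': 0.01},
    'radio': {'tx': 17.4, 'rx': 19.7, 'off': 0.0},
}

def energy_mJ(schedule, voltage=3.0):
    """schedule: list of (component, state, duration_ms) entries from a simulation trace."""
    return sum(CURRENT_mA[c][s] * voltage * dt_ms / 1000.0 for c, s, dt_ms in schedule)

print(energy_mJ([('cpu', 'active', 50), ('radio', 'tx', 5), ('radio', 'off', 45)]))
```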
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created such that the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise, and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses, and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
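An illustrative sketch of hyper-dual arithmetic (not the report's implementation): a hyper-dual number a + b·e1 + c·e2 + d·e1e2 with e1² = e2² = 0 propagates exact first and second derivatives through a computation, which is what makes it useful for building parameterized ROMs.

```python
# Hedged sketch of hyper-dual numbers with only add and multiply defined.
class HyperDual:
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

    __rmul__ = __mul__

# f(x) = x^3 at x = 2 with unit seeds on e1 and e2:
x = HyperDual(2.0, 1.0, 1.0, 0.0)
y = x * x * x
print(y.a, y.b, y.d)   # value 8, first derivative 12, second derivative 12
```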
Integrating satellite imagery with simulation modeling to improve burn severity mapping
Eva C. Karau; Pamela G. Sikkink; Robert E. Keane; Gregory K. Dillon
2014-01-01
Both satellite imagery and spatial fire effects models are valuable tools for generating burn severity maps that are useful to fire scientists and resource managers. The purpose of this study was to test a new mapping approach that integrates imagery and modeling to create more accurate burn severity maps. We developed and assessed a statistical model that combines the...
TU-AB-BRC-05: Creation of a Monte Carlo TrueBeam Model by Reproducing Varian Phase Space Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Grady, K; Davis, S; Seuntjens, J
Purpose: To create a Varian TrueBeam 6 MV FFF Monte Carlo model using BEAMnrc/EGSnrc that accurately reproduces the Varian representative dataset, followed by tuning the model's source parameters to accurately reproduce in-house measurements. Methods: A BEAMnrc TrueBeam model for 6 MV FFF has been created by modifying a validated 6 MV Varian CL21EX model. Geometric dimensions and materials were adjusted in a trial and error approach to match the fluence and spectra of TrueBeam phase spaces output by the Varian VirtuaLinac. Once the model's phase space matched Varian's counterpart using the default source parameters, it was validated to match 10 × 10 cm² Varian representative data obtained with the IBA CC13. The source parameters were then tuned to match in-house 5 × 5 cm² PTW microDiamond measurements. All dose to water simulations included detector models to include the effects of volume averaging and the non-water equivalence of the chamber materials, allowing for more accurate source parameter selection. Results: The Varian phase space spectra and fluence were matched with excellent agreement. The in-house model's PDD agreement with CC13 TrueBeam representative data was within 0.9% local percent difference beyond the first 3 mm. Profile agreement at 10 cm depth was within 0.9% local percent difference and 1.3 mm distance-to-agreement in the central axis and penumbra regions, respectively. Once the source parameters were tuned, PDD agreement with microDiamond measurements was within 0.9% local percent difference beyond 2 mm. The microDiamond profile agreement at 10 cm depth was within 0.6% local percent difference and 0.4 mm distance-to-agreement in the central axis and penumbra regions, respectively. Conclusion: An accurate in-house Monte Carlo model of the Varian TrueBeam was achieved independently of the Varian phase space solution and was tuned to in-house measurements. KO acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).
Photometric Lunar Surface Reconstruction
NASA Technical Reports Server (NTRS)
Nefian, Ara V.; Alexandrov, Oleg; Moratto, Zachary; Kim, Taemin; Beyer, Ross A.
2013-01-01
Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.
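A hedged sketch of a Lunar-Lambertian reflectance model of the kind named above; the exact parameterization in the paper may differ, and the weighting value and inputs below are illustrative.

```python
# Illustrative sketch: mu0 and mu are cosines of the incidence and emission angles;
# L weights the lunar (Lommel-Seeliger) term against the Lambertian term and usually
# depends on phase angle. The exposure factor scales model reflectance to image values.
import numpy as np

def lunar_lambert(albedo, inc_deg, emi_deg, L=0.7, exposure=1.0):
    mu0 = np.cos(np.radians(inc_deg))
    mu = np.cos(np.radians(emi_deg))
    reflectance = 2.0 * L * mu0 / (mu0 + mu) + (1.0 - L) * mu0
    return exposure * albedo * reflectance   # predicted image intensity

# Joint estimation would adjust albedo and exposure so that this prediction matches the
# observed pixel values across overlapping images.
print(lunar_lambert(albedo=0.12, inc_deg=60.0, emi_deg=10.0))
```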
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd.. All rights reserved.
Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data
NASA Astrophysics Data System (ADS)
Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia
Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. As a result, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate modeling of the anatomy can be produced while avoiding the scaling operation. This anatomical model coupled with motion capture data, joint kinematics information, and muscle-tendon actuators is finally used to create a subject-specific musculoskeletal model.
Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.
Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry
2012-12-01
Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
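A simplified stand-in for the method (not the authors' code): the query state is written as a convex combination of library states, with weights found by a linear program that minimizes the L1 approximation error; the same weights would then combine the library's known successors into a prediction.

```python
# Hedged sketch: find barycentric weights w >= 0, sum(w) = 1 minimizing ||X w - q||_1.
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(X, q):
    """X: (d, n) library of past states; q: (d,) query state."""
    d, n = X.shape
    # variables: [w_1..w_n, e_1..e_d]; minimize the sum of the error bounds e
    c = np.concatenate([np.zeros(n), np.ones(d)])
    A_ub = np.block([[ X, -np.eye(d)],       #  X w - e <= q
                     [-X, -np.eye(d)]])      # -X w - e <= -q
    b_ub = np.concatenate([q, -q])
    A_eq = np.concatenate([np.ones(n), np.zeros(d)])[None, :]   # weights sum to one
    bounds = [(0, None)] * (n + d)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n]

# Free-running prediction: next_state = X_next @ barycentric_weights(X, q)
```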
Subarachnoid hemorrhage admissions retrospectively identified using a prediction model
McIntyre, Lauralyn; Fergusson, Dean; Turgeon, Alexis; dos Santos, Marlise P.; Lum, Cheemun; Chassé, Michaël; Sinclair, John; Forster, Alan; van Walraven, Carl
2016-01-01
Objective: To create an accurate prediction model using variables collected in widely available health administrative data records to identify hospitalizations for primary subarachnoid hemorrhage (SAH). Methods: A previously established complete cohort of consecutive primary SAH patients was combined with a random sample of control hospitalizations. Chi-square recursive partitioning was used to derive and internally validate a model to predict the probability that a patient had primary SAH (due to aneurysm or arteriovenous malformation) using health administrative data. Results: A total of 10,322 hospitalizations with 631 having primary SAH (6.1%) were included in the study (5,122 derivation, 5,200 validation). In the validation patients, our recursive partitioning algorithm had a sensitivity of 96.5% (95% confidence interval [CI] 93.9–98.0), a specificity of 99.8% (95% CI 99.6–99.9), and a positive likelihood ratio of 483 (95% CI 254–879). In this population, patients meeting criteria for the algorithm had a probability of 45% of truly having primary SAH. Conclusions: Routinely collected health administrative data can be used to accurately identify hospitalized patients with a high probability of having a primary SAH. This algorithm may allow, upon validation, an easy and accurate method to create validated cohorts of primary SAH from either ruptured aneurysm or arteriovenous malformation. PMID:27629096
ERIC Educational Resources Information Center
Kennedy, Eileen; Laurillard, Diana; Horan, Bernard; Charlton, Patricia
2015-01-01
This article reports on a design-based research project to create a modelling tool to analyse the costs and learning benefits involved in different modes of study. The Course Resource Appraisal Model (CRAM) provides accurate cost-benefit information so that institutions are able to make more meaningful decisions about which kind of…
Feedback control by online learning an inverse model.
Waegeman, Tim; Wyffels, Francis; Schrauwen, Benjamin
2012-10-01
A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge about the plant. This framework uses two learning modules, one for creating an inverse model, and the other for actually controlling the plant. Except for their inputs, they are identical. The inverse model learns by the exploration performed by the not yet fully trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller. The controller can be applied on a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework on several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
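A batch simplification of the framework's idea, assuming a linear plant and least-squares fitting instead of the paper's online learning modules: explore the plant, fit an inverse model mapping (state, next state) to action, then ask that model which action reaches the target.

```python
# Hedged sketch under the stated assumptions; the plant is treated as a black box.
import numpy as np

rng = np.random.default_rng(0)
plant = lambda x, u: 0.8 * x + 0.5 * u

# 1) Explore: apply random actions and record transitions (x, u, x_next).
x, data = 0.0, []
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    x_next = plant(x, u)
    data.append((x, u, x_next))
    x = x_next

# 2) Learn the inverse model u ~ w . [x, x_next] from the recorded transitions.
Phi = np.array([[xi, xn] for xi, ui, xn in data])
U = np.array([ui for xi, ui, xn in data])
w, *_ = np.linalg.lstsq(Phi, U, rcond=None)

# 3) Control: ask the inverse model which action should take the plant to the target.
x, target = 0.0, 1.0
for _ in range(20):
    u = w @ np.array([x, target])
    x = plant(x, u)
print(x)    # close to 1.0 once the inverse model is accurate
```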
A Brief Review of Elasticity and Viscoelasticity
2010-05-27
through electromagnetic or acoustic means. Creating a model that accurately describes these Rayleigh waves is key to modeling and understanding the... technology to be feasible, a mathematical model that describes the propagation of the acoustic wave from the stenosis to the chest wall will be necessary... viscoelastic model is simpler to use than poroelastic models but yields similar results for a wide range of soils and dynamic loadings. In addition
Bahrami, Babak; Shahrbaf, Shirin; Mirzakouchaki, Behnam; Ghalichi, Farzan; Ashtiani, Mohammed; Martin, Nicolas
2014-04-01
To investigate, by means of FE analysis, the effect of surface roughness treatments on the distribution of stresses at the bone-implant interface in immediately loaded mandibular implants. An accurate, high-resolution, digital replica model of the bone structure (cortical and trabecular components) supporting an implant was created using CT scan data and image processing software (Mimics 13.1; Materialise, Leuven, Belgium). An anatomically accurate 3D model of a mandibular-implant complex was created using a professional 3D-CAD modeller (SolidWorks, Dassault Systèmes SolidWorks Corp.; 2011). Finite element models were created with one of four roughness treatments on the implant fixture surface. Of these, three were surface treated to create a uniform coating characterized by the coefficient of friction (μ): either (1) plasma sprayed or porous-beaded (μ=1.0), (2) sandblasted (μ=0.68), or (3) polished (μ=0.4). The fourth implant had a novel two-part surface roughness consisting of a polished coronal component (μ=0.4) interfacing with the cortical bone and a plasma-treated body component (μ=1.0) interfacing with the trabecular bone. Finite element stress analysis was carried out under vertical and lateral forces. This investigation showed that the type of surface treatment on the implant fixture affects the stress at the bone-implant interface of an immediately loaded implant complex. Von Mises stress data showed that the two-part surface treatment created the better stress distribution at the implant-bone interface. The results from this FE computational analysis suggest that the proposed two-part surface treatment for IL implants creates lower stresses than single uniform treatments at the bone-implant interface, which might decrease peri-implant bone loss. Future investigations should focus on mechanical and clinical validation of these FE results. Copyright © 2014 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Original data preprocessor for Femap/Nastran
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra
2016-12-01
Automatic data processing and visualization in the finite element analysis of structural problems is a long-standing concern in mechanical engineering. The paper presents the "common database" concept, according to which the same information may be accessed from an analytical model as well as from a numerical one. In this way, input data expressed as comma-separated-value (CSV) files are loaded into the Femap/Nastran environment using original API codes, automatically generating the geometry of the model, the loads, and the constraints. The original API computer codes are general, making it possible to generate the input data of any model. In the next stages, the user may create the discretization of the model, set the boundary conditions, and perform a given analysis. If additional accuracy is needed, the analyst may delete the previous discretizations and, using the same automatically loaded information, create other discretizations and analyses. Moreover, if new, more accurate information regarding the loads or constraints is acquired, it may be modelled and then implemented in the data-generating program which creates the "common database". This means that new, more accurate models may be easily generated. Another facility is the ability to control the CSV input files, so that several loading scenarios can be generated in Femap/Nastran. In this way, using original intelligent API instruments, the analyst can focus on accurately modelling the phenomena and on creative aspects, with the repetitive and time-consuming activities performed by the original computer-based instruments. Using this data processing technique, Asimov's principle of "minimum change required / maximum desired response" is applied to best effect.
Using Maximum Entropy to Find Patterns in Genomes
NASA Astrophysics Data System (ADS)
Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
Creating speech-synchronized animation.
King, Scott A; Parent, Richard E
2005-01-01
We present a facial model designed primarily to support animated speech. Our facial model takes facial geometry as input and transforms it into a parametric deformable model. The facial model uses a muscle-based parameterization, allowing for easier integration between speech synchrony and facial expressions. Our facial model has a highly deformable lip model that is grafted onto the input facial geometry to provide the necessary geometric complexity needed for creating lip shapes and high-quality renderings. Our facial model also includes a highly deformable tongue model that can represent the shapes the tongue undergoes during speech. We add teeth, gums, and upper palate geometry to complete the inner mouth. To decrease the processing time, we hierarchically deform the facial surface. We also present a method to animate the facial model over time to create animated speech using a model of coarticulation that blends visemes together using dominance functions. We treat visemes as a dynamic shaping of the vocal tract by describing visemes as curves instead of keyframes. We show the utility of the techniques described in this paper by implementing them in a text-to-audiovisual-speech system that creates animation of speech from unrestricted text. The facial and coarticulation models must first be interactively initialized. The system then automatically creates accurate real-time animated speech from the input text. It is capable of cheaply producing tremendous amounts of animated speech with very low resource requirements.
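A hedged sketch of dominance-function blending in the style described above (a Cohen-Massaro-like scheme); the constants and viseme targets are illustrative, not the paper's.

```python
# Illustrative sketch: each viseme exerts an exponentially decaying dominance around its
# centre time, and the articulator value at time t is the dominance-weighted mean of the
# viseme targets, which blends neighbouring visemes into each other (coarticulation).
import numpy as np

def dominance(t, centre, alpha=1.0, theta=4.0, c=1.0):
    return alpha * np.exp(-theta * np.abs(t - centre) ** c)

def blend(t, visemes):
    """visemes: list of (centre_time_s, target_value) for one articulatory parameter."""
    d = np.array([dominance(t, ct) for ct, _ in visemes])
    targets = np.array([v for _, v in visemes])
    return float(np.sum(d * targets) / np.sum(d))

lip_opening = [(0.00, 0.2), (0.15, 0.9), (0.30, 0.1)]   # e.g. a consonant-vowel-consonant run
print([round(blend(t, lip_opening), 2) for t in np.linspace(0.0, 0.3, 7)])
```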
Campbell, J Q; Petrella, A J
2016-09-06
Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future. Copyright © 2016 Elsevier Ltd. All rights reserved.
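A minimal sketch of the core of a point-based statistical shape model, assuming landmark sets are already in correspondence and aligned; the automated landmarking and finite element mesh generation from the study are not shown.

```python
# Hedged sketch: PCA over aligned landmark vectors gives mean shape, shape modes, and
# per-mode standard deviations; new shapes are synthesized from mode scores.
import numpy as np

def build_ssm(shapes):
    """shapes: (n_subjects, n_landmarks*3) array of aligned landmark coordinates."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt                                   # principal shape modes (rows)
    std = s / np.sqrt(len(shapes) - 1)           # standard deviation of each mode score
    return mean, modes, std

def synthesize(mean, modes, std, b):
    """b: mode scores in units of standard deviations, e.g. +/-3 for extreme shapes."""
    b = np.asarray(b, dtype=float)
    return mean + (b * std[:len(b)]) @ modes[:len(b)]
```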
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
Electrostatic potential map modelling with COSY Infinity
NASA Astrophysics Data System (ADS)
Maloney, J. A.; Baartman, R.; Planche, T.; Saminathan, S.
2016-06-01
COSY Infinity (Makino and Berz, 2005) is a differential-algebra based simulation code which allows accurate calculation of transfer maps to arbitrary order. COSY's existing internal procedures were modified to allow electrostatic elements to be specified using an array of field potential data from the midplane. Additionally, a new procedure was created allowing electrostatic elements and their fringe fields to be specified by an analytic function. This allows greater flexibility in accurately modelling electrostatic elements and their fringe fields. Applied examples of these new procedures are presented including the modelling of a shunted electrostatic multipole designed with OPERA, a spherical electrostatic bender, and the effects of different shaped apertures in an electrostatic beam line.
Evaluating the accuracy of wear formulae for acetabular cup liners.
Wu, James Shih-Shyn; Hsu, Shu-Ling; Chen, Jian-Horng
2010-02-01
This study proposes two methods for exploring the wear volume of a worn liner. The first method is a numerical method, in which SolidWorks software is used to create models of the worn out regions of liners at various wear directions and depths. The second method is an experimental one, in which a machining center is used to mill polyoxymethylene to manufacture worn and unworn liner models, then the volumes of the models are measured. The results show that the SolidWorks software is a good tool for presenting the wear pattern and volume of a worn liner. The formula provided by Ilchmann is the most suitable for computing liner volume loss, but is not accurate enough. This study suggests that a more accurate wear formula is required. This is crucial for accurate evaluation of the performance of hip components implanted in patients, as well as for designing new hip components.
IMPLEMENTATION OF GREEN ROOF SUSTAINABILITY IN ARID CONDITIONS
We successfully designed and fabricated accurately scaled prototypes of a green roof and a conventional white roof and began testing in simulated conditions of 115-70°F with relative humidity of 13%. The design parameters were based on analytical models created through ver...
Southern Ocean bottom water characteristics in CMIP5 models
NASA Astrophysics Data System (ADS)
Heuzé, CéLine; Heywood, Karen J.; Stevens, David P.; Ridley, Jeff K.
2013-04-01
Southern Ocean deep water properties and formation processes in climate models are indicative of their capability to simulate future climate, heat and carbon uptake, and sea level rise. Southern Ocean temperature and density averaged over 1986-2005 from 15 CMIP5 (Coupled Model Intercomparison Project Phase 5) climate models are compared with an observed climatology, focusing on bottom water. Bottom properties are reasonably accurate for half the models. Ten models create dense water on the Antarctic shelf, but it mixes with lighter water and is not exported as bottom water as in reality. Instead, most models create deep water by open ocean deep convection, a process occurring rarely in reality. Models with extensive deep convection are those with strong seasonality in sea ice. Optimum bottom properties occur in models with deep convection in the Weddell and Ross Gyres. Bottom Water formation processes are poorly represented in ocean models and are a key challenge for improving climate predictions.
Evaluation of a Computational Model of Situational Awareness
NASA Technical Reports Server (NTRS)
Burdick, Mark D.; Shively, R. Jay; Rutkewski, Michael (Technical Monitor)
2000-01-01
Although the use of the psychological construct of situational awareness (SA) assists researchers in creating a flight environment that is safer and more predictable, its true potential remains untapped until a valid means of predicting SA a priori becomes available. Previous work proposed a computational model of SA (CSA) that sought to Fill that void. The current line of research is aimed at validating that model. The results show that the model accurately predicted SA in a piloted simulation.
NASA Astrophysics Data System (ADS)
Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Shichkin, A. V.; Tyagunov, A. G.; Medvedev, A. N.
2017-06-01
Modeling of spatial distribution of pollutants in the urbanized territories is difficult, especially if there are multiple emission sources. When monitoring such territories, it is often impossible to arrange the necessary detailed sampling. Because of this, the usual methods of analysis and forecasting based on geostatistics are often less effective. Approaches based on artificial neural networks (ANNs) demonstrate the best results under these circumstances. This study compares two models based on ANNs, which are multilayer perceptron (MLP) and generalized regression neural networks (GRNNs) with the base geostatistical method - kriging. Models of the spatial dust distribution in the snow cover around the existing copper quarry and in the area of emissions of a nickel factory were created. To assess the effectiveness of the models three indices were used: the mean absolute error (MAE), the root-mean-square error (RMSE), and the relative root-mean-square error (RRMSE). Taking into account all indices the model of GRNN proved to be the most accurate which included coordinates of the sampling points and the distance to the likely emission source as input parameters for the modeling. Maps of spatial dust distribution in the snow cover were created in the study area. It has been shown that the models based on ANNs were more accurate than the kriging, particularly in the context of a limited data set.
Distribution Feeder Modeling for Time-Series Simulation of Voltage Management Strategies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giraldez Miner, Julieta I; Gotseff, Peter; Nagarajan, Adarsh
This paper presents techniques to create baseline distribution models using a utility feeder from Hawai'ian Electric Company. It describes the software-to-software conversion, steady-state, and time-series validations of a utility feeder model. It also presents a methodology to add secondary low-voltage circuit models to accurately capture the voltage at the customer meter level. This enables preparing models to perform studies that simulate how customer-sited resources integrate into legacy utility distribution system operations.
Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich
2013-12-01
This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or ten thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involved in single genes or gene pairs on the basis of soft computing approach and rough set theory. Accurate cancerous prediction is obtained when we apply the simple prediction models for four cancerous gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable for they are based on decision rules. Our results demonstrate that very simple models may perform well on cancerous molecular prediction and important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
Wave Propagation and Inversion in Shallow Water and Poro-elastic Sediment
1997-09-30
water and high freq. acoustics LONG-TERM GOALS To create codes accurately model wave propagation and scattering in shallow water, and to quantify...is undergoing testing for the acoustic stratified Green’s function. We have adapted code generated by J. Schuster in Geophysics for the FDTD model ...inversions and modelling , and have repercussions in environmental imaging [5], acoustic imaging [1,4,5,6,7] and early breast cancer diagnosis
Approaches to 3D printing teeth from X-ray microtomography.
Cresswell-Boyes, A J; Barber, A H; Mills, D; Tatla, A; Davis, G R
2018-06-28
Artificial teeth have several advantages in preclinical training. The aim of this study is to three-dimensionally (3D) print accurate artificial teeth using scans from X-ray microtomography (XMT). Extracted and artificial teeth were imaged at 90 kV and 40 kV, respectively, to create detailed high contrast scans. The dataset was visualised to produce internal and external meshes subsequently exported to 3D modelling software for modification before finally sending to a slicing program for printing. After appropriate parameter setting, the printer deposited material in specific locations layer by layer, to create a 3D physical model. Scans were manipulated to ensure a clean model was imported into the slicing software, where layer height replicated the high spatial resolution that was observed in the XMT scans. The model was then printed in two different materials (polylactic acid and thermoplastic elastomer). A multimaterial print was created to show the different physical characteristics between enamel and dentine. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
On the Development of Parameterized Linear Analytical Longitudinal Airship Models
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.
2008-01-01
In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.
THE IMPACT OF ACCURATE EXTINCTION MEASUREMENTS FOR X-RAY SPECTRAL MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Randall K.; Valencic, Lynne A.; Corrales, Lia, E-mail: lynne.a.valencic@nasa.gov
Interstellar extinction includes both absorption and scattering of photons from interstellar gas and dust grains, and it has the effect of altering a source's spectrum and its total observed intensity. However, while multiple absorption models exist, there are no useful scattering models in standard X-ray spectrum fitting tools, such as XSPEC. Nonetheless, X-ray halos, created by scattering from dust grains, are detected around even moderately absorbed sources, and the impact on an observed source spectrum can be significant, if modest, compared to direct absorption. By convolving the scattering cross section with dust models, we have created a spectral model asmore » a function of energy, type of dust, and extraction region that can be used with models of direct absorption. This will ensure that the extinction model is consistent and enable direct connections to be made between a source's X-ray spectral fits and its UV/optical extinction.« less
Generating Performance Models for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav
2017-05-30
Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the \\Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scalingmore » when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.« less
Prediction of 1-octanol solubilities using data from the Open Notebook Science Challenge.
Buonaiuto, Michael A; Lang, Andrew S I D
2015-12-01
1-Octanol solubility is important in a variety of applications involving pharmacology and environmental chemistry. Current models are linear in nature and often require foreknowledge of either melting point or aqueous solubility. Here we extend the range of applicability of 1-octanol solubility models by creating a random forest model that can predict 1-octanol solubilities directly from structure. We created a random forest model using CDK descriptors that has an out-of-bag (OOB) R 2 value of 0.66 and an OOB mean squared error of 0.34. The model has been deployed for general use as a Shiny application. The 1-octanol solubility model provides reasonably accurate predictions of the 1-octanol solubility of organic solutes directly from structure. The model was developed under Open Notebook Science conditions which makes it open, reproducible, and as useful as possible.Graphical abstract.
Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik
2017-09-01
This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
Ploch, Caitlin C; Mansi, Chris S S A; Jayamohan, Jayaratnam; Kuhl, Ellen
2016-06-01
Three-dimensional (3D) printing holds promise for a wide variety of biomedical applications, from surgical planning, practicing, and teaching to creating implantable devices. The growth of this cheap and easy additive manufacturing technology in orthopedic, plastic, and vascular surgery has been explosive; however, its potential in the field of neurosurgery remains underexplored. A major limitation is that current technologies are unable to directly print ultrasoft materials like human brain tissue. In this technical note, the authors present a new technology to create deformable, personalized models of the human brain. The method combines 3D printing, molding, and casting to create a physiologically, anatomically, and tactilely realistic model based on magnetic resonance images. Created from soft gelatin, the model is easy to produce, cost-efficient, durable, and orders of magnitude softer than conventionally printed 3D models. The personalized brain model cost $50, and its fabrication took 24 hours. In mechanical tests, the model stiffness (E = 25.29 ± 2.68 kPa) was 5 orders of magnitude softer than common 3D printed materials, and less than an order of magnitude stiffer than mammalian brain tissue (E = 2.64 ± 0.40 kPa). In a multicenter surgical survey, model size (100.00%), visual appearance (83.33%), and surgical anatomy (81.25%) were perceived as very realistic. The model was perceived as very useful for patient illustration (85.00%), teaching (94.44%), learning (100.00%), surgical training (95.00%), and preoperative planning (95.00%). With minor refinements, personalized, deformable brain models created via 3D printing will improve surgical training and preoperative planning with the ultimate goal to provide accurate, customized, high-precision treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
Reaction Wheel Disturbance Model Extraction Software - RWDMES
NASA Technical Reports Server (NTRS)
Blaurock, Carl
2009-01-01
The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. The component steps are described in the RWDMES user s guide and include: converting time domain data to waterfall PSDs (power spectral densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
Modeling and control design of a wind tunnel model support
NASA Technical Reports Server (NTRS)
Howe, David A.
1990-01-01
The 12-Foot Pressure Wind Tunnel at Ames Research Center is being restored. A major part of the restoration is the complete redesign of the aircraft model supports and their associated control systems. An accurate trajectory control servo system capable of positioning a model (with no measurable overshoot) is needed. Extremely small errors in scaled-model pitch angle can increase airline fuel costs for the final aircraft configuration by millions of dollars. In order to make a mechanism sufficiently accurate in pitch, a detailed structural and control-system model must be created and then simulated on a digital computer. The model must contain linear representations of the mechanical system, including masses, springs, and damping in order to determine system modes. Electrical components, both analog and digital, linear and nonlinear must also be simulated. The model of the entire closed-loop system must then be tuned to control the modes of the flexible model-support structure. The development of a system model, the control modal analysis, and the control-system design are discussed.
Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error
NASA Technical Reports Server (NTRS)
Byrne, M. D.; Kirlik, Alex
2003-01-01
We present a computational model of a closed-loop, pilot-aircraft-visual scene-taxiway system created to shed light on possible sources of taxi error. Creating the cognitive aspects of the model using ACT-R required us to conduct studies with subject matter experts to identify experiential adaptations pilots bring to taxiing. Five decision strategies were found, ranging from cognitively-intensive but precise, to fast, frugal but robust. We provide evidence for the model by comparing its behavior to a NASA Ames Research Center simulation of Chicago O'Hare surface operations. Decision horizons were highly variable; the model selected the most accurate strategy given time available. We found a signature in the simulation data of the use of globally robust heuristics to cope with short decision horizons as revealed by errors occurring most frequently at atypical taxiway geometries or clearance routes. These data provided empirical support for the model.
Numerical modeling of consolidation processes in hydraulically deposited soils
NASA Astrophysics Data System (ADS)
Brink, Nicholas Robert
Hydraulically deposited soils are encountered in many common engineering applications including mine tailing and geotextile tube fills, though the consolidation process for such soils is highly nonlinear and requires the use of advanced numerical techniques to provide accurate predictions. Several commercially available finite element codes poses the ability to model soil consolidation, and it was the goal of this research to assess the ability of two of these codes, ABAQUS and PLAXIS, to model the large-strain, two-dimensional consolidation processes which occur in hydraulically deposited soils. A series of one- and two-dimensionally drained rectangular models were first created to assess the limitations of ABAQUS and PLAXIS when modeling consolidation of highly compressible soils. Then, geotextile tube and TSF models were created to represent actual scenarios which might be encountered in engineering practice. Several limitations were discovered, including the existence of a minimum preconsolidation stress below which numerical solutions become unstable.
Rajagopal, Vijay; Bass, Gregory; Ghosh, Shouryadipta; Hunt, Hilary; Walker, Cameron; Hanssen, Eric; Crampin, Edmund; Soeller, Christian
2018-04-18
With the advent of three-dimensional (3D) imaging technologies such as electron tomography, serial-block-face scanning electron microscopy and confocal microscopy, the scientific community has unprecedented access to large datasets at sub-micrometer resolution that characterize the architectural remodeling that accompanies changes in cardiomyocyte function in health and disease. However, these datasets have been under-utilized for investigating the role of cellular architecture remodeling in cardiomyocyte function. The purpose of this protocol is to outline how to create an accurate finite element model of a cardiomyocyte using high resolution electron microscopy and confocal microscopy images. A detailed and accurate model of cellular architecture has significant potential to provide new insights into cardiomyocyte biology, more than experiments alone can garner. The power of this method lies in its ability to computationally fuse information from two disparate imaging modalities of cardiomyocyte ultrastructure to develop one unified and detailed model of the cardiomyocyte. This protocol outlines steps to integrate electron tomography and confocal microscopy images of adult male Wistar (name for a specific breed of albino rat) rat cardiomyocytes to develop a half-sarcomere finite element model of the cardiomyocyte. The procedure generates a 3D finite element model that contains an accurate, high-resolution depiction (on the order of ~35 nm) of the distribution of mitochondria, myofibrils and ryanodine receptor clusters that release the necessary calcium for cardiomyocyte contraction from the sarcoplasmic reticular network (SR) into the myofibril and cytosolic compartment. The model generated here as an illustration does not incorporate details of the transverse-tubule architecture or the sarcoplasmic reticular network and is therefore a minimal model of the cardiomyocyte. Nevertheless, the model can already be applied in simulation-based investigations into the role of cell structure in calcium signaling and mitochondrial bioenergetics, which is illustrated and discussed using two case studies that are presented following the detailed protocol.
In silico method for modelling metabolism and gene product expression at genome scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem
2012-07-03
Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computations of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and improving the genome andmore » transcription unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.« less
Modeling Conformal Growth in Photonic Crystals and Comparing to Experiment
NASA Astrophysics Data System (ADS)
Brzezinski, Andrew; Chen, Ying-Chieh; Wiltzius, Pierre; Braun, Paul
2008-03-01
Conformal growth, e.g. atomic layer deposition (ALD), of materials such as silicon and TiO2 on three dimensional (3D) templates is important for making photonic crystals. However, reliable calculations of optical properties as a function of the conformal growth, such as the optical band structure, are hampered by difficultly in accurately assessing a deposited material's spatial distribution. A widely used approximation ignores ``pinch off'' of precursor gas and assumes complete template infilling. Another approximation results in non-uniform growth velocity by employing iso-intensity surfaces of the 3D interference pattern used to create the template. We have developed an accurate model of conformal growth in arbitrary 3D periodic structures, allowing for arbitrary surface orientation. Results are compared with the above approximations and with experimentally fabricated photonic crystals. We use an SU8 polymer template created by 4-beam interference lithography, onto which various amounts of TiO2 are grown by ALD. Characterization is performed by analysis of cross-sectional scanning electron micrographs and by solid angle resolved optical spectroscopy.
Influence of Elevation Data Source on 2D Hydraulic Modelling
NASA Astrophysics Data System (ADS)
Bakuła, Krzysztof; StĘpnik, Mateusz; Kurczyński, Zdzisław
2016-08-01
The aim of this paper is to analyse the influence of the source of various elevation data on hydraulic modelling in open channels. In the research, digital terrain models from different datasets were evaluated and used in two-dimensional hydraulic models. The following aerial and satellite elevation data were used to create the representation of terrain-digital terrain model: airborne laser scanning, image matching, elevation data collected in the LPIS, EuroDEM, and ASTER GDEM. From the results of five 2D hydrodynamic models with different input elevation data, the maximum depth and flow velocity of water were derived and compared with the results of the most accurate ALS data. For such an analysis a statistical evaluation and differences between hydraulic modelling results were prepared. The presented research proved the importance of the quality of elevation data in hydraulic modelling and showed that only ALS and photogrammetric data can be the most reliable elevation data source in accurate 2D hydraulic modelling.
Modeling noisy resonant system response
NASA Astrophysics Data System (ADS)
Weber, Patrick Thomas; Walrath, David Edwin
2017-02-01
In this paper, a theory-based model replicating empirical acoustic resonant signals is presented and studied to understand sources of noise present in acoustic signals. Statistical properties of empirical signals are quantified and a noise amplitude parameter, which models frequency and amplitude-based noise, is created, defined, and presented. This theory-driven model isolates each phenomenon and allows for parameters to be independently studied. Using seven independent degrees of freedom, this model will accurately reproduce qualitative and quantitative properties measured from laboratory data. Results are presented and demonstrate success in replicating qualitative and quantitative properties of experimental data.
Behavioral modeling of VCSELs for high-speed optical interconnects
NASA Astrophysics Data System (ADS)
Szczerba, Krzysztof; Kocot, Chris
2018-02-01
Transition from on-off keying to 4-level pulse amplitude modulation (PAM) in VCSEL based optical interconnects allows for an increase of data rates, at the cost of 4.8 dB sensitivity penalty. The resulting strained link budget creates a need for accurate VCSEL models for driver integrated circuit (IC) design and system level simulations. Rate equation based equivalent circuit models are convenient for the IC design, but system level analysis requires computationally efficient closed form behavioral models based Volterra series and neural networks. In this paper we present and compare these models.
Romanolo, K. F.; Gorski, L.; Wang, S.; Lauzon, C. R.
2015-01-01
The use of Fourier Transform-Infrared Spectroscopy (FT-IR) in conjunction with Artificial Neural Network software NeuroDeveloper™ was examined for the rapid identification and classification of Listeria species and serotyping of Listeria monocytogenes. A spectral library was created for 245 strains of Listeria spp. to give a biochemical fingerprint from which identification of unknown samples were made. This technology was able to accurately distinguish the Listeria species with 99.03% accuracy. Eleven serotypes of Listeria monocytogenes including 1/2a, 1/2b, and 4b were identified with 96.58% accuracy. In addition, motile and non-motile forms of Listeria were used to create a more robust model for identification. FT-IR coupled with NeuroDeveloper™ appear to be a more accurate and economic choice for rapid identification of pathogenic Listeria spp. than current methods. PMID:26600423
In Situ Casting and Imaging of the Rat Airway Tree for Accurate 3D Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob, Rick E.; Colby, Sean M.; Kabilan, Senthil
The use of anatomically accurate, animal-specific airway geometries is important for understanding and modeling the physiology of the respiratory system. One approach for acquiring detailed airway architecture is to create a bronchial cast of the conducting airways. However, typical casting procedures either do not faithfully preserve the in vivo branching angles, or produce rigid casts that when removed for imaging are fragile and thus easily damaged. We address these problems by creating an in situ bronchial cast of the conducting airways in rats that can be subsequently imaged in situ using 3D micro-CT imaging. We also demonstrate that deformations inmore » airway branch angles resulting from the casting procedure are small, and that these angle deformations can be reversed through an interactive adjustment of the segmented cast geometry. Animal work was approved by the Institutional Animal Care and Use Committee of Pacific Northwest National Laboratory.« less
NASA Astrophysics Data System (ADS)
Cai, Y.
2017-12-01
Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables presents challenges to model yields accurately, especially when the lack of highly accurate measurements creates difficulties in creating models that can succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In the backtesting of the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average already by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes of important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data to the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converges in time, which factors affect the yield the most, as well as present our plans for 2018 model adjustments.
NASA Astrophysics Data System (ADS)
Hoffman, Kenneth J.
1995-10-01
Few information systems create a standardized clinical patient record in which there are discrete and concise observations of patient problems and their resolution. Clinical notes usually are narratives which don't support an aggregate and systematic outcome analysis. Many programs collect information on diagnosis and coded procedures but are not focused on patient problems. Integrated definition (IDEF) methodology has been accepted by the Department of Defense as part of the Corporate Information Management Initiative and serves as the foundation that establishes a need for automation. We used IDEF modeling to describe present and idealized patient care activities. A logical IDEF data model was created to support those activities. The modeling process allows for accurate cost estimates based upon performed activities, efficient collection of relevant information, and outputs which allow real- time assessments of process and outcomes. This model forms the foundation for a prototype automated clinical information system (ACIS).
Howley, Donna; Howley, Peter; Oxenham, Marc F
2018-06-01
Stature and a further 8 anthropometric dimensions were recorded from the arms and hands of a sample of 96 staff and students from the Australian National University and The University of Newcastle, Australia. These dimensions were used to create simple and multiple logistic regression models for sex estimation and simple and multiple linear regression equations for stature estimation of a contemporary Australian population. Overall sex classification accuracies using the models created were comparable to similar studies. The stature estimation models achieved standard errors of estimates (SEE) which were comparable to and in many cases lower than those achieved in similar research. Generic, non sex-specific models achieved similar SEEs and R 2 values to the sex-specific models indicating stature may be accurately estimated when sex is unknown. Copyright © 2018 Elsevier B.V. All rights reserved.
Bringing modeling to the masses: A web based system to predict potential species distributions
Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul
2010-01-01
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
Investigation into the influence of build parameters on failure of 3D printed parts
NASA Astrophysics Data System (ADS)
Fornasini, Giacomo
Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to accurately predict their behavior through analytical models. Unfortunately, Additive Manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects the part properties. Understanding AM process parameters (implemented by program algorithms called slicers) is necessary to predict part behavior. Investigating algorithms controlling print parameters (slicers) revealed stark differences between the generation of part layers. In this work, tensile testing experiments, including a full factorial design, determined that three key factors, width, thickness, infill density, and their interactions, significantly affect the tensile properties of 3D printed test samples.
A comparative approach to computer aided design model of a dog femur.
Turamanlar, O; Verim, O; Karabulut, A
2016-01-01
Computer assisted technologies offer new opportunities in medical imaging and rapid prototyping in biomechanical engineering. Three dimensional (3D) modelling of soft tissues and bones are becoming more important. The accuracy of the analysis in modelling processes depends on the outline of the tissues derived from medical images. The aim of this study is the evaluation of the accuracy of 3D models of a dog femur derived from computed tomography data by using point cloud method and boundary line method on several modelling software. Solidworks, Rapidform and 3DSMax software were used to create 3D models and outcomes were evaluated statistically. The most accurate 3D prototype of the dog femur was created with stereolithography method using rapid prototype device. Furthermore, the linearity of the volumes of models was investigated between software and the constructed models. The difference between the software and real models manifests the sensitivity of the software and the devices used in this manner.
Test techniques for model development of repetitive service energy storage capacitors
NASA Astrophysics Data System (ADS)
Thompson, M. C.; Mauldin, G. H.
1984-03-01
The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.
Development of a detector model for generation of synthetic radiographs of cargo containers
NASA Astrophysics Data System (ADS)
White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.
2008-05-01
Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first-principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.
Smalheiser, Neil R; McDonagh, Marian S; Yu, Clement; Adams, Clive E; Davis, John M; Yu, Philip S
2015-01-01
Objective: For many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT. Materials and Methods: The LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article. Results: The model achieved an area under the receiver operating characteristic curve of 0.973 and mean squared error of 0.013 on the held out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. A second model not requiring MeSH terms was also created, and performs almost as well. Discussion: Both models accurately rank and predict article RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in Medline may not be identified. Conclusion: Retagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http://arrowsmith.psych.uic.edu/cgi-bin/arrowsmith_uic/RCT_Tagger.cgi. PMID:25656516
Numerical Model Simulation of Atmosphere above A.C. Airport
NASA Astrophysics Data System (ADS)
Lutes, Tiffany; Trout, Joseph
2014-03-01
In this research project, the Weather Research & Forecasting (WRF) model from the National Center for Atmospheric Research (NCAR) is used to investigate past and present weather conditions. The Atlantic City Airport area in southern New Jersey is the area of interest. Long-term hourly data is analyzed and model simulations are created. By inputting high resolution surface data, a more accurate picture of the effects of different weather conditions will be portrayed. Currently, the impact of gridded model runs is being tested, and the impact of surface characteristics is being investigated.
Artigas, Francisco; Bosits, Stephanie; Kojak, Saleh; Elefante, Dominador; Pechmann, Ildiko
2016-10-01
The accurate forecast from Hurricane Sandy sea surge was the result of integrating the most sophisticated environmental monitoring technology available. This stands in contrast to the limited information and technology that exists at the community level to translate these forecasts into flood hazard levels on the ground at scales that are meaningful to property owners. Appropriately scaled maps with high levels of certainty can be effectively used to convey exposure to flood hazard at the community level. This paper explores the most basic analysis and data required to generate a relatively accurate flood hazard map to convey inundation risk due to sea surge. A Boolean overlay analysis of four input layers: elevation and slope derived from LiDAR data and distances from streams and catch basins derived from aerial photography and field reconnaissance were used to create a spatial model that explained 55 % of the extent and depth of the flood during Hurricane Sandy. When a ponding layer was added to the previous model to account for depressions that would fill and spill over to nearby areas, the new model explained almost 70 % of the extent and depth of the flood. The study concludes that fairly accurate maps can be created with readily available information and that it is possible to infer a great deal about risk of inundation at the property level, from flood hazard maps. The study goes on to conclude that local communities are encouraged to prepare for disasters, but in reality because of the existing Federal emergency management framework there is very little incentive to do so.
NASA Astrophysics Data System (ADS)
Artigas, Francisco; Bosits, Stephanie; Kojak, Saleh; Elefante, Dominador; Pechmann, Ildiko
2016-10-01
The accurate forecast from Hurricane Sandy sea surge was the result of integrating the most sophisticated environmental monitoring technology available. This stands in contrast to the limited information and technology that exists at the community level to translate these forecasts into flood hazard levels on the ground at scales that are meaningful to property owners. Appropriately scaled maps with high levels of certainty can be effectively used to convey exposure to flood hazard at the community level. This paper explores the most basic analysis and data required to generate a relatively accurate flood hazard map to convey inundation risk due to sea surge. A Boolean overlay analysis of four input layers: elevation and slope derived from LiDAR data and distances from streams and catch basins derived from aerial photography and field reconnaissance were used to create a spatial model that explained 55 % of the extent and depth of the flood during Hurricane Sandy. When a ponding layer was added to the previous model to account for depressions that would fill and spill over to nearby areas, the new model explained almost 70 % of the extent and depth of the flood. The study concludes that fairly accurate maps can be created with readily available information and that it is possible to infer a great deal about risk of inundation at the property level, from flood hazard maps. The study goes on to conclude that local communities are encouraged to prepare for disasters, but in reality because of the existing Federal emergency management framework there is very little incentive to do so.
Demands for quick and accurate life cycle assessments create a need for methods to rapidly generate reliable life cycle inventories (LCI). Data mining is a suitable tool for this purpose, especially given the large amount of available governmental data. These data are typically a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, H.; Eki, Y.; Kaji, A.
1993-12-01
An expert system which can support operators of fossil power plants in creating the optimum startup schedule and executing it accurately is described. The optimum turbine speed-up and load-up pattern is obtained through an iterative manner which is based on fuzzy resonating using quantitative calculations as plant dynamics models and qualitative knowledge as schedule optimization rules with fuzziness. The rules represent relationships between stress margins and modification rates of the schedule parameters. Simulations analysis proves that the system provides quick and accurate plant startups.
Surrogate modeling of deformable joint contact using artificial neural networks.
Eskinazi, Ilan; Fregly, Benjamin J
2015-09-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Surrogate Modeling of Deformable Joint Contact using Artificial Neural Networks
Eskinazi, Ilan; Fregly, Benjamin J.
2016-01-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast for accurately predicting contrail formation over the contiguous United States (CONUS) is created from hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), GOES water vapor channel measurements, and surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
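A minimal sketch of how a logistic contrail-occurrence model of this kind could be fit is shown below; the predictor names (ice-relative humidity, temperature) and the synthetic data are assumptions, not the SURFACE/OUTBREAK model specification.

```python
# Illustrative logistic model relating meteorological predictors to contrail
# occurrence; the features and synthetic data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 2000
rhi  = rng.uniform(40, 140, n)    # relative humidity w.r.t. ice (%)
temp = rng.uniform(-70, -30, n)   # upper-tropospheric temperature (deg C)
# Synthetic truth: persistent contrails favored by ice supersaturation and cold air.
p = 1 / (1 + np.exp(-(0.08 * (rhi - 100) - 0.10 * (temp + 50))))
y = rng.random(n) < p

X = np.column_stack([rhi, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```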
Simulation of crash tests for high impact levels of a new bridge safety barrier
NASA Astrophysics Data System (ADS)
Drozda, Jiří; Rotter, Tomáš
2017-09-01
The purpose is to show the opportunity of a non-linear dynamic impact simulation and to explain the possibility of using finite element method (FEM) for developing new designs of safety barriers. The main challenge is to determine the means to create and validate the finite element (FE) model. The results of accurate impact simulations can help to reduce necessary costs for developing of a new safety barrier. The introductory part deals with the creation of the FE model, which includes the newly-designed safety barrier and focuses on the application of an experimental modal analysis (EMA). The FE model has been created in ANSYS Workbench and is formed from shell and solid elements. The experimental modal analysis, which was performed on a real pattern, was employed for measuring the modal frequencies and shapes. After performing the EMA, the FE mesh was calibrated after comparing the measured modal frequencies with the calculated ones. The last part describes the process of the numerical non-linear dynamic impact simulation in LS-DYNA. This simulation was validated after comparing the measured ASI index with the calculated ones. The aim of the study is to improve professional public knowledge about dynamic non-linear impact simulations. This should ideally lead to safer, more accurate and profitable designs.
Getting in touch--3D printing in forensic imaging.
Ebert, Lars Chr; Thali, Michael J; Ross, Steffen
2011-09-10
With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
2001-08-01
Utilization of green fluorescent protein for the identification of metastasis in an in vivo breast cancer model system. In Preparation. REPRINTS OF ALL...phenotype. Utilizing the SUM-159PT cell line stably transfected with pEGFP-Ci (enhanced green fluorescent protein) we have been able to successfully...accurately detected. To develop a model with enhanced resolution of micrometastases, we created a stable cell line expressing green fluorescent protein
Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions
NASA Technical Reports Server (NTRS)
Balmes, Etienne
1993-01-01
An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.
Endoscopic skull base training using 3D printed models with pre-existing pathology.
Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes
2015-03-01
Endoscopic base of skull surgery has been growing in acceptance in the recent past due to improvements in visualisation and micro instrumentation as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platy-basia and basilar invagination were created from the original MRI and CT imaging data of a patient. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to list the level of difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practice to perform complex procedures in a controlled fashion under the supervision of experts.
Verilog-A Device Models for Cryogenic Temperature Operation of Bulk Silicon CMOS Devices
NASA Technical Reports Server (NTRS)
Akturk, Akin; Potbhare, Siddharth; Goldsman, Neil; Holloway, Michael
2012-01-01
Verilog-A based cryogenic bulk CMOS (complementary metal oxide semiconductor) compact models are built for state-of-the-art silicon CMOS processes. These models accurately predict device operation at cryogenic temperatures down to 4 K. The models are compatible with commercial circuit simulators. The models extend the standard BSIM4 [Berkeley Short-channel IGFET (insulated-gate field-effect transistor) Model] type compact models by re-parameterizing existing equations, as well as adding new equations that capture the physics of device operation at cryogenic temperatures. These models will allow circuit designers to create optimized, reliable, and robust circuits operating at cryogenic temperatures.
Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties
Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon
2014-01-01
Purpose The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874
Frequency response of synthetic vocal fold models with linear and nonlinear material properties.
Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon
2012-10-01
The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.
Ozone (O3), a secondary pollutant, is created in part by emissions from anthropogenic and biogenic sources. It is necessary for local air quality agencies to accurately forecast ozone concentrations to warn the public of unhealthy air and to encourage people to volunta...
Of Needles and Haystacks: Building an Accurate Statewide Dropout Early Warning System in Wisconsin
ERIC Educational Resources Information Center
Knowles, Jared E.
2015-01-01
The state of Wisconsin has one of the highest four year graduation rates in the nation, but deep disparities among student subgroups remain. To address this the state has created the Wisconsin Dropout Early Warning System (DEWS), a predictive model of student dropout risk for students in grades six through nine. The Wisconsin DEWS is in use…
Finite element analyses of two dimensional, anisotropic heat transfer in wood
John F. Hunt; Hongmei Gu
2004-01-01
The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require analyses be based on fundamental material properties of the wood structure. Inputting basic orthogonal properties of the wood material alone are not sufficient for accurate modeling because wood is a combination of porous fiber cells that are aligned and mis-...
3D printing the pterygopalatine fossa: a negative space model of a complex structure.
Bannon, Ross; Parihar, Shivani; Skarparis, Yiannis; Varsou, Ourania; Cezayirli, Enis
2018-02-01
The pterygopalatine fossa is one of the most complex anatomical regions to understand. It is poorly visualized in cadaveric dissection and most textbooks rely on schematic depictions. We describe our approach to creating a low-cost, 3D model of the pterygopalatine fossa, including its associated canals and foramina, using an affordable "desktop" 3D printer. We used open source software to create a volume render of the pterygopalatine fossa from axial slices of a head computerised tomography scan. These data were then exported to a 3D printer to produce an anatomically accurate model. The resulting 'negative space' model of the pterygopalatine fossa provides a useful and innovative aid for understanding the complex anatomical relationships of the pterygopalatine fossa. This model was designed primarily for medical students; however, it will also be of interest to postgraduates in ENT, ophthalmology, neurosurgery, and radiology. The technical process described may be replicated by other departments wishing to develop their own anatomical models whilst incurring minimal costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, C; Xing, L; Yu, S
Purpose: A correct body contour is essential for the accuracy of dose calculation in radiation therapy. While modern medical imaging technologies provide highly accurate representations of body contours, there are times when a patient’s anatomy cannot be fully captured or there is a lack of easy access to CT/MRI scanning. Recently, handheld cameras have emerged that are capable of performing three dimensional (3D) scans of patient surface anatomy. By combining 3D camera and medical imaging data, the patient’s surface contour can be fully captured. Methods: A proof-of-concept system matches a patient surface model, created using a handheld stereo depth camera (DC), to the available areas of a body contour segmented from a CT scan. The matched surface contour is then converted to a DICOM structure and added to the CT dataset to provide additional contour information. In order to evaluate the system, a 3D model of a patient was created by segmenting the body contour with a treatment planning system (TPS) and fabricated with a 3D printer. A DC and associated software were used to create a 3D scan of the printed phantom. The surface created by the camera was then registered to a CT model that had been cropped to simulate missing scan data. The aligned surface was then imported into the TPS and compared with the originally segmented contour. Results: The RMS error for the alignment between the camera and cropped CT models was 2.26 mm. Mean distance between the aligned camera surface and ground truth model was −1.23 +/−2.47 mm. Maximum deviations were < 1 cm and occurred in areas of high concavity or where anatomy was close to the couch. Conclusion: The proof-of-concept study shows an accurate, easy and affordable method to extend medical imaging for radiation therapy planning using 3D cameras without additional radiation. Intel provided the camera hardware used in this study.
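The alignment metrics quoted above (RMS error and maximum deviation) can be reproduced on any pair of registered point clouds with a nearest-neighbor query; the sketch below uses synthetic stand-in surfaces rather than the study's camera and CT data.

```python
# Sketch of the reported comparison metrics: RMS error and maximum deviation
# between an aligned camera surface and a reference CT contour, computed from
# nearest-neighbor distances. Point clouds here are synthetic stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
reference = rng.uniform(0, 100, size=(5000, 3))              # "CT" surface points (mm)
aligned   = reference + rng.normal(0, 2.0, reference.shape)  # "camera" surface after registration

tree = cKDTree(reference)
dist, _ = tree.query(aligned)          # unsigned nearest-neighbor distances (mm)
rms = np.sqrt(np.mean(dist ** 2))
print(f"RMS error: {rms:.2f} mm, max deviation: {dist.max():.2f} mm")
```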
History and Evolution of the Johnson Criteria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjaardema, Tracy A.; Smith, Collin S.; Birch, Gabriel Carisle
The Johnson Criteria metric calculates probability of detection of an object imaged by an optical system, and was created in 1958 by John Johnson. As understanding of target detection has improved, detection models have evolved to better model additional factors such as weather, scene content, and object placement. The initial Johnson Criteria, while sufficient for technology and understanding at the time, does not accurately reflect current research into target acquisition and technology. Even though current research shows a dependence on human factors, there appears to be a lack of testing and modeling of human variability.
NASA Astrophysics Data System (ADS)
Perama, Yasmin Mohd Idris; Siong, Khoo Kok
2018-04-01
A mathematical model comprising 8 compartments was designed to describe the kinetic dissolution of arsenic (As) from water leach purification (WLP) waste samples ingested into the gastrointestinal system. A totally reengineered software system named Simulation, Analysis and Modelling II (SAAM II) was employed to aid in the experimental design and data analysis. As a powerful tool that creates, simulates, and analyzes data accurately and rapidly, SAAM II computationally creates a system of ordinary differential equations according to the specified compartmental model structure and simulates the solutions based upon the parameter and model inputs provided. The in vitro DIN experimental design was applied to create artificial gastric and gastrointestinal fluids. These synthetic fluid assays were produced to determine the concentrations of As ingested into the gastrointestinal tract. The model outputs were created based upon the experimental inputs and the recommended fractional transfer rate parameters. As a result, the measured and predicted As concentrations in gastric fluids were very similar throughout the study period. In contrast, the measured and predicted concentrations of As in the gastrointestinal fluids were similar only during the first hour, after which agreement decreased through the fifth hour of the study. This is due to the loss of As through the fractional transfer rates from compartment q2 to the corresponding compartments q3 and q5, which are involved with excretion and distribution to the whole body, respectively. The model outputs obtained after the best fit to the data were influenced significantly by the fractional transfer rates between each compartment. Therefore, a series of compartmental models created with the associated fractional transfer rate parameters, with the aid of SAAM II, provides a better estimation of the kinetic behavior of As ingested into the gastrointestinal system.
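The kind of compartmental system SAAM II assembles from a model diagram can be written directly as first-order ODEs with fractional transfer rates; the toy sketch below uses a four-compartment layout and illustrative rate constants, not the study's eight compartments or fitted values.

```python
# Toy compartmental model of gastrointestinal dissolution/absorption, solved as
# a system of ODEs with first-order fractional transfer rates, analogous to the
# equations SAAM II constructs from a compartment diagram. Rate constants are
# illustrative assumptions, not the study's fitted values.
import numpy as np
from scipy.integrate import solve_ivp

k12, k23, k25 = 0.8, 0.3, 0.2   # 1/h: gastric->GI, GI->excretion, GI->body

def dqdt(t, q):
    q1, q2, q3, q5 = q          # gastric, GI tract, excreted, systemic
    return [-k12 * q1,
             k12 * q1 - (k23 + k25) * q2,
             k23 * q2,
             k25 * q2]

sol = solve_ivp(dqdt, (0, 5), [1.0, 0.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 5, 6)
for ti, (q1, q2, q3, q5) in zip(t, sol.sol(t).T):
    print(f"t={ti:.0f} h  gastric={q1:.3f}  GI={q2:.3f}  excreted={q3:.3f}  body={q5:.3f}")
```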
Textured digital elevation model formation from low-cost UAV LADAR/digital image data
NASA Astrophysics Data System (ADS)
Bybee, Taylor C.; Budge, Scott E.
2015-05-01
Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low-cost, only coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Capable realistic simulations of eddy current inspections are required for model assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real world situations including varying probe dimensions and orientations along with complex probe geometries. This will also enable creation of a probe model library database with variable parameters. Verification and validation was performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter based eddy current models for alternate project applications.
Farooqi, Kanwal M; Lengua, Carlos Gonzalez; Weinberg, Alan D; Nielsen, James C; Sanz, Javier
2016-08-01
The method of cardiac magnetic resonance (CMR) three-dimensional (3D) image acquisition and post-processing which should be used to create optimal virtual models for 3D printing has not been studied systematically. Patients (n = 19) who had undergone CMR including both 3D balanced steady-state free precession (bSSFP) imaging and contrast-enhanced magnetic resonance angiography (MRA) were retrospectively identified. Post-processing for the creation of virtual 3D models involved using both myocardial (MS) and blood pool (BP) segmentation, resulting in four groups: Group 1-bSSFP/MS, Group 2-bSSFP/BP, Group 3-MRA/MS and Group 4-MRA/BP. The models created were assessed by two raters for overall quality (1-poor; 2-good; 3-excellent) and ability to identify predefined vessels (1-5: superior vena cava, inferior vena cava, main pulmonary artery, ascending aorta and at least one pulmonary vein). A total of 76 virtual models were created from 19 patient CMR datasets. The mean overall quality scores for Raters 1/2 were 1.63 ± 0.50/1.26 ± 0.45 for Group 1, 2.12 ± 0.50/2.26 ± 0.73 for Group 2, 1.74 ± 0.56/1.53 ± 0.61 for Group 3 and 2.26 ± 0.65/2.68 ± 0.48 for Group 4. The numbers of identified vessels for Raters 1/2 were 4.11 ± 1.32/4.05 ± 1.31 for Group 1, 4.90 ± 0.46/4.95 ± 0.23 for Group 2, 4.32 ± 1.00/4.47 ± 0.84 for Group 3 and 4.74 ± 0.56/4.63 ± 0.49 for Group 4. Models created using BP segmentation (Groups 2 and 4) received significantly higher ratings than those created using MS for both overall quality and number of vessels visualized (p < 0.05), regardless of the acquisition technique. There were no significant differences between Groups 1 and 3. The ratings for Raters 1 and 2 had good correlation for overall quality (ICC = 0.63) and excellent correlation for the total number of vessels visualized (ICC = 0.77). The intra-rater reliability was good for Rater A (ICC = 0.65). Three models were successfully printed on desktop 3D printers with good quality and accurate representation of the virtual 3D models. We recommend using BP segmentation with either MRA or bSSFP source datasets to create virtual 3D models for 3D printing. Desktop 3D printers can offer good quality printed models with accurate representation of anatomic detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rickey, Daniel; Leylek, Ahmet; Dubey, Arbind
Purpose: Treatment of skin cancers of the face using orthovoltage radiotherapy often requires lead shielding. However, creating a lead shield can be difficult because the face has complex and intricate contours. The traditional process, which involves creating a plaster mould of the patient’s face, can be difficult for patients. Our goal was to develop an improved process by using an optical scanner and 3D printer technology. Methods: The oncologist defined the treatment field by drawing on each patient’s skin. Three-dimensional images were acquired using a consumer-grade optical scanner. A 3D model of each patient’s face was processed with mesh editing software before being printed on a 3D printer. Using a hammer, a 3 mm thick layer of lead was formed to closely fit the contours of the model. A hole was then cut out to define the field. Results: The lead shields created were remarkably accurate and fit the contours of the patients. The hole defining the field left only a minimally sized site exposed to radiation, while the rest of the face was protected. It was easy to obtain perfect symmetry for the definition of parallel opposed beams. Conclusion: We are routinely using this technique to build lead shielding that wraps around the patient as an alternative to cut-outs. We also use it for treatment of the tip of the nose using a parallel opposed pair of beams with a wax nose block. We found this technique allows more accurate delineation of the cut-out and a more reproducible set-up.
A pilot study on the use of geometrically accurate face models to replicate ex vivo N95 mask fit.
Golshahi, Laleh; Telidetzki, Karla; King, Ben; Shaw, Diana; Finlay, Warren H
2013-01-01
To test the feasibility of replicating a face mask seal in vitro, we created 5 geometrically accurate reconstructions of the head and neck of an adult human subject using different materials. Three breathing patterns were simulated with each replica and an attached N95 mask. Quantitative fit testing on the subject and the replicas showed that none of the 5 isotropic materials used allowed duplication of the ex vivo mask seal for the specific mask-face combination studied. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
An Equation for Moist Entropy in a Precipitating and Icy Atmosphere
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Simpson, Joanne; Zeng, Xiping
2003-01-01
Moist entropy is nearly conserved in adiabatic motion. It is redistributed rather than created by moist convection. Thus moist entropy and its equation, as a healthy direction, can be used to construct analytical and numerical models for the interaction between tropical convective clouds and large-scale circulations. Hence, an accurate equation of moist entropy is needed for the analysis and modeling of atmospheric convective clouds. On the basis of the consistency between the energy and the entropy equations, a complete equation of moist entropy is derived from the energy equation. The equation expresses explicitly the internal and external sources of moist entropy, including those in relation to the microphysics of clouds and precipitation. In addition, an accurate formula for the surface flux of moist entropy from the underlying surface into the air above is derived. Because moist entropy deals "easily" with the transition among three water phases, it will be used as a prognostic variable in the next generation of cloud-resolving models (e. g. a global cloud-resolving model) for low computational noise. Its equation that is derived in this paper is accurate and complete, providing a theoretical basis for using moist entropy as a prognostic variable in the long-term modeling of clouds and large-scale circulations.
Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information
NASA Technical Reports Server (NTRS)
Butts, Glenn
2007-01-01
Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic Application (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.
Conceptual model of sedimentation in the Sacramento-San Joaquin River Delta
Schoellhamer, David H.; Wright, Scott A.; Drexler, Judith Z.
2012-01-01
Sedimentation in the Sacramento–San Joaquin River Delta builds the Delta landscape, creates benthic and pelagic habitat, and transports sediment-associated contaminants. Here we present a conceptual model of sedimentation that includes submodels for river supply from the watershed to the Delta, regional transport within the Delta and seaward exchange, and local sedimentation in open water and marsh habitats. The model demonstrates feedback loops that affect the Delta ecosystem. Submerged and emergent marsh vegetation act as ecosystem engineers that can create a positive feedback loop by decreasing suspended sediment, increasing water column light, which in turn enables more vegetation. Sea-level rise in open water is partially countered by a negative feedback loop that increases deposition if there is a net decrease in hydrodynamic energy. Manipulation of regional sediment transport is probably the most feasible method to control suspended sediment and thus turbidity. The conceptual model is used to identify information gaps that need to be filled to develop an accurate sediment transport model.
Inverse Force Determination on a Small Scale Launch Vehicle Model Using a Dynamic Balance
NASA Technical Reports Server (NTRS)
Ngo, Christina L.; Powell, Jessica M.; Ross, James C.
2017-01-01
A launch vehicle can experience large unsteady aerodynamic forces in the transonic regime that, while usually only lasting for tens of seconds during launch, could be devastating if structural components and electronic hardware are not designed to account for them. These aerodynamic loads are difficult to experimentally measure and even harder to computationally estimate. The current method for estimating buffet loads is through the use of a few hundred unsteady pressure transducers and wind tunnel testing. Even with a large number of point measurements, the computed integrated load is not an accurate enough representation of the total load caused by buffeting. This paper discusses an attempt at using a dynamic balance to experimentally determine buffet loads on a generic scale hammer head launch vehicle model tested at NASA Ames Research Center's 11' x 11' transonic wind tunnel. To use a dynamic balance, the structural characteristics of the model needed to be identified so that the natural modal response could be removed from the aerodynamic forces. A finite element model was created on a simplified version of the model to evaluate the natural modes of the balance flexures, assist in model design, and to compare to experimental data. Several modal tests were conducted on the model in two different configurations to check for non-linearity, and to estimate the dynamic characteristics of the model. The experimental results were used in an inverse force determination technique with a pseudo-inverse frequency response function. Due to the non-linearity, the model not being axisymmetric, and inconsistent data between the two shake tests from the different mounting configurations, it was difficult to create a frequency response matrix that satisfied all input and output conditions for the wind tunnel configuration and accurately predicted unsteady aerodynamic loads.
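The inverse force determination step mentioned above, estimating input forces from measured responses through a pseudo-inverse frequency response function (FRF) matrix, amounts to solving F(w) = H(w)+ X(w) at each frequency. A hedged sketch with entirely synthetic FRF and response data follows.

```python
# Sketch of inverse force determination with a pseudo-inverse frequency
# response function (FRF) matrix: at each frequency, estimate the input force
# vector from measured responses via F = pinv(H) @ X. The FRF and responses
# here are synthetic stand-ins, not balance calibration data.
import numpy as np

rng = np.random.default_rng(3)
n_freq, n_out, n_in = 50, 6, 3

H = rng.normal(size=(n_freq, n_out, n_in)) + 1j * rng.normal(size=(n_freq, n_out, n_in))
F_true = rng.normal(size=(n_freq, n_in)) + 1j * rng.normal(size=(n_freq, n_in))
X = np.einsum('foi,fi->fo', H, F_true)          # simulated measured responses
X += 0.01 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

F_est = np.stack([np.linalg.pinv(H[f]) @ X[f] for f in range(n_freq)])
print("relative error:", np.linalg.norm(F_est - F_true) / np.linalg.norm(F_true))
```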
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Boben, Joseph; Kostov, Nikolay; Boswell, Cody; Buscher, Austin
2013-12-01
To increase aerodynamic performance, the geometric porosity of a ringsail spacecraft parachute canopy is sometimes increased, beyond the "rings" and "sails" with hundreds of "ring gaps" and "sail slits." This creates extra computational challenges for fluid-structure interaction (FSI) modeling of clusters of such parachutes, beyond those created by the lightness of the canopy structure, geometric complexities of hundreds of gaps and slits, and the contact between the parachutes of the cluster. In FSI computation of parachutes with such "modified geometric porosity," the flow through the "windows" created by the removal of the panels and the wider gaps created by the removal of the sails cannot be accurately modeled with the Homogenized Modeling of Geometric Porosity (HMGP), which was introduced to deal with the hundreds of gaps and slits. The flow needs to be actually resolved. All these computational challenges need to be addressed simultaneously in FSI modeling of clusters of spacecraft parachutes with modified geometric porosity. The core numerical technology is the Stabilized Space-Time FSI (SSTFSI) technique, and the contact between the parachutes is handled with the Surface-Edge-Node Contact Tracking (SENCT) technique. In the computations reported here, in addition to the SSTFSI and SENCT techniques and HMGP, we use the special techniques we have developed for removing the numerical spinning component of the parachute motion and for restoring the mesh integrity without a remesh. We present results for 2- and 3-parachute clusters with two different payload models.
Discrete Element Modeling of Impact Damage on Thermal Barrier Coatings
NASA Astrophysics Data System (ADS)
Minor, Peter Michel
Natural gas turbines have become an increasingly important part of the energy landscape in the United States, currently accounting for 19% of all electricity production. Efforts to increase thermal efficiency in gas turbines have led to the adoption of highly porous ceramic thermal barrier coatings (TBCs), which are susceptible to erosion and foreign object impact damage. Despite significant investment to improve the design of TBCs, few numerical tools exist which are capable of both accurately capturing the specific failure mechanisms inherent to TBCs and iterating design parameters without the requirement for coupled experimental data. To overcome these limitations, a discrete element model (DEM) was created to simulate the microstructure of a TBC using a large-scale assembly of bonded particles. Acting as Lagrangian nodes, the particles can be combined to create accurate representations of TBC geometry and porosity. The inclusion of collision-driven particle dynamics and bonds derived from displacement-dependent force functions endows the microstructure model with the ability to deform and reproduce damage in a highly physical manner. Typical TBC damage mechanisms such as compaction, fracture and spallation occur automatically, without having to tune the model based on experimental observation. Therefore, the first order performance of novel TBC designs and materials can be determined numerically, greatly decreasing the cost of development. To verify the utility and effectiveness of the proposed damage model framework, a nanoindentation materials test simulation was developed to serve as a test case. By varying model parameters, such as the porosity of the TBC and maximum applied indenter force, nanoindentation data from more than one hundred distinct permutations was gathered and analyzed. This data was used to calculate the elastic modulus (E) and hardness (H) of the simulated microstructure, which could then be compared to known experimental material property values. A good correlation was found between the predicted properties calculated by the model and those found through experimental nanoindentation tests. Furthermore, conforming to the benefits of DEM, the model was able to accurately recreate the same material damage characteristics observed in the literature, such as the onset of inelastic deformation from fracture.
MicroRNA based Pan-Cancer Diagnosis and Treatment Recommendation.
Cheerla, Nikhil; Gevaert, Olivier
2017-01-13
The current state-of-the-art in cancer diagnosis and treatment is not ideal; diagnostic tests are accurate but invasive, and treatments are "one-size-fits-all" instead of being personalized. Recently, miRNAs have garnered significant attention as cancer biomarkers, owing to their ease of access (circulating miRNA in the blood) and stability. There have been many studies showing the effectiveness of miRNA data in diagnosing specific cancer types, but few studies explore the role of miRNA in predicting treatment outcome. Here we go a step further, using tissue miRNA and clinical data across 21 cancers from The Cancer Genome Atlas (TCGA) database. We use machine learning techniques to create an accurate pan-cancer diagnosis system, and a prediction model for treatment outcomes. Finally, using these models, we create a web-based tool that diagnoses cancer and recommends the best treatment options. We achieved 97.2% accuracy for classification using a support vector machine classifier with a radial basis function kernel. The accuracies improved to 99.9-100% when climbing up the embryonic tree and classifying cancers at different stages. We define the accuracy as the ratio of the total number of instances correctly classified to the total instances. The classifier also performed well, achieving greater than 80% sensitivity for many cancer types on independent validation datasets. Many miRNAs selected by our feature selection algorithm had strong previous associations to various cancers and tumor progression. Then, using miRNA, clinical and treatment data and encoding it in a machine-learning readable format, we built a prognosis predictor model to predict the outcome of treatment with 85% accuracy. We used this model to create a tool that recommends personalized treatment regimens. Both the diagnosis and prognosis models, incorporating semi-supervised learning techniques to improve their accuracies with repeated use, were uploaded online for easy access. Our research is a step towards the final goal of diagnosing cancer and predicting treatment recommendations using non-invasive blood tests.
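A bare-bones version of the classification stage, an RBF-kernel support vector machine scored by cross-validation, might look like the sketch below; the expression matrix is synthetic, whereas the real system uses TCGA miRNA-seq profiles and adds feature selection and semi-supervised updating.

```python
# Illustrative pan-cancer classification from miRNA expression with an RBF-kernel
# support vector machine. The expression matrix and labels are synthetic; real
# work would use TCGA miRNA-seq profiles and tissue-of-origin labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_per_class, n_mirnas, n_classes = 100, 200, 5
X = np.vstack([rng.normal(loc=c, scale=3.0, size=(n_per_class, n_mirnas))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```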
High-Accurate, Physics-Based Wake Simulation Techniques
2015-01-27
to accepting the use of computational fluid dynamics models to supplement some of the research. The scientists Lewellen and Lewellen [13] in 1996...resolved in today's climate especially concerning CFD and experimental. Multiple programs have been established such as the Aircraft Vortex Spacing ...step the entire matrix is solved at once creating inconsistencies when applied to the physics of a fluid mechanics problem where information changes
NASA Astrophysics Data System (ADS)
Sommer, Kelsey; Izzo, Rick L.; Shepard, Lauren; Podgorsak, Alexander R.; Rudin, Stephen; Siddiqui, Adnan H.; Wilson, Michael F.; Angel, Erin; Said, Zaid; Springer, Michael; Ionita, Ciprian N.
2017-03-01
3D printing has been used to create complex arterial phantoms to advance device testing and physiological condition evaluation. Stereolithographic (STL) files of patient-specific cardiovascular anatomy are acquired to build cardiac vasculature through advanced mesh-manipulation techniques. Management of distal branches in the arterial tree is important to make such phantoms practicable. We investigated methods to manage the distal arterial flow resistance and pressure thus creating physiologically and geometrically accurate phantoms that can be used for simulations of image-guided interventional procedures with new devices. Patient specific CT data were imported into a Vital Imaging workstation, segmented, and exported as STL files. Using a mesh-manipulation program (Meshmixer) we created flow models of the coronary tree. Distal arteries were connected to a compliance chamber. The phantom was then printed using a Stratasys Connex3 multimaterial printer: the vessel in TangoPlus and the fluid flow simulation chamber in Vero. The model was connected to a programmable pump and pressure sensors measured flow characteristics through the phantoms. Physiological flow simulations for patient-specific vasculature were done for six cardiac models (three different vasculatures comparing two new designs). For the coronary phantom we obtained physiologically relevant waves which oscillated between 80 and 120 mmHg and a flow rate of 125 ml/min, within the literature reported values. The pressure wave was similar with those acquired in human patients. Thus we demonstrated that 3D printed phantoms can be used not only to reproduce the correct patient anatomy for device testing in image-guided interventions, but also for physiological simulations. This has great potential to advance treatment assessment and diagnosis.
Quantitative phenomenological model of the BOLD contrast mechanism
NASA Astrophysics Data System (ADS)
Dickson, John D.; Ash, Tom W. J.; Williams, Guy B.; Sukstanskii, Alexander L.; Ansorge, Richard E.; Yablonskiy, Dmitriy A.
2011-09-01
Different theoretical models of the BOLD contrast mechanism are used for many applications including BOLD quantification (qBOLD) and vessel size imaging, both in health and disease. Each model simplifies the system under consideration, making approximations about the structure of the blood vessel network and diffusion of water molecules through inhomogeneities in the magnetic field created by deoxyhemoglobin-containing blood vessels. In this study, Monte-Carlo methods are used to simulate the BOLD MR signal generated by diffusing water molecules in the presence of long, cylindrical blood vessels. Using these simulations we introduce a new, phenomenological model that is far more accurate over a range of blood oxygenation levels and blood vessel radii than existing models. This model could be used to extract physiological parameters of the blood vessel network from experimental data in BOLD-based experiments. We use our model to establish ranges of validity for the existing analytical models of Yablonskiy and Haacke, Kiselev and Posse, Sukstanskii and Yablonskiy (extended to the case of arbitrary time in the spin echo sequence) and Bauer et al. (extended to the case of randomly oriented cylinders). Although these models are shown to be accurate in the limits of diffusion under which they were derived, none of them is accurate for the whole physiological range of blood vessels radii and blood oxygenation levels. We also show the extent of systematic errors that are introduced due to the approximations of these models when used for BOLD signal quantification.
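The Monte-Carlo approach described here, spins random-walking through the field perturbation of cylindrical vessels while accumulating phase, can be illustrated with a deliberately simplified two-dimensional sketch; the field amplitude, vessel radius, and diffusivity below are illustrative assumptions rather than the paper's parameters, and only a single extravascular cylinder is modeled.

```python
# Toy Monte-Carlo sketch of extravascular BOLD dephasing: protons random-walk
# around one infinite cylinder (perpendicular to B0), accumulate phase from the
# cylinder's (R/r)^2 * cos(2*phi) field perturbation, and the signal is the
# ensemble average of exp(i*phase). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_spins, n_steps = 20000, 400
dt, D = 0.1e-3, 1.0e-9            # time step (s), water diffusivity (m^2/s)
R, d_omega = 10e-6, 200.0         # vessel radius (m), peak frequency offset (rad/s)
box = 100e-6                      # half-width of the 2-D simulation box (m)

def delta_omega(x, y):
    r2 = x**2 + y**2
    outside = r2 > R**2
    dw = np.zeros_like(x)
    # Outside the cylinder: (R/r)^2 * cos(2*phi), with cos(2*phi) = (x^2 - y^2)/r^2.
    dw[outside] = d_omega * (R**2 / r2[outside]) * \
                  (x[outside]**2 - y[outside]**2) / r2[outside]
    return dw

pos = rng.uniform(-box, box, size=(n_spins, 2))
phase = np.zeros(n_spins)
step = np.sqrt(2 * D * dt)
for _ in range(n_steps):
    phase += delta_omega(pos[:, 0], pos[:, 1]) * dt
    pos += rng.normal(0.0, step, size=pos.shape)

signal = np.abs(np.mean(np.exp(1j * phase)))
print("normalized gradient-echo signal at TE =", n_steps * dt, "s :", signal)
```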
NASA Astrophysics Data System (ADS)
Pai, H.; Tyler, S.
2017-12-01
Small, unmanned aerial systems (sUAS) are quickly becoming a cost-effective and easily deployable tool for high spatial resolution environmental sensing. Land surface studies from sUAS imagery have largely focused on accurate topographic mapping, quantifying geomorphologic changes, and classification/identification of vegetation, sediment, and water quality tracers. In this work, we explore a further application of sUAS-derived topographic mapping to a two-dimensional (2-d), depth-averaged river hydraulic model (Flow and Sediment Transport with Morphological Evolution of Channels, FaSTMECH) along a short, meandering reach of East River, Colorado. On August 8, 2016, we flew a sUAS as part of the Center for Transformative Environmental Monitoring Programs with a consumer-grade visible camera and created a digital elevation map ( 1.5 cm resolution; 5 cm accuracy; 500 m long river corridor) with Agisoft Photoscan software. With the elevation map, we created a longitudinal water surface elevation (WSE) profile by manually delineating the bank-water interface and river bathymetry by applying refraction corrections for more accurate water depth estimates, an area of ongoing research for shallow and clear river systems. We tested both uncorrected and refraction-corrected bathymetries with the steady-state, 2-d model, applying sensitivities for dissipation parameters (bed roughness and eddy characteristics). Model performance was judged from the WSE data and measured stream velocities. While the models converged, performance and insights from model output could be improved with better bed roughness characterization and additional water depth cross-validation for refraction corrections. Overall, this work shows the applicability of sUAS-derived products to a multidimensional river model, where bathymetric data of high resolution and accuracy are key model input requirements.
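One widely used simple refraction correction for through-water structure-from-motion depths scales the apparent depth by the refractive index of water (about 1.34) for near-nadir views; the sketch below assumes that generic correction and is not necessarily the one applied in this study.

```python
# Minimal sketch of a simple refraction correction for through-water depths:
# for near-nadir imagery, apparent depths are multiplied by the refractive
# index of water (~1.34) to estimate true depth. Example values are made up.
N_WATER = 1.34

def correct_bed_elevation(wse, apparent_bed_elevation):
    """Return a refraction-corrected bed elevation from an apparent one."""
    apparent_depth = wse - apparent_bed_elevation
    return wse - N_WATER * apparent_depth

wse = 2710.50            # water surface elevation (m), hypothetical value
apparent_bed = 2710.10   # bed elevation seen in the uncorrected DEM (m)
print("corrected bed elevation:", correct_bed_elevation(wse, apparent_bed), "m")
```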
Karschner, Erin L; Schwope, David M; Schwilke, Eugene W; Goodwin, Robert S; Kelly, Deanna L; Gorelick, David A; Huestis, Marilyn A
2012-10-01
Determining time since last cannabis/Δ9-tetrahydrocannabinol (THC) exposure is important in clinical, workplace, and forensic settings. Mathematical models calculating time of last exposure from whole blood concentrations typically employ a theoretical 0.5 whole blood-to-plasma (WB/P) ratio. No studies previously evaluated predictive models utilizing empirically-derived WB/P ratios, or whole blood cannabinoid pharmacokinetics after subchronic THC dosing. Ten male chronic, daily cannabis smokers received escalating around-the-clock oral THC (40-120 mg daily) for 8 days. Cannabinoids were quantified in whole blood and plasma by two-dimensional gas chromatography-mass spectrometry. Maximum whole blood THC occurred 3.0 h after the first oral THC dose and 103.5h (4.3 days) during multiple THC dosing. Median WB/P ratios were THC 0.63 (n=196), 11-hydroxy-THC 0.60 (n=189), and 11-nor-9-carboxy-THC (THCCOOH) 0.55 (n=200). Predictive models utilizing these WB/P ratios accurately estimated last cannabis exposure in 96% and 100% of specimens collected within 1-5h after a single oral THC dose and throughout multiple dosing, respectively. Models were only 60% and 12.5% accurate 12.5 and 22.5h after the last THC dose, respectively. Predictive models estimating time since last cannabis intake from whole blood and plasma cannabinoid concentrations were inaccurate during abstinence, but highly accurate during active THC dosing. THC redistribution from large cannabinoid body stores and high circulating THCCOOH concentrations create different pharmacokinetic profiles than those in less than daily cannabis smokers that were used to derive the models. Thus, the models do not accurately predict time of last THC intake in individuals consuming THC daily. Published by Elsevier Ireland Ltd.
Electricity Markets, Smart Grids and Smart Buildings
NASA Astrophysics Data System (ADS)
Falcey, Jonathan M.
A smart grid is an electricity network that accommodates two-way power flows, and utilizes two-way communications and increased measurement, in order to provide more information to customers and aid in the development of a more efficient electricity market. The current electrical network is outdated and has many shortcomings relating to power flows, inefficient electricity markets, generation/supply balance, a lack of information for the consumer and insufficient consumer interaction with electricity markets. Many of these challenges can be addressed with a smart grid, but there remain significant barriers to the implementation of a smart grid. This paper proposes a novel method for the development of a smart grid utilizing a bottom-up approach (starting with smart buildings/campuses) with the goal of providing the framework and infrastructure necessary for a smart grid instead of the more traditional approach (installing many smart meters and hoping a smart grid emerges). This novel approach involves combining deterministic and statistical methods in order to accurately estimate building electricity use down to the device level. It provides model users with a cheaper alternative to energy audits and extensive sensor networks (the current methods of quantifying electrical use at this level) which increases their ability to modify energy consumption and respond to price signals. The results of this method are promising, but they are still preliminary. On days when there were no missing or inaccurate data, this approach has an R2 of about 0.84, sometimes as high as 0.94 when compared to measured results. However, there were many days where missing data brought overall accuracy down significantly. In addition, the development and implementation of the calibration process is still underway and some functional additions must be made in order to maximize accuracy. The calibration process must be completed before a reliable accuracy can be determined. While this work shows that a combination of deterministic and statistical methods can accurately forecast building energy usage, the ability to produce accurate results is heavily dependent upon software availability, accurate data and the proper calibration of the model. Creating the software required for a smart building model is time consuming and expensive. Bad or missing data have significant negative impacts on the accuracy of the results and can be caused by a hodgepodge of equipment and communication protocols. Proper calibration of the model is essential to ensure that the device level estimations are sufficiently accurate. Any building model which is to be successful at creating a smart building must be able to overcome these challenges.
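The deterministic-plus-statistical idea and the R2 figure quoted above can be illustrated with a toy sketch: a schedule-based load estimate is corrected by a regression on weather, then scored against metered data. All schedules, coefficients, and data below are synthetic assumptions, not the thesis's model.

```python
# Hedged sketch of combining a deterministic device-schedule estimate with a
# statistical correction, then scoring against metered data with R^2 (the
# metric quoted in the abstract). Schedules, weather response, and data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
hours = np.arange(24 * 30)                      # one month of hourly data
occupied = ((hours % 24 >= 8) & (hours % 24 < 18)).astype(float)

deterministic_kw = 20 + 35 * occupied           # baseline + scheduled device loads
outdoor_temp = 15 + 10 * np.sin(2 * np.pi * hours / 24)
measured_kw = deterministic_kw + 1.2 * np.maximum(outdoor_temp - 18, 0) \
              + rng.normal(0, 3, hours.size)    # cooling load + metering noise

# Statistical layer: regress the residual on weather, then add the correction back.
residual = measured_kw - deterministic_kw
X = np.maximum(outdoor_temp - 18, 0).reshape(-1, 1)
correction = LinearRegression().fit(X, residual).predict(X)
predicted_kw = deterministic_kw + correction

print("R^2 vs. metered load:", round(r2_score(measured_kw, predicted_kw), 3))
```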
Knox, K; Kerber, Charles W; Singel, S A; Bailey, M J; Imbesi, S G
2005-05-01
Our goal was to develop and prove the accuracy of a system that would allow us to re-create live patient arterial pathology. Anatomically accurate replicas of blood vessels could allow physicians to teach and practice dangerous interventional techniques and might also be used to gather basic physiologic information. The preparation of replicas has, until now, depended on acquisition of fresh cadaver material. Using rapid prototyping, it should be able to replicate vascular pathology in a live patient. We obtained CT angiographic scan data from two patients with known arterial abnormalities. We took such data and, using proprietary software, created a 3D replica using a commercially available rapid prototyping machine. From the prototypes, using a lost wax technique, we created vessel replicas, placed those replicas in the CT scanner, then compared those images with the original scans. Comparison of the images made directly from the patient and from the replica showed that with each step, the relationships were maintained, remaining within 3% of the original, but some smoothing occurred in the final computer manipulation. From routinely obtainable CT angiographic data, it is possible to create accurate replicas of human vascular pathology with the aid of commercially available stereolithography equipment. Visual analysis of the images appeared to be as important as the measurements. With 64 and 128 slice detector scanners becoming available, acquisition times fall enough that we should be able to model rapidly moving structures such as the aortic root. (c) 2005 Wiley-Liss, Inc.
Machine Learning Techniques for Prediction of Early Childhood Obesity.
Dugan, T M; Mukhopadhyay, S; Carroll, A; Downs, S
2015-01-01
This paper aims to predict childhood obesity after age two, using only data collected prior to the second birthday by a clinical decision support system called CHICA. Analyses of six different machine learning methods: RandomTree, RandomForest, J48, ID3, Naïve Bayes, and Bayes trained on CHICA data show that an accurate, sensitive model can be created. Of the methods analyzed, the ID3 model trained on the CHICA dataset proved the best overall performance with accuracy of 85% and sensitivity of 89%. Additionally, the ID3 model had a positive predictive value of 84% and a negative predictive value of 88%. The structure of the tree also gives insight into the strongest predictors of future obesity in children. Many of the strongest predictors seen in the ID3 modeling of the CHICA dataset have been independently validated in the literature as correlated with obesity, thereby supporting the validity of the model. This study demonstrated that data from a production clinical decision support system can be used to build an accurate machine learning model to predict obesity in children after age two.
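A hedged sketch of this workflow, a decision tree trained on pre-age-two features and scored with accuracy, sensitivity, PPV, and NPV, is shown below; scikit-learn's entropy-criterion tree stands in for ID3, and the features and data are synthetic rather than CHICA records.

```python
# Sketch of training a decision tree on pre-age-two clinical features to predict
# later obesity, reporting the metrics used in the abstract (accuracy,
# sensitivity, PPV, NPV). The features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
n = 4000
weight_for_length_z = rng.normal(0, 1, n)       # hypothetical early-growth feature
parent_obesity = rng.random(n) < 0.3            # hypothetical family-history feature
p = 1 / (1 + np.exp(-(1.5 * weight_for_length_z + 1.0 * parent_obesity - 1.5)))
obese_after_two = rng.random(n) < p

X = np.column_stack([weight_for_length_z, parent_obesity])
X_tr, X_te, y_tr, y_te = train_test_split(X, obese_after_two, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
print("accuracy   :", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))
print("PPV        :", tp / (tp + fp))
print("NPV        :", tn / (tn + fn))
```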
Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates
Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; ...
2013-03-07
In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
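The ordinal-regression idea can be sketched loosely by fitting one binary logistic model per charge threshold and differencing the cumulative probabilities; the example below is such a sketch on synthetic fragments and is not Basophile's actual parameterization (note that separately fitted thresholds are not guaranteed to be mutually consistent).

```python
# Hedged sketch of ordinal-style prediction of fragment charge (1+, 2+, 3+):
# fit one binary logistic model per threshold P(charge > k) from fragment length
# and basic-residue count, then derive class probabilities by differencing.
# Synthetic data; not Basophile's model or training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 3000
length = rng.integers(2, 30, n)                 # fragment length (residues)
basics = rng.binomial(n=4, p=0.25, size=n)      # count of K/R/H in the fragment
score = 0.15 * length + 1.2 * basics + rng.normal(0, 1, n)
charge = np.digitize(score, [3.0, 6.0]) + 1     # synthetic ordinal labels 1..3

X = np.column_stack([length, basics])
thresholds = [1, 2]                             # models for P(charge > 1), P(charge > 2)
models = [LogisticRegression().fit(X, charge > k) for k in thresholds]

def charge_probs(x):
    x = np.asarray(x).reshape(1, -1)
    p_gt = [m.predict_proba(x)[0, 1] for m in models]       # P(>1), P(>2)
    return np.array([1 - p_gt[0], p_gt[0] - p_gt[1], p_gt[1]])

print("P(1+), P(2+), P(3+) for a short fragment with one basic residue:",
      charge_probs([6, 1]).round(3))
```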
Progress in building a cognitive vision system
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Lyons, Damian; Yue, Hong
2016-05-01
We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.
3D printing from MRI Data: Harnessing strengths and minimizing weaknesses.
Ripley, Beth; Levin, Dmitry; Kelil, Tatiana; Hermsen, Joshua L; Kim, Sooah; Maki, Jeffrey H; Wilson, Gregory J
2017-03-01
3D printing facilitates the creation of accurate physical models of patient-specific anatomy from medical imaging datasets. While the majority of models to date are created from computed tomography (CT) data, there is increasing interest in creating models from other datasets, such as ultrasound and magnetic resonance imaging (MRI). MRI, in particular, holds great potential for 3D printing, given its excellent tissue characterization and lack of ionizing radiation. There are, however, challenges to 3D printing from MRI data as well. Here we review the basics of 3D printing, explore the current strengths and weaknesses of printing from MRI data as they pertain to model accuracy, and discuss considerations in the design of MRI sequences for 3D printing. Finally, we explore the future of 3D printing and MRI, including creative applications and new materials. J. Magn. Reson. Imaging 2017;45:635-645. © 2016 International Society for Magnetic Resonance in Medicine.
Electron Impact Ionization: A New Parameterization for 100 eV to 1 MeV Electrons
NASA Technical Reports Server (NTRS)
Fang, Xiaohua; Randall, Cora E.; Lummerzheim, Dirk; Solomon, Stanley C.; Mills, Michael J.; Marsh, Daniel; Jackman, Charles H.; Wang, Wenbin; Lu, Gang
2008-01-01
Low, medium and high energy electrons can penetrate to the thermosphere (90-400 km; 55-240 miles) and mesosphere (50-90 km; 30-55 miles). These precipitating electrons ionize that region of the atmosphere, creating positively charged atoms and molecules and knocking off other negatively charged electrons. The precipitating electrons also create nitrogen-containing compounds along with other constituents. Since the electron precipitation amounts change within minutes, it is necessary to have a rapid method of computing the ionization and production of nitrogen-containing compounds for inclusion in computationally-demanding global models. A new methodology has been developed, which has parameterized a more detailed model computation of the ionizing impact of precipitating electrons over the very large range of 100 eV up to 1,000,000 eV. This new parameterization method is more accurate than a previous parameterization scheme, when compared with the more detailed model computation. Global models at the National Center for Atmospheric Research will use this new parameterization method in the near future.
A Self-Folding Hydrogel In Vitro Model for Ductal Carcinoma.
Kwag, Hye Rin; Serbo, Janna V; Korangath, Preethi; Sukumar, Saraswati; Romer, Lewis H; Gracias, David H
2016-04-01
A significant challenge in oncology is the need to develop in vitro models that accurately mimic the complex microenvironment within and around normal and diseased tissues. Here, we describe a self-folding approach to create curved hydrogel microstructures that more accurately mimic the geometry of ducts and acini within the mammary glands, as compared to existing three-dimensional block-like models or flat dishes. The microstructures are composed of photopatterned bilayers of poly (ethylene glycol) diacrylate (PEGDA), a hydrogel widely used in tissue engineering. The PEGDA bilayers of dissimilar molecular weights spontaneously curve when released from the underlying substrate due to differential swelling ratios. The photopatterns can be altered via AutoCAD-designed photomasks so that a variety of ductal and acinar mimetic structures can be mass-produced. In addition, by co-polymerizing methacrylated gelatin (methagel) with PEGDA, microstructures with increased cell adherence are synthesized. Biocompatibility and versatility of our approach is highlighted by culturing either SUM159 cells, which were seeded postfabrication, or MDA-MB-231 cells, which were encapsulated in hydrogels; cell viability is verified over 9 and 15 days, respectively. We believe that self-folding processes and associated tubular, curved, and folded constructs like the ones demonstrated here can facilitate the design of more accurate in vitro models for investigating ductal carcinoma.
Shepard, Lauren; Sommer, Kelsey; Izzo, Richard; Podgorsak, Alexander; Wilson, Michael; Said, Zaid; Rybicki, Frank J; Mitsouras, Dimitrios; Rudin, Stephen; Angel, Erin; Ionita, Ciprian N
2017-02-11
Accurate patient-specific phantoms for device testing or endovascular treatment planning can be 3D printed. We expand the applicability of this approach for cardiovascular disease, in particular, for CT-geometry derived benchtop measurements of Fractional Flow Reserve, the reference standard for determination of significant individual coronary artery atherosclerotic lesions. Coronary CT Angiography (CTA) images during a single heartbeat were acquired with a 320×0.5mm detector row scanner (Toshiba Aquilion ONE). These coronary CTA images were used to create 4 patient-specific cardiovascular models with various grades of stenosis: severe, <75% (n=1); moderate, 50-70% (n=1); and mild, <50% (n=2). DICOM volumetric images were segmented using a 3D workstation (Vitrea, Vital Images); the output was used to generate STL files (using AutoDesk Meshmixer), and further processed to create 3D printable geometries for flow experiments. Multi-material printed models (Stratasys Connex3) were connected to a programmable pulsatile pump, and the pressure was measured proximal and distal to the stenosis using pressure transducers. Compliance chambers were used before and after the model to modulate the pressure wave. A flow sensor was used to ensure flow rates within physiological reported values. 3D model based FFR measurements correlated well with stenosis severity. FFR measurements for each stenosis grade were: 0.8 severe, 0.7 moderate and 0.88 mild. 3D printed models of patient-specific coronary arteries allows for accurate benchtop diagnosis of FFR. This approach can be used as a future diagnostic tool or for testing CT image-based FFR methods.
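A minimal sketch of the benchtop quantity being measured: FFR is the ratio of cycle-averaged distal to proximal pressure across the stenosis. The synthetic pressure traces below are assumptions for illustration, not data from the models described above.

    import numpy as np

    def fractional_flow_reserve(p_proximal, p_distal):
        """FFR: cycle-averaged distal pressure divided by proximal pressure."""
        return np.mean(p_distal) / np.mean(p_proximal)

    t = np.linspace(0.0, 1.0, 500)                  # one cardiac cycle
    p_prox = 100 + 20 * np.sin(2 * np.pi * t)       # proximal transducer (mmHg)
    p_dist = 0.8 * p_prox                           # ~20% pressure drop over the lesion
    print(round(fractional_flow_reserve(p_prox, p_dist), 2))   # -> 0.8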
Learning-based stochastic object models for characterizing anatomical variations
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua
2018-03-01
It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data from only a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
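A minimal sketch of the sampling idea, under the assumption that each organ's shape is encoded as a fixed-length vector: principal modes of variation are learned across training patients and new instances are drawn by randomly weighting those modes. The array sizes and the use of plain SVD/PCA are illustrative simplifications of the GAD approach described above.

    import numpy as np

    rng = np.random.default_rng(0)
    training_shapes = rng.normal(size=(40, 300))     # 40 patients x 300 shape coordinates

    mean_shape = training_shapes.mean(axis=0)
    centered = training_shapes - mean_shape
    _, s, Vt = np.linalg.svd(centered, full_matrices=False)
    modes = Vt[:5]                                   # leading principal modes of variation
    sigmas = s[:5] / np.sqrt(len(training_shapes) - 1)

    # one stochastic organ instance: mean shape plus randomly weighted principal modes
    weights = rng.normal(scale=sigmas)
    sampled_shape = mean_shape + weights @ modes
    print(sampled_shape.shape)                       # (300,)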
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirey, R; Wu, H
2016-06-15
Purpose: Treatment planning systems (TPS) may not accurately model superficial dose distributions of range shifted proton pencil beam scanning (PBS) treatments. Numerous patient-specific QA tests performed on superficially treated PBS plans have shown a consistent overestimate of dose by the TPS. This study quantifies variations between TPS planned dose and measured dose as a function of range shifter air gap and treatment depths up to 5 cm. Methods: PBS treatment plans were created in the TPS to uniformly irradiate a volume of solid water. One plan was created for each range shifter position analyzed, and all plans utilized identical dose optimization parameters. Each optimized plan was analyzed in the TPS to determine the planned dose at varying depths. A PBS proton therapy system with a 3.5 cm lucite range shifter delivered the treatment plans, and a parallel plate chamber embedded in RW3 solid water measured dose at shallow depths for each air gap. Differences between measured and planned doses were plotted and analyzed. Results: The data show that the TPS more accurately models superficial dose as the air gap between the range shifter and patient surface decreases. Air gaps less than 10 cm have an average dose difference of only 1.6%, whereas air gaps between 10 and 20 cm differ by 3.0% and gaps greater than 20 cm differ by 4.4%. Conclusion: This study has shown that the TPS is unable to accurately model superficial dose with a large range shifter air gap. Dose differences greater than 3% will likely cause QA failure, as many institutions analyze patient QA with a 3%/3mm gamma analysis. For superficial PBS therapy, range shifter positions should be chosen to keep the air gap less than 10 cm when patient setup and gantry geometry allow.
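A minimal sketch of the 3%/3 mm gamma analysis mentioned in the conclusion, in one dimension with a global dose-difference criterion; the depth grid and dose profiles are synthetic placeholders, not the measured data.

    import numpy as np

    def gamma_1d(x_eval, d_eval, x_ref, d_ref, dose_crit=0.03, dta_mm=3.0):
        """Gamma index at each evaluated point (global dose normalization)."""
        d_norm = dose_crit * d_ref.max()
        gammas = []
        for xe, de in zip(x_eval, d_eval):
            cap = np.sqrt(((x_ref - xe) / dta_mm) ** 2 + ((d_ref - de) / d_norm) ** 2)
            gammas.append(cap.min())
        return np.array(gammas)

    x = np.linspace(0, 50, 101)                      # depth (mm)
    planned = np.exp(-((x - 25) / 12.0) ** 2)        # toy TPS depth-dose profile
    measured = 0.96 * planned                        # measurement ~4% below planned
    g = gamma_1d(x, measured, x, planned)
    print(f"points passing (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")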
AtomDB: Expanding an Accessible and Accurate Atomic Database for X-ray Astronomy
NASA Astrophysics Data System (ADS)
Smith, Randall
Since its inception in 2001, the AtomDB has become the standard repository of accurate and accessible atomic data for the X-ray astrophysics community, including laboratory astrophysicists, observers, and modelers. Modern calculations of collisional excitation rates now exist - and are in AtomDB - for all abundant ions in a hot plasma. AtomDB has expanded beyond providing just a collisional model, and now also contains photoionization data from XSTAR as well as a charge exchange model, amongst others. However, building and maintaining an accurate and complete database that can fully exploit the diagnostic potential of high-resolution X-ray spectra requires further work. The Hitomi results, sadly limited as they were, demonstrated the urgent need for the best possible wavelength and rate data, not merely for the strongest lines but for the diagnostic features that may have 1% or less of the flux of the strong lines. In particular, incorporation of weak but powerfully diagnostic satellite lines will be crucial to understanding the spectra expected from upcoming deep observations with Chandra and XMM-Newton, as well as the XARM and Athena satellites. Beyond incorporating this new data, a number of groups, both experimental and theoretical, have begun to produce data with errors and/or sensitivity estimates. We plan to use this to create statistically meaningful spectral errors on collisional plasmas, providing practical uncertainties together with model spectra. We propose to continue to (1) engage the X-ray astrophysics community regarding their issues and needs, notably by a critical comparison with other related databases and tools, (2) enhance AtomDB to incorporate a large number of satellite lines as well as updated wavelengths with error estimates, (3) continue to update the AtomDB with the latest calculations and laboratory measurements, in particular velocity-dependent charge exchange rates, and (4) enhance existing tools, and create new ones as needed to increase the functionality of, and access to, AtomDB.
Design, construction, and evaluation of a 1:8 scale model binaural manikin.
Robinson, Philip; Xiang, Ning
2013-03-01
Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.
Hatten, James R.; Batt, Thomas R.
2010-01-01
We used a two-dimensional (2D) hydrodynamic model to simulate and compare the hydraulic characteristics in a 74-km reach of the Columbia River (the Bonneville Reach) before and after construction of Bonneville Dam. For hydrodynamic modeling, we created a bathymetric layer of the Bonneville Reach from single-beam and multi-beam echo-sounder surveys, digital elevation models, and navigation surveys. We calibrated the hydrodynamic model at 100 and 300 kcfs with a user-defined roughness layer, a variable-sized mesh, and a U.S. Army Corps of Engineers backwater curve. We verified the 2D model with acoustic Doppler current profiler (ADCP) data at 14 transects and three flows. The 2D model was 88% accurate for water depths, and 77% accurate for velocities. We verified a pre-dam 2D model run at 126 kcfs using pre-dam aerial photos from September 1935. Hydraulic simulations indicated that mean water depths in the Bonneville Reach increased by 34% following dam construction, while mean velocities decreased by 58%. There are numerous activities that would benefit from data output from the 2D model, including biological sampling, bioenergetics, and spatially explicit habitat modeling.
An Examination of the Evolution of Radiation and Advection Fogs
1993-01-01
and fog diagnostic and prediction models have developed in sophistication so that they can reproduce fairly accurate one- or two-dimensional...occurred only by molecular diffusion near the interface created between the species during the mixing process. The rate of homogenization is minimal until...of excess vapor by molecular diffusion at the interfaces of nearly saturated air mixing in eddies is faster than the relaxation time of droplet
MyPMFs: a simple tool for creating statistical potentials to assess protein structural models.
Postic, Guillaume; Hamelryck, Thomas; Chomilier, Jacques; Stratmann, Dirk
2018-05-29
Evaluating the model quality of protein structures that evolve in environments with particular physicochemical properties requires scoring functions that are adapted to their specific residue compositions and/or structural characteristics. Thus, computational methods developed for structures from the cytosol cannot work properly on membrane or secreted proteins. Here, we present MyPMFs, an easy-to-use tool that allows users to train statistical potentials of mean force (PMFs) on the protein structures of their choice, with all parameters being adjustable. We demonstrate its use by creating an accurate statistical potential for transmembrane protein domains. We also show its usefulness to study the influence of the physical environment on residue interactions within protein structures. Our open-source software is freely available for download at https://github.com/bibip-impmc/mypmfs. Copyright © 2018. Published by Elsevier B.V.
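A minimal sketch of the inverse-Boltzmann step that underlies statistical potentials of mean force: observed distance counts for a residue pair are converted to pseudo-energies against a reference distribution. The bin counts and the kT value are illustrative, and this is not the MyPMFs implementation.

    import numpy as np

    def pmf_from_counts(obs_counts, ref_counts, kT=0.593):      # kT in kcal/mol near 298 K
        p_obs = obs_counts / obs_counts.sum()
        p_ref = ref_counts / ref_counts.sum()
        with np.errstate(divide="ignore"):
            return -kT * np.log(np.where(p_ref > 0, p_obs / p_ref, 1.0))

    obs = np.array([2.0, 15.0, 40.0, 30.0, 13.0])    # observed pair counts per distance bin
    ref = np.array([5.0, 20.0, 35.0, 25.0, 15.0])    # reference-state counts per bin
    print(np.round(pmf_from_counts(obs, ref), 3))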
Application of Fused Deposition Modelling (FDM) Method of 3D Printing in Drug Delivery.
Long, Jingjunjiao; Gholizadeh, Hamideh; Lu, Jun; Bunt, Craig; Seyfoddin, Ali
2017-01-01
Three-dimensional (3D) printing is an emerging manufacturing technology for biomedical and pharmaceutical applications. Fused deposition modelling (FDM) is a low cost extrusion-based 3D printing technique that can deposit materials layer-by-layer to create solid geometries. This review article aims to provide an overview of FDM based 3D printing application in developing new drug delivery systems. The principle methodology, suitable polymers and important parameters in FDM technology and its applications in fabrication of personalised tablets and drug delivery devices are discussed in this review. FDM based 3D printing is a novel and versatile manufacturing technique for creating customised drug delivery devices that contain an accurate dose of medicine(s) and provide controlled drug release profiles. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F
2016-09-01
The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.
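A minimal sketch of how the sensitivity and specificity quoted above are computed from pair-matching decisions; the confusion-matrix counts are illustrative, chosen only to reproduce the automated method's 95%/60% figures.

    def sensitivity_specificity(tp, fn, tn, fp):
        """True-positive rate and true-negative rate from a confusion matrix."""
        return tp / (tp + fn), tn / (tn + fp)

    sens, spec = sensitivity_specificity(tp=19, fn=1, tn=12, fp=8)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")    # 0.95, 0.60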
Probabilistic/Fracture-Mechanics Model For Service Life
NASA Technical Reports Server (NTRS)
Watkins, T., Jr.; Annis, C. G., Jr.
1991-01-01
Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.
Changing culture in the home health setting: strategies for success.
Boan, David
2006-01-01
Organizational culture is generally defined as the internal attributes of the staff, such as their values, beliefs, and attitudes. Although technically accurate as a definition, personal attributes defy direct intervention, leading some to question whether it is possible to change culture. It is proposed that it is possible to change the personal internal attributes that define organizational culture by changing the characteristic structures and behaviors of the organization that shape those attributes. This model, called the Quality Capability Model, creates an approach to culture change that accommodates the unique features of home health.
Software Tools for Emittance Measurement and Matching for 12 GeV CEBAF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, Dennis L.
2016-05-01
This paper discusses model-driven setup of the Continuous Electron Beam Accelerator Facility (CEBAF) for the 12GeV era, focusing on qsUtility. qsUtility is a set of software tools created to perform emittance measurements, analyze those measurements, and compute optics corrections based upon the measurements. qsUtility was developed as a toolset to reduce machine configuration time and improve reproducibility by way of an accurate accelerator model, and to provide Operations staff with tools to measure and correct machine optics with little or no assistance from optics experts.
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
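A minimal sketch of a mass-driven life-cycle cost model with a simple payback calculation, in the spirit of the design-to-cost methodology described above; every coefficient here is a made-up placeholder, not a figure from the research.

    def years_to_payback(facility_mass_kg, cost_per_kg=10_000.0,
                         annual_revenue=150e6, annual_ops_cost=60e6):
        """Initial investment scales with facility mass; payback from net annual revenue."""
        initial_investment = facility_mass_kg * cost_per_kg
        net_annual = annual_revenue - annual_ops_cost
        return initial_investment / net_annual if net_annual > 0 else float("inf")

    print(round(years_to_payback(facility_mass_kg=100_000), 1))   # years to return on investment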
Fluid-Structure Interaction Modeling of Modified-Porosity Parachutes and Parachute Clusters
NASA Astrophysics Data System (ADS)
Boben, Joseph J.
To increase aerodynamic performance, the geometric porosity of a ringsail spacecraft parachute canopy is sometimes increased, beyond the "rings" and "sails" with hundreds of "ring gaps" and "sail slits." This creates extra computational challenges for fluid-structure interaction (FSI) modeling of clusters of such parachutes, beyond those created by the lightness of the canopy structure, geometric complexities of hundreds of gaps and slits, and the contact between the parachutes of the cluster. In FSI computation of parachutes with such "modified geometric porosity," the flow through the "windows" created by the removal of the panels and the wider gaps created by the removal of the sails cannot be accurately modeled with the Homogenized Modeling of Geometric Porosity (HMGP), which was introduced to deal with the hundreds of gaps and slits. The flow needs to be actually resolved. All these computational challenges need to be addressed simultaneously in FSI modeling of clusters of spacecraft parachutes with modified geometric porosity. The core numerical technology is the Stabilized Space-Time FSI (SSTFSI) technique, and the contact between the parachutes is handled with the Surface-Edge-Node Contact Tracking (SENCT) technique. In the computations reported here, in addition to the SSTFSI and SENCT techniques and HMGP, we use the special techniques we have developed for removing the numerical spinning component of the parachute motion and for restoring the mesh integrity without a remesh. We present results for 2- and 3-parachute clusters with two different payload models. We also present the FSI computations we carried out for a single, subscale modified-porosity parachute.
Spectra of Full 3-D PIC Simulations of Finite Meteor Trails
NASA Astrophysics Data System (ADS)
Tarnecki, L. K.; Oppenheim, M. M.
2016-12-01
Radars detect plasma trails created by the billions of small meteors that impact the Earth's atmosphere daily, returning data used to infer characteristics of the meteoroid population and upper atmosphere. Researchers use models to investigate the dynamic evolution of the trails. Previously, all models assumed a trail of infinite length, due to the constraints of simulation techniques. We present the first simulations of 3D meteor trails of finite length. This change more accurately captures the physics of the trails. We characterize the turbulence that develops as the trail evolves and study the effects of varying the external electric field, altitude, and initial density. The simulations show that turbulence develops in all cases, and that trails travel with the neutral wind rather than electric field. Our results will allow us to draw more detailed and accurate information from non-specular radar observations of meteors.
Padmanaban, Sriram; Warren, Samantha; Walsh, Anthony; Partridge, Mike; Hawkins, Maria A
2014-12-23
To investigate systematic changes in dose arising when treatment plans optimised using the Anisotropic Analytical Algorithm (AAA) are recalculated using Acuros XB (AXB) in patients treated with definitive chemoradiotherapy (dCRT) for locally advanced oesophageal cancers. We have compared treatment plans created using AAA with those recalculated using AXB. Although the Anisotropic Analytical Algorithm (AAA) is currently more widely used in clinical routine, Acuros XB (AXB) has been shown to more accurately calculate the dose distribution, particularly in heterogeneous regions. Studies to predict clinical outcome should be based on modelling the dose delivered to the patient as accurately as possible. CT datasets from ten patients were selected for this retrospective study. VMAT (Volumetric modulated arc therapy) plans with 2 arcs, collimator rotation ± 5-10° and dose prescription 50 Gy / 25 fractions were created using Varian Eclipse (v10.0). The initial dose calculation was performed with AAA, and AXB plans were created by re-calculating the dose distribution using the same number of monitor units (MU) and multileaf collimator (MLC) files as the original plan. The difference in calculated dose to organs at risk (OAR) was compared using dose-volume histogram (DVH) statistics and p values were calculated using the Wilcoxon signed rank test. The potential clinical effect of dosimetric differences in the gross tumour volume (GTV) was evaluated using three different TCP models from the literature. PTV Median dose was apparently 0.9 Gy lower (range: 0.5 Gy - 1.3 Gy; p < 0.05) for VMAT AAA plans re-calculated with AXB and GTV mean dose was reduced by on average 1.0 Gy (0.3 Gy -1.5 Gy; p < 0.05). An apparent difference in TCP of between 1.2% and 3.1% was found depending on the choice of TCP model. OAR mean dose was lower in the AXB recalculated plan than the AAA plan (on average, dose reduction: lung 1.7%, heart 2.4%). Similar trends were seen for CRT plans. Differences in dose distribution are observed with VMAT and CRT plans recalculated with AXB particularly within soft tissue at the tumour/lung interface, where AXB has been shown to more accurately represent the true dose distribution. AAA apparently overestimates dose, particularly the PTV median dose and GTV mean dose, which could result in a difference in TCP model parameters that reaches clinical significance.
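A minimal sketch of the paired comparison reported above: the same DVH statistic taken from the AAA plan and its AXB recalculation for each patient, compared with a Wilcoxon signed-rank test. The ten per-patient values are illustrative, not the study data.

    from scipy.stats import wilcoxon

    gtv_mean_aaa = [52.1, 51.8, 52.4, 51.9, 52.6, 52.0, 51.7, 52.3, 52.2, 51.6]  # Gy
    gtv_mean_axb = [51.0, 50.9, 51.3, 50.8, 51.5, 51.1, 50.6, 51.4, 51.2, 50.5]  # Gy
    stat, p = wilcoxon(gtv_mean_aaa, gtv_mean_axb)     # paired, non-parametric test
    print(f"W = {stat}, p = {p:.4f}")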
Christ, Roxie; Guevar, Julien; Poyade, Matthieu; Rea, Paul M
2018-01-01
Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has now been a progression to move towards alternative digital models and interactive 3D models to engage the learner. However, digital innovations in the curriculum have typically involved the medical curriculum rather than the veterinary curriculum. Therefore, we aimed to create a simple workflow methodology to highlight the simplicity there is in creating a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was applied to augmented reality for a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions for the creation of a highly accurate, data based anatomical model that could potentially be used in the veterinary curriculum. This proof of concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this into other areas of veterinary education and beyond.
Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation
NASA Astrophysics Data System (ADS)
Ekin Aydin, Boran; Rutten, Martine
2016-04-01
Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems through seepage, leakage and evaporation, a mismatch between the model and the real system is created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level in open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydro-dynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also used the well-known disturbance modeling offset-free control scheme for the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance modeling offset-free control scheme.
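A minimal sketch of the offset-free idea on a single reach modelled as an integrator: the unknown seepage is estimated from a short horizon of past model mismatches and fed back into a one-step predictive control law, so the level settles on the setpoint without offset. All parameters are illustrative, and this is not the UPC-PAC model or the authors' controller.

    import numpy as np

    Ts, area = 60.0, 400.0            # control interval (s), reach storage area (m^2)
    setpoint, seepage = 2.00, 0.15    # target level (m), true but unknown loss (m^3/s)

    h = 1.90                          # measured water level (m)
    d_hat, d_window = 0.0, []         # disturbance estimate and its estimation window

    for k in range(60):
        # one-step "MPC": pick the inflow that drives the predicted level to the
        # setpoint, compensating with the current disturbance estimate
        q_in = max(area * (setpoint - h) / Ts - d_hat, 0.0)

        # true canal: unknown seepage drains water that the nominal model ignores
        h_next = h + Ts / area * (q_in - seepage)

        # moving-horizon estimation: back out the disturbance from the level change
        d_window.append((h_next - h) * area / Ts - q_in)
        d_hat = float(np.mean(d_window[-10:]))

        h = h_next

    print(f"level {h:.3f} m, estimated disturbance {d_hat:.3f} m^3/s (true -{seepage})")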
Torcasio, Antonia; Zhang, Xiaolei; Van Oosterwyck, Hans; Duyck, Joke; van Lenthe, G Harry
2012-05-01
Although research has been addressed at investigating the effect of specific loading regimes on bone response around the implant, a precise quantitative understanding of the local mechanical response close to the implant site is still lacking. This study was aimed at validating micro-CT-based finite element (μFE) models to assess tissue strains after implant placement in a rat tibia. Small implants were inserted at the medio-proximal site of 8 rat tibiae. The limbs were subjected to axial compression loading; strain close to the implant was measured by means of strain gauges. Specimen-specific μFE models were created and analyzed. For each specimen, 4 different models were created corresponding to different representations of the bone-implant interface: bone and implant were assumed fully osseointegrated (A); a low stiffness interface zone was assumed with thickness of 40 μm (B), 80 μm (C), and 160 μm (D). In all cases, measured and computational strains correlated highly (R² = 0.95, 0.92, 0.93, and 0.95 in A, B, C, and D, respectively). The averaged calculated strains were 1.69, 1.34, and 1.15 times higher than the measured strains for A, B, and C, respectively, and lower than the experimental strains for D (factor = 0.91). In conclusion, we demonstrated that specimen-specific FE analyses provide accurate estimates of peri-implant bone strains in the rat tibia loading model. Further investigations of the bone-implant interface are needed to quantify implant osseointegration.
The undergraduate research fellows program: a unique model to promote engagement in research.
Vessey, Judith A; DeMarco, Rosanna F
2008-01-01
Well-educated nurses with research expertise are needed to advance evidence-based nursing practice. A primary goal of undergraduate nursing curricula is to create meaningful participatory experiences to help students develop a research skill set that articulates with rapid career advancement of gifted, young graduates interested in nursing research and faculty careers. Three research enrichment models (undergraduate honors programs, research assistant work-for-hire programs, and research work/mentorship programs), used in conjunction with standard research content, are reviewed. The development and implementation of one research work/mentorship program, the Boston College undergraduate research fellows program (UGRF), is explicated. This process included surveying previous UGRFs followed by creating a retreat and seminars to address specific research skill sets. The research skill sets included (a) how to develop a research team, (b) accurate data retrieval, (c) ethical considerations, (d) the research process, (e) data management, (f) successful writing of abstracts, and (g) creating effective poster presentations. Outcomes include evidence of involvement in research productivity and valuing of evidenced-based practice through the UGRF mentorship process with faculty partners.
Anatomical evaluation and stress distribution of intact canine femur.
Verim, Ozgur; Tasgetiren, Suleyman; Er, Mehmet S; Ozdemir, Vural; Yuran, Ahmet F
2013-03-01
In the biomedical field, three-dimensional (3D) modeling and analysis of bones and tissues has steadily gained in importance. The aim of this study was to produce more accurate 3D models of the canine femur derived from computed tomography (CT) data by using several modeling software programs and two different methods. The accuracy of the analysis depends on the modeling process and the right boundary conditions. Solidworks, Rapidform, Inventor, and 3DsMax software programs were used to create 3D models. Data derived from CT were converted into 3D models using two different methods: in the first, 3D models were generated using boundary lines, while in the second, 3D models were generated using point clouds. Stress analyses in the models were made by ANSYS v12, also considering any muscle forces acting on the canine femur. When stress values and statistical values were taken into consideration, more accurate models were obtained with the point cloud method. It was found that the maximum von Mises stress on the canine femur shaft was 34.8 MPa. Stress and accuracy values were obtained from the model formed using the Rapidform software. The values obtained were similar to those in other studies in the literature. Copyright © 2012 John Wiley & Sons, Ltd.
The IRGen infrared data base modeler
NASA Technical Reports Server (NTRS)
Bernstein, Uri
1993-01-01
IRGen is a modeling system which creates three-dimensional IR data bases for real-time simulation of thermal IR sensors. Starting from a visual data base, IRGen computes the temperature and radiance of every data base surface with a user-specified thermal environment. The predicted gray shade of each surface is then computed from the user specified sensor characteristics. IRGen is based on first-principles models of heat transport and heat flux sources, and it accurately simulates the variations of IR imagery with time of day and with changing environmental conditions. The starting point for creating an IRGen data base is a visual faceted data base, in which every facet has been labeled with a material code. This code is an index into a material data base which contains surface and bulk thermal properties for the material. IRGen uses the material properties to compute the surface temperature at the specified time of day. IRGen also supports image generator features such as texturing and smooth shading, which greatly enhance image realism.
Interior Reconstruction Using the 3d Hough Transform
NASA Astrophysics Data System (ADS)
Dumitru, R.-C.; Borrmann, D.; Nüchter, A.
2013-02-01
Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need to characterize and quantify complex environments in an automatic fashion arises, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
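A minimal sketch of plane detection with a 3D Hough transform: every point votes for the (theta, phi, rho) plane parameters consistent with it, and peaks in the accumulator are candidate planes. The accumulator resolution and the synthetic point cloud (a noisy plane near z = 0.47) are illustrative, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(-1, 1, 500),
                           rng.uniform(-1, 1, 500),
                           0.47 + rng.normal(0, 0.01, 500)])   # points near the plane z = 0.47

    thetas = np.linspace(0, np.pi, 30)            # polar angle of the candidate plane normal
    phis = np.linspace(0, 2 * np.pi, 60)          # azimuth of the candidate plane normal
    rho_edges = np.linspace(-2, 2, 81)            # signed plane-to-origin distance bins

    acc = np.zeros((len(thetas), len(phis), len(rho_edges) - 1), dtype=int)
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            n = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
            rho = pts @ n                          # each point's distance along this normal
            acc[i, j] += np.histogram(rho, bins=rho_edges)[0]

    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    print(f"peak: theta={thetas[i]:.2f}, phi={phis[j]:.2f}, rho~{rho_edges[k]:.2f}")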
Codependency: a feminist perspective.
Malloy, G B; Berkery, A C
1993-04-01
1. Our understanding of psychological life has been underdeveloped and distorted because explanations have been created by only one half of the human species. The current broad and encompassing disease definition of codependency may devalue some of women's greatest strengths. 2. The disease model of codependency, it may be argued, is rooted in extant, reductionist models that suggest a developmental pathway of separation and individuation leading to an autonomous and independent maturity. 3. The feminist model of Growth in Connection proposes that the flowering of the self occurs within the development and maintenance of relationships in which accurate and mutual empathy is both the goal and the motivation toward growth.
A comprehensive combustion model for biodiesel-fueled engine simulations
NASA Astrophysics Data System (ADS)
Brakora, Jessica L.
Engine models for alternative fuels are available, but few are comprehensive, well-validated models that include accurate physical property data as well as a detailed description of the fuel chemistry. In this work, a comprehensive biodiesel combustion model was created for use in multi-dimensional engine simulations, specifically the KIVA3v R2 code. The model incorporates realistic physical properties in a vaporization model developed for multi-component fuel sprays and applies an improved mechanism for biodiesel combustion chemistry. A reduced mechanism was generated from the methyl decanoate (MD) and methyl-9-decenoate (MD9D) mechanism developed at Lawrence Livermore National Laboratory. It was combined with a multi-component mechanism to include n-heptane in the fuel chemistry. The biodiesel chemistry was represented using a combination of MD, MD9D and n-heptane, which varied for a given fuel source. The reduced mechanism, which contained 63 species, accurately predicted ignition delay times of the detailed mechanism over a range of engine-specific operating conditions. Physical property data for the five methyl ester components of biodiesel were added to the KIVA library. Spray simulations were performed to ensure that the models adequately reproduce liquid penetration observed in biodiesel spray experiments. Fuel composition impacted liquid length as expected, with saturated species vaporizing more and penetrating less. Distillation curves were created to ensure the fuel vaporization process was comparable to available data. Engine validation was performed against a low-speed, high-load, conventional combustion experiment, and the model was able to predict the performance and NOx formation seen in the experiment. High-speed, low-load, low-temperature combustion conditions were also modeled, and the emissions (HC, CO, NOx) and fuel consumption were well-predicted for a sweep of injection timings. Finally, comparisons were made between the results of biodiesel composition (palm vs. soy) and fuel blends (neat vs. B20). The model effectively reproduced the trends observed in the experiments.
3D-Printed Patient-Specific ACL Femoral Tunnel Guide from MRI.
Rankin, Iain; Rehman, Haroon; Frame, Mark
2018-01-01
Traditional ACL reconstruction with non-anatomic techniques can demonstrate unsatisfactory long-term outcomes with regard to instability and the degenerative knee changes observed with these results. Anatomic ACL reconstruction attempts to closely reproduce the patient's individual anatomic characteristics with the aim of restoring knee kinematics, in order to improve patient short and long-term outcomes. We designed an arthroscopic, patient-specific, ACL femoral tunnel guide to aid anatomical placement of the ACL graft within the femoral tunnel. The guide design was based on MRI scan of the subject's uninjured contralateral knee, identifying the femoral footprint and its anatomical position relative to the borders of the femoral articular cartilage. Image processing software was used to create a 3D computer aided design which was subsequently exported to a 3D-printing service. Transparent acrylic based photopolymer, PA220 plastic and 316L stainless steel patient-specific ACL femoral tunnel guides were created; the models produced were accurate, with no statistical difference in the size and positioning of the center of the ACL femoral footprint guide compared to MRI (p = 0.344, p = 0.189, and p = 0.233, respectively). The guides aim to provide accurate marking of the starting point of the femoral tunnel in arthroscopic ACL reconstruction. This study serves as a proof of concept for the accurate creation of 3D-printed patient-specific guides for the anatomical placement of the femoral tunnel during ACL reconstruction.
SU-C-303-03: Dosimetric Model of the Beagle Needed for Pre-Clinical Testing of Radiopharmaceuticals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, M; Sands, M; Bolch, W
2015-06-15
Purpose: Large animal models, most popularly beagles, have been crucial surrogates to humans in determining radiation safety levels of radiopharmaceuticals. This study aims to develop a detailed beagle phantom to accurately approximate organ absorbed doses for therapy nuclear medicine preclinical studies. Methods: A 3D NURBS model was created based on a whole-body CT of an adult beagle. Bones were harvested and CT imaged to offer macroscopic skeletal detail. Samples of trabecular spongiosa were cored and imaged to offer microscopic skeletal detail for bone trabeculae and marrow volume fractions. Results: Organ masses in the model are typical of an adult beagle. Trends in volume fractions for skeletal dosimetry are fundamentally similar to those found in existing models of other canine species. Conclusion: This work warrants its use in further investigations of radiation transport calculation for electron and photon dosimetry. This model accurately represents the anatomy of a beagle, and can be directly translated into a useable geometry for a voxel-based Monte Carlo radiation transport program such as MCNP6. Work supported by a grant from the Hyundai Hope on Wheels Foundation for Pediatric Cancer Research.
Modeling disease transmission near eradication: An equation free approach
NASA Astrophysics Data System (ADS)
Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan
2015-01-01
Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
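A minimal sketch of coarse projective integration in the Equation Free spirit: a short burst of a stochastic SIR-type simulation estimates the coarse time derivative of the infected fraction, which is then projected forward over a larger step. The rates, population size, and step sizes are illustrative, and this is not the authors' poliomyelitis model or their choice of coarse variables.

    import numpy as np

    rng = np.random.default_rng(2)
    N, beta, gamma = 10_000, 0.3, 0.1          # population, infection and recovery rates
    dt, burst_steps, leap = 0.1, 20, 5.0       # fine step, burst length, projective leap

    def burst(i_frac):
        """Short stochastic burst; returns the trajectory of the infected fraction."""
        S, I = int(N * (1 - i_frac)), int(N * i_frac)
        traj = [I / N]
        for _ in range(burst_steps):
            new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N * dt))
            new_rec = rng.binomial(I, 1 - np.exp(-gamma * dt))
            S, I = S - new_inf, I + new_inf - new_rec
            traj.append(I / N)
        return np.array(traj)

    i_frac, t = 0.01, 0.0
    for _ in range(8):
        traj = burst(i_frac)
        d_i_dt = (traj[-1] - traj[-5]) / (4 * dt)             # coarse derivative from the burst tail
        i_frac = min(max(traj[-1] + leap * d_i_dt, 0.0), 1.0) # projective leap forward in time
        t += burst_steps * dt + leap
        print(f"t = {t:5.1f}   infected fraction = {i_frac:.4f}")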
BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciambur, B. C., E-mail: bciambur@swin.edu.au
2015-09-10
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.
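A schematic of the fitting idea in LaTeX form (a paraphrase for orientation, not the paper's exact notation): intensity deviations along a quasi-elliptical isophote are expanded in Fourier harmonics of the eccentric anomaly psi, the natural angular parameter of the ellipse, rather than the polar angle.

    x = a\cos\psi, \qquad y = b\sin\psi
    I(\psi) \simeq I_0 + \sum_{n} \left[ A_n \sin(n\psi) + B_n \cos(n\psi) \right]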
Kiran, Ravi P; Attaluri, Vikram; Hammel, Jeff; Church, James
2013-05-01
The ability to accurately predict postoperative mortality is expected to improve preoperative decisions for elderly patients considered for colorectal surgery. Patients undergoing colorectal surgery were identified from the National Surgical Quality Improvement Program database (2005-2007) and stratified as elderly (>70 years) and nonelderly (<70 years). Preoperative risk factors and 30-day mortality and morbidity were analyzed by univariate analysis in 70% of the population. A nomogram for mortality was created and tested on the remaining 30%. Of 30,900 colorectal cases, 10,750 were elderly (>70 years). Mortality increased steadily with age (0.5% every 5 years) and at a faster rate (1.2% every 5 years) after 70 years, which defined "elderly" in this study. Elderly (mean age: 78.4 years) and nonelderly patients (52.8 years) had mortality of 7.6% versus 2.0% and a morbidity of 32.8% versus 25.7%, respectively. Elderly patients had greater preoperative comorbidities including chronic obstructive pulmonary disease (10.5% vs 3.8%), diabetes (18.7% vs 11.1%), and renal insufficiency (1.7% vs 1.3%). A multivariate model for 30-day mortality and nomogram were created. Increasing age was associated with mortality [age >70 years: odds ratio (OR) = 2.0 (95% confidence interval (CI): 1.7-2.4); >85 years: OR = 4.3 (95% CI: 3.3-5.5)]. The nomogram accurately predicted mortality, including very high-risk (>50% mortality) cases, with a concordance index for this model of 0.89. Colorectal surgery in elderly patients is associated with significantly higher mortality. This novel nomogram that predicts postoperative mortality may facilitate preoperative treatment decisions.
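A minimal sketch of the concordance index reported above: the fraction of usable patient pairs in which the model assigns the higher predicted risk to the patient who dies earlier (censored patients only anchor comparisons when they outlive an observed death). Times, event flags, and risks are illustrative.

    from itertools import combinations

    def concordance_index(times, events, risks):
        concordant, usable = 0.0, 0
        for i, j in combinations(range(len(times)), 2):
            if times[i] == times[j]:
                continue
            first, second = (i, j) if times[i] < times[j] else (j, i)
            if not events[first]:                 # earlier time must be an observed death
                continue
            usable += 1
            if risks[first] > risks[second]:
                concordant += 1.0
            elif risks[first] == risks[second]:
                concordant += 0.5
        return concordant / usable

    print(concordance_index(times=[5, 8, 3, 9], events=[1, 0, 1, 1],
                            risks=[0.7, 0.2, 0.9, 0.1]))        # -> 1.0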
NASA Astrophysics Data System (ADS)
Van Gordon, M.; Van Gordon, S.; Min, A.; Sullivan, J.; Weiner, Z.; Tappan, G. G.
2017-12-01
Using support vector machine (SVM) learning and high-accuracy hand-classified maps, we have developed a publicly available land cover classification tool for the West African Sahel. Our classifier produces high-resolution and regionally calibrated land cover maps for the Sahel, representing a significant contribution to the data available for this region. Global land cover products are unreliable for the Sahel, and accurate land cover data for the region are sparse. To address this gap, the U.S. Geological Survey and the Regional Center for Agriculture, Hydrology and Meteorology (AGRHYMET) in Niger produced high-quality land cover maps for the region via hand-classification of Landsat images. This method produces highly accurate maps, but the time and labor required constrain the spatial and temporal resolution of the data products. By using these hand-classified maps alongside SVM techniques, we successfully increase the resolution of the land cover maps by 1-2 orders of magnitude, from 2km-decadal resolution to 30m-annual resolution. These high-resolution regionally calibrated land cover datasets, along with the classifier we developed to produce them, lay the foundation for major advances in studies of land surface processes in the region. These datasets will provide more accurate inputs for food security modeling, hydrologic modeling, analyses of land cover change and climate change adaptation efforts. The land cover classification tool we have developed will be publicly available for use in creating additional West Africa land cover datasets with future remote sensing data and can be adapted for use in other parts of the world.
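A minimal sketch of the classification step with scikit-learn: an SVM is trained on pixels whose labels come from the hand-classified maps and then applied to new pixels. The band values, toy labels, and kernel settings are illustrative, not the operational classifier.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    X_train = rng.uniform(0.0, 1.0, size=(2000, 6))     # 6 spectral band values per pixel
    y_train = (X_train[:, 3] - X_train[:, 2] > 0.1).astype(int)   # toy "vegetation" label

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)

    X_new = rng.uniform(0.0, 1.0, size=(5, 6))          # unlabeled pixels from a new scene
    print(clf.predict(X_new))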
System analysis through bond graph modeling
NASA Astrophysics Data System (ADS)
McBride, Robert Thomas
2005-07-01
Modeling and simulation form an integral role in the engineering design process. An accurate mathematical description of a system provides the design engineer the flexibility to perform trade studies quickly and accurately to expedite the design process. Most often, the mathematical model of the system contains components of different engineering disciplines. A modeling methodology that can handle these types of systems might be used in an indirect fashion to extract added information from the model. This research examines the ability of a modeling methodology to provide added insight into system analysis and design. The modeling methodology used is bond graph modeling. An investigation into the creation of a bond graph model using the Lagrangian of the system is provided. Upon creation of the bond graph, system analysis is performed. To aid in the system analysis, an object-oriented approach to bond graph modeling is introduced. A framework is provided to simulate the bond graph directly. Through object-oriented simulation of a bond graph, the information contained within the bond graph can be exploited to create a measurement of system efficiency. A definition of system efficiency is given. This measurement of efficiency is used in the design of different controllers of varying architectures. Optimal control of a missile autopilot is discussed within the framework of the calculated system efficiency.
An articulatorily constrained, maximum entropy approach to speech recognition and speech coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, J.
Hidden Markov models (HMM's) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMM's typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMM's better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMM's, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMM's. This will allow him to highlight the similarities and differences between HMM's and the proposed technique.
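A minimal sketch of the HMM machinery referred to above, the forward algorithm for scoring an observation sequence under a small discrete HMM; the two-state toy parameters are illustrative.

    import numpy as np

    A = np.array([[0.7, 0.3], [0.4, 0.6]])    # state transition probabilities
    B = np.array([[0.9, 0.1], [0.2, 0.8]])    # emission probabilities for 2 symbols
    pi = np.array([0.6, 0.4])                 # initial state distribution
    obs = [0, 1, 1, 0]                        # observed symbol sequence

    alpha = pi * B[:, obs[0]]                 # forward variables at t = 0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # propagate and absorb the next observation
    print(f"P(observations | model) = {alpha.sum():.5f}")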
Nuclear data for r-process models from ion trap measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Jason, E-mail: jclark@anl.gov
2016-06-21
To truly understand how elements are created in the universe via the astrophysical r process, accurate nuclear data are required. Historically, the isotopes involved in the r process have been difficult to access for study, but the development of new facilities and measurement techniques have put many of the r-process isotopes within reach. This paper will discuss the new CARIBU facility at Argonne National Laboratory and two pieces of experimental equipment, the Beta-decay Paul Trap and the Canadian Penning Trap, that will dramatically increase the nuclear data available for models of the astrophysical r process.
Karschner, Erin L.; Schwope, David M.; Schwilke, Eugene W.; Goodwin, Robert S.; Kelly, Deanna L.; Gorelick, David A.; Huestis, Marilyn A.
2012-01-01
Background Determining time since last cannabis/Δ9-tetrahydrocannabinol (THC) exposure is important in clinical, workplace, and forensic settings. Mathematical models calculating time of last exposure from whole blood concentrations typically employ a theoretical 0.5 whole blood-to-plasma (WB/P) ratio. No studies previously evaluated predictive models utilizing empirically-derived WB/P ratios, or whole blood cannabinoid pharmacokinetics after subchronic THC dosing. Methods Ten male chronic, daily cannabis smokers received escalating around-the-clock oral THC (40-120 mg daily) for 8 days. Cannabinoids were quantified in whole blood and plasma by two-dimensional gas chromatography-mass spectrometry. Results Maximum whole blood THC occurred 3.0 h after the first oral THC dose and 103.5 h (4.3 days) during multiple THC dosing. Median WB/P ratios were THC 0.63 (n=196), 11-hydroxy-THC 0.60 (n=189), and 11-nor-9-carboxy-THC (THCCOOH) 0.55 (n=200). Predictive models utilizing these WB/P ratios accurately estimated last cannabis exposure in 96% and 100% of specimens collected within 1-5 h after a single oral THC dose and throughout multiple dosing, respectively. Models were only 60% and 12.5% accurate 12.5 and 22.5 h after the last THC dose, respectively. Conclusions Predictive models estimating time since last cannabis intake from whole blood and plasma cannabinoid concentrations were inaccurate during abstinence, but highly accurate during active THC dosing. THC redistribution from large cannabinoid body stores and high circulating THCCOOH concentrations create pharmacokinetic profiles different from those of the less-than-daily cannabis smokers used to derive the models. Thus, the models do not accurately predict time of last THC intake in individuals consuming THC daily. PMID:22464363
Three-dimensional (3D) printed endovascular simulation models: a feasibility study.
Mafeld, Sebastian; Nesbitt, Craig; McCaslin, James; Bagnall, Alan; Davey, Philip; Bose, Pentop; Williams, Rob
2017-02-01
Three-dimensional (3D) printing is a manufacturing process in which an object is created by specialist printers designed to print in additive layers to create a 3D object. Whilst there are initial promising medical applications of 3D printing, a lack of evidence to support its use remains a barrier for larger scale adoption into clinical practice. Endovascular virtual reality (VR) simulation plays an important role in the safe training of future endovascular practitioners, but existing VR models have disadvantages including cost and accessibility which could be addressed with 3D printing. This study sought to evaluate the feasibility of 3D printing an anatomically accurate human aorta for the purposes of endovascular training. A 3D printed model was successfully designed and printed and used for endovascular simulation. The stages of development and practical applications are described. Feedback from 96 physicians who answered a series of questions using a 5 point Likert scale is presented. Initial data supports the value of 3D printed endovascular models although further educational validation is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Madison Theresa; Bates, Cameron Russell; Mckigney, Edward Allen
Accurate detector modeling is a requirement for designing systems in many non-proliferation scenarios; by determining a Detector's Response Function (DRF) to incident radiation, it is possible to characterize measurements of unknown sources. DRiFT is intended to post-process MCNP® output and create realistic detector spectra. Capabilities currently under development include the simulation of semiconductor, gas, and (as is discussed in this work) scintillator detector physics. Energy spectra and pulse shape discrimination (PSD) trends for incident photon and neutron radiation have been reproduced by DRiFT.
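As a rough illustration of what a detector-response post-processor does (not DRiFT's actual algorithm or parameters), the sketch below broadens ideal energy depositions from a transport-code tally with an energy-dependent Gaussian resolution to produce a "measured" pulse-height spectrum:

```python
# Illustrative sketch (not DRiFT itself): convert ideal energy depositions from a
# transport-code tally into a broadened spectrum by sampling an energy-dependent
# Gaussian resolution, roughly how a scintillator DRF smears a pulse-height tally.
import numpy as np

rng = np.random.default_rng(1)
deposits_keV = np.concatenate([np.full(20000, 661.7),          # placeholder full-energy events
                               rng.uniform(50, 470, 30000)])   # placeholder Compton continuum

def fwhm_keV(E):
    # Placeholder resolution model FWHM(E) = a + b*sqrt(E); coefficients are illustrative.
    return 10.0 + 1.5 * np.sqrt(E)

sigma = fwhm_keV(deposits_keV) / 2.355
measured = rng.normal(deposits_keV, sigma)

hist, edges = np.histogram(measured, bins=256, range=(0.0, 800.0))
print("peak channel energy ~", edges[hist.argmax()], "keV")
```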
Monocular Depth Perception and Robotic Grasping of Novel Objects
2009-06-01
The resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we ... supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show ...
Regional Climate Modeling over the Marmara Region, Turkey, with Improved Land Cover Data
NASA Astrophysics Data System (ADS)
Sertel, E.; Robock, A.
2007-12-01
Land surface controls the partitioning of available energy at the surface between sensible and latent heat, and controls partitioning of available water between evaporation and runoff. Current land cover data available within regional climate models such as the Regional Atmospheric Modeling System (RAMS), the Fifth-Generation NCAR/Penn State Mesoscale Model (MM5) and Weather Research and Forecasting (WRF) were obtained from 1-km Advanced Very High Resolution Radiometer satellite images spanning April 1992 through March 1993 with an unsupervised classification technique. These data are not up-to-date and are not accurate for all regions or for some land cover types, such as urban areas. Here we introduce new, up-to-date and accurate land cover data for the Marmara Region, Turkey, derived from Landsat Enhanced Thematic Mapper images into the WRF regional climate model. We used several image processing techniques to create accurate land cover data from Landsat images obtained between 2001 and 2005. First, all images were atmospherically and radiometrically corrected to minimize contamination effects of atmospheric particles and systematic errors. Then, geometric correction was performed for each image to eliminate geometric distortions and define images in a common coordinate system. Finally, unsupervised and supervised classification techniques were utilized to form the most accurate land cover data yet for the study area. Accuracy assessments of the classifications were performed using error matrix and kappa statistics to find the best classification results. The maximum likelihood classification method gave the most accurate results over the study area. We compared the new land cover data with the default WRF land cover data. The WRF land cover data cannot represent urban areas in the cities of Istanbul, Izmit, and Bursa. As an example, both original satellite images and new land cover data showed the expansion of urban areas into the Istanbul metropolitan area, but in the WRF land cover data only a limited area along the Bosporus is shown as urban. In addition, the new land cover data indicate that the northern part of Istanbul is covered by evergreen and deciduous forest (verified by ground truth data), but the WRF data indicate that most of this region is croplands. In the northern part of the Marmara Region, there is bare ground as a result of open mining activities and this class can be identified in our land cover data, whereas the WRF data indicate this region as woodland. We then used this new data set to conduct WRF simulations for one main and two nested domains, where the inner-most domain represents the Marmara Region with 3 km horizontal resolution. The vertical domain of both main and nested domains extends over 28 vertical levels. Initial and boundary conditions were obtained from National Centers for Environmental Prediction-Department of Energy Reanalysis II and the Noah model was selected as the land surface model. Two model simulations were conducted; one with available land cover data and one with the newly created land cover data. Using detailed meteorological station data within the study area, we find that the simulation with the new land cover data set produces better temperature and precipitation simulations for the region, showing the value of accurate land cover data and that changing land cover data can be an important influence on local climate change.
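For reference, a minimal sketch of the accuracy assessment step mentioned above: building an error (confusion) matrix from reference versus classified labels and computing overall accuracy and Cohen's kappa; the label arrays are illustrative only:

```python
# Minimal sketch of the accuracy assessment described above: build an error
# (confusion) matrix from reference vs. classified labels and compute the
# overall accuracy and Cohen's kappa. Labels here are illustrative.
import numpy as np

reference  = np.array([0, 0, 1, 1, 2, 2, 2, 3, 3, 1])
classified = np.array([0, 1, 1, 1, 2, 2, 3, 3, 3, 1])

n_classes = 4
cm = np.zeros((n_classes, n_classes), dtype=int)
for r, c in zip(reference, classified):
    cm[r, c] += 1

n = cm.sum()
p_o = np.trace(cm) / n                                  # observed agreement
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
kappa = (p_o - p_e) / (1.0 - p_e)
print("error matrix:\n", cm)
print("overall accuracy:", p_o, " kappa:", round(kappa, 3))
```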
Palm: Easing the Burden of Analytical Performance Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
Indoor Modelling Benchmark for 3D Geometry Extraction
NASA Astrophysics Data System (ADS)
Thomson, C.; Boehm, J.
2014-06-01
A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are the data source of choice currently with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result this paper presents freely accessible pointcloud datasets of two typical areas of a building each captured with two different capture methods and each with an accurate wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.
Meris, Ronald G; Barbera, Joseph A
2014-01-01
In a large-scale outdoor, airborne, hazardous materials (HAZMAT) incident, such as ruptured chlorine rail cars during a train derailment, the local Incident Commanders and HAZMAT emergency responders must obtain accurate information quickly to assess the situation and act promptly and appropriately. HAZMAT responders must have a clear understanding of key information and how to integrate it into timely and effective decisions for action planning. This study examined the use of HAZMAT plume modeling as a decision support tool during incident action planning in this type of extreme HAZMAT incident. The concept of situation awareness as presented by Endsley's dynamic situation awareness model contains three levels: perception, comprehension, and projection. It was used to examine the actions of incident managers related to adequate data acquisition, current situational understanding, and accurate situation projection. Scientists and engineers have created software to simulate and predict HAZMAT plume behavior, the projected hazard impact areas, and the associated health effects. Incorporating the use of HAZMAT plume projection modeling into an incident action plan may be a complex process. The present analysis used a mixed qualitative and quantitative methodological approach and examined the use and limitations of a "HAZMAT Plume Modeling Cycle" process that can be integrated into the incident action planning cycle. HAZMAT response experts were interviewed using a computer-based simulation. One of the research conclusions indicated the "HAZMAT Plume Modeling Cycle" is a critical function so that an individual/team can be tasked with continually updating the hazard plume model with evolving data, promoting more accurate situation awareness.
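For orientation, a minimal steady-state Gaussian plume sketch, a much simpler stand-in for the operational plume models such responders would use; the dispersion-coefficient fits and release parameters are illustrative assumptions:

```python
# Minimal steady-state Gaussian plume sketch -- a far simpler stand-in for the
# operational plume models used in HAZMAT response. Dispersion coefficients
# follow a generic power-law form with illustrative constants.
import numpy as np

def concentration(x_m, y_m, z_m, Q_kg_s, u_m_s, H_m):
    """Gaussian plume concentration (kg/m^3) at downwind x, crosswind y, height z."""
    sigma_y = 0.08 * x_m / np.sqrt(1 + 0.0001 * x_m)   # illustrative neutral-stability fit
    sigma_z = 0.06 * x_m / np.sqrt(1 + 0.0015 * x_m)
    lateral = np.exp(-y_m**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z_m - H_m)**2 / (2 * sigma_z**2))
                + np.exp(-(z_m + H_m)**2 / (2 * sigma_z**2)))  # ground reflection term
    return Q_kg_s / (2 * np.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# Example: 5 kg/s release, 3 m/s wind, 2 m effective release height,
# concentration 500 m downwind on the plume centerline at breathing height.
print(concentration(500.0, 0.0, 1.5, 5.0, 3.0, 2.0), "kg/m^3")
```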
NASA Technical Reports Server (NTRS)
Brown, Richard B.; Navard, Andrew R.; Holland, Donald E.; McKellip, Rodney D.; Brannon, David P.
2010-01-01
Barringer Meteorite Crater or Meteor Crater, AZ, has been a site of high interest for lunar and Mars analog crater and terrain studies since the early days of the Apollo-Saturn program. It continues to be a site of exceptional interest to lunar, Mars, and other planetary crater and impact analog studies because of its relatively young age (est. 50 thousand years) and well-preserved structure. High resolution (2 meter to 1 decimeter) digital terrain models of Meteor Crater in whole or in part were created at NASA Stennis Space Center to support several lunar surface analog modeling activities using photogrammetric and ground based laser scanning techniques. The dataset created by this activity provides new and highly accurate 3D models of the inside slope of the crater as well as the downslope rock distribution of the western ejecta field. The data are presented to the science community for possible use in furthering studies of Meteor Crater and impact craters in general as well as its current near term lunar exploration use in providing a beneficial test model for lunar surface analog modeling and surface operation studies.
Modified allocation capacitated planning model in blood supply chain management
NASA Astrophysics Data System (ADS)
Mansur, A.; Vanany, I.; Arvitrida, N. I.
2018-04-01
Blood supply chain management (BSCM) is a complex management process that involves many cooperating stakeholders. BSCM involves four echelon processes, which are blood collection or procurement, production, inventory, and distribution. This research develops an optimization model of blood distribution planning. The efficiency of decentralization and centralization policies in a blood distribution chain is compared by optimizing the amount of blood delivered from a blood center to a blood bank. This model is developed based on the allocation problem of a capacitated planning model. At the first stage, the capacity and the cost of transportation are considered to create an initial capacitated planning model. Then, the inventory holding and shortage costs are added to the model. These additional inventory cost parameters make the model more realistic and accurate.
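A minimal sketch of an allocation-type capacitated planning model of this kind, assuming one blood center shipping to several blood banks and trading transport cost against shortage penalties (the data, costs, and exact formulation are placeholders, not the paper's model):

```python
# Illustrative allocation-type capacitated planning sketch: one blood center ships
# to several blood banks, trading transport cost against shortage penalties under
# supply and bank-capacity limits. All data are made up for the example.
import numpy as np
from scipy.optimize import linprog

demand   = np.array([80.0, 120.0, 60.0])   # units needed at each blood bank
capacity = np.array([100.0, 150.0, 70.0])  # storage capacity at each bank
supply   = 220.0                           # units available at the blood center
c_trans  = np.array([2.0, 3.5, 5.0])       # transport cost per unit to each bank
c_short  = 50.0                            # penalty per unit of unmet demand

n = len(demand)
# Decision vector z = [x_1..x_n (units shipped), s_1..s_n (shortage)]
c = np.concatenate([c_trans, np.full(n, c_short)])

# x_j + s_j >= d_j  ->  -x_j - s_j <= -d_j
A_demand = np.hstack([-np.eye(n), -np.eye(n)])
# sum_j x_j <= supply
A_supply = np.hstack([np.ones((1, n)), np.zeros((1, n))])
A_ub = np.vstack([A_demand, A_supply])
b_ub = np.concatenate([-demand, [supply]])

bounds = [(0, cap) for cap in capacity] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("shipments:", res.x[:n], "shortages:", res.x[n:])
```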
Creation of a 3D printed temporal bone model from clinical CT data.
Cohen, Joss; Reyes, Samuel A
2015-01-01
Generate and describe the process of creating a 3D printed, rapid prototype temporal bone model from clinical quality CT images. We describe a technique to create an accurate, alterable, and reproducible rapid prototype temporal bone model using freely available software to segment clinical CT data and generate three different 3D models composed of ABS plastic. Each model was evaluated based on the appearance and size of anatomical structures and response to surgical drilling. Mastoid air cells had retained scaffolding material in the initial versions. This required modifying the model to allow drainage of the scaffolding material. External auditory canal dimensions were similar to those measured from the clinical data. Malleus, incus, oval window, round window, promontory, horizontal semicircular canal, and mastoid segment of the facial nerve canal were identified in all models. The stapes was only partially formed in two models and absent in the third. Qualitative feel of the ABS plastic was softer than bone. The pate produced by drilling was similar to bone dust when appropriate irrigation was used. We present a rapid prototype temporal bone model made based on clinical CT data using 3D printing technology. The model can be made quickly and inexpensively enough to have potential applications for educational training. Copyright © 2015 Elsevier Inc. All rights reserved.
Predicting pKa values from EEM atomic charges
2013-01-01
The acid dissociation constant pKa is a very important molecular property, and there is a strong interest in the development of reliable and fast methods for pKa prediction. We have evaluated the pKa prediction capabilities of QSPR models based on empirical atomic charges calculated by the Electronegativity Equalization Method (EEM). Specifically, we collected 18 EEM parameter sets created for 8 different quantum mechanical (QM) charge calculation schemes. Afterwards, we prepared a training set of 74 substituted phenols. Additionally, for each molecule we generated its dissociated form by removing the phenolic hydrogen. For all the molecules in the training set, we then calculated EEM charges using the 18 parameter sets, and the QM charges using the 8 above mentioned charge calculation schemes. For each type of QM and EEM charges, we created one QSPR model employing charges from the non-dissociated molecules (three descriptor QSPR models), and one QSPR model based on charges from both dissociated and non-dissociated molecules (QSPR models with five descriptors). Afterwards, we calculated the quality criteria and evaluated all the QSPR models obtained. We found that QSPR models employing the EEM charges proved as a good approach for the prediction of pKa (63% of these models had R2 > 0.9, while the best had R2 = 0.924). As expected, QM QSPR models provided more accurate pKa predictions than the EEM QSPR models but the differences were not significant. Furthermore, a big advantage of the EEM QSPR models is that their descriptors (i.e., EEM atomic charges) can be calculated markedly faster than the QM charge descriptors. Moreover, we found that the EEM QSPR models are not so strongly influenced by the selection of the charge calculation approach as the QM QSPR models. The robustness of the EEM QSPR models was subsequently confirmed by cross-validation. The applicability of EEM QSPR models for other chemical classes was illustrated by a case study focused on carboxylic acids. In summary, EEM QSPR models constitute a fast and accurate pKa prediction approach that can be used in virtual screening. PMID:23574978
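A minimal sketch of a three-descriptor QSPR fit of the kind evaluated above, assuming the charges of the phenolic hydrogen, oxygen, and ipso carbon as descriptors and synthetic data (the descriptor choice and all numbers are assumptions, not the paper's data):

```python
# Minimal three-descriptor QSPR sketch: ordinary least-squares regression of pKa
# on atomic charges. Descriptor choice (phenolic H, O, and ipso C charges) and
# the synthetic values below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2)
n_mol = 74
q_H = rng.normal(0.35, 0.03, n_mol)
q_O = rng.normal(-0.55, 0.04, n_mol)
q_C = rng.normal(0.20, 0.03, n_mol)
pka = 10.0 - 12.0 * (q_H - 0.35) + 6.0 * (q_O + 0.55) - 4.0 * (q_C - 0.20) + rng.normal(0, 0.15, n_mol)

X = np.column_stack([q_H, q_O, q_C, np.ones(n_mol)])   # descriptors plus intercept
coef, *_ = np.linalg.lstsq(X, pka, rcond=None)
pred = X @ coef

ss_res = np.sum((pka - pred) ** 2)
ss_tot = np.sum((pka - pka.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 3))
print("R^2 on training set:", round(1 - ss_res / ss_tot, 3))
```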
NASA Astrophysics Data System (ADS)
Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris
2017-07-01
While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by the relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster, surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error and similar training and evaluation speeds to models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435 000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
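A workflow sketch of the machine-learning step, assuming a generic decision-tree ensemble and a random placeholder attribute matrix (the real elemental-property and Voronoi-tessellation attributes require structure analysis not shown here):

```python
# Workflow sketch: fit a decision-tree ensemble mapping attribute vectors to
# DFT formation enthalpies and report cross-validated MAE. The feature matrix is
# a random placeholder; the actual attributes (elemental-property statistics plus
# Voronoi-tessellation descriptors) require structure analysis not shown.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((2000, 50))                               # placeholder attribute vectors
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.05, 2000)     # placeholder formation enthalpies (eV/atom)

model = RandomForestRegressor(n_estimators=100, random_state=0, n_jobs=-1)
scores = cross_val_score(model, X, y, cv=10, scoring="neg_mean_absolute_error")
print("10-fold CV MAE: %.3f eV/atom" % -scores.mean())
```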
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate-that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
Kourgialas, Nektarios N; Dokou, Zoi; Karatzas, George P
2015-05-01
The purpose of this study was to create a modeling management tool for the simulation of extreme flow events under current and future climatic conditions. This tool is a combination of different components and can be applied in complex hydrogeological river basins, where frequent flood and drought phenomena occur. The first component is the statistical analysis of the available hydro-meteorological data. Specifically, principal components analysis was performed in order to quantify the importance of the hydro-meteorological parameters that affect the generation of extreme events. The second component is a prediction-forecasting artificial neural network (ANN) model that simulates, accurately and efficiently, river flow on an hourly basis. This model is based on a methodology that attempts to resolve a very difficult problem related to the accurate estimation of extreme flows. For this purpose, the available measurements (5 years of hourly data) were divided in two subsets: one for the dry and one for the wet periods of the hydrological year. This way, two ANNs were created, trained, tested and validated for a complex Mediterranean river basin in Crete, Greece. As part of the second management component a statistical downscaling tool was used for the creation of meteorological data according to the higher and lower emission climate change scenarios A2 and B1. These data are used as input in the ANN for the forecasting of river flow for the next two decades. The final component is the application of a meteorological index on the measured and forecasted precipitation and flow data, in order to assess the severity and duration of extreme events. Copyright © 2015 Elsevier Ltd. All rights reserved.
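A minimal sketch of the dry/wet two-network idea, assuming lagged rainfall and flow as inputs and synthetic hourly data (not the Crete basin dataset or the authors' network architecture):

```python
# Sketch of the two-network idea: split hourly records into dry- and wet-period
# subsets and train a separate feed-forward ANN on each. Inputs (lagged rainfall
# and flow) and the synthetic data are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 8000
rain_lag1, rain_lag2, flow_lag1 = rng.gamma(2, 1, n), rng.gamma(2, 1, n), rng.gamma(3, 2, n)
flow = 0.6 * flow_lag1 + 1.5 * rain_lag1 + 0.7 * rain_lag2 + rng.normal(0, 0.3, n)
is_wet = rain_lag1 + rain_lag2 > 4.0              # crude placeholder season flag

X = np.column_stack([rain_lag1, rain_lag2, flow_lag1])
models = {}
for season, mask in [("dry", ~is_wet), ("wet", is_wet)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X[mask], flow[mask], test_size=0.2, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
    ann.fit(X_tr, y_tr)
    models[season] = ann
    print(season, "R^2:", round(ann.score(X_te, y_te), 3))
```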
Fast flexible modeling of RNA structure using internal coordinates.
Flores, Samuel Coulbourn; Sherman, Michael A; Bruns, Christopher M; Eastman, Peter; Altman, Russ Biagio
2011-01-01
Modeling the structure and dynamics of large macromolecules remains a critical challenge. Molecular dynamics (MD) simulations are expensive because they model every atom independently, and are difficult to combine with experimentally derived knowledge. Assembly of molecules using fragments from libraries relies on the database of known structures and thus may not work for novel motifs. Coarse-grained modeling methods have yielded good results on large molecules but can suffer from difficulties in creating more detailed full atomic realizations. There is therefore a need for molecular modeling algorithms that remain chemically accurate and economical for large molecules, do not rely on fragment libraries, and can incorporate experimental information. RNABuilder works in the internal coordinate space of dihedral angles and thus has time requirements proportional to the number of moving parts rather than the number of atoms. It provides accurate physics-based response to applied forces, but also allows user-specified forces for incorporating experimental information. A particular strength of RNABuilder is that all Leontis-Westhof basepairs can be specified as primitives by the user to be satisfied during model construction. We apply RNABuilder to predict the structure of an RNA molecule with 160 bases from its secondary structure, as well as experimental information. Our model matches the known structure to 10.2 Angstroms RMSD and has low computational expense.
Building energy modeling for green architecture and intelligent dashboard applications
NASA Astrophysics Data System (ADS)
DeBlois, Justin
Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the passive technique called the roof solar chimney for reducing the cooling load in homes architecturally. Three models of the chimney were created: a zonal building energy model, computational fluid dynamics model, and numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it for building simulation reliably. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high and low efficiency constructions, and three ventilation strategies. The results showed that in four US climates, the roof solar chimney results in significant cooling load energy savings of up to 90%. After developing this new method for the small scale representation of a passive architecture technique in BEM, the study expanded the scope to address a fundamental issue in modeling - the implementation of the uncertainty from and improvement of occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration to the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the representation of unpredictable occupancy patterns on model results. Combined, these studies inform modelers and researchers on frameworks for simulating holistically designed architecture and improving the interaction between models and building occupants, in residential and commercial settings.
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
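A generic force-matching sketch for context: choose a cheap model's parameters to minimize the squared difference between its forces and reference (e.g., DFT) forces over sampled configurations. The pairwise exponential repulsion below is a stand-in, not the DFTB terms actually refit in the study:

```python
# Generic force-matching sketch: fit cheap-model parameters so its forces match
# reference forces over a set of sampled configurations. The pairwise exponential
# repulsion is only an illustrative stand-in for the refit DFTB terms.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
configs = [rng.random((8, 3)) * 6.0 for _ in range(20)]   # placeholder atomic coordinates (Angstrom)

def pair_forces(coords, A, b):
    """Forces from a pairwise repulsion U(r) = A*exp(-b*r)."""
    f = np.zeros_like(coords)
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            rij = coords[i] - coords[j]
            r = np.linalg.norm(rij)
            fij = A * b * np.exp(-b * r) * rij / r     # -dU/dr along rij (repulsive)
            f[i] += fij
            f[j] -= fij
    return f

A_ref, b_ref = 500.0, 2.2                                  # pretend these generate the "reference" forces
ref_forces = [pair_forces(c, A_ref, b_ref) for c in configs]

def residuals(params):
    A, b = params
    return np.concatenate([(pair_forces(c, A, b) - fr).ravel()
                           for c, fr in zip(configs, ref_forces)])

fit = least_squares(residuals, x0=[300.0, 1.5])
print("fitted (A, b):", fit.x)
```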
3D Boolean operations in virtual surgical planning.
Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun
2017-10-01
Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important in performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as a surface extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells (BORES)) and an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations as well as report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not handle singular edges or coplanar collisions, and created several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.
Facilitated sequence counting and assembly by template mutagenesis
Levy, Dan; Wigler, Michael
2014-01-01
Presently, inferring the long-range structure of the DNA templates is limited by short read lengths. Accurate template counts suffer from distortions occurring during PCR amplification. We explore the utility of introducing random mutations in identical or nearly identical templates to create distinguishable patterns that are inherited during subsequent copying. We simulate the applications of this process under assumptions of error-free sequencing and perfect mapping, using cytosine deamination as a model for mutation. The simulations demonstrate that within readily achievable conditions of nucleotide conversion and sequence coverage, we can accurately count the number of otherwise identical molecules as well as connect variants separated by long spans of identical sequence. We discuss many potential applications, such as transcript profiling, isoform assembly, haplotype phasing, and de novo genome assembly. PMID:25313059
Full-Body Musculoskeletal Model for Muscle-Driven Simulation of Human Gait.
Rajagopal, Apoorva; Dembia, Christopher L; DeMers, Matthew S; Delp, Denny D; Hicks, Jennifer L; Delp, Scott L
2016-10-01
Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source 3-D musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model's musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower extremity. The model is implemented in the open-source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations.
2015-12-01
USS Port Royal hit a coral reef in order to provide an independent review of the damage the ship sustained. Our classified report discussed ... explosion. Underwater explosions create a shock wave and a highly compressed gas bubble that expands and contracts. This can cause a type of vertical or ... conditions also remains unknown. Due to the dynamic nature of waves, the Navy cannot rely on modeling and simulation alone to provide an accurate ...
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD is often used to create and execute computational domains. Increasing complexity when moving from 2D to 3D geometries. Computational time increases as finer grids are used (accuracy). Strong tool, but takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's Drawbacks: Rigid command-line interface. Lackluster, unorganized documentation. No central control; multiple versions exist and have diverged.
The NIH 3D Print Exchange: A Public Resource for Bioscientific and Biomedical 3D Prints.
Coakley, Meghan F; Hurt, Darrell E; Weber, Nick; Mtingwa, Makazi; Fincher, Erin C; Alekseyev, Vsevelod; Chen, David T; Yun, Alvin; Gizaw, Metasebia; Swan, Jeremy; Yoo, Terry S; Huyen, Yentram
2014-09-01
The National Institutes of Health (NIH) has launched the NIH 3D Print Exchange, an online portal for discovering and creating bioscientifically relevant 3D models suitable for 3D printing, to provide both researchers and educators with a trusted source to discover accurate and informative models. There are a number of online resources for 3D prints, but there is a paucity of scientific models, and the expertise required to generate and validate such models remains a barrier. The NIH 3D Print Exchange fills this gap by providing novel, web-based tools that empower users with the ability to create ready-to-print 3D files from molecular structure data, microscopy image stacks, and computed tomography scan data. The NIH 3D Print Exchange facilitates open data sharing in a community-driven environment, and also includes various interactive features, as well as information and tutorials on 3D modeling software. As the first government-sponsored website dedicated to 3D printing, the NIH 3D Print Exchange is an important step forward to bringing 3D printing to the mainstream for scientific research and education.
Gonzales, Matthew J.; Sturgeon, Gregory; Segars, W. Paul; McCulloch, Andrew D.
2016-01-01
Cubic Hermite hexahedral finite element meshes have some well-known advantages over linear tetrahedral finite element meshes in biomechanical and anatomic modeling using isogeometric analysis. These include faster convergence rates as well as the ability to easily model rule-based anatomic features such as cardiac fiber directions. However, it is not possible to create closed complex objects with only regular nodes; these objects require the presence of extraordinary nodes (nodes with 3 or >= 5 adjacent elements in 2D) in the mesh. The presence of extraordinary nodes requires new constraints on the derivatives of adjacent elements to maintain continuity. We have developed a new method that uses an ensemble coordinate frame at the nodes and a local-to-global mapping to maintain continuity. In this paper, we make use of this mapping to create cubic Hermite models of the human ventricles and a four-chamber heart. We also extend the methods to the finite element equations to perform biomechanics simulations using these meshes. The new methods are validated using simple test models and applied to anatomically accurate ventricular meshes with valve annuli to simulate complete cardiac cycle simulations. PMID:27182096
Evaluation of Industry Standard Turbulence Models on an Axisymmetric Supersonic Compression Corner
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2015-01-01
Reynolds-averaged Navier-Stokes computations of a shock-wave/boundary-layer interaction (SWBLI) created by a Mach 2.85 flow over an axisymmetric 30-degree compression corner were carried out. The objectives were to evaluate four turbulence models commonly used in industry, for SWBLIs, and to evaluate the suitability of this test case for use in further turbulence model benchmarking. The Spalart-Allmaras model, Menter's Baseline and Shear Stress Transport models, and a low-Reynolds number k- model were evaluated. Results indicate that the models do not accurately predict the separation location; with the SST model predicting the separation onset too early and the other models predicting the onset too late. Overall the Spalart-Allmaras model did the best job in matching the experimental data. However there is significant room for improvement, most notably in the prediction of the turbulent shear stress. Density data showed that the simulations did not accurately predict the thermal boundary layer upstream of the SWBLI. The effect of turbulent Prandtl number and wall temperature were studied in an attempt to improve this prediction and understand their effects on the interaction. The data showed that both parameters can significantly affect the separation size and location, but did not improve the agreement with the experiment. This case proved challenging to compute and should provide a good test for future turbulence modeling work.
Parametric model of human body shape and ligaments for patient-specific epidural simulation.
Vaughan, Neil; Dubey, Venketesh N; Wee, Michael Y K; Isaacs, Richard
2014-10-01
This work is to build upon the concept of matching a person's weight, height and age to their overall body shape to create an adjustable three-dimensional model. A versatile and accurate predictor of body size and shape and ligament thickness is required to improve simulation for medical procedures. A model which is adjustable for any size, shape, body mass, age or height would provide ability to simulate procedures on patients of various body compositions. Three methods are provided for estimating body circumferences and ligament thicknesses for each patient. The first method is using empirical relations from body shape and size. The second method is to load a dataset from a magnetic resonance imaging (MRI) scan or ultrasound scan containing accurate ligament measurements. The third method is a developed artificial neural network (ANN) which uses MRI dataset as a training set and improves accuracy using error back-propagation, which learns to increase accuracy as more patient data is added. The ANN is trained and tested with clinical data from 23,088 patients. The ANN can predict subscapular skinfold thickness within 3.54 mm, waist circumference 3.92 cm, thigh circumference 2.00 cm, arm circumference 1.21 cm, calf circumference 1.40 cm, triceps skinfold thickness 3.43 mm. Alternative regression analysis method gave overall slightly less accurate predictions for subscapular skinfold thickness within 3.75 mm, waist circumference 3.84 cm, thigh circumference 2.16 cm, arm circumference 1.34 cm, calf circumference 1.46 cm, triceps skinfold thickness 3.89 mm. These calculations are used to display a 3D graphics model of the patient's body shape using OpenGL and adjusted by 3D mesh deformations. A patient-specific epidural simulator is presented using the developed body shape model, able to simulate needle insertion procedures on a 3D model of any patient size and shape. The developed ANN gave the most accurate results for body shape, size and ligament thickness. The resulting simulator offers the experience of simulating needle insertions accurately whilst allowing for variation in patient body mass, height or age. Copyright © 2014 Elsevier B.V. All rights reserved.
Height and Weight Estimation From Anthropometric Measurements Using Machine Learning Regressions
Fernandes, Bruno J. T.; Roque, Alexandre
2018-01-01
Height and weight are measurements used for tracking nutritional diseases, energy expenditure, clinical conditions, drug dosages, and infusion rates. Many patients are not ambulant or may be unable to communicate, and a combination of these factors may prevent accurate measurement; in those cases, height and weight can be estimated approximately by anthropometric means. Different groups have proposed different linear or non-linear equations whose coefficients are obtained by using single or multiple linear regressions. In this paper, we present a complete study of the application of different learning models to estimate height and weight from anthropometric measurements: support vector regression, Gaussian process, and artificial neural networks. The predicted values are significantly more accurate than those obtained with conventional linear regressions. In all the cases, the predictions are insensitive to ethnicity, and to gender, if more than two anthropometric parameters are analyzed. The learning model analysis creates new opportunities for anthropometric applications in industry, textile technology, security, and health care. PMID:29651366
Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...
2016-06-23
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
Detection of nucleotide-specific CRISPR/Cas9 modified alleles using multiplex ligation detection
KC, R.; Srivastava, A.; Wilkowski, J. M.; Richter, C. E.; Shavit, J. A.; Burke, D. T.; Bielas, S. L.
2016-01-01
CRISPR/Cas9 genome-editing has emerged as a powerful tool to create mutant alleles in model organisms. However, the precision with which these mutations are created has introduced a new set of complications for genotyping and colony management. Traditional gene-targeting approaches in many experimental organisms incorporated exogenous DNA and/or allele specific sequence that allow for genotyping strategies based on binary readout of PCR product amplification and size selection. In contrast, alleles created by non-homologous end-joining (NHEJ) repair of double-stranded DNA breaks generated by Cas9 are much less amenable to such strategies. Here we describe a novel genotyping strategy that is cost effective, sequence specific and allows for accurate and efficient multiplexing of small insertion-deletions and single-nucleotide variants characteristic of CRISPR/Cas9 edited alleles. We show that ligation detection reaction (LDR) can be used to generate products that are sequence specific and uniquely detected by product size and/or fluorescent tags. The method works independently of the model organism and will be useful for colony management as mutant alleles differing by a few nucleotides become more prevalent in experimental animal colonies. PMID:27557703
Full body musculoskeletal model for muscle-driven simulation of human gait
Rajagopal, Apoorva; Dembia, Christopher L.; DeMers, Matthew S.; Delp, Denny D.; Hicks, Jennifer L.; Delp, Scott L.
2017-01-01
Objective Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source, three-dimensional musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Methods Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model’s musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Results Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. Conclusion These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower-extremity. Significance The model is implemented in the open source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations. PMID:27392337
McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A
2012-07-01
TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single, multiple bolus or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized for the determination of appropriate dosing regimens based on reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or other exposure target. This program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies. Copyright © 2012 Elsevier Inc. All rights reserved.
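A minimal sketch of the core calculation such a tool performs, assuming a one-compartment model with first-order absorption evaluated per dose and summed by superposition (the PK parameters and dosing schedule are illustrative, not TK Modeler's defaults):

```python
# Sketch of the core calculation: a one-compartment, first-order-absorption model
# evaluated for each bolus dose and summed by superposition to get the diurnal
# blood concentration profile. Parameters and dosing schedule are illustrative.
import numpy as np

def conc_single_dose(t_h, dose_mg, ka, ke, V_L, F=1.0):
    """Bateman equation; returns 0 before the dose is given."""
    t = np.clip(t_h, 0.0, None)
    c = F * dose_mg * ka / (V_L * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
    return np.where(t_h < 0.0, 0.0, c)

ka, ke, V_L = 1.2, 0.173, 10.0          # 1/h, 1/h (half-life ~4 h), litres -- illustrative
dose_times_h = [0.0, 8.0, 16.0]         # three bolus doses per day
dose_mg = 25.0

t = np.linspace(0.0, 24.0, 481)
c_total = sum(conc_single_dose(t - t0, dose_mg, ka, ke, V_L) for t0 in dose_times_h)

print("Cmax ~ %.2f mg/L at t = %.1f h" % (c_total.max(), t[c_total.argmax()]))
print("AUC(0-24h) ~ %.1f mg*h/L" % np.trapz(c_total, t))
```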
Lower limb estimation from sparse landmarks using an articulated shape model.
Zhang, Ju; Fernandez, Justin; Hislop-Jambrich, Jacqui; Besier, Thor F
2016-12-08
Rapid generation of lower limb musculoskeletal models is essential for clinically applicable patient-specific gait modeling. Estimation of muscle and joint contact forces requires accurate representation of bone geometry and pose, as well as their muscle attachment sites, which define muscle moment arms. Motion-capture is a routine part of gait assessment but contains relatively sparse geometric information. Standard methods for creating customized models from motion-capture data scale a reference model without considering natural shape variations. We present an articulated statistical shape model of the left lower limb with embedded anatomical landmarks and muscle attachment regions. This model is used in an automatic workflow, implemented in an easy-to-use software application, that robustly and accurately estimates realistic lower limb bone geometry, pose, and muscle attachment regions from seven commonly used motion-capture landmarks. Estimated bone models were validated on noise-free marker positions to have a lower (p=0.001) surface-to-surface root-mean-squared error of 4.28mm, compared to 5.22mm using standard isotropic scaling. Errors at a variety of anatomical landmarks were also lower (8.6mm versus 10.8mm, p=0.001). We improve upon standard lower limb model scaling methods with shape model-constrained realistic bone geometries, regional muscle attachment sites, and higher accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Early prediction of student goals and affect in narrative-centered learning environments
NASA Astrophysics Data System (ADS)
Lee, Sunyoung
Recent years have seen a growing recognition of the role of goal and affect recognition in intelligent tutoring systems. Goal recognition is the task of inferring users' goals from a sequence of observations of their actions. Because of the uncertainty inherent in every facet of human computer interaction, goal recognition is challenging, particularly in contexts in which users can perform many actions in any order, as is the case with intelligent tutoring systems. Affect recognition is the task of identifying the emotional state of a user from a variety of physical cues, which are produced in response to affective changes in the individual. Accurately recognizing student goals and affect states could contribute to more effective and motivating interactions in intelligent tutoring systems. By exploiting knowledge of student goals and affect states, intelligent tutoring systems can dynamically modify their behavior to better support individual students. To create effective interactions in intelligent tutoring systems, goal and affect recognition models should satisfy two key requirements. First, because incorrectly predicted goals and affect states could significantly diminish the effectiveness of interactive systems, goal and affect recognition models should provide accurate predictions of user goals and affect states. When observations of users' activities become available, recognizers should make accurate "early" predictions. Second, goal and affect recognition models should be highly efficient so they can operate in real time. To address these key issues, we present an inductive approach to recognizing student goals and affect states in intelligent tutoring systems by learning goal and affect recognition models. Our work focuses on goal and affect recognition in an important new class of intelligent tutoring systems, narrative-centered learning environments. We report the results of empirical studies of induced recognition models from observations of students' interactions in narrative-centered learning environments. Experimental results suggest that induced models can make accurate early predictions of student goals and affect states, and they are sufficiently efficient to meet the real-time performance requirements of interactive learning environments.
NASA Astrophysics Data System (ADS)
Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.
2016-12-01
It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. The accurate estimation of the source characteristics is important because they are often unknown and the Atmospheric Transport and Dispersion (AT&D) models rely heavily on these estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method, known as the Variational Iterative Refinement STE algorithm (VIRSA). VIRSA consists of a combination of modeling systems. These systems include an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where we have information regarding the infrastructure of a city, the AT&D model used is the Urban Dispersion Model (UDM) and when using this model in VIRSA we refer to the system as uVIRSA. In all other scenarios where we do not have the city infrastructure information readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF) and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03) experiment, which was held in Oklahoma City, and also validate the performance of sVIRSA using scenarios from the FUsing Sensor Integrated Observing Network (FUSION) Field Trial 2007 (FFT07), held at Dugway Proving Grounds in rural Utah.
Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks
NASA Astrophysics Data System (ADS)
Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas
2017-03-01
PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and low cost. The transverse processes, as skeletal landmarks, are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but they do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes, which we localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. For evaluation, we used ground truth CT scans of pediatric scoliosis patients, from which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: Compared to ground truth CT, we achieved qualitatively accurate and intuitive visualization depicting the 3D deformation of the patient's spine.
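The landmark-driven warp described above can be sketched with an off-the-shelf thin-plate spline interpolator. The snippet below is a minimal illustration, assuming SciPy >= 1.7 for RBFInterpolator; the landmark coordinates and mesh vertices are made up.

```python
# Minimal sketch: fit a thin-plate spline transform between corresponding
# transverse-process landmarks, then apply it to every vertex of an average
# spine model. Coordinates below are invented placeholders.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks: rows are (x, y, z) positions of transverse processes.
model_landmarks = np.array([[0, 0, 0], [0, 0, 30], [5, 2, 60], [8, 3, 90], [6, 1, 120]], float)
patient_landmarks = np.array([[0, 0, 0], [2, 1, 31], [9, 4, 61], [14, 6, 92], [10, 3, 121]], float)

# Thin-plate spline interpolation of the landmark correspondence
# (one interpolator handles all three output coordinates at once).
tps = RBFInterpolator(model_landmarks, patient_landmarks, kernel="thin_plate_spline")

# Warp the vertices of the average (healthy) spine surface model.
model_vertices = np.random.rand(1000, 3) * [10, 5, 120]   # placeholder mesh vertices
warped_vertices = tps(model_vertices)
print(warped_vertices.shape)  # (1000, 3) -> patient-specific spine surface
```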
Predicting long-term graft survival in adult kidney transplant recipients.
Pinsky, Brett W; Lentine, Krista L; Ercole, Patrick R; Salvalaggio, Paolo R; Burroughs, Thomas E; Schnitzler, Mark A
2012-07-01
The ability to accurately predict a population's long-term survival has important implications for quantifying the benefits of transplantation. To identify a model that can accurately predict a kidney transplant population's long-term graft survival, we retrospectively studied United Network for Organ Sharing data from 13,111 kidney-only transplants completed in 1988-1989. Nineteen-year death-censored graft survival (DCGS) projections were calculated and compared with the population's actual graft survival. The projection curves were created using a two-part estimation model that (1) fits a Kaplan-Meier survival curve immediately after transplant (Part A) and (2) uses truncated observational data to model a survival function for long-term projection (Part B). Projection curves were examined using varying amounts of time to fit both parts of the model. The accuracy of the projection curve was determined by examining whether predicted survival fell within the 95% confidence interval for the 19-year Kaplan-Meier survival, and by the sample size needed to detect the difference in projected versus observed survival in a clinical trial. The 19-year DCGS was 40.7% (39.8-41.6%). Excellent predictability (41.3%) can be achieved when Part A is fit for three years and Part B is projected using two additional years of data. Using less than five total years of data tended to overestimate the population's long-term survival. Accurate prediction of long-term DCGS is possible, but requires attention to the quantity of data used in the projection method.
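A minimal sketch of a two-part projection in the spirit of the model described above: the observed survival curve is kept for the early period (Part A) and a constant-hazard tail fitted on the truncated data is extrapolated for the long-term projection (Part B). All numbers are synthetic and the exponential tail is an assumption, not the authors' Part B model.

```python
# Hedged illustration of a two-part survival projection with synthetic data.
import numpy as np

# Synthetic yearly death-censored graft survival proportions (years 0..5).
years = np.arange(0, 6)
observed_S = np.array([1.00, 0.93, 0.88, 0.84, 0.81, 0.78])

# Part A: keep the observed curve through year 3.
S3 = observed_S[3]

# Part B: estimate a constant hazard from the year 3 -> 5 decline,
#   S(t) = S(3) * exp(-h * (t - 3))  =>  h = -ln(S(5)/S(3)) / 2
h = -np.log(observed_S[5] / S3) / 2.0

def projected_survival(t):
    return S3 * np.exp(-h * (t - 3)) if t > 3 else observed_S[int(t)]

print(f"hazard = {h:.4f}/yr, projected 19-year DCGS = {projected_survival(19):.3f}")
```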
Ueno, Yutaka; Ito, Shuntaro; Konagaya, Akihiko
2014-12-01
To better understand the behaviors and structural dynamics of proteins within a cell, novel software tools are being developed that can create molecular animations based on the findings of structural biology. This study proposes a method, developed from our prototypes, for detecting collisions and examining the soft-body dynamics of molecular models. The code was implemented with a software development toolkit for rigid-body dynamics simulation and a three-dimensional graphics library. The essential functions of the target software system included a basic molecular modeling environment, collision detection in the molecular models, and physical simulation of the movement of the model. Taking advantage of recent software technologies such as physics simulation modules and an interpreted scripting language, the functions required for accurate and meaningful molecular animation were implemented efficiently.
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
2004-05-11
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution and Eruption
NASA Astrophysics Data System (ADS)
Leake, J. E.; Linton, M.; Schuck, P. W.
2017-12-01
Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the recent development of coronal models which are "data-driven" at the photosphere. Using magnetohydrodynamic simulations of active region formation and our recently created validation framework, we investigate the sources of errors in data-driven models that use surface measurements of the magnetic field, and derived MHD quantities, to model the coronal magnetic field. The primary sources of errors in these studies are the temporal and spatial resolution of the surface measurements. We will discuss the implications of these studies for accurately modeling the build-up and release of coronal magnetic energy based on photospheric magnetic field observations.
Physics-based analysis and control of human snoring
NASA Astrophysics Data System (ADS)
Sanchez, Yaselly; Wang, Junshi; Han, Pan; Xi, Jinxiang; Dong, Haibo
2017-11-01
In order to advance the understanding of biological fluid dynamics and its effects on the acoustics of human snoring, this study pursued a physics-based computational approach. From human magnetic resonance imaging (MRI) scans, the researchers developed anatomically and dynamically accurate airway-uvula models. With the airway defined as rigid and the uvula defined as flexible, computational models were created with various pharynx thicknesses and geometries. To determine vortex shedding with prescribed uvula movement, the uvula fluctuation was characterized by its specific parameters: magnitude, frequency, and phase lag. Uvula vibration modes were based on one oscillation, or one harmonic frequency, and pressure probes were located at seven different positions throughout the airway-uvula model. Fast Fourier transforms (FFT) of the pressure probe data showed that four harmonics were created within one oscillation of uvula movement. Of these, two pressure probes maintained high amplitudes, leading the researchers to believe that different vortices formed at different snoring frequencies. This work is supported by the NSF Grant CBET-1605434.
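The harmonic analysis step described above can be illustrated with a small FFT example. The sketch below invents a probe signal and a 20 Hz uvula oscillation frequency; it only shows the post-processing pattern, not the study's CFD data.

```python
# Take the FFT of a pressure-probe time series and report amplitudes at the
# harmonics of the prescribed uvula oscillation frequency (all data synthetic).
import numpy as np

fs = 2000.0                      # sampling rate of the probe signal, Hz
t = np.arange(0, 1.0, 1.0 / fs)  # one second of data
f0 = 20.0                        # assumed uvula oscillation frequency, Hz

# Synthetic probe signal containing four harmonics plus noise.
p = (1.0 * np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.3 * np.sin(2 * np.pi * 3 * f0 * t) + 0.2 * np.sin(2 * np.pi * 4 * f0 * t)
     + 0.05 * np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(p)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

for k in range(1, 5):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k}: {freqs[idx]:.1f} Hz, amplitude {spectrum[idx]:.3f}")
```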
Korhonen, L E; Turpeinen, M; Rahnasto, M; Wittekindt, C; Poso, A; Pelkonen, O; Raunio, H; Juvonen, R O
2007-01-01
Background and purpose: The cytochrome P450 2B6 (CYP2B6) enzyme metabolises a number of clinically important drugs. Drug-drug interactions resulting from inhibition or induction of CYP2B6 activity may cause serious adverse effects. The aims of this study were to construct a three-dimensional structure-activity relationship (3D-QSAR) model of the CYP2B6 protein and to identify novel potent and selective inhibitors of CYP2B6 for in vitro research purposes. Experimental approach: The inhibition potencies (IC50 values) of structurally diverse chemicals were determined with recombinant human CYP2B6 enzyme. Two successive models were constructed using Comparative Molecular Field Analysis (CoMFA). Key results: Three compounds proved to be very potent and selective competitive inhibitors of CYP2B6 in vitro (IC50<1 μM): 4-(4-chlorobenzyl)pyridine (CBP), 4-(4-nitrobenzyl)pyridine (NBP), and 4-benzylpyridine (BP). A complete inhibition of CYP2B6 activity was achieved with 0.1 μM CBP, whereas other CYP-related activities were not affected. Forty-one compounds were selected for further testing and construction of the final CoMFA model. The created CoMFA model was of high quality and predicted accurately the inhibition potency of a test set (n=7) of structurally diverse compounds. Conclusions and implications: Two CoMFA models were created which revealed the key molecular characteristics of inhibitors of the CYP2B6 enzyme. The final model accurately predicted the inhibitory potencies of several structurally unrelated compounds. CBP, BP and NBP were identified as novel potent and selective inhibitors of CYP2B6 and CBP especially is a suitable inhibitor for in vitro screening studies. PMID:17325652
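For context, IC50 values such as those reported above are typically obtained by fitting a Hill (four-parameter logistic) dose-response curve to the remaining enzyme activity measured at several inhibitor concentrations. The sketch below uses invented data points and is not the authors' assay or fitting protocol.

```python
# Illustrative Hill-curve fit for an IC50 estimate; data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    # Fraction of remaining enzyme activity at inhibitor concentration `conc`.
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])          # microM
activity = np.array([0.98, 0.95, 0.80, 0.55, 0.25, 0.10, 0.03])  # fraction of control

(ic50, slope), _ = curve_fit(hill, conc, activity, p0=[0.5, 1.0])
print(f"fitted IC50 = {ic50:.2f} microM, Hill slope = {slope:.2f}")
```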
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papadimitroulas, P; Kagadis, GC; Loudos, G
Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the "IT'IS Foundation". The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms into GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE "Materials Database". Several radiopharmaceuticals used in SPECT and PET applications are being tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq*sec), with the highest dose being absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children's computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and evaluating the results against clinically estimated doses.
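A hedged worked example of the bookkeeping behind organ S-factors, in the MIRD spirit of absorbed dose = cumulated activity × S-factor. Every number below (administered activity, residence time, S-factor) is a placeholder, not a value from the study.

```python
# Hypothetical MIRD-style dose calculation:
#   dose (Gy) = cumulated activity (MBq*s) * S-factor (Gy per MBq*s)

administered_MBq = 300.0        # injected 99mTc activity (hypothetical)
residence_time_s = 1800.0       # effective residence time in the source organ (hypothetical)
cumulated_activity = administered_MBq * residence_time_s   # MBq*s

S_kidney = 2.0e-8               # hypothetical S-factor, Gy/(MBq*s)
dose_Gy = cumulated_activity * S_kidney
print(f"kidney dose ~ {dose_Gy * 1000:.2f} mGy")
```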
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1999-01-01
The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made in the analysis were addressed and fully investigated for their accuracy by using the three-dimensional electromagnetic simulation code MAFIA (Solution of Maxwell's Equations by the Finite Integration Algorithm) (refs. 3 and 4). We found that several approximations introduced significant error (ref. 5).
Ethical issues when modelling brain disorders in non-human primates.
Neuhaus, Carolyn P
2018-05-01
Non-human animal models of human diseases advance our knowledge of the genetic underpinnings of disease and lead to the development of novel therapies for humans. While mice are the most common model organisms, their usefulness is limited. Larger animals may provide more accurate and valuable disease models, but it has, until recently, been challenging to create large animal disease models. Genome editors, such as Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR), meet some of these challenges and bring routine genome engineering of larger animals and non-human primates (NHPs) well within reach. There is growing interest in creating NHP models of brain disorders such as autism, depression and Alzheimer's, which are very difficult to model or study in other organisms, including humans. New treatments are desperately needed for this set of disorders. This paper is novel in asking: Insofar as NHPs are being considered for use as model organisms for brain disorders, can this be done ethically? The paper concludes that it cannot. Notwithstanding ongoing debate about NHPs' moral status, (1) animal welfare concerns, (2) the availability of alternative methods of studying brain disorders and (3) unmet expectations of benefit justify a stop on the creation of NHP model organisms to study brain disorders. The lure of using new genetic technologies combined with the promise of novel therapeutics presents a formidable challenge to those who call for slow, careful, and only necessary research involving NHPs. But researchers should not create macaques with social deficits or capuchin monkeys with memory deficits just because they can.
Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).
Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C
2016-08-01
Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) and of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors, approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments have shown that treatment efficacy is dependent on the induced field intensity. In clinical practice, software called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when adapting transducer arrays to its location. Ideally, a personalized head model could be created for each patient, to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than from distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models, with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of the electric field distribution.
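The "convex hull per tissue layer" simplification can be sketched with standard computational-geometry tools. The snippet below builds a hull from a synthetic point cloud standing in for a segmented tissue surface; real models would use points extracted from the patient's MRI segmentation.

```python
# Minimal sketch: convex hull of a point cloud standing in for one tissue layer.
import numpy as np
from scipy.spatial import ConvexHull

# Placeholder: points sampled near a noisy ellipsoid standing in for a scalp/skull layer.
rng = np.random.default_rng(0)
u, v = rng.uniform(0, np.pi, 2000), rng.uniform(0, 2 * np.pi, 2000)
pts = np.column_stack([80 * np.sin(u) * np.cos(v),
                       95 * np.sin(u) * np.sin(v),
                       70 * np.cos(u)]) + rng.normal(0, 1.0, (2000, 3))

hull = ConvexHull(pts)
print(f"hull: {len(hull.vertices)} vertices, {len(hull.simplices)} triangles, "
      f"volume = {hull.volume / 1000:.1f} cm^3")
# hull.simplices can be exported as a closed triangulated surface; repeating the
# step per tissue gives the nested layer geometry of the simplified head model.
```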
Is there hope for multi-site complexation modeling?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.
2006-06-06
It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, its ambitions are still reasonable. It was a pioneering attempt in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, where the effects of long-range electrostatic effects have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. It will only be after all model parameters can be predicted from theory, and the models validated against titration data, that we will be able to begin to have some confidence that we really are adequately describing the chemical systems in question.
Argueta, Edwin; Shaji, Jeena; Gopalan, Arun; Liao, Peilin; Snurr, Randall Q; Gómez-Gualdrón, Diego A
2018-01-09
Metal-organic frameworks (MOFs) are porous crystalline materials with attractive properties for gas separation and storage. Their remarkable tunability makes it possible to create millions of MOF variations but creates the need for fast material screening to identify promising structures. Computational high-throughput screening (HTS) is a possible solution, but its usefulness is tied to accurate predictions of MOF adsorption properties. Accurate adsorption simulations often require an accurate description of electrostatic interactions, which depend on the electronic charges of the MOF atoms. HTS-compatible methods to assign charges to MOF atoms need to accurately reproduce electrostatic potentials (ESPs) and be computationally affordable, but current methods present an unsatisfactory trade-off between computational cost and accuracy. We illustrate a method to assign charges to MOF atoms based on ab initio calculations on MOF molecular building blocks. A library of building blocks with built-in charges is thus created and used by an automated MOF construction code to create hundreds of MOFs with charges "inherited" from the constituent building blocks. The molecular building block-based (MBBB) charges are similar to REPEAT charges (charges that reproduce ESPs obtained from ab initio calculations on crystallographic unit cells of nanoporous crystals), and thus similar predictions of adsorption loadings, heats of adsorption, and Henry's constants are obtained with either method. The presented results indicate that the MBBB method to assign charges to MOF atoms is suitable for use in computational high-throughput screening of MOFs for applications that involve adsorption of molecules such as carbon dioxide.
Creating Body Shapes From Verbal Descriptions by Linking Similarity Spaces.
Hill, Matthew Q; Streuber, Stephan; Hahn, Carina A; Black, Michael J; O'Toole, Alice J
2016-11-01
Brief verbal descriptions of people's bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features.
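One way to read the cross-space mapping described above is as a linear least-squares map from description-space coordinates to shape-space coordinates. The sketch below uses synthetic data and an assumed linear form; it is an illustration, not the authors' procedure.

```python
# Learn a linear map between two similarity spaces with least squares, then
# predict shape-space coordinates for a new description. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n, d_lang, d_shape = 200, 8, 5
L = rng.normal(size=(n, d_lang))                         # description-space coordinates
true_W = rng.normal(size=(d_lang, d_shape))
S = L @ true_W + 0.1 * rng.normal(size=(n, d_shape))     # shape-space coordinates

W, *_ = np.linalg.lstsq(L, S, rcond=None)                # fit the linear mapping

new_description = rng.normal(size=(1, d_lang))
predicted_shape = new_description @ W                    # coordinates to feed a 3D body model
print(predicted_shape.round(2))
```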
Survey of business process management: challenges and solutions
NASA Astrophysics Data System (ADS)
Alotaibi, Youseef; Liu, Fei
2017-09-01
The current literature shows that creating a good business process model (PM) framework is not an easy task. A successful business PM should have the ability to ensure accurate alignment between business processes (BPs) and information technology (IT) designs, provide security protection, manage the rapidly changing business environment and BPs, manage customer power, be flexible for reengineering, and ensure that IT goals can be easily derived from business goals so that an information system (IS) can be easily implemented. This article presents an overview of research in the business PM domain. We present a review of the challenges facing business PMs, such as misalignment between business and IT, difficulty of deriving IT goals from business goals, creating secure business PMs, reengineering BPs, managing the rapidly changing BP and business environment, and managing customer power. It also presents the limitations of existing business PM frameworks. Finally, we outline several guidelines for creating good business PMs and possible further research directions in the business PM domain.
Component model reduction via the projection and assembly method
NASA Technical Reports Server (NTRS)
Bernard, Douglas E.
1989-01-01
The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low-order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced-order component models to meet system-level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full-order system model, performing model reduction at the system level using system-level requirements, and then projecting the desired modes onto the components for component-level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of the program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ and UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules-of-thumb;" this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so therefore that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
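A minimal illustration of the Monte Carlo limit-state idea on the simple lumped spring-mass example mentioned above: uncertainty in stiffness and mass is propagated to the natural frequency and compared against a made-up frequency requirement. The dispersions and the requirement value are assumptions, not program numbers.

```python
# Monte Carlo propagation of parameter uncertainty through f_n = sqrt(k/m)/(2*pi)
# and evaluation of a limit-state function against a hypothetical requirement.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
k = rng.normal(1.0e6, 5.0e4, N)        # stiffness, N/m (hypothetical dispersion)
m = rng.normal(250.0, 10.0, N)         # mass, kg (hypothetical dispersion)

f_n = np.sqrt(k / m) / (2.0 * np.pi)   # natural frequency, Hz

f_req = 9.5                            # hypothetical requirement: f_n >= 9.5 Hz
g = f_n - f_req                        # limit-state function, failure when g < 0
prob_fail = np.mean(g < 0.0)
print(f"mean f_n = {f_n.mean():.2f} Hz, P(requirement violated) = {prob_fail:.4f}")
```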
Creating three-dimensional tooth models from tomographic images.
Lima da Silva, Isaac Newton; Barbosa, Gustavo Frainer; Soares, Rodrigo Borowski Grecco; Beltrao, Maria Cecilia Gomes; Spohr, Ana Maria; Mota, Eduardo Golcalves; Oshima, Hugo Mitsuo Silva; Burnett, Luiz Henrique
2008-01-01
The use of Finite Element Analysis (FEA) is becoming very frequent in Dentistry. However, most of the three-dimensional models of teeth presented in the literature are limited in terms of geometry. Discrepancies in shape and dimensions can lead to erroneous results. Sharp cusps and faceted contours can produce stress concentrations that are inconsistent with reality. The aim of this study was the processing of tomographic images in order to develop an advanced three-dimensional reconstruction of the anatomy of a molar tooth and the integration of the resulting solid with commercially available CAD/CAE software. Computed tomographic images were obtained from 0.5 mm thick slices of a mandibular molar and transferred to commercial CAD software. Once the point cloud data had been generated, these points were processed with Pro/Engineer software to obtain a solid model of the tooth. The obtained tooth model showed very accurate shape and dimensions, as it was derived from real tooth data with an error of 0.0 to -0.8 mm. The methodology presented was efficient for creating a biomodel of a tooth from tomographic images that realistically represents its anatomy.
Surgeon-Based 3D Printing for Microvascular Bone Flaps.
Taylor, Erin M; Iorio, Matthew L
2017-07-01
Background Three-dimensional (3D) printing has developed as a revolutionary technology with the capacity to design accurate physical models in preoperative planning. We present our experience in surgeon-based design of 3D models, using home 3D software and printing technology for use as an adjunct in vascularized bone transfer. Methods Home 3D printing techniques were used in the design and execution of vascularized bone flap transfers to the upper extremity. Open source imaging software was used to convert preoperative computed tomography scans and create 3D models. These were printed in the surgeon's office as 3D models for the planned reconstruction. Vascularized bone flaps were designed intraoperatively based on the 3D printed models. Results Three-dimensional models were created for intraoperative use in vascularized bone flaps, including (1) medial femoral trochlea (MFT) flap for scaphoid avascular necrosis and nonunion, (2) MFT flap for lunate avascular necrosis and nonunion, (3) medial femoral condyle (MFC) flap for wrist arthrodesis, and (4) free fibula osteocutaneous flap for distal radius septic nonunion. Templates based on the 3D models allowed for the precise and rapid contouring of well-vascularized bone flaps in situ, prior to ligating the donor pedicle. Conclusions Surgeon-based 3D printing is a feasible, innovative technology that allows for the precise and rapid contouring of models that can be created in various configurations for pre- and intraoperative planning. The technology is easy to use, convenient, and highly economical as compared with traditional send-out manufacturing. Surgeon-based 3D printing is a useful adjunct in vascularized bone transfer. Level of Evidence Level IV.
12 CFR 1273.9 - Audit Committee.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the accurate and meaningful combination of information submitted by the Banks in the Bank System's... prevention or detection of management override or compromise of the internal control system; and (ii... information submitted by the Banks to the OF to be combined to create accurate and meaningful combined...
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
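The scale of the burden is easy to reproduce with back-of-the-envelope arithmetic, as in the sketch below; the 12 ms per power flow figure is an assumed illustrative value, not taken from the report.

```python
# Back-of-the-envelope arithmetic for a year-long QSTS run at 1-second resolution.
seconds_per_year = 365 * 24 * 3600           # 31,536,000 sequential time steps
ms_per_power_flow = 12.0                     # assumed solver time per step (illustrative)
hours = seconds_per_year * ms_per_power_flow / 1000 / 3600
print(f"{seconds_per_year:,} power flows -> ~{hours:.0f} hours of wall-clock time")
```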
Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.
2012-01-01
Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
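For orientation, the two latent quantities named above, discriminability and response bias, have simple non-hierarchical point estimates in classical signal detection theory (d' and criterion c). The sketch below computes them from invented hit and false-alarm counts; the study itself estimates these parameters within a hierarchical Bayesian model rather than in this way.

```python
# Classical (non-hierarchical) signal-detection point estimates from a
# delayed recognition task; counts are invented.
from scipy.stats import norm

hits, misses = 17, 3               # responses to previously studied items
false_alarms, correct_rej = 5, 15  # responses to new (lure) items

# Log-linear correction avoids infinite z-scores at rates of 0 or 1.
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rej + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # memory process
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```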
Fusing visual and behavioral cues for modeling user experience in games.
Shaker, Noor; Asteriadis, Stylianos; Yannakakis, Georgios N; Karpouzis, Kostas
2013-12-01
Estimating affective and cognitive states in conditions of rich human-computer interaction, such as in games, is a field of growing academic and commercial interest. Entertainment and serious games can benefit from recent advances in the field, as having access to predictors of the current state of the player (or learner) can provide useful information for feeding adaptation mechanisms that aim to maximize engagement or learning effects. In this paper, we introduce a large data corpus derived from 58 participants playing the popular Super Mario Bros platform game and attempt to create accurate models of player experience for this game genre. In the current research, features extracted from player gameplay behavior, game levels, and player visual characteristics have been used as potential indicators of reported affect expressed as pairwise preferences between different game sessions. Using neuroevolutionary preference learning and automatic feature selection, highly accurate models of reported engagement, frustration, and challenge are constructed (model accuracies reach 91%, 92%, and 88% for engagement, frustration, and challenge, respectively). As a step further, the derived player experience models can be used to personalize the game level to desired levels of engagement, frustration, and challenge, as game content is mapped to player experience through the behavioral and expressivity patterns of each player.
Shankle, William R; Pooley, James P; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D
2013-01-01
Determining how cognition affects functional abilities is important in Alzheimer disease and related disorders. A total of 280 patients (normal or Alzheimer disease and related disorders) received a total of 1514 assessments using the functional assessment staging test (FAST) procedure and the MCI Screen. A hierarchical Bayesian cognitive processing model was created by embedding a signal detection theory model of the MCI Screen-delayed recognition memory task into a hierarchical Bayesian framework. The signal detection theory model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the 6 FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. Hierarchical Bayesian cognitive processing models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition into a continuous measure of functional severity for both individuals and FAST groups. Such a translation links 2 levels of brain information processing and may enable more accurate correlations with other levels, such as those characterized by biomarkers.
Fluid Flow Technology that Measures Up
NASA Technical Reports Server (NTRS)
2004-01-01
From 1994 to 1996, NASA's Marshall Space Flight Center conducted a Center Director's Discretionary Fund research effort to apply artificial intelligence technologies to the health management of plant equipment and space propulsion systems. Through this effort, NASA established a business relationship with Quality Monitoring and Control (QMC), of Kingwood, Texas, to provide hardware modeling and artificial intelligence tools. Very detailed and accurate Space Shuttle Main Engine (SSME) analyses and algorithms were jointly created, which identified several missing, critical instrumentation needs for adequately evaluating the engine health status. One of the missing instruments was a liquid oxygen (LOX) flow measurement. This instrument had been missing since the original SSME's LOX turbine flow meter failed during a ground test, resulting in considerable damage for NASA. New balanced flow meter technology addresses this need with robust, safe, and accurate flow metering hardware.
Analysis and optimization of the active rigidity joint
NASA Astrophysics Data System (ADS)
Manzo, Justin; Garcia, Ephrahim
2009-12-01
The active rigidity joint is a composite mechanism using shape memory alloy and shape memory polymer to create a passively rigid joint with thermally activated deflection. A new model for the active rigidity joint relaxes constraints of earlier methods and allows for more accurate deflection predictions compared to finite element results. Using an iterative process to determine the strain distribution and deflection, the method demonstrates accurate results for both surface bonded and embedded actuators with and without external loading. Deflection capabilities are explored through simulated annealing heuristic optimization using a variety of cost functions to explore actuator performance. A family of responses presents actuator characteristics in terms of load bearing and deflection capabilities given material and thermal constraints. Optimization greatly expands the available workspace of the active rigidity joint from the initial configuration, demonstrating specific work capabilities comparable to those of muscle tissue.
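The heuristic optimization step can be sketched with a generic simulated-annealing call. The cost function, design variables, and bounds below are hypothetical stand-ins, not the actuator model from the paper; SciPy's dual_annealing is used for the annealing loop.

```python
# Hypothetical design optimization by simulated annealing: maximize a surrogate
# deflection measure while penalizing actuator mass. Everything here is a stand-in.
import numpy as np
from scipy.optimize import dual_annealing

def cost(x):
    actuator_thickness, offset = x
    # Surrogate: deflection grows with offset and saturates with thickness;
    # a quadratic term penalizes heavier actuators.
    deflection = offset * np.tanh(3.0 * actuator_thickness)
    mass_penalty = 5.0 * actuator_thickness ** 2
    return -deflection + mass_penalty

bounds = [(0.1, 2.0),    # actuator thickness (mm), hypothetical range
          (0.0, 10.0)]   # actuator offset from neutral axis (mm), hypothetical range

result = dual_annealing(cost, bounds, maxiter=200, seed=7)
print(result.x, result.fun)
```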
Nonlinear modeling of chaotic time series: Theory and applications
NASA Astrophysics Data System (ADS)
Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J.; Desjardins, D.; Hunter, N.; Theiler, J.
We review recent developments in the modeling and prediction of nonlinear time series. In some cases, apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases, it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years, methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics, and human speech.
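A compact illustration of the pipeline described above: reconstruct a state space by delay-coordinate embedding, then forecast with a nearest-neighbor (local) model. The logistic map stands in for an observed chaotic series; the embedding dimension and delay are chosen for the toy example.

```python
# Delay embedding plus one-step nearest-neighbor forecasting on a chaotic series.
import numpy as np

# Generate a chaotic scalar time series (logistic map, r = 4).
x = np.empty(2000); x[0] = 0.3
for i in range(1, x.size):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

m, tau = 2, 1   # embedding dimension and delay

def embed(series, m, tau):
    N = series.size - (m - 1) * tau
    return np.column_stack([series[i * tau: i * tau + N] for i in range(m)])

train, test = x[:1500], x[1500:]
E = embed(train, m, tau)             # reconstructed states
targets = train[(m - 1) * tau + 1:]  # value one step after each state
E = E[:-1]                           # align states with one-step-ahead targets

# One-step nearest-neighbor forecasts over the test segment.
errs = []
for i in range((m - 1) * tau, test.size - 1):
    state = test[i - (m - 1) * tau: i + 1: tau]
    nn = np.argmin(np.sum((E - state) ** 2, axis=1))
    errs.append(abs(targets[nn] - test[i + 1]))
print(f"mean one-step |error| = {np.mean(errs):.4f}")
```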
ERIC Educational Resources Information Center
Cheney, Carol
1995-01-01
For colleges and schools concerned with the image they project to the public, guidelines for creating and presenting a graphic identity, particularly for fund raising, are offered. Design principles include creating a unified communications program, accurately reflecting the institution's mission and spirit, inspiring confidence in workers and…
78 FR 60861 - Native American Tribal Insignia Database
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... Database ACTION: Proposed collection; comment request. SUMMARY: The United States Patent and Trademark... the report was that the USPTO create and maintain an accurate and comprehensive database containing... this recommendation, the Senate Committee on Appropriations directed the USPTO to create this database...
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Rowe, T. Scott; Fuller, Alfred R.; Hamann, Bernd; Werner, John S.
2010-02-01
An accurate solid eye model (with volumetric retinal morphology) has many applications in the field of ophthalmology, including evaluation of ophthalmic instruments and optometry/ophthalmology training. We present a method that uses volumetric OCT retinal data sets to produce an anatomically correct representation of three-dimensional (3D) retinal layers. This information is exported to a laser scanning system to re-create the retinal morphology of the eye used in OCT testing within a solid eye model. The solid optical model eye is constructed from PMMA acrylic, with equivalent optical power to that of the human eye (~58 D). Additionally, we tested a water bath eye model from Eyetech Ltd. with a customized retina consisting of five layers of ~60 μm thick biaxial polypropylene film and hot melt rubber adhesive.
3D Digitization and Prototyping of the Skull for Practical Use in the Teaching of Human Anatomy.
Lozano, Maria Teresa Ugidos; Haro, Fernando Blaya; Diaz, Carlos Molino; Manzoor, Sadia; Ugidos, Gonzalo Ferrer; Mendez, Juan Antonio Juanes
2017-05-01
The development of new rapid prototyping techniques and low-cost 3D printers, together with new software for these techniques, has made it possible to create 3D bone models and apply them in the teaching of anatomy in the faculties of Health Sciences. The full-scale 3D model of the cranium created in the present work presents accurate reliefs and anatomical details that are easily identifiable by undergraduate students studying human anatomy. This article reports the process of scanning the skull and the subsequent processing of these images with specific software, through to the generation of the 3D model with a 3D printer.
The Puzzling Unidimensionality of DSM-5 Substance Use Disorder Diagnoses
MacCoun, Robert J.
2013-01-01
There is a perennial expert debate about the criteria to be included or excluded for the DSM diagnoses of substance use dependence. Yet analysts routinely report evidence for the unidimensionality of the resulting checklist. If in fact the checklist is unidimensional, the experts are wrong that the criteria are distinct, so either the experts are mistaken or the reported unidimensionality is spurious. I argue for the latter position, and suggest that the traditional reflective measurement model is inappropriate for the DSM; a formative measurement model would be a more accurate characterization of the institutional process by which the checklist is created, and a network or causal model would be a more appropriate foundation for a scientifically grounded diagnostic system. PMID:24324446
Subsonic Wing Optimization for Handling Qualities Using ACSYNT
NASA Technical Reports Server (NTRS)
Soban, Danielle Suzanne
1996-01-01
The capability to accurately and rapidly predict aircraft stability derivatives using one comprehensive analysis tool has been created. The PREDAVOR tool has the following capabilities: rapid estimation of stability derivatives using a vortex lattice method, calculation of a longitudinal handling qualities metric, and inherent methodology to optimize a given aircraft configuration for longitudinal handling qualities, including an intuitive graphical interface. The PREDAVOR tool may be applied to both subsonic and supersonic designs, as well as conventional and unconventional, symmetric and asymmetric configurations. The workstation-based tool uses as its model a three-dimensional model of the configuration generated using a computer aided design (CAD) package. The PREDAVOR tool was applied to a Lear Jet Model 23 and the North American XB-70 Valkyrie.
Effects of including electrojet turbulence in LFM-RCM simulations of geospace storms
NASA Astrophysics Data System (ADS)
Oppenheim, M. M.; Wiltberger, M. J.; Merkin, V. G.; Zhang, B.; Toffoletto, F.; Wang, W.; Lyon, J.; Liu, J.; Dimant, Y. S.
2016-12-01
Global geospace system simulations need to incorporate nonlinear and small-scale physical processes in order to accurately model storms and other intense events. During times of strong magnetospheric disturbances, large-amplitude electric fields penetrate from the Earth's magnetosphere to the E-region ionosphere where they drive Farley-Buneman instabilities (FBI) that create small-scale plasma density turbulence. This induces nonlinear currents and leads to anomalous electron heating. Current global Magnetosphere-Ionosphere-Thermosphere (MIT) models disregard these effects by assuming simple laminar ionospheric currents. This paper discusses the effects of incorporating accurate turbulent conductivities into MIT models. Recently, we showed in Liu et al. (2016) that during storm-time, turbulence increases the electron temperatures and conductivities more than precipitation. In this talk, we present the effect of adding these effects to the combined Lyon-Fedder-Mobarry (LFM) global MHD magnetosphere simulator and the Rice Convection Model (RCM). The LFM combines a magnetohydrodynamic (MHD) simulation of the magnetosphere with a 2D electrostatic solution of the ionosphere. The RCM uses drift physics to accurately model the inner magnetosphere, including a storm enhanced ring current. The LFM and coupled LFM-RCM simulations have previously shown unrealistically high cross-polar-cap potentials during strong solar wind driving conditions. We have recently implemented an LFM module that modifies the ionospheric conductivity to account for FBI driven anomalous electron heating and non-linear cross-field current enhancements as a function of the predicted ionospheric electric field. We have also improved the LFM-RCM code by making it capable of handling dipole tilts and asymmetric ionospheric solutions. We have tested this new LFM version by simulating the March 17, 2013 geomagnetic storm. These simulations showed a significant reduction in the cross-polar-cap potential during the strongest driving conditions, significant increases in the ionospheric conductivity in the auroral oval, and better agreement with DMSP observations of sub-auroral polarization streams. We conclude that accurate MIT simulations of geospace storms require the inclusion of turbulent conductivities.
Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum
NASA Astrophysics Data System (ADS)
Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.
2013-02-01
Besides the demonstration of their findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. Data collection was carried out using a structured light scanner consisting of two machine vision cameras used to determine the geometry of the object, a high-resolution camera for recording the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial software and in-house software developed to automate various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, and the use of various software packages presumes the services of a specialist.
Creation of Consistent Burn Wounds: A Rat Model
Cai, Elijah Zhengyang; Ang, Chuan Han; Raju, Ashvin; Tan, Kong Bing; Hing, Eileen Chor Hoong; Loo, Yihua; Wong, Yong Chiat; Lee, Hanjing; Lim, Jane; Moochhala, Shabbir M; Hauser, Charlotte AE
2014-01-01
Background Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods Ten male Sprague-Dawley rats were anesthetized and dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested on its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results Average wound size was 0.9957 cm2 (standard deviation [SD] 0.1845) (n=30). Wounds created with duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined, uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for duration of 5, 10, 20 seconds respectively. Burn duration of 5 seconds resulted in full-thickness damage. Burn duration of 10 seconds and 20 seconds resulted in full-thickness damage, involving subjacent skeletal muscle. Conclusions This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model. PMID:25075351
Multi Sensor Data Integration for AN Accurate 3d Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imagery also suffers, in many cases, from undulated road surfaces, non-conforming building shapes, loss of minute details like street furniture, etc. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensated for each data set's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, the noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or the final 3D model, was generally noise free and without unnecessary details.
Modeling reacting gases and aftertreatment devices for internal combustion engines
NASA Astrophysics Data System (ADS)
Depcik, Christopher David
As more emphasis is placed worldwide on reducing greenhouse gas emissions, automobile manufacturers have to create more efficient engines. Simultaneously, legislative agencies want these engines to produce fewer problematic emissions such as nitrogen oxides and particulate matter. In response, newer combustion methods, like homogeneous charge compression ignition and fuel cells, are being researched alongside the old standard of efficiency, the compression ignition or diesel engine. These newer technologies present a number of benefits but still have significant challenges to overcome. As a result, renewed interest has risen in making diesel engines cleaner. The key to cleaning up the diesel engine is the placement of aftertreatment devices in the exhaust. These devices have shown great potential in reducing emission levels below regulatory levels while still allowing for increased fuel economy versus a gasoline engine. However, these devices are subject to many flow control issues. While experimental evaluation of these devices helps to understand these issues better, it is impossible to solve the problem through experimentation alone because of time and cost constraints. Because of this, accurate models are needed in conjunction with the experimental work. In this dissertation, the author examines the entire exhaust system, including reacting gas dynamics and aftertreatment devices, and develops a complete numerical model for it. The author begins by analyzing the current one-dimensional gas-dynamics simulation models used for internal combustion engine simulations. It appears that more accurate and faster numerical methods are available, in particular those developed in aeronautical engineering, and the author successfully implements one for the exhaust system. The author then conducts a comprehensive literature review to better understand the aftertreatment devices. A number of these devices require a secondary injection of fuel or reductant in the exhaust stream. Accordingly, the author develops a simple post-cylinder injection model which can be easily tuned to match experimental findings. In addition, the author creates a general catalyst model which can be used to model virtually all of the different aftertreatment devices. Extensive validation of this model with experimental data is presented, along with all of the numerical algorithms needed to reproduce the model.
Predicting age groups of Twitter users based on language and metadata features.
Morgan-Lopez, Antonio A; Kim, Annice E; Chew, Robert F; Ruddle, Paul
2017-01-01
Health organizations are increasingly using social media, such as Twitter, to disseminate health messages to target audiences. Determining the extent to which the target audience (e.g., age groups) was reached is critical to evaluating the impact of social media education campaigns. The main objective of this study was to examine the separate and joint predictive validity of linguistic and metadata features in predicting the age of Twitter users. We created a labeled dataset of Twitter users across different age groups (youth, young adults, adults) by collecting publicly available birthday announcement tweets using the Twitter Search application programming interface. We manually reviewed results and, for each age-labeled handle, collected the 200 most recent publicly available tweets and user handles' metadata. The labeled data were split into training and test datasets. We created separate models to examine the predictive validity of language features only, metadata features only, language and metadata features, and words/phrases from another age-validated dataset. We estimated accuracy, precision, recall, and F1 metrics for each model. An L1-regularized logistic regression model was fit for each age group, and predicted probabilities between the training and test sets were compared for each age group. Cohen's d effect sizes were calculated to examine the relative importance of significant features. Models containing both Tweet language features and metadata features performed the best (74% precision, 74% recall, 74% F1) while the model containing only Twitter metadata features was the least accurate (58% precision, 60% recall, and 57% F1 score). Top predictive features included use of terms such as "school" for youth and "college" for young adults. Overall, it was more challenging to predict older adults accurately. These results suggest that examining linguistic and Twitter metadata features to predict youth and young adult Twitter users may be helpful for informing public health surveillance and evaluation research.
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT) data derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
Vecsei, Bálint; Joós-Kovács, Gellért; Borbély, Judit; Hermann, Péter
2017-04-01
To compare the accuracy (trueness, precision) of direct and indirect scanning CAD/CAM methods. A master cast with prepared abutments and edentulous parts was created from polymethyl methacrylate (PMMA). A high-resolution industrial scanner was used to create a reference model. Polyvinyl-siloxane (PVS) impressions and digital impressions with three intraoral scanners (iTero, Cerec, Trios) were made (n=10 for each) from the PMMA model. A laboratory scanner (Scan CS2) was used to digitize the sectioned cast made from the PVS impressions. The stereolithographic (STL) files of the impressions (n=40) were exported. Each file was compared to the reference using Geomagic Verify software. Six points were assigned to enable virtual calliper measurement of three distances of varying size within the arch. Methods were compared using interquartile range regression and equality-of-variance tests for precision, and mixed-effects linear regression for trueness. The mean (SD) deviation of short distance measurements from the reference value was -40.3 (79.7) μm using the indirect, and 22.3 (40.0) μm using the direct method. For the medium distance, indirect measurements deviated by 5.2 (SD: 111.3) μm, and direct measurements by 115.8 (SD: 50.7) μm, on average; for the long distance, the corresponding estimates were -325.8 (SD: 134.1) μm with the indirect, and -163.5 (SD: 145.5) μm with the direct method. Significant differences were found between the two methods (p<0.05). With both methods, the shorter the distance, the more accurate results were achieved. Virtual models obtained by digital impressions can be more accurate than their conventional counterparts. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Propellant Chemistry for CFD Applications
NASA Technical Reports Server (NTRS)
Farmer, R. C.; Anderson, P. G.; Cheng, Gary C.
1996-01-01
Current concepts for reusable launch vehicle design have created renewed interest in the use of RP-1 fuels for high pressure and tri-propellant propulsion systems. Such designs require the use of an analytical technology that accurately accounts for the effects of real fluid properties, combustion of large hydrocarbon fuel molecules, and the possibility of soot formation. These effects are inadequately treated in current computational fluid dynamic (CFD) codes used for propulsion system analyses. The objective of this investigation is to provide an accurate analytical description of hydrocarbon combustion thermodynamics and kinetics that is sufficiently computationally efficient to be a practical design tool when used with CFD codes such as the FDNS code. A rigorous description of real fluid properties for RP-1 and its combustion products will be derived from the literature and from experiments conducted in this investigation. Upon the establishment of such a description, the fluid description will be simplified by using the minimum of empiricism necessary to maintain accurate combustion analyses and including such empirical models into an appropriate CFD code. An additional benefit of this approach is that the real fluid properties analysis simplifies the introduction of the effects of droplet sprays into the combustion model. Typical species compositions of RP-1 have been identified, surrogate fuels have been established for analyses, and combustion and sooting reaction kinetics models have been developed. Methods for predicting the necessary real fluid properties have been developed and essential experiments have been designed. Verification studies are in progress, and preliminary results from these studies will be presented. The approach has been determined to be feasible, and upon its completion the required methodology for accurate performance and heat transfer CFD analyses for high pressure, tri-propellant propulsion systems will be available.
NASA Astrophysics Data System (ADS)
Hong, Yoon-Seok; Rosen, Michael R.
2002-03-01
An urban fractured-rock aquifer system, where storm water is disposed of via 'soak holes' drilled directly into the top of the fractured basalt, is highly dynamic, and the theory and knowledge needed to build a model of it are still incomplete and insufficient. Formulating an accurate mechanistic model, usually based on first principles (physical and chemical laws, mass balance, diffusion and transport, etc.), is therefore a time-consuming and costly task. Instead of a human developing the mechanistic model, this paper presents an approach that uses genetic programming (GP) to evolve models automatically for the dynamic behaviour of groundwater level fluctuations affected by storm water infiltration. The GP automatically evolves mathematical models with an understandable structure, represented as function trees, by natural selection ('survival of the fittest') through the genetic operators of reproduction, crossover, and mutation. The simulation results show that GP is not only capable of predicting the groundwater level fluctuation due to storm water infiltration but also provides insight into the dynamic behaviour of a partially known urban fractured-rock aquifer system by allowing knowledge extraction from the evolved models. Our results show that GP can work as a cost-effective modelling tool, enabling us to create prototype models quickly and inexpensively and assisting us in developing accurate models in less time, even with limited experience and incomplete knowledge of an urban fractured-rock aquifer system affected by storm water infiltration.
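To make the function-tree idea concrete, the sketch below evolves symbolic expressions against synthetic data. It is a deliberately simplified, mutation-only illustration of GP-style model evolution (the study's GP also uses crossover and reproduction, real monitoring data, and its own fitness design); the target function and all values here are placeholders.

```python
# Simplified GP-style evolution of expression trees (mutation-only sketch).
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)

# Synthetic "storm water infiltration" input and groundwater-level response.
x = rng.uniform(0.0, 2.0, 200)
y = 0.8 * x**2 + 0.3 * x + rng.normal(0.0, 0.05, 200)   # unknown "true" system

OPS = {"+": np.add, "-": np.subtract, "*": np.multiply}

def random_tree(depth=3):
    """Build a random expression tree over {x, constants, +, -, *}."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else round(random.uniform(-1, 1), 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return np.full_like(x, float(tree))
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random subtree."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def fitness(tree):
    return float(np.mean((evaluate(tree, x) - y) ** 2))

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)              # "survival of the fittest"
    parents = population[:50]
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]

best = min(population, key=fitness)
print("best tree:", best, "MSE:", fitness(best))
```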
NASA Astrophysics Data System (ADS)
Teodor, F.; Marinescu, V.; Epureanu, A.
2016-11-01
Modeling of reconfigurable manufacturing systems could have been done using existing Petri net types, but the complexity and dynamics of these new manufacturing systems, mainly their reconfiguration feature, required a more compact representation with many variables, one able to model accurately not only the normal operation of the production system but also to capture and model the reconfiguration process. Thus, it was necessary to create a new class of Petri nets, called RPD3D (three-dimensional Developed Petri nets). The name reflects both the lineage (a new class derived from Developed Petri nets, created in 2000 by Prof. Dr. Ing. Vasile Marinescu in his doctoral thesis) [1] and the most important of the new defining features (the transformation from a 2D model into a 3D model). The idea was to introduce a third dimension into the classical Petri net model so that multiple levels (layers) of 2D or 3D Petri nets can be overlaid and can interact with each other (receiving or giving commands to enable or disable the various modules, together simulating the operation of a reconfigurable manufacturing system). The aim is to present this new type of Petri net, RPD3D (three-dimensional Developed Petri nets), used for the optimal control and simulation of reconfigurable manufacturing systems and of the manufacture of products by such systems.
Laubinger, Sascha; Zeller, Georg; Henz, Stefan R; Sachsenberg, Timo; Widmer, Christian K; Naouar, Naïra; Vuylsteke, Marnik; Schölkopf, Bernhard; Rätsch, Gunnar; Weigel, Detlef
2008-01-01
Gene expression maps for model organisms, including Arabidopsis thaliana, have typically been created using gene-centric expression arrays. Here, we describe a comprehensive expression atlas, Arabidopsis thaliana Tiling Array Express (At-TAX), which is based on whole-genome tiling arrays. We demonstrate that tiling arrays are accurate tools for gene expression analysis and identified more than 1,000 unannotated transcribed regions. Visualizations of gene expression estimates, transcribed regions, and tiling probe measurements are accessible online at the At-TAX homepage. PMID:18613972
Nagle, D.D.; Campbell, B.G.; Lowery, M.A.
2009-01-01
The increasing use and importance of lakes for water supply to communities enhance the need for an accurate methodology to determine lake bathymetry and storage capacity. A global positioning receiver and a fathometer were used to collect position data and water depth in February 2008 at Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and stage-area and -volume relations were created from the geographic information database.
Restoration and reconstruction from overlapping images
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Kaiser, Daniel J.; Hanson, Andrew L.; Li, Jing
1997-01-01
This paper describes a technique for restoring and reconstructing a scene from overlapping images. In situations where there are multiple, overlapping images of the same scene, it may be desirable to create a single image that most closely approximates the scene, based on all of the data in the available images. For example, successive swaths acquired by NASA's planned Moderate Resolution Imaging Spectrometer (MODIS) will overlap, particularly at wide scan angles, creating a severe visual artifact in the output image. Resampling the overlapping swaths to produce a more accurate image on a uniform grid requires restoration and reconstruction. The one-pass restoration and reconstruction technique developed in this paper yields mean-square-optimal resampling, based on a comprehensive end-to-end system model that accounts for image overlap, and subject to user-defined and data-availability constraints on the spatial support of the filter.
A comparative study on different methods of automatic mesh generation of human femurs.
Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A
1998-01-01
The aim of this study was to evaluate comparatively five methods for automated mesh generation (AMG) when used to mesh a human femur. The five AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.
Three-Dimensional Modeling of Weed Plants Using Low-Cost Photogrammetry
Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José
2018-01-01
Advances in sensing for plant phenotyping are of vital importance in basic and applied plant research. Plant phenotyping enables the modeling of complex shapes, which is useful, for example, in decision-making for agronomic management. In this sense, the use of 3D processing algorithms for plant modeling is expanding rapidly with the emergence of new sensors and techniques designed to characterize plant morphology. However, there are still some technical aspects to be improved, such as accurate reconstruction of end-details. This study adapted low-cost techniques, Structure from Motion (SfM) and MultiView Stereo (MVS), to create 3D models for reconstructing plants of three weed species with contrasting shapes and plant structures. Plant reconstruction was developed by applying SfM algorithms to an input set of digital images acquired sequentially following a track that was concentric and equidistant with respect to the plant axis, using three different angles from a perpendicular to a top view, which guaranteed the necessary overlap between images to obtain high-precision 3D models. With this information, a dense point cloud was created using MVS, from which a 3D polygon mesh representing every plant's shape and geometry was generated. These 3D models were validated against ground truth values (e.g., plant height, leaf area (LA) and plant dry biomass) using regression methods. The results showed, in general, good consistency in the correlation equations between the values estimated from the models and the actual values measured on the weed plants. Indeed, 3D modeling using SfM algorithms proved to be a valuable methodology for weed phenotyping, since it accurately estimated the actual values of plant height and LA. Additionally, image processing using the SfM method was relatively fast. Consequently, our results indicate the potential of this budget system for plant reconstruction at high detail, which may be usable in several scenarios, including outdoor conditions. Future research should address other issues, such as the time-cost relationship and the need for detail in the different approaches. PMID:29614039
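As an illustration of the validation step described above, the following sketch estimates plant height and a convex-hull leaf-area proxy from an exported point cloud and regresses estimates against measured values. The file name, the leaf-area proxy, and the numbers are illustrative assumptions, not the study's data or code.

```python
# Validate 3D-model estimates against ground truth (illustrative sketch).
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import linregress

points = np.loadtxt("weed_plant_cloud.xyz")        # hypothetical file: x, y, z (metres)

# Plant height: vertical extent of the reconstructed point cloud.
est_height = points[:, 2].max() - points[:, 2].min()

# Crude leaf-area proxy: area of the convex hull of the x-y projection.
est_area = ConvexHull(points[:, :2]).volume        # for a 2D hull, "volume" is the area

# Regression of estimated vs. measured values over several plants (placeholder data).
estimated = np.array([0.12, 0.25, 0.31, 0.44, 0.52])
measured = np.array([0.11, 0.27, 0.30, 0.47, 0.50])
fit = linregress(estimated, measured)
print(f"height={est_height:.3f} m, hull area={est_area:.4f} m^2, R^2={fit.rvalue**2:.2f}")
```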
Maharlou, Hamidreza; Niakan Kalhori, Sharareh R; Shahbazi, Shahrbanoo; Ravangard, Ramin
2018-04-01
Accurate prediction of patients' length of stay is highly important. This study compared the performance of artificial neural network and adaptive neuro-fuzzy system algorithms to predict patients' length of stay in intensive care units (ICU) after cardiac surgery. A cross-sectional, analytical, and applied study was conducted. The required data were collected from 311 cardiac patients admitted to intensive care units after surgery at three hospitals of Shiraz, Iran, through a non-random convenience sampling method during the second quarter of 2016. Following the initial processing of influential factors, models were created and evaluated. The results showed that the adaptive neuro-fuzzy algorithm (with mean squared error [MSE] = 7 and R = 0.88) resulted in a more precise model than the artificial neural network (with MSE = 21 and R = 0.60). The adaptive neuro-fuzzy algorithm produces a more accurate model because, as a hybrid algorithm, it applies both the capabilities of a neural network architecture and experts' knowledge. It identifies nonlinear components, yielding remarkable results for predicting the length of stay, which is a useful output to support ICU management, enabling higher quality of administration and cost reduction.
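A minimal sketch of the ANN half of such a comparison is shown below, using scikit-learn as a stand-in for the study's own ANN implementation (the neuro-fuzzy counterpart is not reproduced here); the patient features and outcomes are synthetic placeholders.

```python
# Train an ANN length-of-stay predictor and report MSE and R (illustrative sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(311, 8))                                  # influential pre-operative factors
y = 3.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, 311)   # ICU length of stay (days)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
pred = ann.predict(X_te)

mse = mean_squared_error(y_te, pred)
r = np.corrcoef(y_te, pred)[0, 1]                              # the paper reports MSE and R
print(f"MSE={mse:.2f}, R={r:.2f}")
```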
Kim, Jong Bae; Brienza, David M
2006-01-01
A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models, by analyzing the accuracy of dimensional measurements in a virtual environment and comparing dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.
Cognitive Change Questionnaire as a method for cognitive impairment screening
Damin, Antonio Eduardo; Nitrini, Ricardo; Brucki, Sonia Maria Dozzi
2015-01-01
The Cognitive Change Questionnaire (CCQ) was created as an effective measure of cognitive change that is easy to use and suitable for application in Brazil. Objective: To evaluate whether the CCQ can accurately distinguish normal subjects from individuals with Mild Cognitive Impairment (MCI) and/or early stage dementia, and to develop a briefer questionnaire, based on the original 22-item CCQ (CCQ22), that contains fewer questions. Methods: A total of 123 individuals were evaluated: 42 healthy controls, 40 patients with MCI and 41 with mild dementia. The evaluation was performed using cognitive tests based on individual performance and on questionnaires administered to informants. The CCQ22 was created based on a selection of questions that experts deemed useful in screening for early stage dementia. Results: The CCQ22 showed good accuracy for distinguishing between the groups. Statistical models selected the eight questions with the greatest power to discriminate between the groups. The area under the ROC curve for the final 8-item version of the CCQ (CCQ8) demonstrated good accuracy in differentiating between groups, good correlation with the final diagnosis (r=0.861) and adequate internal consistency (Cronbach's α=0.876). Conclusion: The CCQ8 can be used to accurately differentiate between normal subjects and individuals with cognitive impairment, constituting a brief and appropriate instrument for cognitive screening. PMID:29213967
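The two psychometric quantities reported above (discrimination between groups and internal consistency) can be illustrated with the short sketch below, which computes an ROC AUC and Cronbach's alpha on synthetic item-response data; it is not the CCQ data or the authors' analysis.

```python
# Discrimination (ROC AUC) and internal consistency (Cronbach's alpha) on toy data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
items = rng.integers(0, 3, size=(123, 8))                       # 8 questionnaire items, 123 subjects
total = items.sum(axis=1)
impaired = (rng.random(123) < 1 / (1 + np.exp(-(total - 8)))).astype(int)  # synthetic diagnosis

# Discrimination between groups: area under the ROC curve of the total score.
auc = roc_auc_score(impaired, total)

# Internal consistency: Cronbach's alpha.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / total.var(ddof=1))
print(f"AUC={auc:.2f}, Cronbach's alpha={alpha:.2f}")
```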
Intuitive web-based experimental design for high-throughput biomedical data.
Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven
2015-01-01
Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data are accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and meta-information for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.
Hansen, J V; Nelson, R D
1997-01-01
Ever since the initial planning for the 1997 Utah legislative session, neural-network forecasting techniques have provided valuable insights for analysts forecasting tax revenues. These revenue estimates are critically important since agency budgets, support for education, and improvements to infrastructure all depend on their accuracy. Underforecasting generates windfalls that concern taxpayers, whereas overforecasting produces budget shortfalls that cause inadequately funded commitments. The pattern-finding ability of neural networks gives insightful and alternative views of the seasonal and cyclical components commonly found in economic time series data. Two applications of neural networks to revenue forecasting clearly demonstrate how these models complement traditional time series techniques. In the first, preoccupation with a potential downturn in the economy distracts analysis based on traditional time series methods so that it overlooks an emerging new phenomenon in the data. In this case, neural networks identify the new pattern that then allows modification of the time series models and finally gives more accurate forecasts. In the second application, data structure found by traditional statistical tools allows analysts to provide neural networks with important information that the networks then use to create more accurate models. In summary, for the Utah revenue outlook, the insights that result from a portfolio of forecasts that includes neural networks exceed the understanding generated from strictly statistical forecasting techniques. In this case, the synergy clearly results in the whole of the portfolio of forecasts being more accurate than the sum of the individual parts.
NASA Astrophysics Data System (ADS)
Graham, N. M.
2015-12-01
The evolution and speciation of plants are directly tied to the environment, as the constrained stages of dispersal create strong genetic differentiation among populations. This can result in differing genetic patterns between nuclear and chloroplast loci, where genes are inherited differently and dispersed via separate vectors. By developing distribution models based on genetic patterns found within a species, it is possible to begin understanding the influence of historic geomorphic and/or climatic processes on population evolution. If genetic patterns of the current range correlate with specific patterns of climate variability within the Pleistocene, it is possible that future shifts in species distribution in response to climate change can be more accurately modelled due to the historic signature found within inherited genes. Preliminary genetic analyses of Linanthus dichotomus, an annual herb distributed across California, suggest that the current taxonomic treatment does not accurately depict how this species is evolving. Genetic patterns of chloroplast genes suggest that populations are more correlated with biogeography than the current nomenclature indicates. Additionally, chloroplast and nuclear genes show discrepancies in dispersal across the landscape, suggesting pollinator-driven gene flow overcoming seed dispersal boundaries. By comparing discrepancies between pollinator- and seed-induced gene flow we may be able to gain insight into historical pollinator communities within the Pleistocene. This information can then be applied to projected climate models to more accurately understand how species and/or communities will respond to a changing environment.
Reducing Our Carbon Footprint: Frontiers in Climate Forecasting (LBNL Science at the Theater)
Collins, Bill [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2018-06-07
Bill Collins directs Berkeley Lab's research dedicated to atmospheric and climate science. Previously, he headed the development of one of the leading climate models used in international studies of global warming. His work has confirmed that man-made greenhouse gases are probably the main culprits of recent warming and that future warming poses very real challenges for the environment and society. A lead author of the most recent assessment of the science of climate change by the United Nations' Intergovernmental Panel on Climate Change, Collins wants to create a new kind of climate model, one that will integrate cutting-edge climate science with accurate predictions people can use to plan their lives.
Hierarchical image-based rendering using texture mapping hardware
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, N
1999-01-15
Multi-layered depth images containing color and normal information for subobjects in a hierarchical scene model are precomputed with standard z-buffer hardware for six orthogonal views. These are adaptively selected according to the proximity of the viewpoint, and combined using hardware texture mapping to create "reprojected" output images for new viewpoints. (If a subobject is too close to the viewpoint, the polygons in the original model are rendered.) Specific z-ranges are selected from the textures with the hardware alpha test to give accurate 3D reprojection. The OpenGL color matrix is used to transform the precomputed normals into their orientations in the final view, for hardware shading.
Syntactic dependency parsers for biomedical-NLP.
Cohen, Raphael; Elhadad, Michael
2012-01-01
Syntactic parsers have made a leap in accuracy and speed in recent years. The high-order structural information provided by dependency parsers is useful for a variety of NLP applications. We present a biomedical model for the EasyFirst parser, a fast and accurate parser for creating Stanford Dependencies. We evaluate the biomedical-domain models of EasyFirst and Clear-Parser on a number of task-oriented metrics. Both parsers provide state-of-the-art speed, with accuracy on the GENIA corpus of over 89%. We show that Clear-Parser excels at tasks relating to negation identification while EasyFirst excels at tasks relating to Named Entities and is more robust to changes in domain.
NASA Astrophysics Data System (ADS)
Burnham, Brian Scott
Outcrop analogue studies of fluvial sedimentary systems are often undertaken to identify spatial and temporal characteristics (e.g. stacking patterns, lateral continuity, lithofacies proportions). However, the lateral extent typically exceeds that of the exposure, and/or the true width and thickness are not apparent. Accurate characterisation of fluvial sand bodies is integral to accurate identification and subsequent modelling of aquifer and hydrocarbon reservoir architecture. The studies presented in this thesis utilise techniques that integrate lidar, high-resolution photography and differential geospatial measurements to create accurate three-dimensional (3D) digital outcrop models (DOMs) of continuous 3D and laterally extensive 2D outcrop exposures. The sedimentary architecture of outcrops in the medial portion of a large Distributive Fluvial System (DFS) (the Huesca fluvial fan) in the Ebro Basin, north-east Spain, and in the fluvio-deltaic succession of the Breathitt Group in the eastern Appalachian Basin, USA, is evaluated using traditional sedimentological and digital outcrop analytical techniques. The major sand bodies in the study areas are quantitatively analysed to accurately characterise spatial and temporal changes in sand body architecture, from two different outcrop exposure types and scales. Several stochastic reservoir simulations were created to approximate fluvial sand body lithological composition and connectivity within the medial portion of the Huesca DFS. The results demonstrate a workflow, and the adaptation of current digital outcrop methodology required for each study, to approximate true geobody widths and thicknesses and to characterise architectural patterns (internal and external) of major fluvial sand bodies interpreted as products of DFSs in the Huesca fluvial fan, and of both palaeovalleys and progradational DFSs in the Pikeville and Hyden Formations of the Breathitt Group. The results suggest key geostatistical metrics, translatable across any fluvial system, that can be used to analyse 3D digital outcrop data and to identify spatial attributes of sand bodies so as to establish their genetic origin and lithological composition within fluvial reservoir systems and the rock record. 3D quantitative analysis of the major sand bodies has allowed more accurate width vs. thickness relationships to be established within the La Serreta area, showing a vertical increase in width and channel-fill facies, and demonstrates a 22% increase of in-channel facies over previous interpretations. Additionally, deposits that are products of a nodal avulsion event have been characterised and are interpreted to be the cause of the increase in width and channel-fill facies. Furthermore, analysis of the Pikeville and Hyden Fms shows that they contain sand bodies of stacked distributaries and palaeovalleys, as previously interpreted, and demonstrates that a 3D spatial approach to determining basin-wide architectural trends is integral to identifying the genetic origin and preservation potential of sand bodies of both palaeovalleys and distributive fluvial systems. The geostatistics assembled in the thesis demonstrate the efficacy of integrated lidar studies of outcrop analogues and provide empirical relationships that can be applied to subsurface analogues for reservoir model development and to the distribution of both DFS and palaeovalley depositional systems in the rock record.
Hammer, K A; Janes, F R
1995-01-01
The objectives for developing the participative method of subject definition were to gain all the relevant information to a high level of fidelity in the earliest stages of the work and so be able to build a realistic model at reduced labour cost. In order to better integrate the two activities--information acquisition and mathematical modelling--a procedure was devised using the methods of interactive management to facilitate teamwork. This procedure provided the techniques to create suitable working relationships between the two groups, the informants and the modellers, so as to maximize their free and accurate intercommunication, both during the initial definition of the linen service and during the monitoring of the accuracy and reality of the draft models. The objectives of this project were met in that the final model was quickly validated and approved, at a low labour cost.
Creating and validating cis-regulatory maps of tissue-specific gene expression regulation
O'Connor, Timothy R.; Bailey, Timothy L.
2014-01-01
Predicting which genomic regions control the transcription of a given gene is a challenge. We present a novel computational approach for creating and validating maps that associate genomic regions (cis-regulatory modules–CRMs) with genes. The method infers regulatory relationships that explain gene expression observed in a test tissue using widely available genomic data for ‘other’ tissues. To predict the regulatory targets of a CRM, we use cross-tissue correlation between histone modifications present at the CRM and expression at genes within 1 Mbp of it. To validate cis-regulatory maps, we show that they yield more accurate models of gene expression than carefully constructed control maps. These gene expression models predict observed gene expression from transcription factor binding in the CRMs linked to that gene. We show that our maps are able to identify long-range regulatory interactions and improve substantially over maps linking genes and CRMs based on either the control maps or a ‘nearest neighbor’ heuristic. Our results also show that it is essential to include CRMs predicted in multiple tissues during map-building, that H3K27ac is the most informative histone modification, and that CAGE is the most informative measure of gene expression for creating cis-regulatory maps. PMID:25200088
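A minimal sketch of the linking rule described above appears below: a CRM's histone signal is correlated with the expression of each gene within 1 Mbp across the 'other' tissues, and the best-correlated gene is taken as its predicted target. The arrays, gene names, and positions are synthetic placeholders, not the published pipeline.

```python
# Link a CRM to a nearby gene via cross-tissue correlation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n_tissues = 20
crm_h3k27ac = rng.normal(size=n_tissues)                     # CRM histone signal per tissue
genes = {
    "geneA": {"tss": 150_000, "expr": crm_h3k27ac * 0.9 + rng.normal(0, 0.3, n_tissues)},
    "geneB": {"tss": 600_000, "expr": rng.normal(size=n_tissues)},
    "geneC": {"tss": 2_500_000, "expr": crm_h3k27ac + rng.normal(0, 0.1, n_tissues)},
}
crm_position = 100_000

best_gene, best_r = None, -np.inf
for name, g in genes.items():
    if abs(g["tss"] - crm_position) > 1_000_000:              # only genes within 1 Mbp
        continue
    r = np.corrcoef(crm_h3k27ac, g["expr"])[0, 1]
    if r > best_r:
        best_gene, best_r = name, r

print(f"predicted target of CRM: {best_gene} (r = {best_r:.2f})")
```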
NASA Astrophysics Data System (ADS)
Kalkisim, A. T.; Hasiloglu, A. S.; Bilen, K.
2016-04-01
Because the refrigerant R134a, which is used in automobile air conditioning systems and has a high global warming impact, will be phased out gradually, an alternative gas that can be used without major changes to existing air conditioning systems is sought. The aim is to obtain performance estimates at intermediate operating points more easily by creating a neural network model for the case of using a fluid (R152a) in automobile air conditioning systems whose thermodynamic properties are close to those of R134a and whose global warming impact is near zero. To this end, a network structure giving the most accurate result was established by identifying which model trains best with which network structure and makes the most accurate predictions, in light of the data obtained after five different ANN models were trained with three different network structures. During training of the artificial neural network, Quick Propagation, Quasi-Newton, Levenberg-Marquardt and Conjugate Gradient Descent Batch Back Propagation methods, each with five inputs and one output, were trained with various network structures. Over 1500 iterations were evaluated and the most appropriate model was identified by determining minimum error rates. The accuracy of the selected ANN model was demonstrated by comparison with estimates made by the multiple-regression method.
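The model-selection idea, training several candidate networks and keeping the one with the lowest error, can be sketched as follows; scikit-learn solvers and layer sizes stand in for the training methods named above, and the five inputs and the performance output are synthetic placeholders.

```python
# Train several candidate networks and keep the lowest-error one (illustrative sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 5))                   # five operating-condition inputs
y = X @ np.array([2.0, -1.0, 0.5, 1.5, -0.3]) + rng.normal(0, 0.05, 500)  # performance output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
candidates = [MLPRegressor(hidden_layer_sizes=h, solver=s, max_iter=5000, random_state=1)
              for s in ("lbfgs", "adam") for h in ((5,), (10,), (10, 5))]

errors = []
for model in candidates:
    model.fit(X_tr, y_tr)
    errors.append(mean_squared_error(y_te, model.predict(X_te)))

best = candidates[int(np.argmin(errors))]
print("best configuration:", best.solver, best.hidden_layer_sizes, "MSE:", min(errors))
```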
Building hierarchical models of avian distributions for the State of Georgia
Howell, J.E.; Peterson, J.T.; Conroy, M.J.
2008-01-01
To predict the distributions of breeding birds in the state of Georgia, USA, we built hierarchical models consisting of 4 levels of nested mapping units of decreasing area: 90,000 ha, 3,600 ha, 144 ha, and 5.76 ha. We used the Partners in Flight database of point counts to generate presence and absence data at locations across the state of Georgia for 9 avian species: Acadian flycatcher (Empidonax virescens), brown-headed nuthatch (Sitta pusilla), Carolina wren (Thryothorus ludovicianus), indigo bunting (Passerina cyanea), northern cardinal (Cardinalis cardinalis), prairie warbler (Dendroica discolor), yellow-billed cuckoo (Coccyzus americanus), white-eyed vireo (Vireo griseus), and wood thrush (Hylocichla mustelina). At each location, we estimated hierarchical-level-specific habitat measurements using the Georgia GAP Analysis 18-class land cover and other Geographic Information System sources. We created candidate, species-specific occupancy models based on previously reported relationships, and fit these using Markov chain Monte Carlo procedures implemented in OpenBugs. We then created a confidence model set for each species based on Akaike's Information Criterion. We found hierarchical habitat relationships for all species. Three-fold cross-validation estimates of model accuracy indicated an average overall correct classification rate of 60.5%. Comparisons with existing Georgia GAP Analysis models indicated that our models were more accurate overall. Our results provide guidance to wildlife scientists and managers seeking to predict avian occurrence as a function of local and landscape-level habitat attributes.
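The AIC-based confidence-set step can be illustrated with the sketch below, which uses ordinary logistic regression in statsmodels as a simple stand-in for the hierarchical occupancy models fitted by MCMC in the study; the covariates and data are synthetic placeholders.

```python
# Rank candidate presence/absence models by AIC (illustrative stand-in).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
landscape = rng.normal(size=n)                   # e.g. a 3,600-ha forest-cover covariate
local = rng.normal(size=n)                       # e.g. a 5.76-ha canopy-cover covariate
p = 1 / (1 + np.exp(-(0.8 * landscape + 1.2 * local - 0.2)))
present = (rng.random(n) < p).astype(float)

candidates = {
    "landscape only": sm.add_constant(np.column_stack([landscape])),
    "local only": sm.add_constant(np.column_stack([local])),
    "landscape + local": sm.add_constant(np.column_stack([landscape, local])),
}
aic = {name: sm.Logit(present, X).fit(disp=0).aic for name, X in candidates.items()}
for name in sorted(aic, key=aic.get):            # lower AIC = more support
    print(f"{name}: AIC={aic[name]:.1f}, dAIC={aic[name] - min(aic.values()):.1f}")
```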
An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics
Eskinazi, Ilan
2016-01-01
Goal: Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods: SCMT interacts with the third-party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results: An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion: SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance: Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
Prediction of Turbulence-Generated Noise in Unheated Jets. Part 2; JeNo Users' Manual (Version 1.0)
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Wolter, John D.; Koch, L. Danielle
2009-01-01
JeNo (Version 1.0) is a Fortran90 computer code that calculates the far-field sound spectral density produced by axisymmetric, unheated jets at a user-specified observer location and frequency range. The user must provide a structured computational grid and a mean flow solution from a Reynolds-Averaged Navier-Stokes (RANS) code as input. Turbulence kinetic energy and its dissipation rate from a k-epsilon or k-omega turbulence model must also be provided. JeNo is a research code, and as such, its development is ongoing. The goal is to create a code that is able to accurately compute far-field sound pressure levels for jets at all observer angles and all operating conditions. In order to achieve this goal, current theories must be combined with the best practices in numerical modeling, all of which must be validated by experiment. Since the acoustic predictions from JeNo are based on the mean flow solutions from a RANS code, quality predictions depend on accurate aerodynamic input. This is why acoustic source modeling and turbulence modeling, together with the development of advanced measurement systems, are the leading areas of jet noise research at NASA Glenn Research Center.
Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches
ERIC Educational Resources Information Center
Forward, Erin; Leahey, Amber; Trimble, Leanne
2015-01-01
Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.
Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
Zhong, Nianbing; Liao, Qiang; Zhu, Xun; Chen, Rong
2014-04-15
A new simple fiber-optic evanescent wave sensor was created to accurately monitor the growth and hydrogen production performance of biofilms. The proposed sensor consists of two probes (i.e., a sensor and reference probe), using the etched fibers with an appropriate surface roughness to improve its sensitivity. The sensor probe measures the biofilm growth and change of liquid-phase concentration inside the biofilm. The reference probe is coated with a hydrophilic polytetrafluoroethylene membrane to separate the liquids from photosynthetic bacteria Rhodopseudomonas palustris CQK 01 and to measure the liquid concentration. We also developed a model to demonstrate the accuracy of the measurement. The biofilm measurement was calibrated using an Olympus microscope. A linear relationship was obtained for the biofilm thickness range from 0 to 120 μm with a synthetic medium under continuous supply to the bioreactor. The highest level of hydrogen production rate occurred at a thickness of 115 μm.
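A linear calibration of the kind reported above can be sketched as follows; the thickness and signal values are illustrative placeholders rather than the published measurements.

```python
# Fit sensor output against microscope-measured biofilm thickness (illustrative sketch).
import numpy as np

thickness_um = np.array([0, 20, 40, 60, 80, 100, 120])                  # microscope reference
sensor_signal = np.array([0.02, 0.11, 0.19, 0.30, 0.41, 0.50, 0.61])    # sensor probe output

slope, intercept = np.polyfit(thickness_um, sensor_signal, 1)
r = np.corrcoef(thickness_um, sensor_signal)[0, 1]
print(f"signal = {slope:.4f} * thickness + {intercept:.3f}  (r = {r:.3f})")

# Invert the calibration to estimate thickness from a new sensor reading.
new_reading = 0.35
print(f"estimated thickness: {(new_reading - intercept) / slope:.1f} um")
```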
Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator
Asaad, Sameh W.; Kapur, Mohit
2016-03-15
A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd, and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle-accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.
Ames Stereo Pipeline for Operation IceBridge
NASA Astrophysics Data System (ADS)
Beyer, R. A.; Alexandrov, O.; McMichael, S.; Fong, T.
2017-12-01
We are using the NASA Ames Stereo Pipeline to process Operation IceBridge Digital Mapping System (DMS) images into terrain models and to align them with the simultaneously acquired LIDAR data (ATM and LVIS). The expected outcome is to create a contiguous, high resolution terrain model for each flight that Operation IceBridge has flown during its eight-year history of Arctic and Antarctic flights. There are some existing terrain models in the NSIDC repository that cover 2011 and 2012 (out of the total period of 2009 to 2017), which were made with the Agisoft Photoscan commercial software. Our open-source stereo suite has been verified to create terrains of similar quality. The total number of images we expect to process is around 5 million. There are numerous challenges with these data: accurate determination and refinement of camera pose when the images were acquired based on data logged during the flights and/or using information from existing orthoimages, aligning terrains with little or no features, images containing clouds, JPEG artifacts in input imagery, inconsistencies in how data was acquired/archived over the entire period, not fully reliable camera calibration files, and the sheer amount of data. We will create the majority of terrain models at 40 cm/pixel with a vertical precision of 10 to 20 cm. In some circumstances when the aircraft was flying higher than usual, those values will get coarser. We will create orthoimages at 10 cm/pixel (with the same caveat that some flights are at higher altitudes). These will differ from existing orthoimages by using the underlying terrain we generate rather than some pre-existing very low-resolution terrain model that may differ significantly from what is on the ground at the time of IceBridge acquisition. The results of this massive processing will be submitted to the NSIDC so that cryosphere researchers will be able to use these data for their investigations.
A Novel Temporal Bone Simulation Model Using 3D Printing Techniques.
Mowry, Sarah E; Jammal, Hachem; Myer, Charles; Solares, Clementino Arturo; Weinberger, Paul
2015-09-01
An inexpensive temporal bone model for use in a temporal bone dissection laboratory setting can be made using a commercially available, consumer-grade 3D printer. Several models for a simulated temporal bone have been described but use commercial-grade printers and materials to produce these models. The goal of this project was to produce a plastic simulated temporal bone on an inexpensive 3D printer that recreates the visual and haptic experience associated with drilling a human temporal bone. Images from a high-resolution CT of a normal temporal bone were converted into stereolithography files via commercially available software, with image conversion and print settings adjusted to achieve optimal print quality. The temporal bone model was printed using acrylonitrile butadiene styrene (ABS) plastic filament on a MakerBot 2x 3D printer. Simulated temporal bones were drilled by seven expert temporal bone surgeons, assessing the fidelity of the model as compared with a human cadaveric temporal bone. Using a four-point scale, the simulated bones were assessed for haptic experience and recreation of the temporal bone anatomy. The created model was felt to be an accurate representation of a human temporal bone. All raters felt strongly this would be a good training model for junior residents or to simulate difficult surgical anatomy. Material cost for each model was $1.92. A realistic, inexpensive, and easily reproducible temporal bone model can be created on a consumer-grade desktop 3D printer.
NASA Astrophysics Data System (ADS)
Bracco, Annalisa; Kucharski, Fred; Molteni, Franco; Hazeleger, Wilco; Severijns, Camiel
2007-04-01
This study investigates how accurately the interannual variability over the Indian Ocean basin and the relationship between the Indian summer monsoon and the El Niño Southern Oscillation (ENSO) can be simulated by different modelling strategies. With a hierarchy of models, from an atmospheric general circulation model (AGCM) forced by observed SST, to a coupled model with the ocean component limited to the tropical Pacific and Indian Oceans, the role of heat fluxes and of interactive coupling is analyzed. Whenever sea surface temperature anomalies in the Indian basin are created by the coupled model, the inverse relationship between the ENSO index and the Indian summer monsoon rainfall is recovered, and it is preserved if the atmospheric model is forced by the SSTs created by the coupled model. If the ocean model domain is limited to the Indian Ocean, changes in the Walker circulation over the Pacific during El Niño years induce a decrease of rainfall over the Indian subcontinent. However, the observed correlation between ENSO and the Indian Ocean zonal mode (IOZM) is not properly modelled and the two indices are not significantly correlated, independently of season. Whenever the ocean domain extends to the Pacific, and ENSO can impact both the atmospheric circulation and the ocean subsurface in the equatorial Eastern Indian Ocean, modelled precipitation patterns associated with both ENSO and the IOZM closely resemble the observations.
Hybrid 3D printing: a game-changer in personalized cardiac medicine?
Kurup, Harikrishnan K N; Samuel, Bennett P; Vettukattil, Joseph J
2015-12-01
Three-dimensional (3D) printing in congenital heart disease has the potential to increase procedural efficiency and patient safety by improving interventional and surgical planning and reducing radiation exposure. Cardiac magnetic resonance imaging and computed tomography are usually the source datasets to derive 3D printing. More recently, 3D echocardiography has been demonstrated to derive 3D-printed models. The integration of multiple imaging modalities for hybrid 3D printing has also been shown to create accurate printed heart models, which may prove to be beneficial for interventional cardiologists, cardiothoracic surgeons, and as an educational tool. Further advancements in the integration of different imaging modalities into a single platform for hybrid 3D printing and virtual 3D models will drive the future of personalized cardiac medicine.
Alternative Fuels Data Center: Reliable Temperature Compensation is
Technical Bulletin addresses the potential hazards created by failure of compressed natural gas (CNG) dispensers that do not accurately compensate for the temperature of the natural gas in vehicle storage containers as they are filled, and the history of serious incidents as a result. Accurate temperature
Cranial reconstruction: 3D biomodel and custom-built implant created using additive manufacturing.
Jardini, André Luiz; Larosa, Maria Aparecida; Maciel Filho, Rubens; Zavaglia, Cecília Amélia de Carvalho; Bernardes, Luis Fernando; Lambert, Carlos Salles; Calderoni, Davi Reis; Kharmandayan, Paulo
2014-12-01
Additive manufacturing (AM) technology from engineering has helped to achieve several advances in the medical field, particularly as far as fabrication of implants is concerned. The use of AM has made it possible to carry out surgical planning and simulation using a three-dimensional physical model which accurately represents the patient's anatomy. AM technology enables the production of models and implants directly from a 3D virtual model, facilitating surgical procedures and reducing risks. Furthermore, AM has been used to produce implants designed for individual patients in areas of medicine such as craniomaxillofacial surgery, with optimal size, shape and mechanical properties. This work presents AM technologies which were applied to design and fabricate a biomodel and customized implant for the surgical reconstruction of a large cranial defect. A series of computed tomography data was obtained and software was used to extract the cranial geometry. The protocol presented was used to create an anatomic biomodel of the bone defect for surgical planning and, finally, the design and manufacture of the patient-specific implant. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Davenport, Jack H.
2016-05-01
Intelligence analysts demand rapid information fusion capabilities to develop and maintain accurate situational awareness and understanding of dynamic enemy threats in asymmetric military operations. The ability to extract relationships between people, groups, and locations from a variety of text datasets is critical to proactive decision making. The derived network of entities must be automatically created and presented to analysts to assist in decision making. DECISIVE ANALYTICS Corporation (DAC) provides capabilities to automatically extract entities, relationships between entities, semantic concepts about entities, and network models of entities from text and multi-source datasets. DAC's Natural Language Processing (NLP) Entity Analytics capabilities model entities as complex systems of attributes and interrelationships which are extracted from unstructured text via NLP algorithms. The extracted entities are automatically disambiguated via machine learning algorithms, and resolution recommendations are presented to the analyst for validation; the analyst's expertise is leveraged in this hybrid human/computer collaborative model. Military capability is enhanced by these NLP Entity Analytics capabilities because analysts can now create or update an entity profile with intelligence automatically extracted from unstructured text, thereby fusing entity knowledge from structured and unstructured data sources. Operational and sustainment costs are reduced since analysts do not have to manually tag and resolve entities.
Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.
2007-01-01
The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
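To make the distinction between local and semi-global ("glocal") alignment concrete, the following toy Python sketch scores a full domain against any subsequence of a protein: end gaps on the protein side are free, while the entire domain must be aligned. The scoring values are illustrative, and this is not the GLOBAL or HMMer algorithm, which operate on profile HMMs rather than plain sequences.

```python
# Minimal sketch of semi-global ("glocal") alignment: the whole domain must be
# aligned, but it may land anywhere inside the protein (free end gaps on the
# protein only). Scoring values are illustrative, not those of GLOBAL/HMMer.

def semi_global_score(domain, protein, match=2, mismatch=-1, gap=-2):
    m, n = len(domain), len(protein)
    # dp[i][j]: best score aligning domain[:i] with protein[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + gap      # gaps in the domain are penalized
    # dp[0][j] stays 0: skipping a protein prefix is free
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if domain[i - 1] == protein[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # (mis)match
                           dp[i - 1][j] + gap,     # gap in the protein
                           dp[i][j - 1] + gap)     # gap in the domain
    # skipping a protein suffix is free: take the best score in the last row
    return max(dp[m])

print(semi_global_score("HEAGAWGHEE", "PAWHEAGAWGHEEQLI"))
```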
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
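As a rough illustration of the "Automated RSM" pipeline described above (Latin Hypercube sampling, an expensive drag-coefficient evaluation per ensemble member, and a Gaussian-process response surface), the sketch below substitutes a cheap analytic stand-in for the TPMC simulation and uses 200 samples instead of 1,000; the parameter ranges and kernel are assumptions, not values from the tool suite.

```python
# Sketch of the Automated RSM idea: Latin Hypercube samples of the parameter
# space, an expensive model evaluated at each sample, and a Gaussian process
# fit as the response surface. The drag function here is a stand-in for TPMC.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_drag_coefficient(x):
    # placeholder for a Test Particle Monte Carlo run (velocity, temperature)
    v, t = x
    return 2.2 + 0.4 * np.exp(-v / 7000.0) + 0.1 * np.sin(t / 100.0)

lower, upper = np.array([5000.0, 200.0]), np.array([9000.0, 1200.0])
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=200), lower, upper)   # 200 ensemble members
y = np.array([expensive_drag_coefficient(x) for x in X])

kernel = ConstantKernel(1.0) * RBF(length_scale=[1000.0, 300.0])
rsm = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

cd_pred, cd_std = rsm.predict([[7200.0, 650.0]], return_std=True)
print(f"Cd ≈ {cd_pred[0]:.3f} ± {cd_std[0]:.3f}")
```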
Use of 3D reconstruction cloacagrams and 3D printing in cloacal malformations.
Ahn, Jennifer J; Shnorhavorian, Margarett; Amies Oelschlager, Anne-Marie E; Ripley, Beth; Shivaram, Giridhar M; Avansino, Jeffrey R; Merguerian, Paul A
2017-08-01
Cloacal anomalies are complex to manage, and the anatomy affects prognosis and management. Assessment historically includes examination under anesthesia, and genitography is often performed, but these do not consistently capture three-dimensional (3D) detail or the spatial relationships of the anatomic structures. Three-dimensional reconstruction cloacagrams can provide a high level of detail, including channel measurements and the level of the cloaca (<3 cm vs. >3 cm), which typically determines the approach for surgical reconstruction and can impact long-term prognosis. However, this imaging modality has not yet been directly compared with intraoperative or endoscopic findings. Our objective was to compare 3D reconstruction cloacagrams with endoscopic and intraoperative findings, as well as to describe the use of 3D printing to create models for surgical planning and education. An IRB-approved retrospective review of all cloaca patients seen by our multi-disciplinary program from 2014 to 2016 was performed. All patients underwent examination under anesthesia, endoscopy, 3D reconstruction cloacagram, and subsequent reconstructive surgery at a later date. Patient characteristics, intraoperative details, and measurements from endoscopy and cloacagram were reviewed and compared. One of the 3D cloacagrams was reformatted for 3D printing to create a model for surgical planning. Four patients were included for review, with the Figure illustrating 3D cloacagram results. Measurements of common channel length and urethral length were similar between modalities, particularly with respect to confirming the level of the cloaca. No patient experienced any complications or adverse effects from cloacagram or endoscopy. A model was successfully created from cloacagram images with the use of 3D printing technology. Accurate preoperative assessment of cloacal anomalies is important for counseling and surgical planning. Three-dimensional cloacagrams have been shown to yield a high level of anatomic detail. Here, cloacagram measurements are shown to correlate well with endoscopic and intraoperative findings with regard to the level of the cloaca and Müllerian development. Measurement discrepancies may be due to technical variation, indicating a need for further evaluation. The translation of the cloacagram images into a 3D printed model demonstrates potential applications of these models for preoperative planning and education of both families and trainees. In our series, 3D reconstruction cloacagrams yielded accurate measurements of the level of the cloaca, common channel length, and urethral length, similar to those found on endoscopy. Three-dimensional models can be printed from cloacagram images and may be useful for surgical planning and education. Copyright © 2017 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nakwaski, W.
2008-03-01
Comprehensive computer simulations are currently the most efficient and least expensive methods for designing and optimising semiconductor device structures. Ideally they should be as exact as possible, but in practice it is well known that the most exact approaches are also the most involved and the most time-consuming ones and require powerful computers. In some cases, cheaper, somewhat simplified modelling approaches are sufficiently accurate. Therefore, an appropriate modelling approach should be chosen as a compromise between our needs and our possibilities. Modelling the operation and designing the structures of vertical-cavity surface-emitting diode lasers (VCSELs) requires an appropriate mathematical description of the physical processes crucial for device operation, i.e., the various optical, electrical, thermal, recombination and sometimes also mechanical phenomena taking place within their volumes. Equally important are the mutual interactions between these individual processes, which are usually strongly non-linear and create a real network of inter-relations. A chain is only as strong as its weakest link; analogously, a model is only as exact as its least exact part. It is therefore useless to improve the exactness of its more accurate parts while neglecting the less exact ones: all model parts should exhibit similar accuracy. In any individual case, a reasonable compromise should be reached between high modelling fidelity and practical convenience, depending on the main modelling goal, the importance and urgency of the expected results, the available equipment and also financial possibilities. In the present paper, some simplifications used in VCSEL modelling are discussed and their impact on the exactness of VCSEL designing is analysed.
Image analysis software as a strategy to improve the radiographic determination of fracture healing.
Duryea, Jeffrey; Evans, Christopher; Glatt, Vaida
2018-05-28
To develop and validate an unbiased, accurate, convenient and inexpensive means of determining when an osseous defect has healed and recovered sufficient strength to allow weight-bearing. A novel image processing software algorithm was created to analyze the radiographic images and produce a metric designed to reflect the bone strength. We used a rat femoral segmental defect model that provides a range of healing responses from complete union to non-union. Femora were examined by X-ray, micro-computed tomography (µCT) and mechanical testing. Accurate simulated radiographic images at different incident X-ray beam angles were produced from the µCT data files. The software-generated metric (SC) showed high levels of correlation with both the mechanical strength (τMech) and the polar moment of inertia (pMOI), with the mechanical testing data having the highest association. The optimization analysis yielded optimal oblique angles θB of 125° for τMech and 50° for pMOI. The Pearson's R values for the optimized model were 0.71 and 0.64 for τMech and pMOI, respectively. Further validation using true radiographs also demonstrated that the metric was accurate, and that the simulations were realistic. The preliminary findings suggest a very promising methodology to assess bone fracture healing using conventional radiography. With radiographs acquired at appropriate incident angles, it proved possible to calculate accurately the degree of healing and the mechanical strength of the bone. Further research is necessary to refine this approach and determine whether it translates to the human clinical setting.
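The angle-optimization step can be pictured with the following sketch, which scans candidate oblique beam angles and keeps the one whose software metric correlates best (highest Pearson R) with mechanical strength; all data and the metric function are synthetic stand-ins, not the study's algorithm or measurements.

```python
# Sketch of choosing the oblique angle that maximizes correlation between a
# radiographic metric and measured mechanical strength; data are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
angles = np.arange(0, 180, 5)                  # candidate incident angles (deg)
strength = rng.uniform(0.1, 5.0, size=24)      # measured torsional strength

def software_metric(angle):
    # placeholder: the metric tracks strength best near ~125 degrees
    fidelity = np.exp(-((angle - 125.0) / 40.0) ** 2)
    return fidelity * strength + rng.normal(0, 1.5 * (1.0 - fidelity), strength.size)

def correlation(angle):
    r, _ = pearsonr(software_metric(angle), strength)
    return r

best_angle = max(angles, key=correlation)
print("angle with highest Pearson R:", best_angle)
```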
[Establishment of database with standard 3D tooth crowns based on 3DS MAX].
Cheng, Xiaosheng; An, Tao; Liao, Wenhe; Dai, Ning; Yu, Qing; Lu, Peijun
2009-08-01
A database of standard 3D tooth crowns lays the groundwork for dental CAD/CAM systems. In this paper, we design standard tooth crowns in 3DS MAX 9.0 and successfully create a database of these models. Firstly, key lines are collected from standard tooth pictures. Then 3DS MAX 9.0 is used to design the digital tooth model based on these lines; during the design process, it is important to refer to the standard plaster tooth model. Testing showed that the standard tooth models designed with this method are accurate and adaptable; furthermore, operations such as deformation and translation are easy to perform on the models. This method provides a new way to build a database of standard 3D tooth crowns and a basis for dental CAD/CAM systems.
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine. Because of its high manufacturing cost, methods for repairing damaged blades are indispensable. In order to obtain a surface model of the blades, this paper proposes a modeling method using speckle patterns based on a virtual stereo vision system. Firstly, blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces are calculated from these patterns using the virtual stereo vision system. Secondly, boundary points are obtained with step lengths varied according to curvature and are fitted with a cubic B-spline curve to obtain the blade surface envelope. Finally, the surface model of the blades is established from the envelope curves and the point clouds. Experimental results show that the resulting surface model of aircraft engine blades is fair and accurate.
Advanced Modeling Strategies for the Analysis of Tile-Reinforced Composite Armor
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Chen, Tzi-Kang
1999-01-01
A detailed investigation of the deformation mechanisms in tile-reinforced armored components was conducted to develop the most efficient modeling strategies for the structural analysis of large components of the Composite Armored Vehicle. The limitations of conventional finite elements with respect to the analysis of tile-reinforced structures were examined, and two complementary optimal modeling strategies were developed. These strategies are element layering and the use of a tile-adhesive superelement. Element layering is a technique that uses stacks of shear deformable shell elements to obtain the proper transverse shear distributions through the thickness of the laminate. The tile-adhesive superelement consists of a statically condensed substructure model designed to take advantage of periodicity in tile placement patterns to eliminate numerical redundancies in the analysis. Both approaches can be used simultaneously to create unusually efficient models that accurately predict the global response by incorporating the correct local deformation mechanisms.
Wood, Clive; Alwati, Abdolati; Halsey, Sheelagh; Gough, Tim; Brown, Elaine; Kelly, Adrian; Paradkar, Anant
2016-09-10
The use of near-infrared spectroscopy to predict the concentration of two pharmaceutical co-crystals, 1:1 ibuprofen-nicotinamide (IBU-NIC) and 1:1 carbamazepine-nicotinamide (CBZ-NIC), has been evaluated. A partial least squares (PLS) regression model was developed for both co-crystal pairs using sets of standard samples to create calibration and validation data sets with which to build and validate the models. Parameters such as the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP) and correlation coefficient were used to assess the accuracy and linearity of the models. Accurate PLS regression models were created for both co-crystal pairs which can be used to predict the co-crystal concentration in a powder mixture of the co-crystal and the active pharmaceutical ingredient (API). The IBU-NIC model had smaller errors than the CBZ-NIC model, possibly due to the complex CBZ-NIC spectra, which could reflect the different arrangement of hydrogen bonding associated with the co-crystal compared to the IBU-NIC co-crystal. These results suggest that NIR spectroscopy can be used as a PAT tool during a variety of pharmaceutical co-crystal manufacturing methods, and the presented data will facilitate future offline and in-line NIR studies involving pharmaceutical co-crystals. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
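A minimal sketch of this type of PLS calibration, using synthetic "spectra" in place of real NIR measurements, is shown below; the sample counts, noise level, and number of latent variables are assumptions rather than the study's settings.

```python
# Sketch of a PLS calibration: fit co-crystal concentration against spectra
# and report RMSEC/RMSEP. Spectra here are synthetic stand-ins for NIR data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200
concentration = rng.uniform(0, 100, n_samples)       # % co-crystal in API blend
pure_cocrystal = rng.normal(size=n_wavelengths)      # stand-in pure spectrum
pure_api = rng.normal(size=n_wavelengths)
spectra = (np.outer(concentration, pure_cocrystal) +
           np.outer(100 - concentration, pure_api)) / 100.0
spectra += rng.normal(scale=0.05, size=spectra.shape)  # instrument noise

X_cal, X_val, y_cal, y_val = train_test_split(spectra, concentration,
                                              test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_cal, y_cal)

rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
print(f"RMSEC = {rmsec:.2f}%, RMSEP = {rmsep:.2f}%")
```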
A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand
2017-11-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
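As a hedged illustration of the surrogate idea (not the authors' implementation), the sketch below fits a Kriging-style Gaussian-process model to a handful of sampled outputs from a placeholder "iterative equilibrium" routine, so that the surrogate can be evaluated per cell instead of iterating; the input ranges, kernel, and stand-in function are assumptions.

```python
# Sketch of replacing an iterative pressure/thermal equilibrium solve with a
# Kriging (Gaussian-process) surrogate fitted to sampled solves. The
# "iterative solver" below is a cheap placeholder, not the real algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def iterative_equilibrium_pressure(product_mass_fraction, density, internal_energy):
    # placeholder for iterating between the real-gas and ideal-gas equations of state
    return (1.0 + 4.0 * product_mass_fraction) * density * internal_energy * 0.4

rng = np.random.default_rng(3)
lower = np.array([0.0, 0.5, 1.0e5])       # assumed ranges for the three inputs
upper = np.array([1.0, 5.0, 5.0e6])
X = rng.uniform(lower, upper, size=(64, 3))
y = np.array([iterative_equilibrium_pressure(*x) for x in X])

Xn = (X - lower) / (upper - lower)        # scale inputs before fitting the GP
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(Xn, y)

query = (np.array([[0.3, 2.0, 1.0e6]]) - lower) / (upper - lower)
print(surrogate.predict(query))           # used per cell instead of iterating
```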
Multimodel ensembles of wheat growth: many models are better than one.
Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W; Rötter, Reimund P; Boote, Kenneth J; Ruane, Alex C; Thorburn, Peter J; Cammarano, Davide; Hatfield, Jerry L; Rosenzweig, Cynthia; Aggarwal, Pramod K; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richie; Grant, Robert F; Heng, Lee; Hooker, Josh; Hunt, Leslie A; Ingwersen, Joachim; Izaurralde, Roberto C; Kersebaum, Kurt Christian; Müller, Christoph; Kumar, Soora Naresh; Nendel, Claas; O'leary, Garry; Olesen, Jørgen E; Osborne, Tom M; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Semenov, Mikhail A; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; White, Jeffrey W; Wolf, Joost
2015-02-01
Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models. © 2014 John Wiley & Sons Ltd.
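The e-mean and e-median estimators are simple to reproduce; the sketch below builds them from synthetic stand-ins for the 27 model outputs and compares their RMSE with that of individual models (the observations and model biases are invented for illustration).

```python
# Sketch of the e-mean / e-median estimators: average (or take the median of)
# yield simulations across ensemble members and compare error to single models.
# The "model outputs" here are synthetic stand-ins for the 27 wheat models.
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([6.1, 4.8, 7.3, 3.9])                # GY at 4 sites (t/ha)
n_models = 27
bias = rng.normal(0, 1.0, size=(n_models, 1))            # each model has its own bias
simulated = observed + bias + rng.normal(0, 0.8, size=(n_models, observed.size))

def rmse(estimate):
    return np.sqrt(np.mean((estimate - observed) ** 2))

single = np.array([rmse(m) for m in simulated])
print("mean RMSE of single models:", single.mean().round(2))
print("e-mean RMSE:  ", rmse(simulated.mean(axis=0)).round(2))
print("e-median RMSE:", rmse(np.median(simulated, axis=0)).round(2))
```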
Multimodel Ensembles of Wheat Growth: More Models are Better than One
NASA Technical Reports Server (NTRS)
Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alex C.; Thorburn, Peter J.; Cammarano, Davide;
2015-01-01
Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.
Multimodel Ensembles of Wheat Growth: Many Models are Better than One
NASA Technical Reports Server (NTRS)
Martre, Pierre; Wallach, Daniel; Asseng, Senthold; Ewert, Frank; Jones, James W.; Rotter, Reimund P.; Boote, Kenneth J.; Ruane, Alexander C.; Thorburn, Peter J.; Cammarano, Davide;
2015-01-01
Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24-38% for the different end-of-season variables including grain yield (GY) and grain protein concentration (GPC). There was little relation between error of a model for GY or GPC and error for in-season variables. Thus, most models did not arrive at accurate simulations of GY and GPC by accurately simulating preceding growth dynamics. Ensemble simulations, taking either the mean (e-mean) or median (e-median) of simulated values, gave better estimates than any individual model when all variables were considered. Compared to individual models, e-median ranked first in simulating measured GY and third in GPC. The error of e-mean and e-median declined with an increasing number of ensemble members, with little decrease beyond 10 models. We conclude that multimodel ensembles can be used to create new estimators with improved accuracy and consistency in simulating growth dynamics. We argue that these results are applicable to other crop species, and hypothesize that they apply more generally to ecological system models.
Using airborne geophysical surveys to improve groundwater resource management models
Abraham, Jared D.; Cannia, James C.; Peterson, Steven M.; Smith, Bruce D.; Minsley, Burke J.; Bedrosian, Paul A.
2010-01-01
Increasingly, groundwater management requires more accurate hydrogeologic frameworks for groundwater models. These complex issues have created the demand for innovative approaches to data collection. In complicated terrains, groundwater modelers benefit from continuous high‐resolution geologic maps and their related hydrogeologic‐parameter estimates. The USGS and its partners have collaborated to use airborne geophysical surveys for near‐continuous coverage of areas of the North Platte River valley in western Nebraska. The survey objectives were to map the aquifers and bedrock topography of the area to help improve the understanding of groundwater‐surface‐water relationships, leading to improved water management decisions. Frequency‐domain heliborne electromagnetic surveys were completed, using a unique survey design to collect resistivity data that can be related to lithologic information to refine groundwater model inputs. To render the geophysical data useful to multidimensional groundwater models, numerical inversion is necessary to convert the measured data into a depth‐dependent subsurface resistivity model. This inverted model, in conjunction with sensitivity analysis, geological ground truth (boreholes and surface geology maps), and geological interpretation, is used to characterize hydrogeologic features. Interpreted two‐ and three‐dimensional data coverage provides the groundwater modeler with a high‐resolution hydrogeologic framework and a quantitative estimate of framework uncertainty. This method of creating hydrogeologic frameworks improved the understanding of flow path orientation by redefining the location of the paleochannels and associated bedrock highs. The improved models reflect actual hydrogeology at a level of accuracy not achievable using previous data sets.
NASA Technical Reports Server (NTRS)
Iverson, Louis R.; Cook, Elizabeth A.; Graham, Robin L.; Olson, Jerry S.; Frank, Thomas D.; Ying, KE
1988-01-01
The objective was to relate spectral imagery of varying resolution with ground-based data on forest productivity and cover, and to create models to predict regional estimates of forest productivity and cover with a quantifiable degree of accuracy. A three-stage approach was outlined. In the first stage, a model was developed relating forest cover or productivity to TM surface reflectance values (TM/FOREST models). The TM/FOREST models were more accurate when biogeographic information regarding the landscape was either used to stratify the landscape into more homogeneous units or incorporated directly into the TM/FOREST model. In the second stage, AVHRR/FOREST models that predicted forest cover and productivity on the basis of AVHRR band values were developed. The AVHRR/FOREST models had statistical properties similar to or better than those of the TM/FOREST models. In the third stage, the regional predictions were compared with the independent U.S. Forest Service (USFS) data. To do this, regional forest cover and forest productivity maps were created using AVHRR scenes and the AVHRR/FOREST models. From the maps, the county values of forest productivity and cover were calculated. It is apparent that the landscape has a strong influence on the success of the approach. An approach using nested scales of imagery in conjunction with ground-based data can be successful in generating regional estimates of variables that are functionally related to some variable a sensor can detect.
Numerical simulation of human orientation perception during lunar landing
NASA Astrophysics Data System (ADS)
Clark, Torin K.; Young, Laurence R.; Stimpson, Alexander J.; Duda, Kevin R.; Oman, Charles M.
2011-09-01
In lunar landing it is necessary to select a suitable landing point and then control a stable descent to the surface. In manned landings, astronauts will play a critical role in monitoring systems and adjusting the descent trajectory through either supervisory control and landing point designations, or by direct manual control. For the astronauts to ensure vehicle performance and safety, they will have to accurately perceive vehicle orientation. A numerical model for human spatial orientation perception was simulated using input motions from lunar landing trajectories to predict the potential for misperceptions. Three representative trajectories were studied: an automated trajectory, a landing point designation trajectory, and a challenging manual control trajectory. These trajectories were studied under three cases with different cues activated in the model to study the importance of vestibular cues, visual cues, and the effect of the descent engine thruster creating dust blowback. The model predicts that spatial misperceptions are likely to occur as a result of the lunar landing motions, particularly with limited or incomplete visual cues. The powered descent acceleration profile creates a somatogravic illusion causing the astronauts to falsely perceive themselves and the vehicle as upright, even when the vehicle has a large pitch or roll angle. When visual pathways were activated within the model these illusions were mostly suppressed. Dust blowback, obscuring the visual scene out the window, was also found to create disorientation. These orientation illusions are likely to interfere with the astronauts' ability to effectively control the vehicle, potentially degrading performance and safety. Therefore suitable countermeasures, including disorientation training and advanced displays, are recommended.
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana
2016-04-01
Lake morphometry refers to the physical factors (shape, size, structure, etc.) that determine the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys have been carried out using an echo sounder with high bottom depth resolution and GPS coordinate determination. Digital bathymetric models with a 10 × 10 m spatial grid have been created for several small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated in combination with various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.
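A minimal sketch of deriving level-area-volume (hypsographic) relationships from a gridded bathymetric model is given below; the bowl-shaped synthetic depth grid and the 10 m cell size are assumptions standing in for the surveyed lakes.

```python
# Sketch of level-area-volume curves from a gridded bathymetric model
# (10 m x 10 m cells): for each water level, sum the wetted cell areas and the
# water column above the bottom. The depth grid below is synthetic.
import numpy as np

cell_area = 10.0 * 10.0                      # m^2 per grid cell
bottom = -8.0 * np.exp(-((np.linspace(-1, 1, 120)[:, None]) ** 2 +
                         (np.linspace(-1, 1, 150)[None, :]) ** 2))  # bowl-shaped lake, m

for level in [-6.0, -4.0, -2.0, 0.0]:        # water surface elevations, m
    depth = np.clip(level - bottom, 0.0, None)
    area_km2 = (depth > 0).sum() * cell_area / 1e6
    volume_mcm = depth.sum() * cell_area / 1e6
    print(f"level {level:5.1f} m: area {area_km2:.3f} km^2, volume {volume_mcm:.3f} million m^3")
```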
Geopressure modeling from petrophysical data: An example from East Kalimantan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herkommer, M.A.
1994-07-01
Localized models of abnormal formation pressure (geopressure) are important economic and safety tools frequently used for well planning and drilling operations. Simplified computer-based procedures have been developed that permit these models to be developed more rapidly and with greater accuracy. These techniques are broadly applicable to basins throughout the world where abnormal formation pressures occur. An example from the Attaka field of East Kalimantan, southeast Asia, shows how geopressure models are developed. Using petrophysical and engineering data, empirical correlations between observed pressure and petrophysical logs can be created by computer-assisted data-fitting techniques. These correlations serve as the basis for models of the geopressure. By performing repeated analyses on wells at various locations, contour maps on the top of abnormal geopressure can be created. Methods that are simple in their development and application make the task of geopressure estimation less formidable to the geologist and petroleum engineer. Further, more accurate estimates can significantly improve drilling speeds while reducing the incidence of stuck pipe, kicks, and blowouts. In general, geopressure estimates are used in all phases of drilling operations: to develop mud plans and specify equipment ratings, to assist in the recognition of geopressured formations and determination of mud weights, and to improve predictions at offset locations and geologically comparable areas.
Energy balance and mass conservation in reduced order models of fluid flows
NASA Astrophysics Data System (ADS)
Mohebujjaman, Muhammad; Rebholz, Leo G.; Xie, Xuping; Iliescu, Traian
2017-10-01
In this paper, we investigate theoretically and computationally the conservation properties of reduced order models (ROMs) for fluid flows. Specifically, we investigate whether the ROMs satisfy the same (or similar) energy balance and mass conservation as those satisfied by the Navier-Stokes equations. All of our theoretical findings are illustrated and tested in numerical simulations of a 2D flow past a circular cylinder at a Reynolds number Re = 100. First, we investigate the ROM energy balance. We show that using the snapshot average for the centering trajectory (which is a popular treatment of nonhomogeneous boundary conditions in ROMs) yields an incorrect energy balance. Then, we propose a new approach, in which we replace the snapshot average with the Stokes extension. Theoretically, the Stokes extension produces an accurate energy balance. Numerically, the Stokes extension yields more accurate results than the standard snapshot average, especially for longer time intervals. Our second contribution centers around ROM mass conservation. We consider ROMs created using two types of finite elements: the standard Taylor-Hood (TH) element, which satisfies the mass conservation weakly, and the Scott-Vogelius (SV) element, which satisfies the mass conservation pointwise. Theoretically, the error estimates for the SV-ROM are sharper than those for the TH-ROM. Numerically, the SV-ROM yields significantly more accurate results, especially for coarser meshes and longer time intervals.
Modeling of screening currents in coated conductor magnets containing up to 40000 turns
NASA Astrophysics Data System (ADS)
Pardo, E.
2016-08-01
Screening currents caused by varying magnetic fields degrade the homogeneity and stability of the magnetic fields created by REBCO coated conductor coils. They are also responsible for the AC loss, which is important for other power applications containing windings, such as transformers, motors and generators. Since real magnets contain coils exceeding 10000 turns, accurate modeling tools for this number of turns or above are necessary for magnet design. This article presents a fast numerical method to model such coils with no loss of accuracy. We model a 10400-turn coil with its real number of turns and coils of up to 40000 turns with a continuous approximation, which introduces negligible errors. The screening currents, the screening current induced field (SCIF) and the AC loss are analyzed in detail. The SCIF reaches its maximum, with a considerably large value, at the remnant state. The instantaneous AC loss for an anisotropic, magnetic-field-dependent Jc is qualitatively different from that for a constant Jc, although the loss per cycle is similar. Saturation of the magnetization currents at the end pancakes causes the maximum AC loss at the first ramp to increase with Jc. The presented modeling tool can accurately calculate the SCIF and AC loss in practical computing times for coils with any number of turns used in real windings, enabling parameter optimization.
Atmospheric Carbon Dioxide and the Global Carbon Cycle: The Key Uncertainties
DOE R&D Accomplishments Database
Peng, T. H.; Post, W. M.; DeAngelis, D. L.; Dale, V. H.; Farrell, M. P.
1987-12-01
The biogeochemical cycling of carbon between its sources and sinks determines the rate of increase in atmospheric CO2 concentrations. The observed increase in atmospheric CO2 content is less than the estimated release from fossil fuel consumption and deforestation. This discrepancy can be explained by interactions between the atmosphere and other global carbon reservoirs such as the oceans and the terrestrial biosphere, including soils. Undoubtedly, the oceans have been the most important sinks for CO2 produced by man. But the physical, chemical, and biological processes of the oceans are complex and, therefore, credible estimates of CO2 uptake can probably only come from mathematical models. Unfortunately, one- and two-dimensional ocean models do not allow for enough CO2 uptake to accurately account for known releases; thus, they produce higher concentrations of atmospheric CO2 than was historically the case. More complex three-dimensional models, while currently being developed, may make better use of existing tracer data than do one- and two-dimensional models and will also incorporate climate feedback effects to provide a more realistic view of ocean dynamics and CO2 fluxes. The inability of current models to accurately estimate oceanic uptake of CO2 creates one of the key uncertainties in predictions of atmospheric CO2 increases and climate responses over the next 100 to 200 years.
Motorcycle waste heat energy harvesting
NASA Astrophysics Data System (ADS)
Schlichting, Alexander D.; Anton, Steven R.; Inman, Daniel J.
2008-03-01
Environmental concerns coupled with the depletion of fuel sources have led to research on ethanol, fuel cells, and even generating electricity from vibrations. Much of the research in these areas is stalling due to expensive or environmentally contaminating processes; however, recent breakthroughs in materials and production have created a surge in research on waste heat energy harvesting devices. The thermoelectric generators (TEGs) used in waste heat energy harvesting are governed by the thermoelectric, or Seebeck, effect, generating electricity from a temperature gradient. Some research to date has featured platforms such as heavy-duty diesel trucks, model airplanes, and automobiles, attempting to eliminate either heavy batteries or the alternator. A motorcycle is another platform that possesses some very promising characteristics for waste heat energy harvesting, mainly because the exhaust pipes are exposed to significant amounts of air flow. A 1995 Kawasaki Ninja 250R was used for these trials. The module used in these experiments, the Melcor HT3-12-30, produced an average of 0.4694 W from an average temperature gradient of 48.73 °C. The mathematical model created from the thermoelectric effect equation and the mean Seebeck coefficient displayed by the module produced an average error from the experimental data of 1.75%. Although the module proved insufficient to practically eliminate the alternator on a standard motorcycle, the temperature data gathered, as well as the examination of a simple yet accurate model, represent significant steps in the process of creating a TEG capable of doing so.
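The underlying Seebeck-effect estimate can be sketched as follows, using an assumed effective module Seebeck coefficient and internal resistance (not the measured HT3-12-30 parameters) to turn a temperature gradient into matched-load power.

```python
# Sketch of the Seebeck-effect model: open-circuit voltage V = S * dT and
# matched-load power P = V^2 / (4 * R_int). The Seebeck coefficient and
# internal resistance are illustrative values, not measured module parameters.
S_module = 0.05      # effective module Seebeck coefficient, V/K (assumed)
R_internal = 3.2     # module internal resistance, ohms (assumed)

def teg_power(delta_t_kelvin):
    v_open = S_module * delta_t_kelvin
    return v_open ** 2 / (4.0 * R_internal)   # power delivered to a matched load

print(f"{teg_power(48.73):.3f} W at dT = 48.73 K")
```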
NASA Astrophysics Data System (ADS)
Prather, Edward
2018-01-01
Astronomy education researchers in the Department of Astronomy at the University of Arizona have been investigating a new framework for getting students to engage in discussions about fundamental astronomy topics. This framework is also intended to provide students with explicit feedback on the correctness and coherency of their mental models of these topics. The framework builds upon our prior efforts to create productive Pedagogical Discipline Representations (PDRs). Students are asked to work collaboratively to generate their own representations (drawings, graphs, data tables, etc.) that reflect important characteristics of astrophysical scenarios presented in class. We have found that these representation tasks offer tremendous insight into the broad range of ideas and knowledge students possess after instruction that includes both traditional lecture and active learning strategies. In particular, we find that some of our students are able to correctly answer challenging multiple-choice questions on these topics; however, they struggle to accurately create representations of the same topics themselves. Our work illustrates that some of our students are not developing a robust level of discipline fluency with many core ideas in astronomy, even after engaging with active learning strategies.
Developing Land Surface Type Map with Biome Classification Scheme Using Suomi NPP/JPSS VIIRS Data
NASA Astrophysics Data System (ADS)
Zhang, Rui; Huang, Chengquan; Zhan, Xiwu; Jin, Huiran
2016-08-01
Accurate representation of actual terrestrial surface types at regional to global scales is an important element for a wide range of applications, such as land surface parameterization, modeling of biogeochemical cycles, and carbon cycle studies. In this study, to support the retrieval of global leaf area index (LAI) and the fraction of photosynthetically active radiation absorbed by vegetation (fPAR), as well as other studies, a global map was created from Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) surface reflectance data in six major biome classes defined by canopy structure: Grass/Cereal Crops, Shrubs, Broadleaf Crops, Savannas, Broadleaf Forests, and Needleleaf Forests. The primary biome classes were converted from an International Geosphere-Biosphere Program (IGBP) legend global surface type dataset created in a previous study, and the separation of the two crop types is based on a secondary classification.
NASA Astrophysics Data System (ADS)
Frankl, Amaury; Stal, Cornelis; Abraha, Amanuel; De Wulf, Alain; Poesen, Jean
2014-05-01
Taking climate change scenarios into account, rainfall patterns are likely to change over the coming decades in eastern Africa. In brief, large parts of eastern Africa are expected to experience a wetting, including seasonality changes. Gullies are threshold phenomena that accomplish most of their geomorphic change during short periods of strong rainfall. Understanding the links between geomorphic change and rainfall characteristics in detail is thus crucial to ensure the sustainability of future land management. In this study, we present image-based 3D modelling as a low-cost, flexible and rapid method to quantify gully morphology from terrestrial photographs. The methodology was tested on two gully heads in Northern Ethiopia. Ground photographs (n = 88-235) were taken during days with cloud cover. The photographs were processed in PhotoScan software using a semi-automated Structure from Motion-Multi View Stereo (SfM-MVS) workflow. As a result, full 3D models were created that are accurate at the cm level. These models make it possible to quantify gully morphology in detail, including information on undercut walls and soil pipe inlets. Such information is crucial for understanding the hydrogeomorphic processes involved. Producing accurate 3D models after each rainfall event allows the interrelations between rainfall, land management, runoff and erosion to be modelled. Expected outcomes are the production of detailed vulnerability maps that allow soil and water conservation measures to be designed in a cost-effective way. Keywords: 3D model, Ethiopia, Image-based 3D modelling, Gully, PhotoScan, Rainfall.
Wexler's Great Smoke Pall: a chemistry-climate model analysis of a singularly large emissions pulse
NASA Astrophysics Data System (ADS)
Field, R. D.; Voulgarakis, A.
2011-12-01
We model the effects of the smoke plume from what was arguably the largest forest fire in recorded history. The Chinchaga fire burned continuously during the summer of 1950 in northwestern Canada during a very dry fire season. On September 22nd, the fire made a major advance, burning an area of approximately 1400 km². Ground and aircraft observations showed that from September 22 to 28, the smoke plume from the emissions pulse travelled over northern Canada, southward over the Great Lakes region and eastern US, across the Atlantic, and to Western Europe. Over the Great Lakes region, the plume remained thick enough to create twilight conditions in the mid-afternoon, and was estimated to have caused a 4 °C cooling at the surface. While many instances of long-range transport of wildfire emissions have been detected over the past decade, we know of no other wildfire which created such an acute effect on downward shortwave radiation at such a long distance. As a result, the fire was an important analogue event used in estimating the effects of a nuclear winter. Simulations with the nudged version of the GISS chemistry-climate model accurately capture the long-range transport pattern of the smoke emissions in the free-troposphere. The timing and location of aircraft observations of the plume over the eastern US, North Atlantic and the United Kingdom were well-matched to modeled anomalies of CO and aerosol optical depth. Further work will examine the model's ability to create twilight conditions during the day, and to provide an estimate of the consequent cooling effects at the surface from this remarkable emissions pulse.
A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.
Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T
2017-02-01
To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. 4. Laryngoscope, 2016 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Using integrated modeling for generating watershed-scale dynamic flood maps for Hurricane Harvey
NASA Astrophysics Data System (ADS)
Saksena, S.; Dey, S.; Merwade, V.; Singhofen, P. J.
2017-12-01
Hurricane Harvey, which was categorized as a 1000-year return period event, produced unprecedented rainfall and flooding in Houston. Although the expected rainfall was forecasted much before the event, there was no way to identify which regions were at higher risk of flooding, the magnitude of flooding, and when the impacts of rainfall would be highest. The inability to predict the location, duration, and depth of flooding created uncertainty over evacuation planning and preparation. This catastrophic event highlighted that the conventional approach to managing flood risk using 100-year static flood inundation maps is inadequate because of its inability to predict flood duration and extents for 500-year or 1000-year return period events in real-time. The purpose of this study is to create models that can dynamically predict the impacts of rainfall and subsequent flooding, so that necessary evacuation and rescue efforts can be planned in advance. This study uses a 2D integrated surface water-groundwater model called ICPR (Interconnected Channel and Pond Routing) to simulate both the hydrology and hydrodynamics for Hurricane Harvey. The methodology involves using the NHD stream network to create a 2D model that incorporates rainfall, land use, vadose zone properties and topography to estimate streamflow and generate dynamic flood depths and extents. The results show that dynamic flood mapping captures the flood hydrodynamics more accurately and is able to predict the magnitude, extent and time of occurrence for extreme events such as Hurricane Harvey. Therefore, integrated modeling has the potential to identify regions that are more susceptible to flooding, which is especially useful for large-scale planning and allocation of resources for protection against future flood risk.
Warriner, David R; Brown, Alistair G; Varma, Susheel; Sheridan, Paul J; Lawford, Patricia; Hose, David R; Al-Mohammad, Abdallah; Shi, Yubing
2014-01-01
The American Heart Association (AHA)/American College of Cardiology (ACC) guidelines for the classification of heart failure (HF) are descriptive but lack precise and objective measures which would assist in categorising such patients. Our aim was twofold: firstly, to demonstrate quantitatively the progression of HF through each stage using a meta-analysis of existing left ventricular (LV) pressure-volume (PV) loop data; and secondly, to use the LV PV loop data to create stage-specific HF models. A literature search yielded 31 papers with PV data, representing over 200 patients in different stages of HF. The raw pressure and volume data were extracted from the papers using a digitising software package and the means were calculated. The data demonstrated that, as HF progressed, stroke volume (SV) and ejection fraction (EF%) decreased while LV volumes increased. A 2-element lumped parameter model was employed to model the mean loops, and the error between the loops was calculated, demonstrating a close fit. The only parameter that was consistently and statistically different across all the stages was the elastance (Emax). For the first time, the authors have created a visual and quantitative representation of the AHA/ACC stages of LVSD-HF, from normal to end-stage. The study demonstrates that robust, load-independent and reproducible parameters, such as elastance, can be used to categorise and model HF, complementing the existing classification. The modelled PV loops establish previously unknown physiological parameters for each AHA/ACC stage of LVSD-HF, such as LV elastance, and highlight that it is this parameter alone, in lumped parameter models, that determines the severity of HF. Such information will enable cardiovascular modellers with an interest in HF to create more accurate models of the heart as it fails.
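A minimal sketch of the kind of load-independent parameter extraction described above: approximating Emax from the end-systolic point of a digitized PV loop as Pes / (Ves − V0). The loop values and V0 below are illustrative, not the pooled data from the 31 papers.

```python
# Sketch of extracting basic PV-loop parameters and a single-point estimate of
# elastance (Emax) from digitized loop data; all values are illustrative.
end_systolic_pressure = 110.0   # mmHg (illustrative)
end_systolic_volume = 60.0      # mL
end_diastolic_volume = 130.0    # mL
v0 = 15.0                       # volume-axis intercept of the ESPVR, mL (assumed)

stroke_volume = end_diastolic_volume - end_systolic_volume
ejection_fraction = 100.0 * stroke_volume / end_diastolic_volume
emax = end_systolic_pressure / (end_systolic_volume - v0)     # mmHg/mL

print(f"SV = {stroke_volume:.0f} mL, EF = {ejection_fraction:.0f}%, Emax = {emax:.2f} mmHg/mL")
```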
NASA Astrophysics Data System (ADS)
Saksena, S.; Merwade, V.; Singhofen, P.
2017-12-01
There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking part during a flood event to provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, computation time and number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing its performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the 2D integrated model (NSE = 0.88), while the computational time is reduced by half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
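For reference, the Nash-Sutcliffe efficiency (NSE) quoted above is one minus the ratio of squared model errors to the variance of the observations; a small sketch with placeholder hydrographs follows.

```python
# Sketch of the Nash-Sutcliffe efficiency (NSE) used to compare the hybrid and
# fully integrated models against observations; hydrographs are placeholders.
import numpy as np

def nse(simulated, observed):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

observed_flow = np.array([120.0, 340.0, 910.0, 1460.0, 980.0, 520.0, 260.0])   # m^3/s
hybrid_flow = np.array([130.0, 360.0, 870.0, 1400.0, 1010.0, 560.0, 240.0])

print(f"NSE = {nse(hybrid_flow, observed_flow):.2f}")
```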
Analysis of composite plates by using mechanics of structure genome and comparison with ANSYS
NASA Astrophysics Data System (ADS)
Zhao, Banghua
Motivated by the recently discovered concept of the Structure Genome (SG), defined as the smallest mathematical building block of a structure, a new approach named Mechanics of Structure Genome (MSG) for modeling and analyzing composite plates is introduced. MSG is implemented in a general-purpose code named SwiftComp(TM), which provides the constitutive models needed in structural analysis by homogenization and pointwise local fields by dehomogenization. To improve the user friendliness of SwiftComp(TM), a simple graphical user interface (GUI) based on the ANSYS Mechanical APDL platform, called ANSYS-SwiftComp GUI, is developed; it provides a convenient way to create common SG models or arbitrary customized SG models in ANSYS and to invoke SwiftComp(TM) to perform homogenization and dehomogenization. The global structural analysis can also be handled in ANSYS after homogenization, which can predict the global behavior and provide the inputs needed for dehomogenization. To demonstrate the accuracy and efficiency of the MSG approach, several numerical cases are studied and compared using both MSG and ANSYS. In the ANSYS approach, 3D solid element models (ANSYS 3D approach) are used as reference models, and the 2D shell element models created by ANSYS Composite PrepPost (ACP approach) are compared with the MSG approach. The results of the MSG approach agree well with the ANSYS 3D approach while being as efficient as the ACP approach. Therefore, the MSG approach provides an efficient and accurate new way to model composite plates.
NASA Astrophysics Data System (ADS)
Trombley, N.; Weber, E.; Moehl, J.
2017-12-01
Many studies invoke dasymetric mapping to make more accurate depictions of population distribution by spatially restricting populations to inhabited/inhabitable portions of observational units (e.g., census blocks) and/or by varying population density among different land classes. LandScan USA uses this approach by restricting particular population components (such as residents or workers) to building area detected from remotely sensed imagery, but also goes a step further by classifying each cell of building area in accordance with ancillary land use information from national parcel data (CoreLogic, Inc.'s ParcelPoint database). Modeling population density according to land use is critical. For instance, office buildings would have a higher density of workers than warehouses even though the latter would likely have more cells of detection. This paper presents a modeling approach by which different land uses are assigned different densities to more accurately distribute populations within them. For parts of the country where the parcel data is insufficient, an alternate methodology is developed that uses National Land Cover Database (NLCD) data to define the land use type of building detection. Furthermore, LiDAR data is incorporated for many of the largest cities across the US, allowing the independent variables to be updated from two-dimensional building detection area to total building floor space. In the end, four different regression models are created to explain the effect of different land uses on worker distribution: (1) a two-dimensional model using land use types from the parcel data; (2) a three-dimensional model using land use types from the parcel data; (3) a two-dimensional model using land use types from the NLCD data; and (4) a three-dimensional model using land use types from the NLCD data. By and large, the resultant coefficients followed intuition, but importantly allow the relationships between different land uses to be quantified. For instance, in the model using two-dimensional building area, commercial building area had a density 2.5 times greater than public building area and 4 times greater than industrial building area. These coefficients can be applied to define the ratios at which population is distributed to building cells. Finally, possible avenues for refining the methodology are presented.
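A hedged sketch of the regression step (not the LandScan USA code): regress census-unit worker counts on building area broken out by land-use class without an intercept, so each coefficient acts as a workers-per-unit-area density; the synthetic densities below are chosen to echo the 2.5x and 4x ratios quoted above.

```python
# Sketch of estimating land-use-specific worker densities by regressing block
# worker counts on building area per land-use class; the input table is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_blocks = 500
# columns: commercial, public, industrial building area per block (arbitrary units)
area = rng.uniform(0, 100, size=(n_blocks, 3))
true_density = np.array([10.0, 4.0, 2.5])            # workers per unit area (assumed)
workers = area @ true_density + rng.normal(0, 20, n_blocks)

# least squares without intercept: workers ~ sum_k density_k * area_k
density_hat, *_ = np.linalg.lstsq(area, workers, rcond=None)
print("estimated densities:", density_hat.round(2))
print("commercial vs public ratio:", (density_hat[0] / density_hat[1]).round(2))
```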
The Columbia Thyroid Eye Disease-Compressive Optic Neuropathy Formula.
Callahan, Alison B; Campbell, Ashley A; Oropesa, Susel; Baraban, Aryeh; Kazim, Michael
2018-06-13
Diagnosing thyroid eye disease-compressive optic neuropathy (TED-CON) is challenging, particularly in cases lacking a relative afferent pupillary defect. Large case series of TED-CON patients and accessible diagnostic tools are lacking in the current literature. This study aims to create a mathematical formula that accurately predicts the presence or absence of CON based on the most salient clinical measures of optic neuropathy. A retrospective case series compares 108 patients (216 orbits) with either unilateral or bilateral TED-CON and 41 age-matched patients (82 orbits) with noncompressive TED. Utilizing clinical variables assessing optic nerve function and/or risk of compressive disease, and with the aid of generalized linear regression modeling, the authors create a mathematical formula that weighs the relative contribution of each clinical variable in the overall prediction of CON. Data from 213 orbits in 110 patients derived the formula: y = -0.69 + 2.58 × (afferent pupillary defect) - 0.31 × (summed limitation of ductions) - 0.2 × (mean deviation on Humphrey visual field testing) - 0.02 × (% color plates). This accurately predicted the presence of CON (y > 0) versus non-CON (y < 0) in 82% of cases with 83% sensitivity and 81% specificity. When there was no relative afferent pupillary defect, which was the case in 63% of CON orbits, the formula correctly predicted CON in 78% of orbits with 73% sensitivity and 83% specificity. The authors developed a mathematical formula, the Columbia TED-CON Formula (CTD Formula), that can help guide clinicians in accurately diagnosing TED-CON, particularly in the presence of bilateral disease and when no relative afferent pupillary defect is present.
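The reported formula is simple enough to evaluate directly. The sketch below encodes it as a small function; the variable coding (e.g., scoring the afferent pupillary defect as 0/1 and the color plates as a percentage) is our reading of the abstract rather than the authors' exact specification, and the example values are hypothetical.

```python
def ctd_formula(apd, summed_duction_limitation, hvf_mean_deviation, pct_color_plates):
    """Columbia TED-CON formula as reported in the abstract.

    apd: relative afferent pupillary defect (1 if present, 0 if absent; assumed coding)
    summed_duction_limitation: summed limitation of ductions
    hvf_mean_deviation: mean deviation on Humphrey visual field testing (dB)
    pct_color_plates: percentage of color plates identified correctly (0-100)
    """
    y = (-0.69
         + 2.58 * apd
         - 0.31 * summed_duction_limitation
         - 0.20 * hvf_mean_deviation
         - 0.02 * pct_color_plates)
    return y, ("CON" if y > 0 else "non-CON")

# Hypothetical patient: no APD, moderate duction limits, depressed field, most plates seen.
print(ctd_formula(apd=0, summed_duction_limitation=4,
                  hvf_mean_deviation=-5.0, pct_color_plates=90))
```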
Studying depression using imaging and machine learning methods.
Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J
2016-01-01
Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.
Simulating supersymmetry at the SSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.M.; Haber, H.E.
1984-08-01
Careful study of supersymmetric signatures at the SSC is required in order to distinguish them from Standard Model physics backgrounds. To this end, we have created an efficient, accurate computer program which simulates the production and decay of supersymmetric particles (or other new particles). We have incorporated the full matrix elements, keeping track of the polarizations of all intermediate states. (At this time, hadronization of final-state partons is ignored.) Using Monte Carlo techniques, this program can generate any desired final-state distribution or individual events for Lego plots. Examples of the results of our study of supersymmetry at the SSC are provided.
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument-bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topographic Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model did reduce the run time, the time savings were not large, nor was the relationship between the percentage of nodes reduced and the time saved linear. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of the reduced model runtime.
Borojeni, Azadeh A.T.; Frank-Ito, Dennis O.; Kimbell, Julia S.; Rhee, John S.; Garcia, Guilherme J. M.
2016-01-01
Virtual surgery planning based on computational fluid dynamics (CFD) simulations has the potential to improve surgical outcomes for nasal airway obstruction (NAO) patients, but the benefits of virtual surgery planning must outweigh the risks of radiation exposure. Cone beam computed tomography (CBCT) scans represent an attractive imaging modality for virtual surgery planning due to lower costs and lower radiation exposures compared with conventional CT scans. However, to minimize the radiation exposure, the CBCT sinusitis protocol sometimes images only the nasal cavity, excluding the nasopharynx. The goal of this study was to develop an idealized nasopharynx geometry for accurate representation of outlet boundary conditions when the nasopharynx geometry is unavailable. Anatomically accurate models of the nasopharynx created from thirty CT scans were intersected with planes rotated at different angles to obtain an average geometry. Cross sections of the idealized nasopharynx were approximated as ellipses with cross-sectional areas and aspect ratios equal to the averages in the actual patient-specific models. CFD simulations were performed to investigate whether nasal airflow patterns were affected when the CT-based nasopharynx was replaced by the idealized nasopharynx in 10 NAO patients. Despite the simple form of the idealized geometry, all biophysical variables (nasal resistance, airflow rate, and heat fluxes) were very similar in the idealized vs. patient-specific models. The results confirmed the expectation that the nasopharynx geometry has a minimal effect on nasal airflow patterns during inspiration. The idealized nasopharynx geometry will be useful in future CFD studies of nasal airflow based on medical images that exclude the nasopharynx. PMID:27525807
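A small worked example of the cross-section construction described above: given a target cross-sectional area A and aspect ratio r = a/b, the ellipse semi-axes follow from A = πab. The numbers below are hypothetical, and this is only a sketch of the geometric step, not the authors' full pipeline.

```python
import numpy as np

def ellipse_semi_axes(area, aspect_ratio):
    """Semi-axes (a, b) of an ellipse with the given area and aspect ratio a/b.

    Illustrates how each cross section of an idealized nasopharynx could be built
    so that its area and aspect ratio match the average of the patient-specific
    models (area = pi*a*b, aspect_ratio = a/b).
    """
    a = np.sqrt(area * aspect_ratio / np.pi)
    b = a / aspect_ratio
    return a, b

# Hypothetical averages from a set of CT-based models: 180 mm^2 area, aspect ratio 2.5.
a, b = ellipse_semi_axes(180.0, 2.5)
print(f"a = {a:.2f} mm, b = {b:.2f} mm, area check = {np.pi * a * b:.1f} mm^2")
```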
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is obtaining an accurate velocity model. For surface-based monitoring, the model should capture 3D velocity variation, yet the laterally heterogeneous nature of the velocity field is rarely captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection-related seismicity.
Nonlinear modeling of chaotic time series: Theory and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casdagli, M.; Eubank, S.; Farmer, J.D.
1990-01-01
We review recent developments in the modeling and prediction of nonlinear time series. In some cases apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics and human speech. 162 refs., 13 figs.
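To make the two-step procedure concrete (state-space reconstruction followed by nonlinear function approximation), the sketch below implements a delay-coordinate embedding and a simple local nearest-neighbor predictor on the chaotic logistic map. It is a minimal illustration of the idea, not any specific method reviewed in the paper; in practice the embedding dimension and delay are typically chosen with criteria such as false nearest neighbors and mutual information.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a state space from a scalar time series by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_forecast(x, dim=3, tau=1, k=10):
    """One-step forecast: average the successors of the k nearest neighbors
    of the current reconstructed state (a simple local nonlinear predictor)."""
    emb = delay_embed(x, dim, tau)
    current, history = emb[-1], emb[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    nn = np.argsort(dists)[:k]
    # The successor of the state starting at index i is x[i + (dim - 1) * tau + 1].
    return np.mean(x[nn + (dim - 1) * tau + 1])

# Toy example: the chaotic logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(2000); x[0] = 0.3
for t in range(1999):
    x[t + 1] = 4 * x[t] * (1 - x[t])
print("forecast:", local_forecast(x[:-1]), "actual:", x[-1])
```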
Drag Reduction of an Airfoil Using Deep Learning
NASA Astrophysics Data System (ADS)
Jiang, Chiyu; Sun, Anzhu; Marcus, Philip
2017-11-01
We reduced the drag of a 2D airfoil using deep learning methods, starting from a NACA-0012 airfoil. We created a database consisting of simulations of 2D external flow over randomly generated shapes. We then developed a machine learning framework that infers the external flow field for a given input shape. Past work utilizing machine learning in Computational Fluid Dynamics focused on estimating specific flow parameters; this work is novel in inferring entire flow fields. We further showed that learned flow patterns are transferable to cases that share certain similarities. This study illustrates the prospects of deeper integration of data-based modeling into current CFD simulation frameworks for faster flow inference and more accurate flow modeling.
Regional analysis of drought and heat impacts on forests: current and future science directions.
Law, Beverly E
2014-12-01
Accurate assessments of forest response to current and future climate and human actions are needed at regional scales. Predicting future impacts on forests will require improved analysis of species-level adaptation, resilience, and vulnerability to mortality. Land system models can be enhanced by creating trait-based groupings of species that better represent climate sensitivity, such as risk of hydraulic failure from drought. This emphasizes the need for more coordinated in situ and remote sensing observations to track changes in ecosystem function, and to improve model inputs, spatio-temporal diagnosis, and predictions of future conditions, including implications of actions to mitigate climate change. © 2014 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Fast stochastic algorithm for simulating evolutionary population dynamics
NASA Astrophysics Data System (ADS)
Tsimring, Lev; Hasty, Jeff; Mather, William
2012-02-01
Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.
Nuclear fuel in a reactor accident.
Burns, Peter C; Ewing, Rodney C; Navrotsky, Alexandra
2012-03-09
Nuclear accidents that lead to melting of a reactor core create heterogeneous materials containing hundreds of radionuclides, many with short half-lives. The long-lived fission products and transuranium elements within damaged fuel remain a concern for millennia. Currently, accurate fundamental models for the prediction of release rates of radionuclides from fuel, especially in contact with water, after an accident remain limited. Relatively little is known about fuel corrosion and radionuclide release under the extreme chemical, radiation, and thermal conditions during and subsequent to a nuclear accident. We review the current understanding of nuclear fuel interactions with the environment, including studies over the relatively narrow range of geochemical, hydrological, and radiation environments relevant to geological repository performance, and discuss priorities for research needed to develop future predictive models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereira, Ana I.; ALGORITMI, University of Minho; Lima, José
There are several approaches to humanoid robot gait planning. This problem presents a large number of unknown parameters that must be found to make the humanoid robot walk. Optimization in simulation models can be used to find the gait based on several criteria, such as energy minimization, acceleration, and step length, among others. The energy consumption can also be reduced with elastic elements coupled to each joint. The presented paper addresses an optimization method, Stretched Simulated Annealing, that runs in an accurate and stable simulation model to find the optimal gait combined with elastic elements. Final results demonstrate that optimization is a valid gait planning technique.
Data-Driven Residential Load Modeling and Validation in GridLAB-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gotseff, Peter; Lundstrom, Blake
Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snap shots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls and hence, impacts on the distribution system over a given time period. Unfortunately, the high time resolution DER source and load data required for model inputs is often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition as well as realistic occupancy schedules. House model calibration and validation was performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
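A rough sketch of how the four evaluation metrics listed above could be computed for a measured and a modeled transformer load series. The data here are synthetic, and the exact metric definitions (particularly the load-shape comparison) are assumptions for illustration, not the GridLAB-D validation code.

```python
import numpy as np
from scipy.signal import welch

def load_model_metrics(measured, modeled, dt_seconds=1.0):
    """Compare two equally sampled load series (kW) over a single day."""
    hours = dt_seconds / 3600.0
    metrics = {
        "daily_energy_kWh": (np.sum(measured) * hours, np.sum(modeled) * hours),
        "mean_power_kW": (np.mean(measured), np.mean(modeled)),
        "std_power_kW": (np.std(measured), np.std(modeled)),
    }
    _, psd_meas = welch(measured, fs=1.0 / dt_seconds)
    _, psd_mod = welch(modeled, fs=1.0 / dt_seconds)
    metrics["psd_correlation"] = np.corrcoef(psd_meas, psd_mod)[0, 1]
    # "Load shape": correlation of the normalized daily profiles (assumed definition).
    metrics["load_shape_correlation"] = np.corrcoef(
        measured / measured.max(), modeled / modeled.max())[0, 1]
    return metrics

# Synthetic one-day, 1-second example in kW.
rng = np.random.default_rng(5)
t = np.arange(86400)
measured = 2 + np.sin(2 * np.pi * t / 86400) + 0.1 * rng.normal(size=t.size)
modeled = 2 + np.sin(2 * np.pi * (t - 600) / 86400) + 0.1 * rng.normal(size=t.size)
print(load_model_metrics(measured, modeled))
```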
Expert models and modeling processes associated with a computer-modeling tool
NASA Astrophysics Data System (ADS)
Zhang, Baohui; Liu, Xiufeng; Krajcik, Joseph S.
2006-07-01
Holding the premise that the development of expertise is a continuous process, this study concerns expert models and modeling processes associated with a modeling tool called Model-It. Five advanced Ph.D. students in environmental engineering and public health used Model-It to create and test models of water quality. Using a think-aloud technique and video recording, we captured their computer screen modeling activities and thinking processes. We also interviewed them the day following their modeling sessions to further probe the rationale of their modeling practices. We analyzed both the audio-video transcripts and the experts' models. We found the experts' modeling processes followed the linear sequence built into the modeling program with few instances of moving back and forth. They specified their goals up front and spent a long time thinking through an entire model before acting. They specified relationships with accurate and convincing evidence. Factors (i.e., variables) in expert models were clustered, and represented by specialized technical terms. Based on the above findings, we made suggestions for improving model-based science teaching and learning using Model-It.
Suenaga, Hideyuki; Hoang Tran, Huy; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Mori, Yoshiyuki; Takato, Tsuyoshi
2013-01-01
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. PMID:23703710
NASA Astrophysics Data System (ADS)
Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin
2015-11-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating a high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. A GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
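A minimal sketch of the ensemble-surrogate idea: two stand-in surrogates (kernel ridge regression and a Gaussian process, in place of the paper's KELM and KRG) are fit to hypothetical simulation samples and combined by inverse-validation-error weighting. The weighting scheme, models, and data are illustrative assumptions; the resulting ensemble predictor is the kind of object that would then be embedded as a constraint in the GA-based cost optimization.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Hypothetical training data: SEAR design variables -> nitrobenzene removal rate,
# produced by runs of the (expensive) simulation model.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 4))          # e.g., injection rates, surfactant conc., duration
y = 0.6 + 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=60)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Stand-ins for the two most accurate surrogates selected in the abstract.
surrogates = [KernelRidge(kernel="rbf", alpha=1e-2), GaussianProcessRegressor()]
weights = []
for s in surrogates:
    s.fit(X_tr, y_tr)
    err = np.mean((s.predict(X_val) - y_val) ** 2)
    weights.append(1.0 / (err + 1e-12))       # inverse-error weighting (one common choice)
weights = np.array(weights) / np.sum(weights)

def ensemble_predict(X_new):
    """Ensemble surrogate: error-weighted average of the individual surrogates."""
    return sum(w * s.predict(X_new) for w, s in zip(weights, surrogates))

print(ensemble_predict(X_val[:3]), y_val[:3])
```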
NASA Astrophysics Data System (ADS)
Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.
2015-12-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating a high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. A GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim
2017-06-15
Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square-root-transformed monthly or annual means, for which a normal distribution seems appropriate. This assumption becomes invalid on a daily time scale, as the observations involve a large fraction of zeros and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution at a very high spatial and temporal resolution and is competitive with, or even outperforms, existing methods, even for arbitrary locations.
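The core distributional idea, a normal distribution left-censored at zero, can be illustrated with a simple maximum-likelihood fit. The sketch below is for a single site with synthetic data and omits the spatial and seasonal structure of the actual climatological model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_normal_nll(params, y, censor=0.0):
    """Negative log-likelihood of a normal distribution left-censored at `censor`
    (zeros stand for dry days); a minimal single-site sketch, not the paper's model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    zero = y <= censor
    ll = np.sum(norm.logcdf((censor - mu) / sigma) * zero)       # censored (dry) days
    ll += np.sum(norm.logpdf(y[~zero], loc=mu, scale=sigma))     # observed wet days
    return -ll

# Synthetic daily precipitation on a transformed scale: many dry days, some wet days.
rng = np.random.default_rng(2)
latent = rng.normal(loc=0.5, scale=1.2, size=1000)
y = np.maximum(latent, 0.0)

fit = minimize(censored_normal_nll, x0=np.array([0.0, 0.0]), args=(y,))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)   # should recover roughly 0.5 and 1.2
```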
Anatomically constrained neural network models for the categorization of facial expression
NASA Astrophysics Data System (ADS)
McMenamin, Brenton W.; Assadi, Amir H.
2004-12-01
In humans, the recognition of facial expressions is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. A feedback mechanism may also play a role in this process. Implementing a model with a similar parallel structure and feedback mechanism could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
Classification of hospital admissions into emergency and elective care: a machine learning approach.
Krämer, Jonas; Schreyögg, Jonas; Busse, Reinhard
2017-11-25
Rising admissions from emergency departments (EDs) to hospitals are a primary concern for many healthcare systems. The issue of how to differentiate urgent admissions from non-urgent or even elective admissions is crucial. We aim to develop a model for classifying inpatient admissions based on a patient's primary diagnosis as either emergency care or elective care and predicting urgency as a numerical value. We use supervised machine learning techniques and train the model with physician-expert judgments. Our model is accurate (96%) and has a high area under the ROC curve (>.99). We provide the first comprehensive classification and urgency categorization for inpatient emergency and elective care. This model assigns urgency values to every relevant diagnosis in the ICD catalog, and these values are easily applicable to existing hospital data. Our findings may provide a basis for policy makers to create incentives for hospitals to reduce the number of inappropriate ED admissions.
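A schematic of the supervised setup described above: diagnoses labeled by physician experts as emergency or elective are used to train a classifier, and the predicted probability of emergency care can serve as the numerical urgency value attached to each diagnosis. The features and data below are hypothetical; the authors' actual feature set and learning algorithm may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy setup: each row is an ICD diagnosis described by a few aggregate
# features (e.g., share of ED arrivals, mean age, mortality rate), labeled by
# physician experts as emergency (1) or elective (0).
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(200, 3))
labels = (0.7 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=200) > 0.5).astype(int)

clf = LogisticRegression().fit(X, labels)

# "Urgency" as a numerical value: the predicted probability of emergency care,
# which could then be attached to every relevant diagnosis in the ICD catalog.
urgency = clf.predict_proba(X)[:, 1]
print(urgency[:5], labels[:5])
```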
NASA Technical Reports Server (NTRS)
Seymour, David C.; Martin, Michael A.; Nguyen, Huy H.; Greene, William D.
2005-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
NASA Technical Reports Server (NTRS)
Martin, Michael A.; Nguyen, Huy H.; Greene, William D.; Seymour, David C.
2003-01-01
The subject of mathematical modeling of the transient operation of liquid rocket engines is presented in overview form from the perspective of engineers working at the NASA Marshall Space Flight Center. The necessity of creating and utilizing accurate mathematical models as part of the liquid rocket engine development process has become well established and is likely to increase in importance in the future. The issues of design considerations for transient operation, development testing, and failure scenario simulation are discussed. An overview of the derivation of the basic governing equations is presented along with a discussion of computational and numerical issues associated with the implementation of these equations in computer codes. Also, work in the field of generating usable fluid property tables is presented along with an overview of efforts to be undertaken in the future to improve the tools used for the mathematical modeling process.
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields, among them environmental monitoring, have adopted this mature technology. One difficult task is acquiring accurate positions of ground objects in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One notable advantage of this method over traditional approaches is that a 3-dimensional point in space is represented by three angles (elevation angle, azimuth angle, and parallax angle) rather than by XYZ values. As the basis for APCP, bundle adjustment is used to accurately optimize the UAS sensors' poses and reconstruct 3D models of the environment, thus providing the accurate positions needed for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained with non-metric digital cameras at large attitude angles, and we find that our method achieves one to two times the efficiency of traditional approaches with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates depends strongly on the initial values, preventing it from converging quickly or to a stable state. In contrast, the APCP method can handle quite complex UAS monitoring conditions because it represents points in space with angles, including the case in which sequential images of one object have a zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP, derives the full bundle adjustment model and the corresponding nonlinear optimization problems, and analyzes convergence and dependence on initial values through mathematical formulas. Finally, the paper conducts experiments on real aviation data and shows that the new model can, to a certain degree, overcome the bottlenecks of the classical method, providing a new approach for faster and more efficient environmental monitoring.
NASA Astrophysics Data System (ADS)
Yavari Ramsheh, S.; Ataie-Ashtiani, B.
2017-12-01
Recent studies revealed that landslide-generated waves (LGWs) impose the largest tsunami hazard on our shorelines, although earthquake-generated waves (EGWs) occur more often. Also, EGWs are commonly followed by a large number of landslide hazards. Dam reservoirs are particularly vulnerable to landslide events because they are located in mountainous areas. Accurate estimation of such hazards and their destructive consequences helps authorities reduce the risks through constructive measures. In this regard, a two-layer two-phase Coulomb mixture flow (2LCMFlow) model is applied to investigate the effects of landslide characteristics on LGWs for a real-sized simplification of the Maku dam reservoir, located in the north of Iran. A sensitivity analysis is performed on the role of the landslide's rheological and constitutive parameters and its initial submergence in LGW characteristics and formation patterns. The numerical results show that for a subaerial (SAL), a semi-submerged (SSL), and a submarine landslide (SML) with the same initial geometry, SSLs can create the largest wave crest, up to 60% larger than SALs, for dense material. However, SMLs generally create the largest wave troughs, and SALs travel the maximum runout distances beneath the water. Regarding the two-phase (solid-liquid) nature of the landslide, when interstitial water is isolated from the water layer along the water/landslide interface, an LGW with a wave crest up to 30% higher can be created. In this condition, increasing the pore water pressure within the granular layer results in an up to 35% higher wave trough and a 40% lower wave crest at the same time. These results signify the importance of an appropriate description of the two-phase nature and rheological behavior of landslides for accurate estimation of LGWs, which demands further numerical, physical, and field studies of such phenomena.
A Method to Represent Heterogeneous Materials for Rapid Prototyping: The Matryoshka Approach.
Lei, Shuangyan; Frank, Matthew C; Anderson, Donald D; Brown, Thomas D
The purpose of this paper is to present a new method for representing heterogeneous materials using nested STL shells, based, in particular, on the density distributions of human bones. Nested STL shells, called Matryoshka models, are described, based on their namesake Russian nesting dolls. In this approach, polygonal models, such as STL shells, are "stacked" inside one another to represent different material regions. The Matryoshka model addresses the challenge of representing different densities and different types of bone when reverse engineering from medical images. The Matryoshka model is generated via an iterative process of thresholding the Hounsfield Unit (HU) data using computed tomography (CT), thereby delineating regions of progressively increasing bone density. These nested shells can represent regions starting with the medullary (bone marrow) canal, up through and including the outer surface of the bone. The Matryoshka approach introduced can be used to generate accurate models of heterogeneous materials in an automated fashion, avoiding the challenge of hand-creating an assembly model for input to multi-material additive or subtractive manufacturing. This paper presents a new method for describing heterogeneous materials: in this case, the density distribution in a human bone. The authors show how the Matryoshka model can be used to plan harvesting locations for creating custom rapid allograft bone implants from donor bone. An implementation of a proposed harvesting method is demonstrated, followed by a case study using subtractive rapid prototyping to harvest a bone implant from a human tibia surrogate.
Evaluation of Fish Passage at Whitewater Parks Using 2D and 3D Hydraulic Modeling
NASA Astrophysics Data System (ADS)
Hardee, T.; Nelson, P. A.; Kondratieff, M.; Bledsoe, B. P.
2016-12-01
In-stream whitewater parks (WWPs) are increasingly popular recreational amenities that typically create waves by constricting flow through a chute to increase velocities and form a hydraulic jump. However, the hydraulic conditions these structures create can limit longitudinal habitat connectivity and potentially inhibit upstream fish migration, especially of native fishes. An improved understanding of the fundamental hydraulic processes and potential environmental effects of whitewater parks is needed to inform management decisions about Recreational In-Channel Diversions (RICDs). Here, we use hydraulic models to compute a continuous and spatially explicit description of velocity and depth along potential fish swimming paths in the flow field, and the ensemble of potential paths is compared to fish swimming performance data to predict fish passage via logistic regression analysis. While 3D models have been shown to accurately predict trout movement through WWP structures, 2D methods can provide a more cost-effective and manager-friendly approach to assessing the effects of similar hydraulic structures on fish passage when 3D analysis is not feasible. Here, we use 2D models to examine the hydraulics in several WWP structures on the North Fork of the St. Vrain River at Lyons, Colorado, and we compare these model results to fish passage predictions from a 3D model. Our analysis establishes a foundation for a practical, transferable, and physically rigorous 2D modeling approach for mechanistically evaluating the effects of hydraulic structures on fish passage.
StratBAM: A Discrete-Event Simulation Model to Support Strategic Hospital Bed Capacity Decisions.
Devapriya, Priyantha; Strömblad, Christopher T B; Bailey, Matthew D; Frazier, Seth; Bulger, John; Kemberling, Sharon T; Wood, Kenneth E
2015-10-01
The ability to accurately measure and assess current and potential health care system capacities is an issue of local and national significance. Recent joint statements by the Institute of Medicine and the Agency for Healthcare Research and Quality have emphasized the need to apply industrial and systems engineering principles to improving health care quality and patient safety outcomes. To address this need, a decision support tool was developed for planning and budgeting of current and future bed capacity, and evaluating potential process improvement efforts. The Strategic Bed Analysis Model (StratBAM) is a discrete-event simulation model created after a thorough analysis of patient flow and data from Geisinger Health System's (GHS) electronic health records. Key inputs include: timing, quantity and category of patient arrivals and discharges; unit-level length of care; patient paths; and projected patient volume and length of stay. Key outputs include: admission wait time by arrival source and receiving unit, and occupancy rates. Electronic health records were used to estimate parameters for probability distributions and to build empirical distributions for unit-level length of care and for patient paths. Validation of the simulation model against GHS operational data confirmed its ability to model real-world data consistently and accurately. StratBAM was successfully used to evaluate the system impact of forecasted patient volumes and length of stay in terms of patient wait times, occupancy rates, and cost. The model is generalizable and can be appropriately scaled for larger and smaller health care settings.
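To illustrate the discrete-event simulation paradigm underlying StratBAM, the sketch below simulates a single inpatient unit with Poisson arrivals and exponential lengths of stay and reports mean admission wait and occupancy. Parameter values are hypothetical, and the real model additionally distinguishes arrival sources, patient paths, and unit-level length-of-care distributions estimated from electronic health records.

```python
import heapq, random

def simulate_unit(n_beds=20, arrival_rate=4.0, mean_los_days=3.0, horizon_days=60, seed=0):
    """Minimal single-unit sketch (not the full StratBAM model): Poisson arrivals
    wait for a free bed, occupy it for an exponential length of stay, and we
    track admission wait times and occupancy."""
    random.seed(seed)
    t, free_beds = 0.0, n_beds
    events = [(random.expovariate(arrival_rate), "arrival")]    # (time, event type)
    queue, waits, busy_time = [], [], 0.0
    while events:
        new_t, kind = heapq.heappop(events)
        busy_time += (n_beds - free_beds) * (min(new_t, horizon_days) - t)
        if new_t > horizon_days:
            break
        t = new_t
        if kind == "arrival":
            queue.append(t)                                     # patient joins the queue
            heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
        else:
            free_beds += 1                                      # a bed is released
        while free_beds and queue:                              # admit while beds are free
            waits.append(t - queue.pop(0))
            free_beds -= 1
            heapq.heappush(events, (t + random.expovariate(1.0 / mean_los_days), "discharge"))
    return sum(waits) / len(waits), busy_time / (horizon_days * n_beds)

print("mean wait (days), occupancy:", simulate_unit())
```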
Herrmann, Karl-Heinz; Gärtner, Clemens; Güllmar, Daniel; Krämer, Martin; Reichenbach, Jürgen R
2014-10-01
To evaluate low budget 3D printing technology to create MRI compatible components. A 3D printer is used to create customized MRI compatible components, a loop-coil platform and a multipart mouse fixation. The mouse fixation is custom fit for a dedicated coil and facilitates head fixation with bite bar, anesthetic gas supply and biomonitoring sensors. The mouse fixation was tested in a clinical 3T scanner. All parts were successfully printed and proved MR compatible. Both design and printing were accomplished within a few days and the final print results were functional with well defined details and accurate dimensions (Δ<0.4mm). MR images of the mouse head clearly showed reduced motion artifacts, ghosting and signal loss when using the fixation. We have demonstrated that a low budget 3D printer can be used to quickly progress from a concept to a functional device at very low production cost. While 3D printing technology does impose some restrictions on model geometry, additive printing technology can create objects with complex internal structures that can otherwise not be created by using lathe technology. Thus, we consider a 3D printer a valuable asset for MRI research groups. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Physically-based in silico light sheet microscopy for visualizing fluorescent brain models
2015-01-01
Background: We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods producing visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimens. Results: We demonstrate first results of our visualization pipeline applied to a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model were quantitatively validated against the fluorescence brightness equation and characteristic emission spectra of different fluorescent dyes. AMS subject classification: Modelling and simulation. PMID:26329404
Tranchard, Pauline; Samyn, Fabienne; Duquesne, Sophie; Estèbe, Bruno; Bourbigot, Serge
2017-05-04
Thermophysical properties of a carbon-reinforced epoxy composite laminate (T700/M21 composite for aircraft structures) were evaluated using different innovative characterisation methods. Thermogravimetric Analysis (TGA), Simultaneous Thermal Analysis (STA), Laser Flash Analysis (LFA), and Fourier Transform Infrared (FTIR) analysis were used to measure the thermal decomposition, the specific heat capacity, the anisotropic thermal conductivity of the composite, the heats of decomposition, and the specific heat capacity of the released gases. This provides the input data needed to feed a three-dimensional (3D) model that predicts the temperature profile and the mass loss during well-defined fire scenarios (the model is presented in Part II of this paper). The measurements were optimised to obtain accurate data, which also permit the creation of a public database on an aeronautical carbon fibre/epoxy composite for fire safety engineering.
Integrative analysis of the Caenorhabditis elegans genome by the modENCODE project.
Gerstein, Mark B; Lu, Zhi John; Van Nostrand, Eric L; Cheng, Chao; Arshinoff, Bradley I; Liu, Tao; Yip, Kevin Y; Robilotto, Rebecca; Rechtsteiner, Andreas; Ikegami, Kohta; Alves, Pedro; Chateigner, Aurelien; Perry, Marc; Morris, Mitzi; Auerbach, Raymond K; Feng, Xin; Leng, Jing; Vielle, Anne; Niu, Wei; Rhrissorrakrai, Kahn; Agarwal, Ashish; Alexander, Roger P; Barber, Galt; Brdlik, Cathleen M; Brennan, Jennifer; Brouillet, Jeremy Jean; Carr, Adrian; Cheung, Ming-Sin; Clawson, Hiram; Contrino, Sergio; Dannenberg, Luke O; Dernburg, Abby F; Desai, Arshad; Dick, Lindsay; Dosé, Andréa C; Du, Jiang; Egelhofer, Thea; Ercan, Sevinc; Euskirchen, Ghia; Ewing, Brent; Feingold, Elise A; Gassmann, Reto; Good, Peter J; Green, Phil; Gullier, Francois; Gutwein, Michelle; Guyer, Mark S; Habegger, Lukas; Han, Ting; Henikoff, Jorja G; Henz, Stefan R; Hinrichs, Angie; Holster, Heather; Hyman, Tony; Iniguez, A Leo; Janette, Judith; Jensen, Morten; Kato, Masaomi; Kent, W James; Kephart, Ellen; Khivansara, Vishal; Khurana, Ekta; Kim, John K; Kolasinska-Zwierz, Paulina; Lai, Eric C; Latorre, Isabel; Leahey, Amber; Lewis, Suzanna; Lloyd, Paul; Lochovsky, Lucas; Lowdon, Rebecca F; Lubling, Yaniv; Lyne, Rachel; MacCoss, Michael; Mackowiak, Sebastian D; Mangone, Marco; McKay, Sheldon; Mecenas, Desirea; Merrihew, Gennifer; Miller, David M; Muroyama, Andrew; Murray, John I; Ooi, Siew-Loon; Pham, Hoang; Phippen, Taryn; Preston, Elicia A; Rajewsky, Nikolaus; Rätsch, Gunnar; Rosenbaum, Heidi; Rozowsky, Joel; Rutherford, Kim; Ruzanov, Peter; Sarov, Mihail; Sasidharan, Rajkumar; Sboner, Andrea; Scheid, Paul; Segal, Eran; Shin, Hyunjin; Shou, Chong; Slack, Frank J; Slightam, Cindie; Smith, Richard; Spencer, William C; Stinson, E O; Taing, Scott; Takasaki, Teruaki; Vafeados, Dionne; Voronina, Ksenia; Wang, Guilin; Washington, Nicole L; Whittle, Christina M; Wu, Beijing; Yan, Koon-Kiu; Zeller, Georg; Zha, Zheng; Zhong, Mei; Zhou, Xingliang; Ahringer, Julie; Strome, Susan; Gunsalus, Kristin C; Micklem, Gos; Liu, X Shirley; Reinke, Valerie; Kim, Stuart K; Hillier, LaDeana W; Henikoff, Steven; Piano, Fabio; Snyder, Michael; Stein, Lincoln; Lieb, Jason D; Waterston, Robert H
2010-12-24
We systematically generated large-scale data sets to improve genome annotation for the nematode Caenorhabditis elegans, a key model organism. These data sets include transcriptome profiling across a developmental time course, genome-wide identification of transcription factor-binding sites, and maps of chromatin organization. From this, we created more complete and accurate gene models, including alternative splice forms and candidate noncoding RNAs. We constructed hierarchical networks of transcription factor-binding and microRNA interactions and discovered chromosomal locations bound by an unusually large number of transcription factors. Different patterns of chromatin composition and histone modification were revealed between chromosome arms and centers, with similarly prominent differences between autosomes and the X chromosome. Integrating data types, we built statistical models relating chromatin, transcription factor binding, and gene expression. Overall, our analyses ascribed putative functions to most of the conserved genome.
Carson, Anne; Troy, Douglas
2007-01-01
Nursing and computer science students and faculty worked with the American Red Cross to investigate the potential for information technology to provide Red Cross disaster services nurses with improved access to accurate community resources in times of disaster. Funded by a national three-year grant, this interdisciplinary partnership led to field testing of an information system to support local community disaster preparedness at seven Red Cross chapters across the United States. The field test results demonstrate the benefits of the technology and the value of interdisciplinary research. The work also created a sustainable learning and research model for the future. This paper describes the collaborative model employed in this interdisciplinary research and exemplifies the benefits to faculty and students of well-timed interdisciplinary and community collaboration. PMID:18600129
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
Global surgery: current evidence for improving surgical care.
Fuller, Jennifer C; Shaye, David A
2017-08-01
The field of global surgery is undergoing rapid transformation, owing to several recent prominent reports positioning it as a cost-effective means of relieving global disease burden. The purpose of this article is to review the recent advances in the field of global surgery. Efforts to grow the global surgical workforce and procedural capacity have focused on innovative methods to increase surgeon training, enhance international collaboration, leverage technology, optimize existing health systems, and safely implement task-sharing. Computer modeling offers a novel means of informing policy to optimize timely access to care, equitably promote health and financial protection, and efficiently grow infrastructure. Tools and checklists have recently been developed to enhance data collection and ensure methodologically rigorous publications to inform planning, benchmark surgical systems, promote accurate modeling, track key health indicators, and promote safety. Creation of institutional partnerships and trainee exchanges can enrich training, stimulate commitment to humanitarian work, and promote the equal exchange of ideas and expertise. The recent body of work creates a strong foundation upon which work toward the goal of universal access to safe, affordable surgical care can be built; however, further collection and analysis of country-specific data is necessary for accurate modeling and outcomes research into the efficacy of policies such as task-sharing is greatly needed.
Simulating flow in karst aquifers at laboratory and sub-regional scales using MODFLOW-CFP
NASA Astrophysics Data System (ADS)
Gallegos, Josue Jacob; Hu, Bill X.; Davis, Hal
2013-12-01
Groundwater flow in a well-developed karst aquifer dominantly occurs through bedding planes, fractures, conduits, and caves created by and/or enlarged by dissolution. Conventional groundwater modeling methods assume that groundwater flow is described by Darcian principles where primary porosity (i.e. matrix porosity) and laminar flow are dominant. However, in well-developed karst aquifers, the assumption of Darcian flow can be questionable. While Darcian flow generally occurs in the matrix portion of the karst aquifer, flow through conduits can be non-laminar where the relation between specific discharge and hydraulic gradient is non-linear. MODFLOW-CFP is a relatively new modeling program that accounts for non-laminar and laminar flow in pipes, like karst caves, within an aquifer. In this study, results from MODFLOW-CFP are compared to those from MODFLOW-2000/2005, a numerical code based on Darcy's law, to evaluate the accuracy that CFP can achieve when modeling flows in karst aquifers at laboratory and sub-regional (Woodville Karst Plain, Florida, USA) scales. In comparison with laboratory experiments, simulation results from MODFLOW-CFP are more accurate than those from MODFLOW-2005. At the sub-regional scale, MODFLOW-CFP was more accurate than MODFLOW-2000 for simulating field measurements of peak flow at one spring and total discharges at two springs for an observed storm event.
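The distinction between laminar (Darcian) matrix flow and non-laminar conduit flow can be made concrete with two textbook relations: Darcy's law for the matrix and the Darcy-Weisbach equation for fully turbulent pipe flow, in which velocity no longer varies linearly with the hydraulic gradient. The friction factor and parameter values below are illustrative assumptions, not MODFLOW-CFP internals.

```python
import numpy as np

def darcy_specific_discharge(K, dh_dl):
    """Laminar (Darcian) matrix flow: q = -K * dh/dl (m/s)."""
    return -K * dh_dl

def turbulent_conduit_velocity(diameter, dh_dl, friction_factor=0.03, g=9.81):
    """Mean velocity for fully turbulent conduit flow from the Darcy-Weisbach
    equation: head loss per unit length hf/L = f * v^2 / (2 * g * D), so
    v = sqrt(2 * g * D * |dh/dl| / f). The friction factor is an assumed value."""
    return np.sqrt(2.0 * g * diameter * abs(dh_dl) / friction_factor)

# Illustrative numbers: the same hydraulic gradient through the matrix and a 1 m conduit.
gradient = -0.001                 # dimensionless head gradient
print("matrix q (m/s):", darcy_specific_discharge(K=1e-4, dh_dl=gradient))
print("conduit v (m/s):", turbulent_conduit_velocity(diameter=1.0, dh_dl=gradient))
```

With these illustrative numbers the conduit velocity is several orders of magnitude larger than the matrix specific discharge for the same gradient, which is why representing conduits with a non-linear flow law matters in well-developed karst.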
Velasco, Ignacio; Vahdani, Soheil; Ramos, Hector
2017-09-01
Three-dimensional (3D) printing is a relatively new technology with clinical applications that enables us to create rapid, accurate prototypes of a selected anatomic region, making it possible to plan complex surgery and pre-bend hardware for individual surgical cases. This study aimed to describe our experience with the use of medical rapid prototypes (MRPs) of the maxillofacial region created by a desktop 3D printer and their application in maxillofacial reconstructive surgeries. Three patients with benign mandible tumors were included in this study after obtaining informed consent. All patients' maxillofacial CT scan data were processed by segmentation and isolation software, and mandible MRPs were printed using our desktop 3D printer. These models were used for preoperative surgical planning and pre-bending of the reconstruction plate. An MRP created by a desktop 3D printer is a cost-efficient, quick, and easily produced appliance for the planning of reconstructive surgery. It can contribute to patient orientation, helping patients better understand their condition and the proposed surgical treatment. It helps surgeons with preoperative planning in resection or reconstruction cases and represents an excellent tool in the academic setting for resident training. The reconstruction plate pre-bent on the MRP resulted in decreased surgery time, cost, and anesthesia risk for the patients. Key words: 3D printing, medical modeling, rapid prototype, mandibular reconstruction, ameloblastoma.
Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques
2017-07-01
Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed to real-time images enabling transparency visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account inner structures' deformations. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of pseudo-tumors' location was evaluated with a CT scan in the deformed status (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the estimated tumor by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in in vivo kidney and well visualized in near-infrared mode enabling accurate automatic registration of the virtual model on the laparoscopic images. Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surface to their inner structures including tumors with good accuracy and automatized robust tracking.
Science Opportunity Analyzer (SOA): Science Planning Made Simple
NASA Technical Reports Server (NTRS)
Streiffert, Barbara A.; Polanskey, Carol A.
2004-01-01
For the first time at JPL, the Cassini mission to Saturn is using distributed science operations for developing its experiments. Remote scientists needed the ability to: a) identify observation opportunities; b) create accurate, detailed designs for their observations; c) verify that their designs meet their objectives; d) check their observations against project flight rules and constraints; e) communicate their observations to other scientists. Many existing tools provide one or more of these functions, but Science Opportunity Analyzer (SOA) has been built to unify these tasks into a single application. Accurate: utilizes the JPL Navigation and Ancillary Information Facility (NAIF) SPICE software toolkit, which provides high-fidelity modeling and facilitates rapid adaptation to other flight projects. Portable: available on Unix, Windows and Linux. Adaptable: designed to be a multi-mission tool so it can be readily adapted to other flight projects; implemented in Java, Java 3D and other innovative technologies. Conclusion: SOA is easy to use, requiring only six simple steps. SOA's ability to show the same accurate information in multiple ways (multiple visualization formats, data plots, listings and file output) is essential to meet the needs of a diverse, distributed science operations environment.
Croft, Daniel E; van Hemert, Jano; Wykoff, Charles C; Clifton, David; Verhoek, Michael; Fleming, Alan; Brown, David M
2014-01-01
Accurate quantification of retinal surface area from ultra-widefield (UWF) images is challenging due to warping produced when the retina is projected onto a two-dimensional plane for analysis. By accounting for this, the authors sought to precisely montage and accurately quantify retinal surface area in square millimeters. Montages were created using Optos 200Tx (Optos, Dunfermline, U.K.) images taken at different gaze angles. A transformation projected the images to their correct location on a three-dimensional model. Area was quantified with spherical trigonometry. Warping, precision, and accuracy were assessed. Uncorrected, posterior pixels represented up to 79% greater surface area than peripheral pixels. Assessing precision, a standard region was quantified across 10 montages of the same eye (RSD: 0.7%; mean: 408.97 mm²; range: 405.34-413.87 mm²). Assessing accuracy, 50 patients' disc areas were quantified (mean: 2.21 mm²; SE: 0.06 mm²), and the results fell within the normative range. By accounting for warping inherent in UWF images, precise montaging and accurate quantification of retinal surface area in square millimeters were achieved. Copyright 2014, SLACK Incorporated.
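The spherical-trigonometry step can be illustrated with a short sketch: a region on the three-dimensional eye model is fan-triangulated and its area is obtained from the spherical excess (L'Huilier's theorem) scaled by the squared radius. The 12 mm eye radius and the convexity assumption are illustrative and are not taken from the study.

```python
# Minimal sketch: area of a spherical polygon (e.g., a montaged retinal region)
# by fan-triangulation and L'Huilier's theorem. The 12 mm eye radius is an
# illustrative assumption, not a value from the study.
import numpy as np

def _arc(u, v):
    """Great-circle angle between two unit vectors."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def spherical_triangle_excess(a, b, c):
    """Spherical excess E of the triangle with unit-vector vertices a, b, c."""
    A, B, C = _arc(b, c), _arc(a, c), _arc(a, b)   # side arc lengths
    s = 0.5 * (A + B + C)
    t = np.tan(s / 2) * np.tan((s - A) / 2) * np.tan((s - B) / 2) * np.tan((s - C) / 2)
    return 4.0 * np.arctan(np.sqrt(max(t, 0.0)))

def spherical_polygon_area_mm2(vertices, radius_mm=12.0):
    """Area (mm^2) of a convex spherical polygon given its vertices in order."""
    v = [np.asarray(p, float) / np.linalg.norm(p) for p in vertices]
    excess = sum(spherical_triangle_excess(v[0], v[i], v[i + 1]) for i in range(1, len(v) - 1))
    return excess * radius_mm ** 2   # Girard: area = R^2 * total spherical excess
```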
Rein, David B
2005-01-01
Objective To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R² from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
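A minimal sketch of a class-stratified two-part cost model is given below, assuming a feature matrix X and an annual cost vector y are already assembled; a Gaussian mixture stands in for the paper's finite mixture model, and the log-linear second part with a naive retransformation is a simplification, so none of this reflects the authors' exact specification.

```python
# Hedged sketch: assign severity classes, then fit a two-part cost model
# (P(any spend) x expected spend given spend > 0) within each class.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression, LinearRegression

def fit_stratified_two_part(X, y, n_classes=4):
    classes = GaussianMixture(n_components=n_classes, random_state=0).fit_predict(X)
    models = {}
    for k in range(n_classes):
        idx = classes == k
        Xk, yk = X[idx], y[idx]
        spent = (yk > 0).astype(int)
        if spent.min() == spent.max():     # class with all (or no) spenders: skip
            models[k] = None
            continue
        part1 = LogisticRegression(max_iter=1000).fit(Xk, spent)                 # any spend?
        part2 = LinearRegression().fit(Xk[spent == 1], np.log(yk[spent == 1]))   # log spend
        models[k] = (part1, part2)
    return classes, models

def predict_cost(models, classes, X):
    """Expected cost = P(any spend) * exp(predicted log spend), per assigned class."""
    out = np.zeros(len(X))
    for k, m in models.items():
        idx = classes == k
        if m is None or not idx.any():
            continue
        part1, part2 = m
        out[idx] = part1.predict_proba(X[idx])[:, 1] * np.exp(part2.predict(X[idx]))
    return out
```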
Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Mustafa Sacit; none,; Flanagan, George F.
2014-07-30
An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
NASA Astrophysics Data System (ADS)
DeGrandpre, K.; Pesicek, J. D.; Lu, Z.
2016-12-01
During the summer of 2014 and the early spring of 2015, two notable increases in seismic activity at Semisopochnoi volcano in the western Aleutian Islands were recorded on AVO seismometers on Semisopochnoi and neighboring islands. These seismic swarms did not lead to an eruption. This study employs differential SAR techniques using TerraSAR-X images in conjunction with more accurately relocating the recorded seismic events through simultaneous inversion of event travel times and a three-dimensional velocity model using tomoDD. The interferograms created from the SAR images exhibit surprising coherence and an island-wide spatial distribution of inflation that is then used in a Mogi model in order to define the three-dimensional location and volume change required for a source at Semisopochnoi to produce the observed surface deformation. The tomoDD relocations provide a more accurate and realistic three-dimensional velocity model as well as a tighter clustering of events for both swarms that clearly outline a linear seismic void within the larger group of shallow (<10 km) seismicity. While no direct conclusions as to the relationship between these seismic events and the observed surface deformation can be made at this time, these techniques are both complementary and efficient forms of remotely monitoring volcanic activity that provide much deeper insights into the processes involved without having to risk hazardous or costly field work.
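For context on how such a source model maps a volume change at depth to the surface deformation seen in the interferograms, the following is a minimal sketch of the Mogi point-source forward calculation, assuming a Poisson ratio of 0.25 and an elastic half-space; the grid extent, source depth and volume change are illustrative and do not come from this study, which inverts the InSAR observations for these parameters.

```python
# Minimal sketch of the Mogi point-source forward model relating a volume
# change dV at depth to surface deformation (Poisson ratio 0.25 assumed).
import numpy as np

def mogi_displacements(x, y, x0, y0, depth, dV):
    """Vertical and radial surface displacement (same length units as inputs)."""
    dx, dy = np.asarray(x) - x0, np.asarray(y) - y0
    r = np.hypot(dx, dy)
    R3 = (r**2 + depth**2) ** 1.5
    k = 3.0 * dV / (4.0 * np.pi)          # (1 - nu)/pi * dV with nu = 0.25
    uz = k * depth / R3                   # uplift
    ur = k * r / R3                       # radial (horizontal) displacement
    return uz, ur

# Example: displacement field on a 20 km grid for a 1e6 m^3 source at 5 km depth
# (illustrative values only).
xs = np.linspace(-10e3, 10e3, 101)
X, Y = np.meshgrid(xs, xs)
uz, ur = mogi_displacements(X, Y, 0.0, 0.0, 5e3, 1e6)
```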
Disease prevention versus data privacy: using landcover maps to inform spatial epidemic models.
Tildesley, Michael J; Ryan, Sadie J
2012-01-01
The availability of epidemiological data in the early stages of an outbreak of an infectious disease is vital for modelers to make accurate predictions regarding the likely spread of disease and preferred intervention strategies. However, in some countries, the necessary demographic data are only available at an aggregate scale. We investigated the ability of models of livestock infectious diseases to predict epidemic spread and obtain optimal control policies in the event of imperfect, aggregated data. Taking a geographic information approach, we used land cover data to predict UK farm locations and investigated the influence of using these synthetic location data sets upon epidemiological predictions in the event of an outbreak of foot-and-mouth disease. When broadly classified land cover data were used to create synthetic farm locations, model predictions deviated significantly from those simulated on true data. However, when more resolved subclass land use data were used, moderate to highly accurate predictions of epidemic size, duration and optimal vaccination and ring culling strategies were obtained. This suggests that a geographic information approach may be useful where individual farm-level data are not available, to allow predictive analyses to be carried out regarding the likely spread of disease. This method can also be used for contingency planning in collaboration with policy makers to determine preferred control strategies in the event of a future outbreak of infectious disease in livestock.
Howell, Bryan; McIntyre, Cameron C
2016-06-01
Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
Learning-based stochastic object models for use in optimizing imaging systems
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.
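To illustrate the generic principal-variation step underlying such models, the following is a minimal sketch of fitting a PCA to corresponding training shape vectors and drawing constrained random samples from it; the paper's full GAD models additionally treat inter-structural centroid relations, which are omitted here, and all array shapes are placeholders.

```python
# Minimal sketch of sampling new anatomical instances from a PCA of training
# shape vectors (each row = one patient's structure, flattened).
import numpy as np

def fit_pca_shape_model(shapes, n_modes=10):
    """shapes: (n_patients, n_features) matrix of corresponding shape vectors."""
    mean = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = (s**2) / (len(shapes) - 1)          # variance captured by each mode
    return mean, Vt[:n_modes], var[:n_modes]

def sample_shape(mean, modes, var, rng, limit=3.0):
    """Draw one random shape, constraining each mode to +/- limit std devs."""
    b = np.clip(rng.standard_normal(len(var)), -limit, limit) * np.sqrt(var)
    return mean + b @ modes

rng = np.random.default_rng(0)
training = rng.normal(size=(40, 300))         # placeholder for real training shapes
mean, modes, var = fit_pca_shape_model(training)
phantom_organ = sample_shape(mean, modes, var, rng)
```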
NASA Astrophysics Data System (ADS)
Gonizzi Barsanti, S.; Guidi, G.
2017-02-01
Conservation of Cultural Heritage is a key issue, and structural changes and damage can influence the mechanical behaviour of artefacts and buildings. Finite Element Methods (FEM) are widely used for mechanical analysis and for modelling stress behaviour. The typical workflow involves the use of CAD 3D models made of Non-Uniform Rational B-spline (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of CH has been widely developed through reality-based approaches, but the resulting models are not suitable for direct use in FEA: the mesh in fact has to be converted to a volumetric one, and its density has to be reduced, since the computational complexity of an FEA grows exponentially with the number of nodes. The focus of this paper is to present a new method aiming to generate the most accurate 3D representation of a real artefact from highly accurate 3D digital models derived from reality-based techniques, maintaining the accuracy of the high-resolution polygonal models in the solid ones. The proposed approach is based on a wise use of retopology procedures and a transformation of this model to a mathematical one made of NURBS surfaces suitable for being processed by the volumetric meshers typically embedded in standard FEM packages. The strong simplification with little loss of consistency made possible by the retopology step is used to maintain as much coherence as possible between the original acquired mesh and the simplified model, while creating a topology that is more favourable for automatic NURBS conversion.
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; McEntee, Mark; Evanoff, Michael G.; Brennan, Patrick C.
2012-02-01
Aim: This study evaluates the assumption that the global impression is created from low spatial frequency components of posterior-anterior chest radiographs. Background: Expert radiologists precisely and rapidly allocate visual attention to pulmonary nodules in chest radiographs. Moreover, the most frequently accurate decisions are produced in the shortest viewing times; thus, the first hundred milliseconds of image perception seem to be crucial for correct interpretation. The medical image perception model assumes that during holistic analysis experts extract information from low spatial frequency (SF) components and create a mental map of suspicious locations for further inspection. The global impression results in flagged regions for detailed inspection with foveal vision. Method: Nine chest experts and nine non-chest radiologists viewed two sets of randomly ordered chest radiographs under two timing conditions: (1) 300 ms; (2) free search with unlimited time. The same radiographic cases of 25 normal and 25 abnormal digitized chest films constituted two image sets: low-pass filtered and unfiltered. Subjects were asked to detect nodules and rank their confidence level. MRMC ROC DBM analyses were conducted. Results: Experts had improved ROC AUC when high SF components were displayed (p=0.03) or when low SF components were viewed under unlimited time (p=0.02) compared with low SF 300 ms viewings. In contrast, non-chest radiologists showed no significant changes when high SF components were displayed under flash conditions compared with free search, or when low SF components were viewed under unlimited time compared with flash. Conclusion: The current medical image perception model accurately predicted performance for non-chest radiologists; however, chest experts appear to benefit from high SF features during the global impression.
2014-01-01
Background Accurate estimation of parameters of biochemical models is required to characterize the dynamics of molecular processes. This problem is intimately linked to identifying the most informative experiments for accomplishing such tasks. While significant progress has been made, effective experimental strategies for parameter identification and for distinguishing among alternative network topologies remain unclear. We approached these questions in an unbiased manner using a unique community-based approach in the context of the DREAM initiative (Dialogue for Reverse Engineering Assessment of Methods). We created an in silico test framework under which participants could probe a network with hidden parameters by requesting a range of experimental assays; results of these experiments were simulated according to a model of network dynamics only partially revealed to participants. Results We proposed two challenges; in the first, participants were given the topology and underlying biochemical structure of a 9-gene regulatory network and were asked to determine its parameter values. In the second challenge, participants were given an incomplete topology with 11 genes and asked to find three missing links in the model. In both challenges, a budget was provided to buy experimental data generated in silico with the model and mimicking the features of different common experimental techniques, such as microarrays and fluorescence microscopy. Data could be bought at any stage, allowing participants to implement an iterative loop of experiments and computation. Conclusions A total of 19 teams participated in this competition. The results suggest that the combination of state-of-the-art parameter estimation and a varied set of experimental methods using a few datasets, mostly fluorescence imaging data, can accurately determine parameters of biochemical models of gene regulation. However, the task is considerably more difficult if the gene network topology is not completely defined, as in challenge 2. Importantly, we found that aggregating independent parameter predictions and network topology across submissions creates a solution that can be better than the one from the best-performing submission. PMID:24507381
NASA Astrophysics Data System (ADS)
Hochmuth, K.; Gohl, K.; Leitchenkov, G. L.; Sauermilch, I.; Whittaker, J. M.; De Santis, L.; Olivo, E.; Uenzelmann-Neben, G.; Davy, B. W.
2017-12-01
Although the Southern Ocean plays a fundamental role in the global climate and ocean current system, paleo-ocean circulation models of the Southern Ocean suffer from missing boundary conditions. A more accurate representation of the geometry of the seafloor and its dynamics over long time-scales is key for enabling more precise reconstructions of the development of the paleo-currents, the paleo-environment and the Antarctic ice sheets. The accurate parameterisation of these models controls the meaning and implications of regional and global paleo-climate models. The dynamics of ocean currents in proximity to the continental margins is also controlled by the development of the regional seafloor morphology of the conjugate continental shelves, slopes and rises. The reassessment of all available reflection seismic and borehole data from Antarctica as well as its conjugate margins of Australia, New Zealand, South Africa and South America allows us to create paleobathymetric grids for various time slices during the Cenozoic. Those grids inform us about sediment distribution and volume as well as local sedimentation rates. The earliest targeted time slice, the Eocene/Oligocene Boundary, marks a significant turning point towards an icehouse climate. From the latest Eocene to the earliest Oligocene, the Southern Ocean changed fundamentally from a post-greenhouse to an icehouse environment with the establishment of a vast continental ice sheet on the Antarctic continent. With the calculated sediment distribution maps, we can evaluate the dynamics of the sedimentary cover as well as the development of structural obstacles such as oceanic plateaus and ridges. The ultimate aim of this project is, as a community-based effort, to create paleobathymetric grids at various time slices such as the Mid-Miocene Climatic Optimum and the Pliocene/Pleistocene, and eventually mimic the time steps used within the modelling community. The observation of sediment distribution and local sediment volumes opens the door towards more sophisticated paleo-topography studies of the Antarctic continent and more detailed studies of the paleo-circulation. Local paleo-water depths at the oceanic gateways or the position of paleo-shelf edges highly influence the regional circulation patterns, supporting more elaborate climate models.
Guimarães, João Antonio Matheus; Martin, Murphy P; da Silva, Flávio Ribeiro; Duarte, Maria Eugenia Leite; Cavalcanti, Amanda Dos Santos; Machado, Jamila Alessandra Perini; Mauffrey, Cyril; Rojas, David
2018-06-08
Percutaneous fixation of the acetabulum is a treatment option for select acetabular fractures. Intra-operative fluoroscopy is required, and despite various described imaging strategies, it is debatable which combination of fluoroscopic views provides the most accurate and reliable assessment of screw position. Using five synthetic pelvic models, an experimental setup was created in which the anterior acetabular columns were instrumented with screws in five distinct trajectories. Five fluoroscopic images were obtained of each model (Pelvic Inlet, Obturator Oblique, Iliac Oblique, Obturator Oblique/Outlet, and Iliac Oblique/Outlet). The images were presented to 32 pelvic and acetabular orthopaedic surgeons, who were asked to draw two conclusions regarding screw position: (1) whether the screw was intra-articular and (2) whether the screw was intraosseous in its distal course through the bony corridor. In the assessment of screw position relative to the hip joint, the accuracy of surgeons' responses ranged from 52% (iliac oblique/outlet) to 88% (obturator oblique), with surgeon confidence in the interpretation ranging from 60% (pelvic inlet) to 93% (obturator oblique) (P < 0.0001). In the assessment of intraosseous position of the screw, the accuracy of surgeons' responses ranged from 40% (obturator oblique/outlet) to 79% (iliac oblique/outlet), with surgeon confidence in the interpretation ranging from 66% (iliac oblique) to 88% (pelvic inlet) (P < 0.0001). The obturator oblique and obturator oblique/outlet views afforded the most accurate and reliable assessment of penetration into the hip joint, and intraosseous position of the screw was most accurately assessed with pelvic inlet and iliac oblique/outlet views.
NASA Astrophysics Data System (ADS)
Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.
2015-12-01
We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet-centered, Sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the Rosetta spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).
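As a zeroth-order illustration of turning an in situ density measurement into a production rate, the sketch below assumes spherically symmetric radial outflow at a constant speed; the study replaces exactly this symmetry assumption with the DSMC-derived empirical coma model, and the numbers used are illustrative.

```python
# Minimal sketch: zeroth-order production-rate estimate from a single in situ
# density measurement, assuming spherically symmetric radial outflow.
import math

def production_rate(n_local, r_m, v_gas=700.0):
    """Q [molecules/s] from local number density n_local [m^-3] measured at
    cometocentric distance r_m [m]; v_gas [m/s] is an assumed constant outflow speed."""
    return 4.0 * math.pi * r_m**2 * n_local * v_gas

# Example: density 1e13 m^-3 measured 30 km from the nucleus -> Q ~ 8e25 molecules/s
# for these assumed values.
Q = production_rate(1e13, 30e3)
```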
Chen, Huipeng; Poulard, David; Forman, Jason; Crandall, Jeff; Panzer, Matthew B
2018-07-04
Evaluating the biofidelity of pedestrian finite element models (PFEM) using postmortem human subjects (PMHS) is a challenge because differences in anthropometry between PMHS and PFEM could limit a model's capability to accurately capture cadaveric responses. Geometrical personalization via morphing can modify the PFEM geometry to match the specific PMHS anthropometry, which could alleviate this issue. In this study, the Total Human Model for Safety (THUMS) PFEM (Ver 4.01) was compared to the cadaveric response in vehicle-pedestrian impacts using geometrically personalized models. The AM50 THUMS PFEM was used as the baseline model, and 2 morphed PFEM were created to the anthropometric specifications of 2 obese PMHS used in a previous pedestrian impact study with a mid-size sedan. The same measurements as those obtained during the PMHS tests were calculated from the simulations (kinematics, accelerations, strains), and biofidelity metrics based on signal correlation (CORrelation and Analysis, CORA) were established to compare the response of the models to the experiments. Injury outcomes were predicted deterministically (through strain-based thresholds) and probabilistically (with injury risk functions) and compared with the injuries reported in the necropsy. The baseline model could not accurately capture all aspects of the PMHS kinematics, strain, and injury risks, whereas the morphed models reproduced a biofidelic response in terms of trajectory (CORA score = 0.927 ± 0.092), velocities (0.975 ± 0.027), accelerations (0.862 ± 0.072), and strains (0.707 ± 0.143). The personalized THUMS models also generally predicted injuries consistent with those identified during posttest autopsy. The study highlights the need to control for pedestrian anthropometry when validating pedestrian human body models against PMHS data. The information provided in the current study could be useful for improving model biofidelity for vehicle-pedestrian impact scenarios.
NASA Astrophysics Data System (ADS)
Rochford, Meghan; Black, Kenneth; Aleynik, Dmitry; Carpenter, Trevor
2017-04-01
The Scottish Environmental Protection Agency (SEPA) are currently implementing new regulations for consenting developments at new and pre-existing fish farms. Currently, a 15-day current record from multiple depths at one location near the site is required to run DEPOMOD, a depositional model used to determine the depositional footprint of waste material from fish farms, developed by Cromey et al. (2002). The present project involves modifying DEPOMOD to accept data from 3D hydrodynamic models to allow for a more accurate representation of the currents around the farms. Bathymetric data are key boundary conditions for accurate modelling of current velocity data. The aim of the project is to create a script that will use the outputs from FVCOM, a 3D hydrodynamic model developed by Chen et al. (2003), and input them into NewDEPOMOD (a new version of DEPOMOD with more accurately parameterised sediment transport processes) to determine the effect of a fish farm on the surrounding environment. This study compares current velocity data under two scenarios: the first using interpolated bathymetric data, and the second using bathymetric data collected during an echo sounding survey of the site. Theoretically, if the hydrodynamic model is of high enough resolution, the two scenarios should yield relatively similar results. However, the expected result is that the survey data will be of much higher resolution and therefore of better quality, producing more realistic velocity results. The improvement of bathymetric data will also improve sediment transport predictions in NewDEPOMOD. This work will determine the sensitivity of model predictions to bathymetric data accuracy at a range of sites with varying bathymetric complexity and thus give information on the potential costs and benefits of echo sounding survey data inputs. Chen, C., Liu, H. and Beardsley, R.C., 2003. An unstructured grid, finite-volume, three-dimensional, primitive equations ocean model: application to coastal ocean and estuaries. Journal of Atmospheric and Oceanic Technology, 20(1), pp.159-186. Cromey, C.J., Nickell, T.D. and Black, K.D., 2002. DEPOMOD—modelling the deposition and biological effects of waste solids from marine cage farms. Aquaculture, 214(1), pp.211-239.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match with beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation on a single beam collimator-by-collimator calculation. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small size collimators (10, 12.5, and 15mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10mm fields and smaller.
Towards a Generalizable Time Expression Model for Temporal Reasoning in Clinical Notes
Velupillai, Sumithra; Mowery, Danielle L.; Abdelrahman, Samir; Christensen, Lee; Chapman, Wendy W
2015-01-01
Accurate temporal identification and normalization is imperative for many biomedical and clinical tasks such as generating timelines and identifying phenotypes. A major natural language processing challenge is developing and evaluating a generalizable temporal modeling approach that performs well across corpora and institutions. Our long-term goal is to create such a model. We initiate our work on reaching this goal by focusing on temporal expression (TIMEX3) identification. We present a systematic approach to 1) generalize existing solutions for automated TIMEX3 span detection, and 2) assess similarities and differences by various instantiations of TIMEX3 models applied on separate clinical corpora. When evaluated on the 2012 i2b2 and the 2015 Clinical TempEval challenge corpora, our conclusion is that our approach is successful – we achieve competitive results for automated classification, and we identify similarities and differences in TIMEX3 modeling that will be informative in the development of a simplified, general temporal model. PMID:26958265
Data-driven Modeling of Metal-oxide Sensors with Dynamic Bayesian Networks
NASA Astrophysics Data System (ADS)
Gosangi, Rakesh; Gutierrez-Osuna, Ricardo
2011-09-01
We present a data-driven probabilistic framework to model the transient response of MOX sensors modulated with a sequence of voltage steps. Analytical models of MOX sensors are usually built based on the physico-chemical properties of the sensing materials. Although building these models provides an insight into the sensor behavior, they also require a thorough understanding of the underlying operating principles. Here we propose a data-driven approach to characterize the dynamical relationship between sensor inputs and outputs. Namely, we use dynamic Bayesian networks (DBNs), probabilistic models that represent temporal relations between a set of random variables. We identify a set of control variables that influence the sensor responses, create a graphical representation that captures the causal relations between these variables, and finally train the model with experimental data. We validated the approach on experimental data in terms of predictive accuracy and classification performance. Our results show that DBNs can accurately predict the dynamic response of MOX sensors, as well as capture the discriminatory information present in the sensor transients.
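As a minimal illustration of the data-driven idea (the response at one time slice conditioned on the previous response and the applied control input), the sketch below fits a single linear-Gaussian transition by least squares and rolls it out over a voltage program; the authors' DBNs are richer probabilistic models over several variables, and all variable names here are illustrative.

```python
# Minimal sketch of the DBN idea for a modulated MOX sensor: the response at
# time t depends on the previous response and the control input (one
# linear-Gaussian dependency fit by least squares).
import numpy as np

def fit_transition(resp, volt):
    """Fit resp[t] ~ a*resp[t-1] + b*volt[t] + c from one training sequence."""
    X = np.column_stack([resp[:-1], volt[1:], np.ones(len(resp) - 1)])
    coeffs, *_ = np.linalg.lstsq(X, resp[1:], rcond=None)
    return coeffs

def rollout(coeffs, r0, volt):
    """Predict a full transient given an initial response and the voltage program."""
    a, b, c = coeffs
    out = [r0]
    for v in volt[1:]:
        out.append(a * out[-1] + b * v + c)
    return np.array(out)
```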
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be used effectively in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary-condition-independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
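The POD step itself can be illustrated with a short sketch: dominant modes are extracted from snapshots of the temperature field via an SVD and the field is projected onto them. The Galerkin projection of the governing heat equation onto these modes, which yields the actual reduced-order system, is omitted here, and the snapshot matrix layout is an assumption.

```python
# Minimal sketch of the POD step: extract dominant modes from snapshots of the
# temperature field and project a field onto them.
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """snapshots: (n_dof, n_snapshots) temperature fields at successive times."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1     # modes needed to capture 'energy'
    return mean, U[:, :k]

def reduce_and_reconstruct(field, mean, modes):
    a = modes.T @ (field - mean.ravel())          # reduced coordinates (k values)
    return a, mean.ravel() + modes @ a            # low-order reconstruction
```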
Validation of the SWMF Magnetosphere: Fields and Particles
NASA Astrophysics Data System (ADS)
Welling, D. T.; Ridley, A. J.
2009-05-01
The Space Weather Modeling Framework has been developed at the University of Michigan to allow many independent space environment numerical models to be executed simultaneously and coupled together to create a more accurate, all-encompassing system. This work explores the capabilities of the framework when using the BATS-R-US MHD code, Rice Convection Model (RCM), the Ridley Ionosphere Model (RIM), and the Polar Wind Outflow Model (PWOM). Ten space weather events, ranging from quiet to extremely stormy periods, are modeled by the framework. All simulations are executed in a manner that mimics an operational environment where fewer resources are available and predictions are required in a timely manner. The results are compared against in-situ measurements of magnetic fields from GOES, Polar, Geotail, and Cluster satellites as well as MPA particle measurements from the LANL geosynchronous spacecraft. Various metrics are calculated to quantify performance. Results when using only two to all four components are compared to evaluate the increase in performance as new physics are included in the system.
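A minimal sketch of the kind of point-by-point skill metrics used in such comparisons (RMSE, prediction efficiency, linear correlation) is given below; the specific metric suite used in this validation study may differ.

```python
# Minimal sketch of simple skill metrics for comparing modeled and observed
# magnetic-field (or particle) time series sampled at matching times.
import numpy as np

def skill_metrics(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    err = model - obs
    rmse = np.sqrt(np.mean(err**2))
    pe = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)   # prediction efficiency
    r = np.corrcoef(model, obs)[0, 1]
    return {"rmse": rmse, "prediction_efficiency": pe, "correlation": r}
```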
Patient-specific finite element modeling for femoral bone augmentation
Basafa, Ehsan; Armiger, Robert S.; Kutzer, Michael D.; Belkoff, Stephen M.; Mears, Simon C.; Armand, Mehran
2015-01-01
The aim of this study was to provide a fast and accurate finite element (FE) modeling scheme for predicting bone stiffness and strength suitable for use within the framework of a computer-assisted osteoporotic femoral bone augmentation surgery system. The key parts of the system, i.e. preoperative planning and intraoperative assessment of the augmentation, demand the finite element model to be solved and analyzed rapidly. Available CT scans and mechanical testing results from nine pairs of osteoporotic femur bones, with one specimen from each pair augmented by polymethylmethacrylate (PMMA) bone cement, were used to create FE models and compare the results with experiments. Correlation values of R² = 0.72–0.95 were observed between the experiments and FEA results which, combined with the fast model convergence (~3 min for ~250,000 degrees of freedom), makes the presented modeling approach a promising candidate for the intended application of preoperative planning and intraoperative assessment of bone augmentation surgery. PMID:23375663
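Patient-specific FE models of this kind typically assign element-wise material properties by mapping CT Hounsfield units to apparent density and then to Young's modulus through a power law. The sketch below shows that common mapping; the calibration constants are illustrative placeholders and are not taken from this study.

```python
# Hedged sketch of the usual CT-to-material-property mapping for patient-specific
# FE models: Hounsfield units -> apparent density -> Young's modulus via a power
# law. All constants are illustrative placeholders, not the study's values.
import numpy as np

def hu_to_modulus(hu, rho_slope=0.0008, rho_offset=0.1, a=6850.0, b=1.49):
    """Return Young's modulus [MPa] per element from mean element HU."""
    rho = np.maximum(rho_slope * np.asarray(hu, float) + rho_offset, 0.01)  # g/cm^3
    return a * rho**b   # E = a * rho^b (power-law form commonly used in the literature)

element_hu = np.array([150.0, 600.0, 1200.0])
element_E = hu_to_modulus(element_hu)   # MPa, one value per finite element
```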
Analysis and modeling of photomask edge effects for 3D geometries and the effect on process window
NASA Astrophysics Data System (ADS)
Miller, Marshal A.; Neureuther, Andrew R.
2009-03-01
Simulation was used to explore boundary layer models for 1D and 2D patterns that would be appropriate for fast CAD modeling of physical effects during design. FDTD simulation was used to compare rigorous thick mask modeling to a thin mask approximation (TMA). When features are large, edges can be viewed as independent and modeled as separate from one another, but for small mask features, edges experience cross-talk. For attenuating phase-shift masks, interaction distances as large as 150 nm were observed. Polarization effects are important for accurate EMF models. Due to polarization effects, the edge perturbations at line ends differ from those at a perpendicular edge. For a mask designed to be real, the 90° transmission created at edges produces an asymmetry through focus, which is also polarization dependent. Thick mask fields are calculated using TEMPEST and Panoramic Technologies software. Fields are then analyzed in the near field and as on-wafer CDs to examine deviations from the TMA.
Evaluation of a new disposable silicon limbal relaxing incision knife by experienced users.
Albanese, John; Dugue, Geoffrey; Parvu, Valentin; Bajart, Ann M; Lee, Edwin
2009-12-21
Previous research has suggested that the silicon BD Atomic Edge knife has superior performance characteristics when compared to a metal knife and performance similar to a diamond knife when making various incisions. This study was designed to determine whether a silicon accurate depth knife has equivalent performance characteristics when compared to a diamond limbal relaxing incision (LRI) knife and superior performance characteristics when compared to a steel accurate depth knife when creating limbal relaxing incisions. Sixty-five ophthalmic surgeons with limbal relaxing incision experience created limbal relaxing incisions in ex vivo porcine eyes with silicon and steel accurate depth knives and diamond LRI knives. The ophthalmic surgeons rated multiple performance characteristics of the knives on Visual Analog Scales. The observed differences between the silicon knife and diamond knife were found to be insignificant. The mean ratio between the performance of the silicon knife and the diamond knife was shown to be greater than 90% (with 95% confidence). The silicon knife's mean performance was significantly higher than the performance of the steel knife for all characteristics (p < .05). For experienced users, the silicon accurate depth knife was found to be equivalent in performance to the diamond LRI knife and superior to the steel accurate depth knife when making limbal relaxing incisions in ex vivo porcine eyes. Disposable silicon LRI knives may be an alternative to diamond LRI knives.
Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang
1999-01-01
Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
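A minimal sketch of the kind of XML envelope such a document model implies is shown below: the original report text is retained and structured findings point back into it by character offsets. The element and attribute names are illustrative only and are not the paper's actual DTD.

```python
# Hedged sketch: an XML document that keeps the original report text and adds a
# structured component whose elements reference spans of that text. All element
# and attribute names are illustrative, not the published DTD.
import xml.etree.ElementTree as ET

report_text = "Chest x-ray shows a small right pleural effusion. No pneumothorax."

doc = ET.Element("report")
ET.SubElement(doc, "text").text = report_text
structured = ET.SubElement(doc, "structured")
start = report_text.index("pleural effusion")
finding = ET.SubElement(structured, "finding", {
    "concept": "pleural effusion",
    "certainty": "present",
    "start": str(start),
    "end": str(start + len("pleural effusion")),
})
ET.SubElement(finding, "modifier", {"type": "laterality"}).text = "right"

print(ET.tostring(doc, encoding="unicode"))
```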
Smola, Matthew J; Rice, Greggory M; Busan, Steven; Siegfried, Nathan A; Weeks, Kevin M
2015-11-01
Selective 2'-hydroxyl acylation analyzed by primer extension (SHAPE) chemistries exploit small electrophilic reagents that react with 2'-hydroxyl groups to interrogate RNA structure at single-nucleotide resolution. Mutational profiling (MaP) identifies modified residues by using reverse transcriptase to misread a SHAPE-modified nucleotide and then counting the resulting mutations by massively parallel sequencing. The SHAPE-MaP approach measures the structure of large and transcriptome-wide systems as accurately as can be done for simple model RNAs. This protocol describes the experimental steps, implemented over 3 d, that are required to perform SHAPE probing and to construct multiplexed SHAPE-MaP libraries suitable for deep sequencing. Automated processing of MaP sequencing data is accomplished using two software packages. ShapeMapper converts raw sequencing files into mutational profiles, creates SHAPE reactivity plots and provides useful troubleshooting information. SuperFold uses these data to model RNA secondary structures, identify regions with well-defined structures and visualize probable and alternative helices, often in under 1 d. SHAPE-MaP can be used to make nucleotide-resolution biophysical measurements of individual RNA motifs, rare components of complex RNA ensembles and entire transcriptomes.
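As a simplified illustration of how per-nucleotide MaP mutation rates become SHAPE reactivities, the sketch below computes roughly (modified − untreated)/denatured followed by a crude scale normalization; ShapeMapper's actual pipeline adds read-depth filtering and a more careful normalization scheme, so this is not its implementation.

```python
# Hedged sketch: combine per-nucleotide mutation rates from the three MaP
# samples into approximate SHAPE reactivities. The normalization here (mean of
# the top decile) is a simplification of the usual scheme.
import numpy as np

def shape_reactivity(mut_modified, mut_untreated, mut_denatured, min_denat=1e-4):
    raw = (np.asarray(mut_modified) - np.asarray(mut_untreated)) / np.maximum(
        np.asarray(mut_denatured), min_denat)
    finite = raw[np.isfinite(raw)]
    top = np.percentile(finite, 90)
    scale = finite[finite >= top].mean()      # crude scaling so high reactivities ~1
    return raw / scale
```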
NASA Astrophysics Data System (ADS)
Sankovich, Vladimir
1998-12-01
The goal of this paper is to build a consistent physical theory of the dynamics of the bat-ball interaction. This requires creating realistic models for both the softball bat and the softball. Some of the features of these models are known phenomenologically, from experiments conducted in our laboratory; others are introduced and computed from first principles here for the first time. Both interacting objects are treated from the viewpoint of the theory of elasticity, and it is shown how a computer can be used to accurately calculate all the relevant characteristics of bat-ball collisions. It is also shown how the major elastic parameters of the material constituting the interior of a softball can be determined using existing experimental data. These parameters, such as the Young's modulus, the Poisson ratio and the damping coefficient, are vital for an accurate description of the ball's dynamics. We demonstrate how the existing theories of the elastic behavior of solid bars and hollow shells can be augmented to simplify the resulting equations and make the subsequent computer analysis feasible. The standard system of fourth-order PDEs is reduced to a second-order system through the inclusion of the usually ignored effects of shear forces in the bat.
Characterising Tidal Flow Within an Energetic Tidal Environment
NASA Astrophysics Data System (ADS)
Neill, S. P.; Goward Brown, A.; Lewis, M. J.
2016-02-01
The Pentland Firth is a highly energetic and complex tidal strait separating the north of Scotland from the Orkney Islands and is a key location for tidal energy exploitation. Topographic features, including islands and headlands, combined with bathymetric complexities within the Pentland Firth, create turbulent hydrodynamic flows which are difficult to observe. Site selection in tidal energy environments has historically focused on tidal current magnitude. Without consideration of the more complex hydrodynamics of tidal energy environments, tidal energy developers may miss the opportunity to tune their devices or create environment-specific tidal energy converters in order to harness the greatest potential from a site. Fully characterising these tidal energy environments ensures economic energy extraction. Understanding the interaction of energy extraction with the environment will reduce uncertainty in site selection and allow mitigation of any potential environmental concerns. We apply the 3D ROMS model to the Pentland Firth with the aim of resolving uncertainties within tidal energy resource assessment. Flow magnitudes and directions are examined with a focus on tidal phasing and asymmetry and application to sediment dynamics. Using the ROMS model, it is possible to determine the extent to which the tidal resource varies temporally and spatially with tidal energy extraction. Accurately modelling the tidal dynamics within this environment ensures that potential consequences of tidal energy extraction on the surrounding environment are better understood.
Crop calendars for the US, USSR, and Canada in support of the early warning project
NASA Technical Reports Server (NTRS)
Hodges, T.; Sestak, M. L.; Trenchard, M. H. (Principal Investigator)
1980-01-01
New crop calendars are produced for U.S. regions where several years of periodic growth stage observations are available on a CRD basis. Preexisting crop calendars from the LACIE are also collected as are U.S. crop calendars currently being created for the Foreign Commodities Production Forecast project. For the U.S.S.R. and Canada, no new crop calendars are created because no new data are available. Instead, LACIE crop calendars are compared against simulated normal daily temperatures and against the Robertson wheat and Williams barley phenology models run on the simulated normal temperatures. Severe inconsistencies are noted and discussed. For the U.S.S.R., spring and fall planting dates can probably be estimated accurately from satellite or meteorological data. For the starter model problem, the Feyerherm spring wheat model is recommended for spring planted small grains, and the results of an analysis are presented. For fall planted small grains, use of normal planting dates supplemented by spectral observation of an early stage is recommended. The importance of nonmeteorological factors as they pertain to meteorological factors in determining fall planting is discussed. Crop calendar data available at the Johnson Space Center for the U.S., U.S.S.R., Canada, and other countries are inventoried.
A fuzzy set preference model for market share analysis
NASA Technical Reports Server (NTRS)
Turksen, I. B.; Willson, Ian A.
1992-01-01
Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).
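A minimal sketch of the core mechanism follows: ordinal linguistic ratings are mapped to fuzzy memberships with triangular functions, attribute-level values are combined linearly per individual, and market share is read off first-choice counts. The term definitions, weights and aggregation rule are illustrative, not those of the article.

```python
# Hedged sketch of a fuzzy-set preference model: triangular membership functions
# over ordinal ratings, a linear per-individual combination, and market share
# from first choices. All term definitions and weights are illustrative.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    x = np.asarray(x, float)
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0, 1)

# Linguistic terms for an ordinal 1-5 attribute rating; only "good" drives the
# simple preference score below.
TERMS = {"poor": (1, 1, 3), "average": (2, 3, 4), "good": (3, 5, 5)}

def preference(ratings, weights):
    """Overall preference of one consumer for one product."""
    goodness = [triangular(r, *TERMS["good"]) for r in ratings]
    return float(np.dot(weights, goodness))

def market_share(all_ratings, weights):
    """all_ratings: (n_consumers, n_products, n_attributes) ordinal ratings."""
    prefs = np.array([[preference(p, weights) for p in consumer] for consumer in all_ratings])
    choices = prefs.argmax(axis=1)
    return np.bincount(choices, minlength=all_ratings.shape[1]) / len(all_ratings)
```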
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal test. A viewgraph presentation on a model-test-based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
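As an illustration of the measurement side of this process, the sketch below estimates a frequency response function from a measured excitation force and an acceleration response using Welch auto- and cross-spectra (the standard H1 estimator); the pseudo-model construction and the iterative load identification described above go well beyond this single step.

```python
# Minimal sketch of the standard H1 frequency-response estimate from a measured
# excitation force and an acceleration response, using Welch spectra.
import numpy as np
from scipy import signal

def frf_h1(force, accel, fs, nperseg=4096):
    """Return frequency vector, H1 FRF (accel/force), and coherence."""
    f, Pxx = signal.welch(force, fs=fs, nperseg=nperseg)
    _, Pxy = signal.csd(force, accel, fs=fs, nperseg=nperseg)
    _, coh = signal.coherence(force, accel, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx, coh
```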
Wijenayake, Udaya; Park, Soon-Yong
2017-01-01
Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of the human body is a highly discussed topic in external beam radiotherapy treatment. Errors in target/normal tissue delineation and dose calculation, and the increased exposure of healthy tissue to high radiation doses, are some of the adverse consequences of inaccurate tracking of respiratory motion. Many related works have been introduced for respiratory motion modeling, but a majority of them depend heavily on radiography/fluoroscopy imaging, wearable markers or surgical node implanting techniques. In this article, we propose a new respiratory motion tracking approach that exploits the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. Then, this model is utilized for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth frame registration technique to limit the measuring area to an anatomically consistent region, which helps to handle patient movements during treatment. We achieved a 0.97 correlation compared to a spirometer and a 0.53 mm average error with a laser line scanning result as the ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model that describes the movements of internal tumors. PMID:28792468
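A minimal Python sketch of the PCA step described above, assuming scikit-learn and synthetic depth frames in place of real RGB-D data; the frame size, number of components, and breathing signal are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, h, w = 200, 32, 32
t = np.linspace(0, 20, n_frames)
breathing = np.sin(2 * np.pi * 0.25 * t)                  # roughly 15 breaths per minute (assumed)
pattern = rng.random(h * w)                               # spatial pattern of chest-wall motion
frames = breathing[:, None] * pattern + 0.05 * rng.standard_normal((n_frames, h * w))

pca = PCA(n_components=3)                                 # keep the dominant motion modes
scores = pca.fit_transform(frames)                        # per-frame coordinates in the model
denoised = pca.inverse_transform(scores)                  # spatio-temporally smoothed depth frames
respiratory_signal = scores[:, 0]                         # external motion surrogate (sign is arbitrary)
print(denoised.shape, abs(np.corrcoef(respiratory_signal, breathing)[0, 1]))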
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present as an aircraft comes to rest during ground roll. To validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of the unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses were performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how it can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when white noise is added to the FDR and estimated parameters and when crosswind is introduced to the simulation. The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
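A much-simplified planar version of the force-balance idea, sketched in Python; the mass, lift, drag, thrust, and acceleration values are illustrative assumptions, and the study's full six-degree-of-freedom formulation is not reproduced here.

g = 9.81             # m/s^2
m = 60000.0          # aircraft mass, kg (assumed)
a_x = -2.2           # FDR longitudinal acceleration during ground roll, m/s^2 (assumed)
drag = 18000.0       # aerodynamic drag, N (estimated a priori)
thrust = 5000.0      # residual idle thrust, N (estimated a priori)
lift = 110000.0      # lift still carried by the wings, N (estimated a priori)

normal_force = m * g - lift                  # weight actually on the wheels
braking_force = -m * a_x - drag + thrust     # retarding force attributable to tyre-runway friction
mu = braking_force / normal_force            # instantaneous coefficient of friction
print(round(mu, 3))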
Pediatric laryngeal simulator using 3D printed models: A novel technique.
Kavanagh, Katherine R; Cote, Valerie; Tsui, Yvonne; Kudernatsch, Simon; Peterson, Donald R; Valdez, Tulio A
2017-04-01
Simulation to acquire and test technical skills is an essential component of medical education and residency training in both surgical and nonsurgical specialties. High-quality simulation education relies on the availability, accessibility, and reliability of models. The objective of this work was to describe a practical pediatric laryngeal model for use in otolaryngology residency training. Ideally, this model would be low-cost, have tactile properties resembling human tissue, and be reliably reproducible. Pediatric laryngeal models were developed using two manufacturing methods: direct three-dimensional (3D) printing of anatomical models and casted anatomical models using 3D-printed molds. Polylactic acid, acrylonitrile butadiene styrene, and high-impact polystyrene (HIPS) were used for the directly printed models, whereas a silicone elastomer (SE) was used for the casted models. The models were evaluated for anatomic quality, ease of manipulation, hardness, and cost of production. A tissue likeness scale was created to validate the simulation model. Fleiss' Kappa rating was performed to evaluate interrater agreement, and analysis of variance was performed to evaluate differences among the materials. The SE provided the most anatomically accurate models, with the tactile properties allowing for surgical manipulation of the larynx. Direct 3D printing was more cost-effective than the SE casting method but did not possess the material properties and tissue likeness necessary for surgical simulation. The SE models of the pediatric larynx created from a casting method demonstrated high quality anatomy, tactile properties comparable to human tissue, and easy manipulation with standard surgical instruments. Their use in a reliable, low-cost, accessible, modular simulation system provides a valuable training resource for otolaryngology residents. N/A. Laryngoscope, 127:E132-E137, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
CD-SEM real time bias correction using reference metrology based modeling
NASA Astrophysics Data System (ADS)
Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.
2018-03-01
Accuracy of patterning impacts yield, IC performance and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies. Reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM. Both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling is in finding a robust correlation between SEM waveform features and the bias of CD-SEM, as well as in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of the state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
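The following Python sketch shows one plausible form of such an empirical bias model, regressing the reference-minus-SEM CD difference on waveform-derived features; the feature names, synthetic data, and plain linear form are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 120
sidewall_angle = rng.uniform(84, 90, n)            # degrees, from waveform analysis (assumed feature)
corner_rounding = rng.uniform(1, 5, n)             # nm, from waveform analysis (assumed feature)
true_bias = 0.4 * (90 - sidewall_angle) + 0.3 * corner_rounding
rm_minus_sem = true_bias + rng.normal(0, 0.3, n)   # reference-metrology calibration data, nm

X = np.column_stack([sidewall_angle, corner_rounding])
model = LinearRegression().fit(X, rm_minus_sem)

# Real-time use: correct a raw SEM CD with the predicted bias for its waveform features.
raw_sem_cd = 45.0                                   # nm (assumed)
features = np.array([[87.0, 2.5]])
corrected_cd = raw_sem_cd + model.predict(features)[0]
print(round(corrected_cd, 2))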
Essayed, Walid I; Unadkat, Prashin; Hosny, Ahmed; Frisken, Sarah; Rassi, Marcio S; Mukundan, Srinivasan; Weaver, James C; Al-Mefty, Ossama; Golby, Alexandra J; Dunn, Ian F
2018-03-02
OBJECTIVE Endoscopic endonasal approaches are increasingly performed for the surgical treatment of multiple skull base pathologies. Preventing postoperative CSF leaks remains a major challenge, particularly in extended approaches. In this study, the authors assessed the potential use of modern multimaterial 3D printing and neuronavigation to help model these extended defects and develop specifically tailored prostheses for reconstructive purposes. METHODS Extended endoscopic endonasal skull base approaches were performed on 3 human cadaveric heads. Preprocedure and intraprocedure CT scans were completed and were used to segment and design extended and tailored skull base models. Multimaterial models with different core/edge interfaces were 3D printed for implantation trials. A novel application of the intraoperative landmark acquisition method was used to transfer the navigation, helping to tailor the extended models. RESULTS Prostheses were created based on preoperative and intraoperative CT scans. The navigation transfer offered sufficiently accurate data to tailor the preprinted extended skull base defect prostheses. Successful implantation of the skull base prostheses was achieved in all specimens. The progressive flexibility gradient of the models' edges offered the best compromise for easy intranasal maneuverability, anchoring, and structural stability. Prostheses printed based on intraprocedure CT scans were accurate in shape but slightly undersized. CONCLUSIONS Preoperative 3D printing of patient-specific skull base models is achievable for extended endoscopic endonasal surgery. The careful spatial modeling and the use of a flexibility gradient in the design helped achieve the most stable reconstruction. Neuronavigation can help tailor preprinted prostheses.
Modelling surface-water depression storage in a Prairie Pothole Region
Hay, Lauren E.; Norton, Parker A.; Viger, Roland; Markstrom, Steven; Regan, R. Steven; Vanderhoof, Melanie
2018-01-01
In this study, the Precipitation-Runoff Modelling System (PRMS) was used to simulate changes in surface-water depression storage in the 1,126-km² Upper Pipestem Creek basin located within the Prairie Pothole Region of North Dakota, USA. The Prairie Pothole Region is characterized by millions of small water bodies (or surface-water depressions) that provide numerous ecosystem services and are considered an important contribution to the hydrologic cycle. The Upper Pipestem PRMS model was extracted from the U.S. Geological Survey's (USGS) National Hydrologic Model (NHM), developed to support consistent hydrologic modelling across the conterminous United States. The Geospatial Fabric database, created for the USGS NHM, contains hydrologic model parameter values derived from datasets that characterize the physical features of the entire conterminous United States for 109,951 hydrologic response units. Each hydrologic response unit in the Geospatial Fabric was parameterized using aggregated surface-water depression area derived from the National Hydrography Dataset Plus, an integrated suite of application-ready geospatial datasets. This paper presents a calibration strategy for the Upper Pipestem PRMS model that uses normalized lake elevation measurements to calibrate the parameters influencing simulated fractional surface-water depression storage. Results indicate that including measurements indicative of changes in surface-water depression storage in the calibration procedure produced accurate simulation of those storage changes in the water balance. Regionalized parameterization of the USGS NHM will require a proxy for changes in surface storage to accurately parameterize surface-water depression storage.
Predicting early cognitive decline in newly-diagnosed Parkinson's patients: A practical model.
Hogue, Olivia; Fernandez, Hubert H; Floden, Darlene P
2018-06-19
To create a multivariable model to predict early cognitive decline among de novo patients with Parkinson's disease, using brief, inexpensive assessments that are easily incorporated into clinical flow. Data for 351 drug-naïve patients diagnosed with idiopathic Parkinson's disease were obtained from the Parkinson's Progression Markers Initiative. Baseline demographic, disease history, motor, and non-motor features were considered as candidate predictors. Best subsets selection was used to determine the multivariable baseline symptom profile that most accurately predicted individual cognitive decline within three years. Eleven per cent of the sample experienced cognitive decline. The final logistic regression model predicting decline included five baseline variables: verbal memory retention, right-sided bradykinesia, years of education, subjective report of cognitive impairment, and REM behavior disorder. Model discrimination was good (optimism-adjusted concordance index = .749). The associated nomogram provides a tool to determine individual patient risk of meaningful cognitive change in the early stages of the disease. Through the consideration of easily-implemented or routinely-gathered assessments, we have identified a multidimensional baseline profile and created a convenient, inexpensive tool to predict cognitive decline in the earliest stages of Parkinson's disease. The use of this tool would generate prediction at the individual level, allowing clinicians to tailor medical management for each patient and identify at-risk patients for clinical trials aimed at disease modifying therapies. Copyright © 2018. Published by Elsevier Ltd.
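For readers who want the mechanics of such a model, here is a hedged Python sketch of a five-predictor logistic regression of the same general shape; the synthetic cohort and fitted coefficients are illustrative and do not reproduce the published nomogram.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 351
X = np.column_stack([
    rng.normal(80, 15, n),        # verbal memory retention, % (assumed scale)
    rng.normal(5, 2, n),          # right-sided bradykinesia score (assumed scale)
    rng.normal(15, 3, n),         # years of education
    rng.integers(0, 2, n),        # subjective cognitive impairment (0/1)
    rng.integers(0, 2, n),        # REM sleep behavior disorder (0/1)
])
# Synthetic outcome: roughly 11% of patients decline, loosely tied to the predictors.
logit = -3.2 - 0.03 * (X[:, 0] - 80) + 0.2 * X[:, 1] - 0.1 * (X[:, 2] - 15) + 0.8 * X[:, 3] + 0.7 * X[:, 4]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
new_patient = np.array([[65, 8, 12, 1, 1]])
print(f"predicted 3-year risk of cognitive decline: {model.predict_proba(new_patient)[0, 1]:.2f}")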
Basu, Sanjay; Landon, Bruce E; Song, Zirui; Bitton, Asaf; Phillips, Russell S
2015-02-01
Primary care practice transformations require tools for policymakers and practice managers to understand the financial implications of workforce and reimbursement changes. To create a simulation model to understand how practice utilization, revenues, and expenses may change in the context of workforce and financing changes. We created a simulation model estimating clinic-level utilization, revenues, and expenses using user-specified or public input data detailing practice staffing levels, salaries and overhead expenditures, patient characteristics, clinic workload, and reimbursements. We assessed whether the model could accurately estimate clinic utilization, revenues, and expenses across the nation using labor compensation, medical expenditure, and reimbursements databases, as well as cost and revenue data from independent practices of varying size. We demonstrated the model's utility in a simulation of how utilization, revenue, and expenses would change after hiring a nurse practitioner (NP) compared with hiring a part-time physician. Modeled practice utilization and revenue closely matched independent national utilization and reimbursement data, disaggregated by patient age, sex, race/ethnicity, insurance status, and ICD diagnostic group; the model was able to estimate independent revenue and cost estimates, with highest accuracy among larger practices. A demonstration analysis revealed that hiring an NP to work independently with a subset of patients diagnosed with diabetes or hypertension could increase net revenues, if NP visits involve limited MD consultation or if NP reimbursement rates increase. A model of utilization, revenue, and expenses in primary care practices may help policymakers and managers understand the implications of workforce and financing changes.
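As a toy illustration of the clinic-level bookkeeping such a simulation performs, the following Python sketch compares net revenue under two staffing scenarios; all salaries, visit volumes, reimbursement rates, and the overhead fraction are assumptions, not the model's calibrated inputs.

def net_revenue(visits_per_year, reimbursement_per_visit, clinician_cost, overhead_rate=0.45):
    # Annual revenue from visits, minus clinician compensation and proportional overhead.
    revenue = visits_per_year * reimbursement_per_visit
    expenses = clinician_cost + overhead_rate * revenue
    return revenue - expenses

scenarios = {
    "hire NP (independent panel)": net_revenue(3600, 90, 115000),
    "hire 0.5 FTE physician":      net_revenue(2200, 100, 130000),
}
for name, net in scenarios.items():
    print(f"{name}: net ${net:,.0f}/year")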
Numerical Modeling of the Photothermal Processing for Bubble Forming around Nanowire in a Liquid
Chaari, Anis; Giraud-Moreau, Laurence
2014-01-01
An accurate computation of the temperature is an important factor in determining the shape of a bubble around a nanowire immersed in a liquid. The study of the physical phenomenon consists of solving a coupled photothermal problem between the light and the nanowire. The numerical multiphysics model is used to study the variations of the temperature and the shape of the bubble created by illumination of the nanowire. An optimization process, including an adaptive remeshing scheme, is used to solve the problem through a finite element method. The shape evolution of the bubble is studied taking into account the physical and geometrical parameters of the nanowire. The relation between the sizes and shapes of the bubble and the nanowire is deduced. PMID:24795538
The Partisan Brain: An Identity-Based Model of Political Belief.
Van Bavel, Jay J; Pereira, Andrea
2018-03-01
Democracies assume accurate knowledge by the populace, but the human attraction to fake and untrustworthy news poses a serious problem for healthy democratic functioning. We articulate why and how identification with political parties - known as partisanship - can bias information processing in the human brain. There is extensive evidence that people engage in motivated political reasoning, but recent research suggests that partisanship can alter memory, implicit evaluation, and even perceptual judgments. We propose an identity-based model of belief for understanding the influence of partisanship on these cognitive processes. This framework helps to explain why people place party loyalty over policy, and even over truth. Finally, we discuss strategies for de-biasing information processing to help to create a shared reality across partisan divides. Copyright © 2018 Elsevier Ltd. All rights reserved.
Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling
NASA Astrophysics Data System (ADS)
Saksena, S.; Dey, S.; Merwade, V.
2016-12-01
Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.
Multiscale solute transport upscaling for a three-dimensional hierarchical porous medium
NASA Astrophysics Data System (ADS)
Zhang, Mingkan; Zhang, Ye
2015-03-01
A laboratory-generated hierarchical, fully heterogeneous aquifer model (FHM) provides a reference for developing and testing an upscaling approach that integrates large-scale connectivity mapping with flow and transport modeling. Based on the FHM, three hydrostratigraphic models (HSMs) that capture lithological (static) connectivity at different resolutions are created, each corresponding to a sedimentary hierarchy. Under increasing system lnK variances (0.1, 1.0, 4.5), flow upscaling is first conducted to calculate an equivalent hydraulic conductivity for each individual connectivity unit of the HSMs. Given the computed flow fields, an instantaneous, conservative tracer test is simulated by all models. For the HSMs, two upscaling formulations are tested based on the advection-dispersion equation (ADE), implementing space- versus time-dependent macrodispersivity. Comparing flow and transport predictions of the HSMs against those of the reference model, HSMs capturing connectivity at increasing resolutions are more accurate, although upscaling errors increase with system variance. Results suggest: (1) by explicitly modeling connectivity, an enhanced degree of freedom in representing dispersion can improve the ADE-based upscaled models by capturing non-Fickian transport of the FHM; (2) when connectivity is sufficiently resolved, the type of data conditioning used to model transport becomes less critical; data conditioning, however, is influenced by the prediction goal; (3) when the aquifer is weakly to moderately heterogeneous, the upscaled models adequately capture the transport simulation of the FHM, despite the existence of hierarchical heterogeneity at smaller scales; when the aquifer is strongly heterogeneous, the upscaled models become less accurate because lithological connectivity cannot adequately capture preferential flows; (4) three-dimensional transport connectivities of the hierarchical aquifer differ quantitatively from those analyzed for two-dimensional systems.
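For orientation, the two upscaled formulations correspond schematically to one-dimensional advection-dispersion equations whose longitudinal macrodispersivity grows with either travel distance or travel time (generic symbols, not the study's exact notation):

\[
\frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x}
  = \frac{\partial}{\partial x}\!\left[\alpha_L(x)\,v\,\frac{\partial C}{\partial x}\right]
\qquad \text{or} \qquad
\frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x}
  = \frac{\partial}{\partial x}\!\left[\alpha_L(t)\,v\,\frac{\partial C}{\partial x}\right],
\]

where C is the resident concentration, v the unit's equivalent mean velocity, and \alpha_L the longitudinal macrodispersivity.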
Point process models for localization and interdependence of punctate cellular structures.
Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F
2016-07-01
Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work on developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER), and to consider the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined are dependent on ER position or inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures. © 2016 International Society for Advancement of Cytometry.
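A minimal Python sketch of the kind of spatial-dependence check underlying such point process analyses: the mean nearest-neighbour distance among puncta of one class is compared with a complete-spatial-randomness baseline. The synthetic coordinates, cell size, and number of replicates are assumptions.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
puncta = rng.uniform(0, 20, size=(300, 3))           # puncta positions in a 20-micrometre cube (assumed)

tree = cKDTree(puncta)
d_obs, _ = tree.query(puncta, k=2)                   # k=2: nearest neighbour other than the point itself
observed = d_obs[:, 1].mean()

# Complete-spatial-randomness baseline: uniformly re-sampled points at the same intensity.
baseline = []
for _ in range(100):
    pts = rng.uniform(0, 20, size=puncta.shape)
    d, _ = cKDTree(pts).query(pts, k=2)
    baseline.append(d[:, 1].mean())
print(observed, np.mean(baseline))                   # similar values suggest no intra-class clustering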
Forecasting the spatial transmission of influenza in the United States.
Pei, Sen; Kandula, Sasikiran; Yang, Wan; Shaman, Jeffrey
2018-03-13
Recurrent outbreaks of seasonal and pandemic influenza create a need for forecasts of the geographic spread of this pathogen. Although it is well established that the spatial progression of infection is largely attributable to human mobility, difficulty obtaining real-time information on human movement has limited its incorporation into existing infectious disease forecasting techniques. In this study, we develop and validate an ensemble forecast system for predicting the spatiotemporal spread of influenza that uses readily accessible human mobility data and a metapopulation model. In retrospective state-level forecasts for 35 US states, the system accurately predicts local influenza outbreak onset (i.e., spatial spread, defined as the week that local incidence increases above a baseline threshold) up to 6 weeks in advance of this event. In addition, the metapopulation prediction system forecasts influenza outbreak onset, peak timing, and peak intensity more accurately than isolated location-specific forecasts. The proposed framework could be applied to emergent respiratory viruses and, with appropriate modifications, other infectious diseases.
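The following Python sketch shows a metapopulation transmission model of the general kind described, with SIR dynamics in each location coupled by a mobility matrix; the population sizes, rates, and mobility values are illustrative assumptions rather than the fitted forecast system.

import numpy as np

n_loc = 5
N = np.full(n_loc, 1_000_000.0)
S, I, R = N - 10.0, np.full(n_loc, 10.0), np.zeros(n_loc)
beta, gamma = 0.5, 1 / 3.5                      # per-day transmission and recovery rates (assumed)

M = np.full((n_loc, n_loc), 0.02)               # daily fraction of people mixing between locations (assumed)
np.fill_diagonal(M, 1 - 0.02 * (n_loc - 1))     # rows sum to one

for day in range(120):
    I_eff = M.T @ I                             # infectious pressure felt in each location
    N_eff = M.T @ N
    new_inf = beta * S * I_eff / N_eff
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(np.round(R / N, 3))                       # final attack rate per location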
Measurements of continuum lowering in solid-density plasmas created from elements and compounds
Ciricosta, O.; Vinko, S. M.; Barbrel, B.; ...
2016-05-23
The effect of a dense plasma environment on the energy levels of an embedded ion is usually described in terms of the lowering of its continuum level. For strongly coupled plasmas, the phenomenon is intimately related to the equation of state; hence, an accurate treatment is crucial for most astrophysical and inertial-fusion applications, where the case of plasma mixtures is of particular interest. In this study, we present an experiment showing that the standard density-dependent analytical models are inadequate to describe solid-density plasmas at the temperatures studied, where the reduction of the binding energies for a given species is unaffected by the different plasma environment (ion density) in either the element or compounds of that species, and can be accurately estimated by calculations only involving the energy levels of an isolated neutral atom. Lastly, the results have implications for the standard approaches to equation of state calculations.
Contaminant transport in wetland flows with bulk degradation and bed absorption
NASA Astrophysics Data System (ADS)
Wang, Ping; Chen, G. Q.
2017-09-01
Ecological degradation and absorption are ubiquitous and exert considerable influence on contaminant transport in natural and constructed wetland flows. This creates an increased demand for models that accurately characterize the spatial concentration distribution of the transport process. This work extends a method of spatial concentration moments by considering the non-uniform longitudinal solute displacements along the vertical direction, and analytically determines the spatial concentration distribution in the very initial stage after source release, including the effects of bulk degradation and bed absorption. The present method is shown, through a convergence analysis of Hermite polynomials, to yield a more accurate prediction, especially in the initial stage. Results reveal that the contaminant cloud becomes more contracted and is reshaped by bed absorption as the damping factor of the wetland flow increases. Vertical concentration variation, especially in the downstream portion of the contaminant cloud, remains large even at asymptotically large times. The spatial concentration evolution given by the extended method, rather than only the mean concentration of previous studies, has potential for various applications involving contaminant transport under strict environmental standards.
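Schematically (generic symbols; not necessarily the authors' exact formulation), the governing problem is a depth-resolved advection-diffusion equation with first-order bulk degradation and a bed-absorption boundary condition:

\[
\frac{\partial C}{\partial t} + u(z)\,\frac{\partial C}{\partial x}
  = D\left(\frac{\partial^{2} C}{\partial x^{2}} + \frac{\partial^{2} C}{\partial z^{2}}\right) - k\,C,
\qquad
D\,\frac{\partial C}{\partial z}\bigg|_{z=0} = \Lambda\,C\big|_{z=0},
\]

where u(z) is the vertical profile of longitudinal velocity through the wetland, k the bulk degradation rate, and \Lambda the bed absorption parameter.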
Mehmood, Irfan; Ejaz, Naveed; Sajjad, Muhammad; Baik, Sung Wook
2013-10-01
The objective of the present study is to explore prioritization methods in diagnostic imaging modalities to automatically determine the contents of medical images. In this paper, we propose an efficient prioritization of brain MRI. First, the visual perception of the radiologists is adapted to identify salient regions. Then this saliency information is used as an automatic label for accurate segmentation of brain lesions to determine the scientific value of that image. The qualitative and quantitative results prove that the rankings generated by the proposed method are closer to the rankings created by radiologists. Copyright © 2013 Elsevier Ltd. All rights reserved.
Virtual Reality Simulation of the Effects of Microgravity in Gastrointestinal Physiology
NASA Technical Reports Server (NTRS)
Compadre, Cesar M.
1998-01-01
The ultimate goal of this research is to create an anatomically accurate three-dimensional (3D) simulation model of the effects of microgravity in gastrointestinal physiology and to explore the role that such changes may have in the pharmacokinetics of drugs given to space crews for prevention or therapy. To accomplish this goal the specific aims of this research are: 1) To generate complete 3-D reconstructions of the human GastroIntestinal (GI) tract of the male and female Visible Humans. 2) To develop and implement time-dependent computer algorithms to simulate GI motility using the above 3-D reconstructions.
The effect of time until surgical intervention on survival in dogs with secondary septic peritonitis
Bush, Maxwell; Carno, Margaret A.; St. Germaine, Lindsay; Hoffmann, Daniel E.
2016-01-01
This retrospective study examined the effect of time to intervention on outcome in dogs with secondary septic peritonitis and also searched for other potential prognostic factors. The medical records of 55 dogs were reviewed. No association was found between outcome and the time from hospital admission to surgical source control. However, several other factors were found to influence survival, including age, need for vasopressors, lactate, pre-operative packed cell volume, serum alkaline phosphatase, serum total bilirubin, and post-operative serum albumin. These values were then used to create accurate pre- and post-operative survival prediction models. PMID:27928174
Propagation of self-localized Q-ball solitons in the 3He universe
NASA Astrophysics Data System (ADS)
Autti, S.; Heikkinen, P. J.; Volovik, G. E.; Zavjalov, V. V.; Eltsov, V. B.
2018-01-01
In relativistic quantum field theories, compact objects of interacting bosons can become stable owing to conservation of an additive quantum number Q. Discovering such Q balls propagating in the universe would confirm supersymmetric extensions of the standard model and may shed light on the mysteries of dark matter, but no unambiguous experimental evidence exists. We have created long-lived Q-ball solitons in superfluid 3He, where the role of the Q ball is played by a Bose-Einstein condensate of magnon quasiparticles. The principal qualitative attribute of a Q ball is observed experimentally: its propagation in space together with the self-created potential trap. Additionally, we show that this system allows for a quantitatively accurate representation of the Q-ball Hamiltonian. Our Q ball belongs to the class of the Friedberg-Lee-Sirlin Q balls with an additional neutral field ζ, which is provided by the orbital part of the Nambu-Goldstone mode. Multiple Q balls can be created in the experiment, and we have observed collisions between them. This set of features makes the magnon condensates in superfluid 3He a versatile platform for studies of Q-ball dynamics and interactions in three spatial dimensions.
NASA Astrophysics Data System (ADS)
Smekens, J.; Clarke, A. B.; De'Michieli Vitturi, M.; Moore, G. M.
2012-12-01
Mt. Semeru is one of the most active explosive volcanoes on the island of Java in Indonesia. The current eruption style consists of small but frequent explosions and/or gas releases (several times a day) accompanied by continuous lava effusion that sporadically produces block-and-ash flows down the SE flank of the volcano. Semeru presents a unique opportunity to investigate the magma ascent conditions that produce this kind of persistent periodic behavior and the coexistence of explosive and effusive eruptions. In this work we use DOMEFLOW, a 1.5D transient isothermal numerical model, to investigate the dynamics of lava extrusion at Semeru. Petrologic observations from tephra and ballistic samples collected at the summit help us constrain the initial conditions of the system. Preliminary model runs produced periodic lava extrusion and pulses of gas release at the vent, with a cycle period on the order of hours, even though a steady magma supply rate was prescribed at the bottom of the conduit. Enhanced shallow permeability implemented in the model appears to create a dense plug in the shallow subsurface, which in turn plays a critical role in creating and controlling the observed periodic behavior. We measured SO2 fluxes just above the vent, using a custom UV imaging system. The device consists of two high-sensitivity CCD cameras with narrow UV filters centered at 310 and 330 nm, and a USB2000+ spectrometer for calibration and distance correction. The method produces high-frequency flux series with an accurate determination of the wind speed and plume geometry. The model results, when combined with gas measurements, and measurements of sulfur in both the groundmass and melt inclusions in eruptive products, could be used to create a volatile budget of the system. Furthermore, a well-calibrated model of the system will ultimately allow the characteristic periodicity and corresponding gas flux to be used as a proxy for magma supply rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, V; Nguyen, D; Tran, A
Purpose: To develop and clinically implement 4π radiotherapy, an inverse optimization platform that maximally utilizes non-coplanar intensity modulated radiotherapy (IMRT) beams to significantly improve critical organ sparing. Methods: A 3D scanner was used to digitize the human and phantom subject surfaces, which were positioned in the computer assisted design (CAD) model of a TrueBeam machine to create a virtual geometrical model, based on which the feasible beam space was calculated for different tumor locations. Beamlets were computed for all feasible beams using convolution/superposition. A column generation algorithm was employed to optimize patient-specific beam orientations and fluence maps. Optimal routing through all selected beams was calculated by a level set method. The resultant plans were converted to XML files and delivered to phantoms in the TrueBeam developer mode. Finally, 4π plans were recomputed in Eclipse and manually delivered to recurrent GBM patients. Results: Compared to IMRT plans utilizing manually selected beams and volumetric modulated arc therapy plans, markedly improved dosimetry was observed using 4π for the brain, head and neck, liver, lung, and prostate patients. The improvements were due to significantly improved conformality and reduced high dose spillage to organs mediolateral to the PTV. The virtual geometrical model was experimentally validated. Safety margins with 99.9% confidence in collision avoidance were included in the model, based on model accuracy estimates determined via 300 physical machine-to-phantom distance measurements. Automated delivery in the developer mode was completed in 10 minutes and was collision free. Manual 4π treatment of the GBM cases resulted in significant brainstem sparing and took 35–45 minutes including multiple images, which showed submillimeter cranial intrafractional motion. Conclusion: The mathematical modeling utilized in 4π is sufficiently accurate to create and guide highly complex non-coplanar IMRT treatments that consistently and significantly outperform human-operator-created plans. Deliverability of such plans is clinically demonstrated. This work is funded by Varian Medical Systems and the NSF Graduate Research Fellowship DGE-1144087.
Shape reconstruction of irregular bodies with multiple complementary data sources
NASA Astrophysics Data System (ADS)
Kaasalainen, M.; Viikinkoski, M.; Carry, B.; Durech, J.; Lamy, P.; Jorda, L.; Marchis, F.; Hestroffer, D.
2011-10-01
Irregularly shaped bodies with at most partial in situ data are a particular challenge for shape reconstruction and mapping. We have created an inversion algorithm and software package for complementary data sources, with which it is possible to create shape and spin models with feature details even when only groundbased data are available. The procedure uses photometry, adaptive optics or other images, occultation timings, and interferometry as main data sources, and we are extending it to include range-Doppler radar and thermal infrared data as well. The data sources are described as generalized projections in various observable spaces [2], which allows their uniform handling with essentially the same techniques, making the addition of new data sources inexpensive in terms of computation time or software development. We present a generally applicable shape support that can be automatically used for all surface types, including strongly nonconvex or non-starlike shapes. New models of Kleopatra (from photometry, adaptive optics, and interferometry) and Hermione are examples of this approach. When using adaptive optics images, the main information is extracted from the limb and terminator contours, which can be determined much more accurately than the image pixel brightnesses that inevitably contain large errors for most targets. We have shown that the contours yield a wealth of information independent of the scattering properties of the surface [3]. Their use also facilitates a very fast and robustly converging algorithm. An important concept in the inversion is the optimal weighting of the various data modes. We have developed a mathematically rigorous scheme for this purpose. The resulting maximum compatibility estimate [3], a multimodal generalization of the maximum likelihood estimate, ensures that the actual information content of each source is properly taken into account, and that the resolution scale of the ensuing model can be reliably estimated. We have applied our procedure to several asteroids, and the ground truth from the Rosetta/Lutetia flyby confirmed the ability of the approach to recover shape details [1] (see also Carry et al., this meeting). We have created a general flyby version of the procedure to construct full models of planetary targets for which probe images are available of only a part of the surface (a typical setup for many planetary missions). We have successfully combined flyby images with photometry (Steins [4]) and adaptive optics images (Lutetia); the portion of the surface accurately determined by the flyby constrains the shape solution of the "dark side" efficiently.
Radiative Transfer Modeling in Proto-planetary Disks
NASA Astrophysics Data System (ADS)
Kasper, David; Jang-Condell, Hannah; Kloster, Dylan
2016-01-01
Young Stellar Objects (YSOs) are rich astronomical research environments. Planets form in circumstellar disks of gas and dust around YSOs. With ever increasing capabilities of the observational instruments designed to look at these proto-planetary disks, most notably GPI, SPHERE, and ALMA, more accurate interfaces must be made to connect modeling of the disks with observation. PaRTY (Parallel Radiative Transfer in YSOs) is a code developed previously to model the observable density and temperature structure of such a disk by self-consistently calculating the structure of the disk based on radiative transfer physics. We present upgrades we are implementing to the PaRTY code to improve its accuracy and flexibility. These upgrades include: creating a two-sided disk model, implementing a spherical coordinate system, and implementing wavelength-dependent opacities. These upgrades will address problems in the PaRTY code of infinite optical thickness, calculation under/over-resolution, and wavelength-independent photon penetration depths, respectively. The upgraded code will be used to better model disk perturbations resulting from planet formation.
Development of a Training Model for Laparoscopic Common Bile Duct Exploration
Rodríguez, Omaira; Benítez, Gustavo; Sánchez, Renata; De la Fuente, Liliana
2010-01-01
Background: Training and experience of the surgical team are fundamental for the safety and success of complex surgical procedures, such as laparoscopic common bile duct exploration. Methods: We describe an inert, simple, very low-cost, and readily available training model. Created using a “black box” and basic medical and surgical material, it allows training in the fundamental steps necessary for laparoscopic biliary tract surgery, namely, (1) intraoperative cholangiography, (2) transcystic exploration, and (3) laparoscopic choledochotomy and T-tube insertion. Results: The proposed model has allowed for the development of the skills necessary for partaking in said procedures, contributing to their development and diminishing surgery time as the trainee advances along the learning curve. Further studies are directed towards objectively determining the impact of the model on skill acquisition. Conclusion: The described model is simple and readily available, allowing for accurate reproduction of the main steps and maneuvers that take place during laparoscopic common bile duct exploration, with the purpose of reducing failure and complications. PMID:20529526
Solar EUV irradiance for space weather applications
NASA Astrophysics Data System (ADS)
Viereck, R. A.
2015-12-01
Solar EUV irradiance is an important driver of space weather models. Large changes in EUV and X-ray irradiances create large variability in the ionosphere and thermosphere. Proxies, such as the F10.7 cm radio flux, have provided reasonable estimates of the EUV flux, but as space weather models become more accurate and the demands of the customers become more stringent, proxies are no longer adequate. Furthermore, proxies are often provided only on a daily basis, and shorter time scales are becoming important. Also, there is a growing need for multi-day forecasts of solar EUV irradiance to drive space weather forecast models. In this presentation we will describe the needs and requirements for solar EUV irradiance information from the space weather modeler's perspective. We will then translate these requirements into solar observational requirements such as spectral resolution and irradiance accuracy. We will also describe the activities at NOAA to provide long-term solar EUV irradiance observations and derived products that are needed for real-time space weather modeling.
Breche, Q; Chagnon, G; Machado, G; Girard, E; Nottelet, B; Garric, X; Favier, D
2016-07-01
PLA-b-PEG-b-PLA is a biodegradable triblock copolymer that presents both the mechanical properties of PLA and the hydrophilicity of PEG. In this paper, the physical and mechanical properties of PLA-b-PEG-b-PLA are studied during in vitro degradation. The degradation process leads to a mass loss, a decrease in number-average molecular weight, and an increase in dispersity index. Mechanical experiments are made in a specific experimental set-up designed to create an environment close to in vivo conditions. The viscoelastic behaviour of the material is studied during the degradation. Finally, the mechanical behaviour is modelled with a linear viscoelastic model. A degradation variable is defined and included in the model to describe the hydrolytic degradation. This variable is linked to physical parameters of the macromolecular polymer network. The model allows us to describe small deformations but becomes less accurate for larger deformations. The abilities and limits of the model are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
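One common way to write such a model, offered here only as an assumed illustration and not necessarily the authors' exact constitutive law, is a Prony-series linear viscoelastic response whose moduli are scaled by a degradation variable d:

\[
\sigma(t) = \big(1 - d\big)\int_{0}^{t} E(t-s)\,\dot{\varepsilon}(s)\,\mathrm{d}s,
\qquad
E(t) = E_{\infty} + \sum_{i} E_{i}\,e^{-t/\tau_{i}},
\]

with d increasing with hydrolysis time and linked to measurable network quantities such as the number-average molecular weight.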
Remote measurement methods for 3-D modeling purposes using BAE Systems' Software
NASA Astrophysics Data System (ADS)
Walker, Stewart; Pietrzak, Arleta
2015-06-01
Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.
Shape Models of Asteroids as a Missing Input for Bulk Density Determinations
NASA Astrophysics Data System (ADS)
Hanuš, Josef
2015-07-01
To determine a meaningful bulk density of an asteroid, both accurate volume and mass estimates are necessary. The volume can be computed by scaling the size of the 3D shape model to fit the disk-resolved images or stellar occultation profiles, which are available in the literature or through collaborations. This work provides a list of asteroids, for which (i) there are already mass estimates with reported uncertainties better than 20% or their mass will be most likely determined in the future from Gaia astrometric observations, and (ii) their 3D shape models are currently unknown. Additional optical lightcurves are necessary to determine the convex shape models of these asteroids. The main aim of this article is to motivate the observers to obtain lightcurves of these asteroids, and thus contribute to their shape model determinations. Moreover, a web page https://asteroid-obs.oca.eu, which maintains an up-to-date list of these objects to assure efficiency and to avoid any overlapping efforts, was created.
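A small Python sketch of why both ingredients matter: the bulk density follows from mass over volume, and its relative uncertainty combines the two error sources in quadrature. The numbers below are illustrative, not values for any particular asteroid.

import math

mass, sigma_mass = 1.2e19, 0.15 * 1.2e19        # kg, with an assumed ~15% mass uncertainty
volume, sigma_volume = 5.0e15, 0.10 * 5.0e15    # m^3, with an assumed ~10% volume (shape and size) uncertainty

density = mass / volume                          # bulk density, kg/m^3
rel_err = math.sqrt((sigma_mass / mass) ** 2 + (sigma_volume / volume) ** 2)
print(round(density), f"+/- {rel_err * 100:.0f}%")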
Li, Haiyun; Wang, Zheng
2006-01-01
In this paper, a 3D geometric model of the lumbar spine and intervertebral disks is presented, which integrates the anatomical structure derived from spine CT and MRI data. Based on the geometric model, a 3D finite element model of an L1-L2 segment was created. Loads simulating the pressure from above were applied to the FEM, while a boundary condition describing the relative L1-L2 displacement was imposed on the FEM to account for 3D physiological states. The simulation calculation illustrates the stress and strain distribution and the deformation of the spine. The method has two characteristics compared to previous studies: first, the finite element model of the lumbar spine is based on data directly derived from medical images such as CT and MRI; second, the results of the analysis are more accurate than those obtained using generic geometric parameters. The FEM provides a promising tool for clinical diagnosis and for optimizing individual therapy in intervertebral disc herniation.
Modeling Altruistic and Aggressive Driver Behavior in a No-Notice Evacuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandstetter, Tim; Garrow, Dr. Laurie; Hunter, Dr. Michael
2007-01-01
This study examines the impact of altruistic and aggressive driver behavior on the effectiveness of an evacuation for a section of downtown Atlanta. The study area includes 37 signalized intersections, seven ramps, and 48 parking lots that vary by size, type (lot versus garage), peak volume, and number of ingress and egress points. A detailed microscopic model of the study area was created in VISSIM. Different scenarios examined the impacts of driver behavior on parking lot discharge rates and the loading rates from side streets on primary evacuation routes. A new methodology was created to accurately represent parking lot discharge rates. This study is also unique in that it assumes a "worst case scenario" that occurs with no advance notice during the morning peak period, when vehicles must transition from inbound to outbound routes. Simulation results indicate that while overall network clearance times are similar across scenarios, the distribution of delay on individual routes and across parking lots differs markedly. More equitable solutions (defined as the allocation of delay from parking lots and side streets to main evacuation routes) were observed with altruistic driver behavior.
Challenges in Real-Time Prediction of Infectious Disease: A Case Study of Dengue in Thailand.
Reich, Nicholas G; Lauer, Stephen A; Sakrejda, Krzysztof; Iamsirithaworn, Sopon; Hinjoy, Soawapak; Suangtho, Paphanij; Suthachana, Suthanun; Clapham, Hannah E; Salje, Henrik; Cummings, Derek A T; Lessler, Justin
2016-06-01
Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create such real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 million people worldwide. We developed a real-time forecasting model for dengue hemorrhagic fever in the 77 provinces of Thailand. We created a practical computational infrastructure that generated multi-step predictions of dengue incidence in Thai provinces every two weeks throughout 2014. These predictions show mixed performance across provinces, out-performing seasonal baseline models in over half of provinces at a 1.5 month horizon. Additionally, to assess the degree to which delays in case reporting make long-range prediction a challenging task, we compared the performance of our real-time predictions with predictions made with fully reported data. This paper provides valuable lessons for the implementation of real-time predictions in the context of public health decision making.
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; rather, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance in mapping the monkey's neural states to robot actions (94%), and needed to experience only a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
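A minimal Python sketch of a one-step actor-critic update on a toy two-target task, in the spirit of the decoder described; the feature encoding, learning rates, and trial structure are assumptions and no neural data are involved.

import numpy as np

rng = np.random.default_rng(5)
n_features, n_actions = 9, 2                         # 8 stand-in "neural" features plus 1 bias term
actor_w = np.zeros((n_actions, n_features))
critic_w = np.zeros(n_features)
alpha_actor, alpha_critic = 0.1, 0.2
recent = []

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

for trial in range(3000):
    target = rng.integers(n_actions)                 # which target is cued this trial
    x = np.append(rng.normal(target, 0.5, n_features - 1), 1.0)   # features plus constant bias
    probs = softmax(actor_w @ x)
    action = rng.choice(n_actions, p=probs)
    reward = 1.0 if action == target else 0.0        # basic scalar feedback signal
    td_error = reward - critic_w @ x                 # one-step TD error (terminal step)
    critic_w += alpha_critic * td_error * x          # critic update
    grad = -probs[:, None] * x                       # gradient of log softmax policy
    grad[action] += x
    actor_w += alpha_actor * td_error * grad         # actor update weighted by TD error
    recent.append(reward)

print("success rate over last 500 trials:", np.mean(recent[-500:]))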
Cheung, Connie; Gonzalez, Frank J
2008-01-01
Cytochrome P450s (P450s) are important enzymes involved in the metabolism of xenobiotics, particularly clinically used drugs, and are also responsible for metabolic activation of chemical carcinogens and toxins. Many xenobiotics can activate nuclear receptors that in turn induce the expression of genes encoding xenobiotic-metabolizing enzymes and drug transporters. Marked species differences in the expression and regulation of cytochromes P450 and xenobiotic nuclear receptors exist. Thus the availability of reliable rodent models that accurately reflect human drug and carcinogen metabolism is severely limited. Humanized transgenic mice were developed in an effort to create more reliable in vivo systems to study and predict human responses to xenobiotics. Human P450s or human xenobiotic-activated nuclear receptors were introduced directly or replaced the corresponding mouse gene, thus creating “humanized” transgenic mice. Mice expressing human CYP1A1/CYP1A2, CYP2E1, CYP2D6, CYP3A4, CYP3A7, PXR, and PPARα were generated and characterized. These humanized mouse models offer broad utility in the evaluation and prediction of toxicological risk that may aid in the development of safer drugs. PMID:18682571
Tranchard, Pauline; Samyn, Fabienne; Duquesne, Sophie; Estèbe, Bruno; Bourbigot, Serge
2017-01-01
The thermophysical properties of a carbon-reinforced epoxy composite laminate (T700/M21 composite for aircraft structures) were evaluated using different innovative characterisation methods. Thermogravimetric Analysis (TGA), Simultaneous Thermal Analysis (STA), Laser Flash Analysis (LFA), and Fourier Transform Infrared (FTIR) analysis were used to measure the thermal decomposition, the specific heat capacity, the anisotropic thermal conductivity of the composite, the heats of decomposition, and the specific heat capacity of the released gases. This provides the input data needed to feed a three-dimensional (3D) model giving the temperature profile and the mass loss during well-defined fire scenarios (the model is presented in Part II of this paper). The measurements were optimised to obtain accurate data. The data also allow the creation of a public database on an aeronautical carbon fibre/epoxy composite for fire safety engineering. PMID:28772854
Interpreting cost of ownership for mix-and-match lithography
NASA Astrophysics Data System (ADS)
Levine, Alan L.; Bergendahl, Albert S.
1994-05-01
Cost of ownership modeling is a critical and emerging tool that provides significant insight into the ways to optimize device manufacturing costs. The development of a model to deal with a particular application, mix-and-match lithography, was performed in order to determine the level of cost savings and the optimum ways to create these savings. The use of sensitivity analysis with cost of ownership allows the user to make accurate trade-offs between technology and cost. The use and interpretation of the model results are described in this paper. Parameters analyzed include several manufacturing considerations: depreciation, maintenance, engineering and operator labor, floorspace, resist, consumables, and reticles. Inherent in this study is the ability to customize the analysis for a particular operating environment. Results demonstrate the clear advantages of a mix-and-match approach for three different operating environments. These case studies also demonstrate various methods to efficiently optimize cost savings strategies.
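A toy Python sketch of the kind of comparison such a cost-of-ownership model makes, reducing each tool to a cost per wafer level derived from depreciation, operating cost, throughput, utilization, and yield; the tool prices, layer counts, and the simple mix-and-match split are illustrative assumptions.

def cost_per_wafer(capital, years, annual_operating, wafers_per_hour, utilization, yield_):
    # Annualized capital plus operating cost, spread over good wafer levels produced per year.
    hours = 365 * 24 * utilization
    wafers = wafers_per_hour * hours * yield_
    return (capital / years + annual_operating) / wafers

advanced = cost_per_wafer(12e6, 5, 1.5e6, 80, 0.75, 0.98)   # assumed high-end scanner
mature   = cost_per_wafer(4e6, 5, 0.8e6, 90, 0.80, 0.98)    # assumed older stepper for non-critical layers

all_advanced = 10 * advanced                    # 10 lithography layers, single tool type (assumed)
mix_and_match = 3 * advanced + 7 * mature       # critical layers only on the advanced tool (assumed)
print(round(all_advanced, 2), round(mix_and_match, 2))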
Supporting Collaborative Health Tracking in the Hospital: Patients’ Perspectives
Mishra, Sonali R.; Miller, Andrew D.; Haldar, Shefali; Khelifi, Maher; Eschler, Jordan; Elera, Rashmi G.; Pollack, Ari H; Pratt, Wanda
2018-01-01
The hospital setting creates a high-stakes environment where patients’ lives depend on accurate tracking of health data. Despite recent work emphasizing the importance of patients’ engagement in their own health care, less is known about how patients track their health and care in the hospital. Through interviews and design probes, we investigated hospitalized patients’ tracking activity and analyzed our results using the stage-based personal informatics model. We used this model to understand how to support the tracking needs of hospitalized patients at each stage. In this paper, we discuss hospitalized patients’ needs for collaboratively tracking their health with their care team. We suggest future extensions of the stage-based model to accommodate collaborative tracking situations, such as hospitals, where data is collected, analyzed, and acted on by multiple people. Our findings uncover new directions for HCI research and highlight ways to support patients in tracking their care and improving patient safety. PMID:29721554
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
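A hedged sketch of fitting a generalized additive model of the kind described, assuming the pyGAM library and a hypothetical column layout and synthetic data (this is not the authors' code or dataset).

```python
# Minimal GAM sketch for predicting percent of juvenile development from
# length, weight, temperature, and developmental stage (synthetic data).
import numpy as np
from pygam import LinearGAM, s, f

rng = np.random.default_rng(0)
# X columns: 0 = larval length (mm), 1 = weight (mg), 2 = rearing temperature (C),
# 3 = developmental stage coded as an integer; y = percent of development.
X = np.column_stack([rng.uniform(2, 20, 500),
                     rng.uniform(5, 80, 500),
                     rng.choice([20.0, 25.0, 33.5], 500),
                     rng.integers(0, 5, 500)])
y = 4 * X[:, 0] + 0.3 * X[:, 1] + 5 * X[:, 3] + rng.normal(0, 3, 500)

# Smooth terms for the continuous measures, a factor term for stage.
gam = LinearGAM(s(0) + s(1) + s(2) + f(3)).fit(X, y)
gam.summary()                       # deviance explained, per-term statistics
predictions = gam.predict(X[:5])    # predicted development percent
```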
The Hubbard Dimer: A Complete DFT Solution to a Many-Body Problem
NASA Astrophysics Data System (ADS)
Smith, Justin; Carrascal, Diego; Ferrer, Jaime; Burke, Kieron
2015-03-01
In this work we explain the relationship between density functional theory and strongly correlated models using the simplest possible example, the two-site asymmetric Hubbard model. We discuss the connection between the lattice and real space and how this is a simple model for stretched H2. We can solve this elementary example analytically, and with that we can illuminate the underlying logic and aims of DFT. While the many-body solution is analytic, the density functional is given only implicitly. We overcome this difficulty by creating a highly accurate parameterization of the exact functional. We use this parameterization to perform benchmark calculations of the correlation kinetic energy, the adiabatic connection, etc. We also test Hartree-Fock and the Bethe Ansatz Local Density Approximation, and we discuss and illustrate the derivative discontinuity in the exchange-correlation energy and the infamous gap problem in DFT. DGE-1321846, DE-FG02-08ER46496.
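The two-site asymmetric Hubbard model can be solved exactly by diagonalizing a 3x3 matrix in the two-electron singlet sector. The sketch below illustrates that analytic solubility; the parameter names and values are assumed for illustration and are not taken from the paper.

```python
# Exact diagonalization of the two-site asymmetric Hubbard dimer
# (two electrons, singlet sector). Parameters t, U, v1, v2 are placeholders.
import numpy as np

def dimer_ground_state(t=0.5, U=2.0, v1=-0.5, v2=0.5):
    # Basis: |site 1 doubly occupied>, |site 2 doubly occupied>, covalent singlet.
    H = np.array([[U + 2 * v1, 0.0,              -np.sqrt(2) * t],
                  [0.0,        U + 2 * v2,       -np.sqrt(2) * t],
                  [-np.sqrt(2) * t, -np.sqrt(2) * t, v1 + v2]])
    E, C = np.linalg.eigh(H)
    c = C[:, 0]                          # ground-state amplitudes
    n1 = 2 * c[0] ** 2 + c[2] ** 2       # site-1 occupation
    n2 = 2 * c[1] ** 2 + c[2] ** 2       # site-2 occupation
    return E[0], n1, n2

E0, n1, n2 = dimer_ground_state()
print(f"E0 = {E0:.4f}, occupations n1 = {n1:.3f}, n2 = {n2:.3f}")
```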
PREDICTING INDIVIDUAL WELL-BEING THROUGH THE LANGUAGE OF SOCIAL MEDIA.
Schwartz, H Andrew; Sap, Maarten; Kern, Margaret L; Eichstaedt, Johannes C; Kapelner, Adam; Agrawal, Megha; Blanco, Eduardo; Dziurzynski, Lukasz; Park, Gregory; Stillwell, David; Kosinski, Michal; Seligman, Martin E P; Ungar, Lyle H
2016-01-01
We present the task of predicting individual well-being, as measured by a life satisfaction scale, through the language people use on social media. Well-being, which encompasses much more than emotion and mood, is linked with good mental and physical health. The ability to quickly and accurately assess it can supplement multi-million dollar national surveys as well as promote whole body health. Through crowd-sourced ratings of tweets and Facebook status updates, we create message-level predictive models for multiple components of well-being. However, well-being is ultimately attributed to people, so we perform an additional evaluation at the user level, finding that a multi-level cascaded model, using both message-level predictions and user-level features, performs best and outperforms popular lexicon-based happiness models. Finally, we suggest that analyses of language go beyond prediction by identifying the language that characterizes well-being.
NASA Astrophysics Data System (ADS)
Gridan, Maria-Roberta; Herban, Sorin; Grecea, Oana
2017-07-01
Nowadays, engineering companies and contractors face challenges never experienced before. They are being charged with, and held liable for, the health of the structures they create and maintain. To surmount these challenges, engineers need to be able to measure structural movements with millimetre-level accuracy. Accurate and timely information on the status of a structure is highly valuable to engineers, as it enables them to compare the real-world behaviour of a structure against the design and theoretical models. When empowered by such data, engineers can effectively and cost-efficiently measure and maintain the health of vital infrastructure. This paper presents the interpretation of the draft tube topographical measurements in order to obtain its 3D model. Based on the documents made available by the beneficiary and the data obtained in situ, conclusions regarding its modernization were presented.
Giehr, Pascal; Kyriakopoulos, Charalampos; Ficz, Gabriella; Wolf, Verena; Walter, Jörn
2016-05-01
DNA methylation and demethylation are opposing processes that when in balance create stable patterns of epigenetic memory. The control of DNA methylation pattern formation by replication dependent and independent demethylation processes has been suggested to be influenced by Tet mediated oxidation of 5mC. Several alternative mechanisms have been proposed suggesting that 5hmC influences either replication dependent maintenance of DNA methylation or replication independent processes of active demethylation. Using high resolution hairpin oxidative bisulfite sequencing data, we precisely determine the amount of 5mC and 5hmC and model the contribution of 5hmC to processes of demethylation in mouse ESCs. We develop an extended hidden Markov model capable of accurately describing the regional contribution of 5hmC to demethylation dynamics. Our analysis shows that 5hmC has a strong impact on replication dependent demethylation, mainly by impairing methylation maintenance.
Seismic Waves, 4th order accurate
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-08-16
SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on a realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format, or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution as well as the material model along 2-D grid planes.
Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang
2010-03-01
The purpose of this study was to establish a dose estimation tool with Monte Carlo (MC) simulations. A 5-year-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom and used as an input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager(R) was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. The MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. This model can be easily applied to multi-detector CT dosimetry.
NASA Technical Reports Server (NTRS)
Bradley, P. F.; Throckmorton, D. A.
1981-01-01
A study was completed to determine the sensitivity of computed convective heating rates to uncertainties in the thermal protection system thermal model. The parameters considered were: density, thermal conductivity, and specific heat of both the reusable surface insulation and its coating; coating thickness and emittance; and temperature measurement uncertainty. The assessment used a modified version of the computer program that calculates heating rates from temperature time histories. The original version of the program solves the direct one-dimensional heating problem, and the modified version is set up to solve the inverse problem. The modified program was used in thermocouple data reduction for shuttle flight data. Both nominal thermal models and altered thermal models were used to determine the necessity for accurate knowledge of the thermal protection system's material thermal properties. For many thermal properties, the sensitivity (the inaccuracy created in the calculation of convective heating rate by an altered property) was very low.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Partridge Jr, William P.; Choi, Jae-Soon
By directly resolving spatial and temporal species distributions within operating honeycomb monolith catalysts, spatially resolved capillary inlet mass spectrometry (SpaciMS) provides a uniquely enabling perspective for advancing automotive catalysis. Specifically, the ability to follow the spatiotemporal evolution of reactions throughout the catalyst is a significant advantage over inlet-and-effluent-limited analysis. Intracatalyst resolution elucidates numerous catalyst details including the network and sequence of reactions, clarifying reaction pathways; the relative rates of different reactions and impacts of operating conditions and catalyst state; and reaction dynamics and intermediate species that exist only within the catalyst. These details provide a better understanding of how the catalyst functions and have basic and practical benefits; e.g., catalyst system design; strategies for on-road catalyst state assessment, control, and on-board diagnostics; and creating robust and accurate predictive catalyst models. Moreover, such spatiotemporally distributed data provide for critical model assessment, and identification of improvement opportunities that might not be apparent from effluent assessment; i.e., while an incorrectly formulated model may provide correct effluent predictions, one that can accurately predict the spatiotemporal evolution of reactions along the catalyst channels will be more robust, accurate, and reliable. In such ways, intracatalyst diagnostics comprehensively enable improved design and development tools, and faster and lower-cost development of more efficient and durable automotive catalyst systems. Beyond these direct contributions, SpaciMS has spawned and been applied to enable other analytical techniques for resolving transient distributed intracatalyst performance. This chapter focuses on SpaciMS applications and associated catalyst insights and improvements, with specific sections related to lean NOx traps, selective catalytic reduction catalysts, oxidation catalysts, and particulate filters. The objective is to promote broader use and development of intracatalyst analytical methods, and thereby expand the insights resulting from this detailed perspective for advancing automotive catalyst technologies.
Hughesman, Curtis; Fakhfakh, Kareem; Bidshahri, Roza; Lund, H Louise; Haynes, Charles
2015-02-17
Advances in real-time polymerase chain reaction (PCR), as well as the emergence of digital PCR (dPCR) and useful modified nucleotide chemistries, including locked nucleic acids (LNAs), have created the potential to improve and expand clinical applications of PCR through their ability to better quantify and differentiate amplification products, but fully realizing this potential will require robust methods for designing dual-labeled hydrolysis probes and predicting their hybridization thermodynamics as a function of their sequence, chemistry, and template complementarity. We present here a nearest-neighbor thermodynamic model that accurately predicts the melting thermodynamics of a short oligonucleotide duplexed either to its perfect complement or to a template containing mismatched base pairs. The model may be applied to pure-DNA duplexes or to duplexes for which one strand contains any number and pattern of LNA substitutions. Perturbations to duplex stability arising from mismatched DNA:DNA or LNA:DNA base pairs are treated at the Gibbs energy level to maintain statistical significance in the regressed model parameters. This approach, when combined with the model's accounting of the temperature dependencies of the melting enthalpy and entropy, permits accurate prediction of T(m) values for pure-DNA homoduplexes or LNA-substituted heteroduplexes containing one or two independent mismatched base pairs. Terms accounting for changes in solution conditions and terminal addition of fluorescent dyes and quenchers are then introduced so that the model may be used to accurately predict and thereby tailor the T(m) of a pure-DNA or LNA-substituted hydrolysis probe when duplexed either to its perfect-match template or to a template harboring a noncomplementary base. The model, which builds on classic nearest-neighbor thermodynamics, should therefore be of use to clinicians and biologists who require probes that distinguish and quantify two closely related alleles in either a quantitative PCR or dPCR assay. This potential is demonstrated by using the model to design allele-specific probes that completely discriminate and quantify clinically relevant mutant alleles (BRAF V600E and KIT D816V) in a dPCR assay.
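To make the nearest-neighbor idea concrete, the sketch below computes a melting temperature for a perfectly matched DNA:DNA duplex by summing per-dinucleotide enthalpy and entropy terms. The numerical values are rough, illustrative approximations of the unified DNA parameter set; the regressed LNA, mismatch, dye, and salt terms from the paper are not reproduced here.

```python
# Illustrative nearest-neighbor Tm calculation for a matched DNA duplex.
import math

NN = {  # dH (kcal/mol), dS (cal/mol/K) per stacked dinucleotide, approximate
    "AA": (-7.9, -22.2), "TT": (-7.9, -22.2), "AT": (-7.2, -20.4),
    "TA": (-7.2, -21.3), "CA": (-8.5, -22.7), "TG": (-8.5, -22.7),
    "GT": (-8.4, -22.4), "AC": (-8.4, -22.4), "CT": (-7.8, -21.0),
    "AG": (-7.8, -21.0), "GA": (-8.2, -22.2), "TC": (-8.2, -22.2),
    "CG": (-10.6, -27.2), "GC": (-9.8, -24.4), "GG": (-8.0, -19.9),
    "CC": (-8.0, -19.9),
}
INIT = {"G": (0.1, -2.8), "C": (0.1, -2.8), "A": (2.3, 4.1), "T": (2.3, 4.1)}

def tm_nearest_neighbor(seq, total_strand_conc=5e-7):
    dH, dS = 0.0, 0.0
    for end in (seq[0], seq[-1]):            # terminal initiation terms
        dH += INIT[end][0]; dS += INIT[end][1]
    for i in range(len(seq) - 1):            # stacked nearest-neighbor terms
        h, s = NN[seq[i:i + 2]]
        dH += h; dS += s
    R = 1.987  # gas constant, cal/mol/K
    # Non-self-complementary duplex at equal strand concentrations.
    return dH * 1000.0 / (dS + R * math.log(total_strand_conc / 4.0)) - 273.15

print(f"Tm ~ {tm_nearest_neighbor('AGCGTACCTTGAGCA'):.1f} C")
```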
A Critical Meta-Analysis of Lens Model Studies in Human Judgment and Decision-Making
Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W.
2013-01-01
Achieving accurate judgment (‘judgmental achievement’) is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have potentially biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regards to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regards to the success of bootstrapping with psychometrically-corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to a) an increase in the values of the lens model components, b) reduced heterogeneity between studies, and c) an increase in the success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for showing the success of bootstrapping. PMID:24391781
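A minimal illustration of the psychometric artifact correction underlying such a meta-analysis is the classic correction for attenuation, which adjusts an observed achievement correlation for measurement error in both variables; the numbers below are hypothetical.

```python
# Correct an observed correlation for unreliability in both variables.
def disattenuate(r_obs, rel_x, rel_y):
    return r_obs / (rel_x * rel_y) ** 0.5

r_observed = 0.40       # observed judgmental achievement (hypothetical)
rel_criterion = 0.80    # reliability of the criterion measure
rel_judgment = 0.70     # reliability of the judgments
print(f"corrected r = {disattenuate(r_observed, rel_criterion, rel_judgment):.3f}")
```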
Development of the GPM Observatory Thermal Vacuum Test Model
NASA Technical Reports Server (NTRS)
Yang, Kan; Peabody, Hume
2012-01-01
A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involved four major steps: (1) determining the major thermal zones for test as indicated by the major dissipating components on the spacecraft, then mapping the major heat flows between these components; (2) finding the flight-equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match heat flows and temperatures with the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method described above was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japan Aerospace Exploration Agency (JAXA) which is currently being integrated at NASA Goddard Space Flight Center for launch in early 2014. From preliminary results, the thermal test model generated from this process shows that the heat flows and temperatures match the flight thermal model fairly well, indicating that the test model can simulate the on-orbit conditions fairly accurately. However, further analysis is needed to determine the best test configuration possible to validate the GPM thermal design before the start of environmental testing later this year. Also, while this analysis method has been applied solely to GPM, it should be emphasized that the same process can be applied to any mission to develop an effective test setup and panel settings which accurately simulate on-orbit thermal environments.
Burkhardt, John C; DesJardins, Stephen L; Teener, Carol A; Gay, Steven E; Santen, Sally A
2016-11-01
In higher education, enrollment management has been developed to accurately predict the likelihood of enrollment of admitted students. This allows evidence to dictate numbers of interviews scheduled, offers of admission, and financial aid package distribution. The applicability of enrollment management techniques for use in medical education was tested through creation of a predictive enrollment model at the University of Michigan Medical School (U-M). U-M and American Medical College Application Service data (2006-2014) were combined to create a database including applicant demographics, academic application scores, institutional financial aid offer, and choice of school attended. Binomial logistic regression and multinomial logistic regression models were estimated in order to study factors related to enrollment at the local institution versus elsewhere and to groupings of competing peer institutions. A predictive analytic "dashboard" was created for practical use. Both models were significant at P < .001 and had similar predictive performance. In the binomial model, female sex, underrepresented minority status, grade point average, Medical College Admission Test score, admissions committee desirability score, and most individual financial aid offers were significant (P < .05). The significant covariates were similar in the multinomial model (excluding female sex) and provided separate likelihoods of students enrolling at different institutional types. An enrollment-management-based approach would allow medical schools to better manage the number of students they admit and target recruitment efforts to improve their likelihood of success. It also performs a key institutional research function for understanding failed recruitment of highly desirable candidates.
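A hedged sketch of the binomial enrollment model with scikit-learn; the predictors mirror those named in the abstract, but the data, column layout, and coefficients are synthetic placeholders rather than the U-M model.

```python
# Minimal binomial logistic "dashboard" sketch: probability of local enrollment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.normal(3.7, 0.2, n),     # GPA
    rng.normal(33, 3, n),        # admission test score
    rng.normal(0, 1, n),         # committee desirability score (standardized)
    rng.integers(0, 2, n),       # financial aid offer (0/1)
])
logit = -1.0 + 0.8 * X[:, 2] + 1.2 * X[:, 3]          # synthetic generating model
y = rng.random(n) < 1 / (1 + np.exp(-logit))          # 1 = enrolled locally

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
p_enroll = model.predict_proba(X[:5])[:, 1]           # dashboard-style probabilities
print(np.round(p_enroll, 3))
```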
Real Time Updating Genetic Network Programming for Adapting to the Change of Stock Prices
NASA Astrophysics Data System (ADS)
Chen, Yan; Mabu, Shingo; Shimada, Kaoru; Hirasawa, Kotaro
The key to a stock trading model is taking the right trading actions at the right time, primarily based on accurate forecasts of future stock trends. Since effective trading with the given stock price information requires an intelligent strategy for decision making, we applied Genetic Network Programming (GNP) to creating a stock trading model. In this paper, we propose a new method called Real Time Updating Genetic Network Programming (RTU-GNP) for adapting to changes in stock prices. There are three important points in this paper: First, the RTU-GNP method makes stock trading decisions considering both the recommendable information of technical indices and the candlestick charts according to the real-time stock prices. Second, we combine RTU-GNP with a Sarsa learning algorithm to create the programs efficiently. Sub-nodes are introduced in each judgment and processing node to determine appropriate actions (buying/selling) and to select appropriate stock price information depending on the situation. Third, a Real Time Updating system is introduced for the first time in this paper to account for changes in the trend of stock prices. The experimental results on the Japanese stock market show that the trading model with the proposed RTU-GNP method outperforms other models without real-time updating. We also compared the results of the proposed method with those of the Buy&Hold method to confirm its effectiveness, and the proposed trading model obtains much higher profits than the Buy&Hold method.
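The Sarsa component referred to above is a standard on-policy temporal-difference update. The sketch below shows that generic update rule only; the states, actions, and parameter values are placeholders, and the GNP node structure of the paper is not reproduced.

```python
# Generic on-policy Sarsa update of the kind used to tune action choices.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = defaultdict(float)                 # Q[(state, action)]
ACTIONS = ["buy", "sell", "hold"]

def choose(state):
    if random.random() < epsilon:      # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_step(state, action, reward, next_state):
    next_action = choose(next_state)
    td_target = reward + gamma * Q[(next_state, next_action)]
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return next_action
```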
Accuracy assessment of 3D bone reconstructions using CT: an intro comparison.
Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A
2015-08-01
Computed tomography provides high contrast imaging of the joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, for computer-assisted surgeries, and for computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study determined the ground truth measure for the underlying geometry. We evaluated a commercially available reconstruction technique with clinical CT scanning protocols, using the elbow joint as an example of a surface with complex geometry. To assess the accuracy of the reconstructed models (8 fresh frozen cadaveric specimens) against the ground truth bony digitization, as defined by this study, proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast yielded a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used, commercially available reconstruction algorithms can create models which accurately represent the true geometry. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
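A minimal sketch of a proximity-mapping error measure of the kind described: the nearest distance from each digitized surface point to the reconstructed model's vertices. The point sets below are synthetic; the study used cadaveric digitizations and commercial reconstruction software.

```python
# Nearest-neighbor residual error between digitized points and a reconstructed mesh.
import numpy as np
from scipy.spatial import cKDTree

reconstructed_vertices = np.random.rand(5000, 3) * 50.0       # mm, placeholder mesh
digitized_points = reconstructed_vertices[:800] + np.random.normal(0, 0.3, (800, 3))

tree = cKDTree(reconstructed_vertices)
residuals, _ = tree.query(digitized_points)    # distances to closest vertex (mm)
print(f"mean residual error = {residuals.mean():.3f} mm")
```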
A Multi-Fidelity Surrogate Model for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S. "Bala"
2016-11-01
The explosive dispersal of particles is an example of a complex multiphase and multi-species fluid flow problem. This problem has many engineering applications including particle-laden explosives. In these flows, the detonation products of the explosive cannot be treated as a perfect gas so a real gas equation of state is used to close the governing equations (unlike air, which uses the ideal gas equation for closure). As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both of the state equations must be satisfied. One of the more accurate, yet computationally expensive, methods to deal with this is a scheme that iterates between the two equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work strives to create a multi-fidelity surrogate model of this process. We then study the performance of the model with respect to the iterative method by performing both gas-only and particle laden flow simulations using an Eulerian-Lagrangian approach with a finite volume code. Specifically, the model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel modeling approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
Prediction of XV-15 tilt rotor discrete frequency aeroacoustic noise with WOPWOP
NASA Technical Reports Server (NTRS)
Coffen, Charles D.; George, Albert R.
1990-01-01
The results, methodology, and conclusions of noise prediction calculations carried out to study several possible discrete frequency harmonic noise mechanisms of the XV-15 Tilt Rotor Aircraft in hover and helicopter-mode forward flight are presented. The mechanisms studied were thickness and loading noise. In particular, the loading noise caused by flow separation and the fountain/ground plane effect was predicted with calculations made using WOPWOP, a noise prediction program developed by NASA Langley. The methodology was to model the geometry and aerodynamics of the XV-15 rotor blades in hover and steady level flight and then create corresponding FORTRAN subroutines which were used as input for WOPWOP. The models are described, the simplifying assumptions made in creating them are evaluated, and the results of the computations are presented. The computations lead to the following conclusions: The fountain/ground plane effect is an important source of aerodynamic noise for the XV-15 in hover. Unsteady flow separation from the airfoil passing through the fountain at high angles of attack significantly affects the predicted sound spectra and may be an important noise mechanism for the XV-15 in hover mode. The various models developed did not predict the sound spectra in helicopter forward flight; the experimental spectra indicate the presence of blade vortex interactions which were not modeled in these calculations. There is a need for further study and development of more accurate aerodynamic models, including unsteady stall in hover and blade vortex interactions in forward flight.
Models of Sector Flows Under Local, Regional and Airport Weather Constraints
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
2017-01-01
Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called a Collaborative Trajectory Options Program (CTOP). The FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas or FCAs) in the system, and it allows flight operators to indicate their preferences for routing and delay options. CTOPs also permit better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs in the airspace has been hampered by many factors, including challenges in how to identify constrained areas and how to set rates for the FCAs. Decision support tools providing assistance would be particularly helpful for effective use of CTOPs. Such DSTs would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions in the presence of multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is a lack of sufficient historical data that captures diverse situations involving combinations of multiple constraints, especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors rather than individual sectors in order to increase the amount of data used for creating the model by an order of magnitude. Secondly, we decompose the problem so that the amount of data needed is reduced. This involves creating a baseline demand model plus a separate weather constrained flow reduction model and then composing these into a single integrated model. A nominal demand model is a flow model (gdem) in the presence of clear local weather. This defines the flow as a function of weather constraints in neighboring regions, airport constraints, and weather in locations that can cause re-routes to the location of interest. A weather constrained flow reduction model (fwx-red) is a model of reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than that with a single model, the amount of data needed is reduced. Finally, a composite model that combines these two can be represented as fwx-red(gdem(e), l), where e represents non-local constraints and l represents local weather. The approaches studied for developing these models are divided into three categories: (1) point estimation models, (2) empirical models, and (3) theoretical models. Errors in the predictions of these different types of models have been estimated. In situations where there is abundant data, point estimation models tend to be very accurate. In contrast, empirical models do better than theoretical models when there is some data available. The biggest benefit of theoretical models is their general applicability to a wider range of situations once their degree of accuracy has been established.
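To illustrate the decomposition fwx-red(gdem(e), l), the sketch below composes a baseline clear-weather demand model with a local-weather flow-reduction factor. The functional forms, coefficients, and variable names are illustrative assumptions, not the models fit in the study.

```python
# Composite sector-flow sketch: baseline demand under clear local weather,
# reduced by a factor that grows with local convective weather coverage.
def gdem(neighbor_weather_severity, airport_rate, reroute_pressure,
         clear_weather_capacity=18.0):
    """Baseline 15-minute sector flow with clear local weather (illustrative)."""
    demand = clear_weather_capacity
    demand -= 4.0 * neighbor_weather_severity      # constraints in adjacent regions
    demand -= 0.05 * max(0.0, 40.0 - airport_rate) # constrained downstream airport
    demand += 2.0 * reroute_pressure               # traffic displaced into this sector
    return max(demand, 0.0)

def fwx_red(baseline_flow, local_weather_coverage):
    """Reduce the baseline flow as local weather coverage increases (illustrative)."""
    return baseline_flow * max(0.0, 1.0 - 1.2 * local_weather_coverage)

flow = fwx_red(gdem(neighbor_weather_severity=0.3, airport_rate=32,
                    reroute_pressure=0.5), local_weather_coverage=0.25)
print(f"predicted sector flow: {flow:.1f} aircraft / 15 min")
```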
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
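The two skill scores named above are computed from a 2x2 contingency table of forecast versus observed contrail occurrence. The counts in the sketch below are made up for illustration.

```python
# Percent correct (PC) and Hanssen-Kuipers discriminant (HKD) from a 2x2 table.
def percent_correct(hits, false_alarms, misses, correct_negatives):
    n = hits + false_alarms + misses + correct_negatives
    return (hits + correct_negatives) / n

def hanssen_kuipers(hits, false_alarms, misses, correct_negatives):
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

a, b, c, d = 420, 60, 55, 4465   # hypothetical counts
print(f"PC  = {percent_correct(a, b, c, d):.3f}")
print(f"HKD = {hanssen_kuipers(a, b, c, d):.3f}")
```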
NASA Astrophysics Data System (ADS)
Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.
2011-07-01
This paper presents the results from a collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50-year-old reinforced concrete footbridge has been converted to a full-scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading, and an FE model of the undamaged bridge was created and updated under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading, and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude-dependent characteristics. This creates a difficult updating process, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.
A Method to Represent Heterogeneous Materials for Rapid Prototyping: The Matryoshka Approach
Lei, Shuangyan; Frank, Matthew C.; Anderson, Donald D.; Brown, Thomas D.
2015-01-01
Purpose The purpose of this paper is to present a new method for representing heterogeneous materials using nested STL shells, based, in particular, on the density distributions of human bones. Design/methodology/approach Nested STL shells, called Matryoshka models, are described, based on their namesake Russian nesting dolls. In this approach, polygonal models, such as STL shells, are “stacked” inside one another to represent different material regions. The Matryoshka model addresses the challenge of representing different densities and different types of bone when reverse engineering from medical images. The Matryoshka model is generated via an iterative process of thresholding the Hounsfield Unit (HU) data using computed tomography (CT), thereby delineating regions of progressively increasing bone density. These nested shells can represent regions starting with the medullary (bone marrow) canal, up through and including the outer surface of the bone. Findings The Matryoshka approach introduced can be used to generate accurate models of heterogeneous materials in an automated fashion, avoiding the challenge of hand-creating an assembly model for input to multi-material additive or subtractive manufacturing. Originality/Value This paper presents a new method for describing heterogeneous materials: in this case, the density distribution in a human bone. The authors show how the Matryoshka model can be used to plan harvesting locations for creating custom rapid allograft bone implants from donor bone. An implementation of a proposed harvesting method is demonstrated, followed by a case study using subtractive rapid prototyping to harvest a bone implant from a human tibia surrogate. PMID:26120277
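A hedged sketch of the nested-shell idea: threshold a CT volume at increasing Hounsfield Unit levels and mesh each region, assuming scikit-image is available. The volume is synthetic and the HU thresholds are illustrative, not the authors' values or pipeline.

```python
# Generate nested iso-surfaces from a (synthetic) CT volume at rising HU levels.
import numpy as np
from skimage import measure

# Synthetic "CT" volume with HU increasing toward the center of a sphere,
# standing in for a real CT array loaded from DICOM.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
ct_volume = np.clip(1500 - 30 * r, -1000, None).astype(float)

hu_levels = [150, 400, 800, 1200]   # illustrative thresholds for density regions
shells = []
for level in hu_levels:
    # Each iso-surface bounds the region at or above this HU threshold; the
    # Matryoshka approach exports each such region as its own STL shell.
    verts, faces, normals, values = measure.marching_cubes(ct_volume, level)
    shells.append((level, verts, faces))
print([(level, len(verts)) for level, verts, _ in shells])
```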
Risk terrain modeling predicts child maltreatment.
Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye
2016-12-01
As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Shibata, Eisuke; Takao, Hidemasa; Amemiya, Shiori; Ohtomo, Kuni
2017-08-01
The objective of this study is to verify the accuracy of 3D-printed hollow models of visceral aneurysms created from CT angiography (CTA) data, by evaluating the sizes and shapes of aneurysms and related arteries. From March 2006 to August 2015, 19 true visceral aneurysms were embolized via interventional radiologic treatment provided by the radiology department at our institution; aneurysms with bleeding (n = 3) or without thin-slice (< 1 mm) preembolization CT data (n = 1) were excluded. A total of 15 consecutive true visceral aneurysms from 11 patients (eight women and three men; mean age, 61 years; range, 53-72 years) whose aneurysms were embolized via endovascular procedures were included in this study. Three-dimensional-printed hollow models of aneurysms and related arteries were fabricated from CTA data. The accuracies of the sizes and shapes of the 3D-printed hollow models were evaluated using the nonparametric Wilcoxon signed rank test and the Dice coefficient index. Aneurysm sizes ranged from 138 to 18,691 mm3 (diameter, 6.1-35.7 mm), and no statistically significant difference was noted between patient data and 3D-printed models (p = 0.56). Shape analysis of whole aneurysms and related arteries indicated a high level of accuracy (Dice coefficient index value, 84.2-95.8%; mean [± SD], 91.1 ± 4.1%). The sizes and shapes of 3D-printed hollow visceral aneurysm models created from CTA data were accurate. These models can be used for simulations of endovascular treatment and for obtaining precise anatomic information.
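The Dice coefficient used for the shape analysis is the overlap measure 2|A∩B|/(|A|+|B|) between two segmentations. The sketch below shows its computation on placeholder binary volumes, not the study's registered CTA and model scans.

```python
# Dice coefficient between two binary segmentations (placeholder arrays).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

seg_patient = np.zeros((64, 64, 64), dtype=bool)
seg_patient[20:40, 20:40, 20:40] = True
seg_model = np.zeros_like(seg_patient)
seg_model[21:41, 20:40, 20:40] = True          # slightly shifted printed-model mask
print(f"Dice = {100 * dice(seg_patient, seg_model):.1f}%")
```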
NASA Astrophysics Data System (ADS)
Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun
2017-06-01
Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is created in three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated prediction uncertainty is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure the prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood season, and annual runoff volumes. The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP can not only quantify the uncertainty of prediction based on a confidence interval but also provide a more accurate single-value prediction than the initial SVR forecasting result. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.
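A hedged sketch of the SVR stage with an NSE check, using scikit-learn on synthetic data; predictors, hyper-parameters, and the split are placeholders, and the genetic-algorithm tuning and Bayesian uncertainty processor of the paper are not reproduced.

```python
# SVR runoff sketch with a Nash-Sutcliffe efficiency evaluation.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(240, 4))                                   # monthly meteorological predictors
y = 50 + 10 * X[:, 0] - 5 * X[:, 2] + rng.normal(0, 3, 240)     # synthetic runoff volume

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X[:200], y[:200])
sim, obs = model.predict(X[200:]), y[200:]

nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"Nash-Sutcliffe efficiency = {nse:.3f}")
```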
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Accurate, current inventory for each select agent (including viral genetic elements, recombinant nucleic... individual or entity must implement a system to ensure that all records and data bases created under this...
Code of Federal Regulations, 2013 CFR
2013-10-01
... and/or synthetic nucleic acids, and organisms containing recombinant and/or synthetic nucleic acids... that all records and data bases created under this part are accurate, have controlled access, and that...
Code of Federal Regulations, 2014 CFR
2014-10-01
... and/or synthetic nucleic acids, and organisms containing recombinant and/or synthetic nucleic acids... that all records and data bases created under this part are accurate, have controlled access, and that...
Code of Federal Regulations, 2011 CFR
2011-01-01
... granted access approval by the Administrator or the HHS Secretary; (4) Information about all entries into... records and databases created under this part are accurate, have controlled access, and that their...
Modeling of surface dust concentrations using neural networks and kriging
NASA Astrophysics Data System (ADS)
Buevich, Alexander G.; Medvedev, Alexander N.; Sergeev, Alexander P.; Tarasov, Dmitry A.; Shichkin, Andrey V.; Sergeeva, Marina V.; Atanasova, T. B.
2016-12-01
Creating models which are able to accurately predict the distribution of pollutants based on a limited set of input data is an important task in environmental studies. In this paper, two neural network approaches (multilayer perceptron (MLP) and generalized regression neural network (GRNN)) and two geostatistical approaches (kriging and cokriging) are used for modeling and forecasting of dust concentrations in snow cover. The area of study is under the influence of dust emissions from a copper quarry and several industrial companies. The approaches are compared. Three indices are used as indicators of the models' accuracy: the mean absolute error (MAE), root mean square error (RMSE), and relative root mean square error (RRMSE). Models based on artificial neural networks (ANN) have shown better accuracy. Considering all indices, the most precise model was the GRNN, which uses the coordinates of the sampling points and the distance to the probable emissions source as input parameters. The results of this work confirm that a trained ANN may be a more suitable tool for modeling dust concentrations in snow cover.
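The three accuracy indices can be computed directly from observed and predicted concentrations, as in the sketch below; the normalization of RRMSE by the observed mean is a common convention and assumed here, and the values are placeholders.

```python
# MAE, RMSE, and RRMSE between observed and predicted concentrations.
import numpy as np

def mae(obs, pred):   return np.mean(np.abs(obs - pred))
def rmse(obs, pred):  return np.sqrt(np.mean((obs - pred) ** 2))
def rrmse(obs, pred): return rmse(obs, pred) / np.mean(obs)

obs = np.array([120.0, 95.0, 210.0, 60.0, 180.0])    # dust in snow, placeholder units
pred = np.array([110.0, 90.0, 230.0, 75.0, 170.0])
print(mae(obs, pred), rmse(obs, pred), rrmse(obs, pred))
```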
Madanat, Rami; Moritz, Niko; Aro, Hannu T
2007-01-01
Physical phantom models have conventionally been used to determine the accuracy and precision of radiostereometric analysis (RSA) in various orthopaedic applications. Using a phantom model of a fracture of the distal radius it has previously been shown that RSA is a highly accurate and precise method for measuring both translation and rotation in three dimensions (3-D). The main shortcoming of a physical phantom model is its inability to mimic complex 3-D motion. The goal of this study was to create a realistic computer model for preoperative planning of RSA studies and to test the accuracy of RSA in measuring complex movements in fractures of the distal radius using this new model. The 3-D computer model was created from a set of tomographic scans. The simulation of the radiographic imaging was performed using ray-tracing software (POV-Ray). RSA measurements were performed according to standard protocol. Using a two-part fracture model (AO/ASIF type A2), it was found that for simple movements in one axis, translations in the range of 25 µm to 2 mm could be measured with an accuracy of ±2 µm. Rotations ranging from 16° to 2° could be measured with an accuracy of ±0.015°. Using a three-part fracture model the corresponding values of accuracy were found to be ±4 µm and ±0.031° for translation and rotation, respectively. For complex 3-D motion in a three-part fracture model (AO/ASIF type C1) the accuracy was ±6 µm for translation and ±0.120° for rotation. The use of 3-D computer modelling can provide a method for preoperative planning of RSA studies in complex fractures of the distal radius and in other clinical situations in which the RSA method is applicable.
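RSA recovers rigid-body translation and rotation of a marker set between examinations. As a minimal illustration of that step, the sketch below performs a standard least-squares rigid-body fit (Kabsch/SVD) on synthetic marker coordinates; it is not the authors' RSA software or POV-Ray pipeline.

```python
# Least-squares rigid-body fit of marker set P onto marker set Q.
import numpy as np

def rigid_fit(P, Q):
    """Return rotation R and translation t that best map points P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # proper rotation
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

markers_before = np.random.rand(6, 3) * 20.0                 # marker positions (mm)
theta = np.deg2rad(1.0)                                        # ~1 degree rotation
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
markers_after = markers_before @ true_R.T + np.array([0.025, 0.0, 0.0])  # 25 um shift
R, t = rigid_fit(markers_before, markers_after)
print("recovered translation (mm):", np.round(t, 4))
```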
NASA Astrophysics Data System (ADS)
Crosby, S. C.; O'Reilly, W. C.; Guza, R. T.
2016-02-01
Accurate, unbiased, high-resolution (in space and time) nearshore wave predictions are needed to drive models of beach erosion, coastal flooding, and alongshore transport of sediment, biota, and pollutants. On highly sheltered shorelines, wave predictions are sensitive to the directions of onshore propagating waves, and nearshore model prediction error is often dominated by uncertainty in offshore boundary conditions. Offshore islands and shoals, and coastline curvature, create complex sheltering patterns over the 250 km span of the southern California (SC) shoreline. Here, regional wave model skill in SC was compared for different offshore boundary conditions created using offshore buoy observations and global wave model hindcasts (National Oceanic and Atmospheric Administration Wave Watch 3, WW3). Spectral ray-tracing methods were used to transform incident offshore swell (0.04-0.09 Hz) energy at high directional resolution (1 deg). Model skill is assessed for predictions (wave height, direction, and alongshore radiation stress) at 16 nearshore buoy sites between 2000 and 2009. Model skill using buoy-derived boundary conditions is higher than with WW3-derived boundary conditions. Buoy-driven nearshore model results are similar with various assumptions about the true offshore directional distribution (maximum entropy, Bayesian direct, and 2nd derivative smoothness). Two methods combining offshore buoy observations with WW3 predictions in the offshore boundary condition did not improve nearshore skill above buoy-only methods. A case example at Oceanside harbor shows strong sensitivity of alongshore sediment transport predictions to different offshore boundary conditions. Despite this uncertainty in alongshore transport magnitude, alongshore gradients in transport (e.g. the location of model accretion and erosion zones) are determined by the local bathymetry and are similar for all predictions.
Kihara, Takuya; Yoshimi, Yuki; Taji, Tsuyoshi; Murayama, Takeshi; Tanimoto, Kotaro; Nikawa, Hiroki
2016-08-01
For orthodontic treatment, it is important to assess the dental morphology, as well as the position and inclination of teeth. The aim of this article was to develop an efficient and accurate method for the three-dimensional (3D) imaging of the maxillary and mandibular dental morphology by measuring interocclusal records using an optical scanner. The occlusal and incisal morphology of participants was registered in the intercuspal position using a hydrophilic vinyl polysiloxane and digitized into 3D models using an optical scanner. Impressions of the maxilla and mandible were made in alginate materials in order to fabricate plaster models, which were then digitized into 3D models using the optical scanner based on the principal triangulation method. The occlusal and incisal areas of the interocclusal records were retained, and the buccal and lingual areas were supplied entirely from the 3D model of the plaster model. The accuracy of this method was evaluated for each tooth, with the dental cast 3D models used as controls. The 3D model created from the interocclusal record and the plaster model of the dental morphology was analysed in 3D software. The difference between the controls and the 3D models digitized from the interocclusal records was 0.068 ± 0.048 mm, demonstrating the accuracy of this method. The presence of severe crowding may compromise the ability to separate each tooth and digitize the dental morphology. The digitization method in this study provides sufficient accuracy to visualize the dental morphology, as well as the position and inclination of the teeth. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Uncertainty Modeling for Structural Control Analysis and Synthesis
NASA Technical Reports Server (NTRS)
Campbell, Mark E.; Crawley, Edward F.
1996-01-01
The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldi, Giulio Francesco; Bozza, Valerio, E-mail: giuliofrancesco.aldi@sa.infn.it, E-mail: valboz@sa.infn.it
The shapes of relativistic iron lines observed in spectra of candidate black holes carry the signatures of the strong gravitational fields in which the accretion disks lie. These lines result from the sum of the contributions of all images of the disk created by gravitational lensing, with the direct and first-order images largely dominating the overall shapes. Higher order images created by photons tightly winding around the black holes are often neglected in the modeling of these lines, since they require a substantially higher computational effort. With the help of the strong deflection limit, we present the most accurate semi-analytical calculation of these higher order contributions to the iron lines for Schwarzschild black holes. We show that two regimes exist depending on the inclination of the disk with respect to the line of sight. Many useful analytical formulae can also be derived in this framework.
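For reference, the strong deflection limit invoked above approximates the bending angle of photons passing close to the photon sphere by a logarithmic divergence; for a Schwarzschild black hole it takes the standard form quoted in the strong-deflection-limit literature (not derived in this abstract):

```latex
\alpha(\theta) \simeq -\bar{a}\,\ln\!\left(\frac{\theta}{\theta_\infty}-1\right)+\bar{b},
\qquad \bar{a}=1,\quad
\bar{b}=\ln\!\bigl[216\,(7-4\sqrt{3})\bigr]-\pi\approx-0.40,
```

where θ is the angular position of the image and θ∞ the angular radius of the photon-sphere shadow; higher-order images correspond to photons deflected by more than 2πn before escaping to the observer.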
Testing for Polytomies in Phylogenetic Species Trees Using Quartet Frequencies.
Sayyari, Erfan; Mirarab, Siavash
2018-02-28
Phylogenetic species trees typically represent the speciation history as a bifurcating tree. Speciation events that simultaneously create more than two descendants, thereby creating polytomies in the phylogeny, are possible. Moreover, the inability to resolve relationships is often shown as a (soft) polytomy. Both types of polytomies have been traditionally studied in the context of gene tree reconstruction from sequence data. However, polytomies in the species tree cannot be detected or ruled out without considering gene tree discordance. In this paper, we describe a statistical test based on properties of the multi-species coalescent model to test the null hypothesis that a branch in an estimated species tree should be replaced by a polytomy. On both simulated and biological datasets, we show that the null hypothesis is rejected for all but the shortest branches, and in most cases, it is retained for true polytomies. The test, available as part of the Accurate Species TRee ALgorithm (ASTRAL) package, can help systematists decide whether their datasets are sufficient to resolve specific relationships of interest.
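A minimal sketch of the idea behind such a test, not the ASTRAL implementation itself: under the multi-species coalescent a true polytomy makes the three possible resolutions around a branch equally frequent among gene trees, so observed quartet counts can be compared against a uniform 1/3 expectation with a chi-squared goodness-of-fit test.

```python
from scipy.stats import chisquare

def polytomy_test(quartet_counts, alpha=0.05):
    """quartet_counts: gene-tree counts for the three possible quartet
    resolutions around one species-tree branch."""
    stat, p_value = chisquare(quartet_counts)   # expected frequencies default to equal (1/3 each)
    reject_polytomy = p_value < alpha           # rejection supports a resolved branch
    return stat, p_value, reject_polytomy

# Example: 620 gene trees, heavily favouring one resolution
print(polytomy_test([400, 115, 105]))
```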
Shao, Weixiang; Adams, Clive E; Cohen, Aaron M; Davis, John M; McDonagh, Marian S; Thakurta, Sujata; Yu, Philip S; Smalheiser, Neil R
2015-03-01
It is important to identify separate publications that report outcomes from the same underlying clinical trial, in order to avoid over-counting these as independent pieces of evidence. We created positive and negative training sets (comprised of pairs of articles reporting on the same condition and intervention) that were, or were not, linked to the same clinicaltrials.gov trial registry number. Features were extracted from MEDLINE and PubMed metadata; pairwise similarity scores were modeled using logistic regression. Article pairs from the same trial were identified with high accuracy (F1 score=0.843). We also created a clustering tool, Aggregator, that takes as input a PubMed user query for RCTs on a given topic, and returns article clusters predicted to arise from the same clinical trial. Although painstaking examination of full-text may be needed to be conclusive, metadata are surprisingly accurate in predicting when two articles derive from the same underlying clinical trial. Copyright © 2014 Elsevier Inc. All rights reserved.
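A hedged sketch of the modeling step described above; the feature names are illustrative placeholders, not the authors' actual MEDLINE/PubMed feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one pair of articles, e.g. [author overlap, title cosine
# similarity, |sample-size difference|, shared intervention terms]; label 1 = same trial.
X_train = np.array([[0.9, 0.8, 0.0, 1.0],
                    [0.1, 0.2, 0.7, 0.0],
                    [0.7, 0.9, 0.1, 1.0],
                    [0.0, 0.1, 0.9, 0.0]])
y_train = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X_train, y_train)
X_new = np.array([[0.8, 0.7, 0.05, 1.0]])
print(clf.predict_proba(X_new)[:, 1])   # probability that the pair reports the same trial
```

Pairs whose predicted probability exceeds a chosen threshold can then be grouped into clusters of publications assumed to arise from the same trial.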
Photogrammetry for Archaeology: Collecting Pieces Together
NASA Astrophysics Data System (ADS)
Chibunichev, A. G.; Knyaz, V. A.; Zhuravlev, D. V.; Kurkov, V. M.
2018-05-01
The complexity of retrieving and understanding archaeological data requires the application of different techniques, tools and sensors for information gathering, processing and documenting. Archaeological research now has an interdisciplinary nature, involving technologies based on different physical principles for retrieving information about archaeological findings. An important part of archaeological data is visual and spatial information, which allows the reconstruction of the appearance of the findings and the relations between them. Photogrammetry has great potential for the accurate acquisition of spatial and visual data at different scales and resolutions, allowing the creation of archaeological documents of a new type and quality. The aim of the presented study is to develop an approach for creating new forms of archaeological documents, along with a pipeline for producing them and collecting them in one holistic model describing an archaeological site. A set of techniques is developed for acquiring and integrating spatial and visual data at different levels of detail. The application of the developed techniques is demonstrated by documenting the Bosporus archaeological expedition of the Russian State Historical Museum.
Narayanan, Ranjit; Karuthedath Vellarikkal, Shamsudheen; Jayarajan, Rijith; Verma, Ankit; Dixit, Vishal; Scaria, Vinod; Sivasubbu, Sridhar
2017-01-01
Syndromes of mineralocorticoid excess (SME) are closely related clinical manifestations occurring within a specific set of diseases. Overlapping clinical manifestations of such syndromes often create a dilemma in accurate diagnosis, which is crucial for disease surveillance and management especially in rare genetic disorders. Here we demonstrate the use of whole exome sequencing (WES) for accurate diagnosis of rare SME and report that p.R337C variation in the HSD11B2 gene causes progressive apparent mineralocorticoid excess (AME) syndrome in a South Indian family of Mappila origin. PMID:29067160
Modeling pesticide fate in a small tidal estuary
McCarthy, A.M.; Bales, J.D.; Cope, W.G.; Shea, D.
2007-01-01
The exposure analysis modeling system (EXAMS), a pesticide fate model developed by the U.S. Environmental Protection Agency, was modified to model the fate of the herbicides atrazine and metolachlor in a small tidally dominated estuary (Bath Creek) in North Carolina, USA, where freshwater inflow accounts for only 3% of the total flow. The modifications simulated the changes that occur during the tidal cycle in the estuary, scenarios that are not possible with the original EXAMS model. Two models were created within EXAMS, a steady-state model and a time-variant tidally driven model. The steady-state model accounted for tidal flushing by simply altering freshwater input to yield an estuary residence time equal to that measured in Bath Creek. The tidal EXAMS model explicitly incorporated tidal flushing by modifying the EXAMS code to allow for temporal changes in estuary physical attributes (e.g., volume). The models were validated with empirical measurements of atrazine and metolachlor concentrations in the estuary shortly after herbicide application in nearby fields and immediately following a rain event. Both models provided excellent agreement with measured concentrations. The steady-state EXAMS model accurately predicted atrazine concentrations in the middle of the estuary over the first 3 days and under-predicted metolachlor by a factor of 2-3. The time-variant, tidally driven EXAMS model accurately predicted the rise and plateau of both herbicides over the 6-day measurement period. We have demonstrated the ability of these modified EXAMS models to be useful in predicting pesticide fate and exposure in small tidal estuaries. This is a significant improvement and expansion of the application of EXAMS, and given the wide use of EXAMS for surface water quality modeling by both researchers and regulators and the ability of EXAMS to interface with terrestrial models (e.g., pesticide root zone model) and bioaccumulation models, we now have an easily-accessible and widely accepted means of modeling chemical fate in estuaries. © 2006 Elsevier B.V. All rights reserved.
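As a toy illustration of the steady-state flushing idea, not the EXAMS code itself: matching the measured residence time with an equivalent freshwater inflow makes the concentration of a pulse input decay roughly exponentially on that time scale (all numbers below are hypothetical).

```python
import numpy as np

def flushing_concentration(c0, volume_m3, flow_m3_per_day, t_days):
    """First-order flushing: residence time tau = V / Q, C(t) = C0 * exp(-t / tau)."""
    tau = volume_m3 / flow_m3_per_day            # estuary residence time in days
    return c0 * np.exp(-np.asarray(t_days, dtype=float) / tau)

# Hypothetical 1 ug/L atrazine pulse in a 2e6 m3 estuary flushed at 4e5 m3/day
print(flushing_concentration(1.0, 2e6, 4e5, [0, 1, 3, 6]))
```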
Fabrication of a 3D Printing Definitive Obturator Prosthesis: a Clinical Report.
Tasopoulos, Theodoros; Kouveliotis, Georgios; Polyzois, Grigoris; Karathanasi, Vasiliki
2017-03-01
Digital technologies related to imaging and manufacturing provide the clinician with a wide variety of treatment options. Stereolithography (SLA) offers a simple and predictable way to achieve an accurate reconstruction of congenital or acquired defects. A 65-year-old cancer patient with non-keratinized squamous cell carcinoma of the left maxillary sinus came for a prosthetic clinical evaluation. A bilateral maxillectomy was performed and the treatment plan included a definitive obturator prosthesis for the upper arch. CT data and 3D planning software were used to create a 3D-printed plastic model of the defect. A wax pattern of the hollow bulb was fabricated and cured with a heat-cured silicone soft liner. A final impression was obtained with the hollow bulb placed intraorally. The master cast was duplicated and the new cast was invested and reflasked. The flasks were opened, the wax was boiled out and some space was created in the internal part of the obturator. Transparent heat-cured acrylic resin was sandwiched at the inner part of the bulb, improving the retention between the acrylic denture base and the silicone-based soft lining material. The patient was then placed on a 6-month recall. The five-year follow-up consists of chairside relining of the definitive removable prosthesis, when needed. Maxillofacial surgery patients may develop postoperative complications such as trismus and pain. In these cases, the combination of digital technology and conventional techniques provides an accurate prosthetic restoration.
Vahdani, Soheil; Ramos, Hector
2017-01-01
Background Three-dimensional (3D) printing is a relatively new technology with clinical applications, enabling us to rapidly create an accurate prototype of a selected anatomic region and making it possible to plan complex surgery and pre-bend hardware for individual surgical cases. This study aimed to describe our experience with the use of a medical rapid prototype (MRP) of the maxillofacial region created by a desktop 3D printer and its application in maxillofacial reconstructive surgeries. Material and Methods Three patients with benign mandible tumors were included in this study after obtaining informed consent. All patients' maxillofacial CT scan data were processed with segmentation and isolation software, and the mandible MRP was printed using our desktop 3D printer. These models were used for preoperative surgical planning and pre-bending of the reconstruction plate. Conclusions An MRP created by a desktop 3D printer is a cost-efficient, quickly and easily produced appliance for the planning of reconstructive surgery. It can contribute to patient orientation, helping patients better understand their condition and the proposed surgical treatment. It helps surgeons with preoperative planning in resection or reconstruction cases and represents an excellent tool in the academic setting for resident training. The pre-bent reconstruction plate based on the MRP resulted in decreased surgery time, cost and anesthesia risks for the patients. Key words: 3D printing, medical modeling, rapid prototype, mandibular reconstruction, ameloblastoma. PMID:29075412
Kinect Fusion improvement using depth camera calibration
NASA Astrophysics Data System (ADS)
Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.
2014-06-01
3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development, driven by the growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow the device to be used as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation in order to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion libraries are adjusted and new reconstruction software is created to produce more accurate models.
Matta, Ragai-Edward; von Wilmowsky, Cornelius; Neuhuber, Winfried; Lell, Michael; Neukam, Friedrich W; Adler, Werner; Wichmann, Manfred; Bergauer, Bastian
2016-05-01
Multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT) are indispensable imaging techniques in advanced medicine. The possibility of creating virtual and corporal three-dimensional (3D) models enables detailed planning in craniofacial and oral surgery. The objective of this study was to evaluate the impact of different scan protocols for CBCT and MSCT on virtual 3D model accuracy using a software-based evaluation method that excludes human measurement errors. MSCT and CBCT scans with different manufacturers' predefined scan protocols were obtained from a human lower jaw and were superimposed with a master model generated by an optical scan of an industrial noncontact scanner. To determine the accuracy, the mean and standard deviations were calculated, and t-tests were used for comparisons between the different settings. Averaged over 10 repeated X-ray scans per method and 19 measurement points per scan (n = 190), it was found that the MSCT scan protocol 140 kV delivered the most accurate virtual 3D model, with a mean deviation of 0.106 mm compared to the master model. Only the CBCT scans with 0.2-voxel resolution delivered a similar accurate 3D model (mean deviation 0.119 mm). Within the limitations of this study, it was demonstrated that the accuracy of a 3D model of the lower jaw depends on the protocol used for MSCT and CBCT scans. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
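A minimal sketch of one common way to quantify such deviations; the study used dedicated 3D analysis software, so this nearest-point comparison on synthetic point clouds is only an illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(test_points, master_points):
    """Mean and standard deviation of nearest-point distances from a test
    3D model to the optical-scan master model (both given as Nx3 arrays)."""
    tree = cKDTree(master_points)
    dists, _ = tree.query(test_points)       # closest master point for every test point
    return dists.mean(), dists.std()

# Hypothetical point clouds standing in for the CT-derived and master models
master = np.random.rand(5000, 3)
test = master + np.random.normal(scale=0.1, size=master.shape)
print(surface_deviation(test, master))
```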
Ahmed, Mahmoud; Eslamian, Morteza
2015-12-01
Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on the Nu number and the convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces, which create a relative drift or slip velocity between the particles and the base fluid, are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous because they use incorrect thermophoresis models or simply ignore the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogeneous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and the Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs at an inclination angle which varies with the Ra number.
STEAM: a software tool based on empirical analysis for micro electro mechanical systems
NASA Astrophysics Data System (ADS)
Devasia, Archana; Pasupuleti, Ajay; Sahin, Ferat
2006-03-01
In this research a generalized software framework that enables accurate computer-aided design of MEMS devices is developed. The proposed simulation engine utilizes a novel material property estimation technique that generates effective material properties at the microscopic level. The material property models were developed based on empirical analysis and the behavior extraction of standard test structures. A literature review is provided on the physical phenomena that govern the mechanical behavior of thin-film materials. This survey indicates that present-day models operate under a wide range of assumptions that may not be applicable to the micro-world. Thus, this methodology is foreseen to be an essential tool for MEMS designers, as it develops empirical models that relate the loading parameters, material properties, and geometry of the microstructures to their performance characteristics. This process involves learning the relationship between the above parameters using non-parametric learning algorithms such as radial basis function networks and genetic algorithms. The proposed simulation engine has a graphical user interface (GUI) which is very adaptable, flexible, and transparent. The GUI is able to encompass all parameters associated with the determination of the desired material property so as to create models that provide an accurate estimate of that property. This technique was verified by fabricating and simulating bilayer cantilevers consisting of aluminum and glass (TEOS oxide) in our previous work. The results obtained were found to be very encouraging.
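A hedged sketch of the non-parametric learning step mentioned above, not the STEAM implementation: a radial basis function network maps loading parameters and geometry to an effective material property by fitting linear output weights over Gaussian basis functions centred on training points (all data below are synthetic).

```python
import numpy as np

def rbf_fit(X, y, centers, width):
    """Fit the output weights of a Gaussian RBF network by linear least squares."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-d2 / (2 * width ** 2))           # basis-function activations
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2)) @ w

# Synthetic stand-in: inputs = (normalized beam length, thickness), target = effective modulus
X = np.random.rand(50, 2)
y = 70 + 5 * X[:, 0] - 3 * X[:, 1] ** 2
w = rbf_fit(X, y, centers=X[:20], width=0.3)
print(rbf_predict(X[:3], X[:20], 0.3, w))
```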
Vaquerizo, Beatriz; Theriault-Lauzier, Pascal; Piazza, Nicolo
2015-12-01
Mitral regurgitation is the most prevalent valvular heart disease worldwide. Despite the widespread availability of curative surgical intervention, a considerable proportion of patients with severe mitral regurgitation are not referred for treatment, largely due to the presence of left ventricular dysfunction, advanced age, and comorbid illnesses. Transcatheter mitral valve replacement is a promising therapeutic alternative to traditional surgical valve replacement. The complex anatomical and pathophysiological nature of the mitral valvular complex, however, presents significant challenges to the successful design and implementation of novel transcatheter mitral replacement devices. Patient-specific 3-dimensional computer-based models enable accurate assessment of the mitral valve anatomy and preprocedural simulations for transcatheter therapies. Such information may help refine the design features of novel transcatheter mitral devices and enhance procedural planning. Herein, we describe a novel medical image-based processing tool that facilitates accurate, noninvasive assessment of the mitral valvular complex, by creating precise three-dimensional heart models. The 3-dimensional computer reconstructions are then converted to a physical model using 3-dimensional printing technology, thereby enabling patient-specific assessment of the interaction between device and patient. It may provide new opportunities for a better understanding of the mitral anatomy-pathophysiology-device interaction, which is of critical importance for the advancement of transcatheter mitral valve replacement. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
2016-01-01
Drug discovery programs frequently target members of the human kinome and try to identify small-molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications being increasingly investigated. One of the challenges is controlling the inhibitors' degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, results reproducibility, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand–target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and an R0² of 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence on prediction quality of parameters such as the number of measurements, Murcko scaffold frequency or inhibitor type was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the quality of the models allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile. PMID:27482722
An advanced approach for computer modeling and prototyping of the human tooth.
Chang, Kuang-Hua; Magdum, Sheetalkumar; Khera, Satish C; Goel, Vijay K
2003-05-01
This paper presents a systematic and practical method for constructing accurate computer and physical models that can be employed for the study of human tooth mechanics. The proposed method starts with a histological section preparation of a human tooth. Through tracing outlines of the tooth on the sections, discrete points are obtained and are employed to construct B-spline curves that represent the exterior contours and dentino-enamel junction (DEJ) of the tooth using a least-squares curve fitting technique. The surface skinning technique is then employed to quilt the B-spline curves to create a smooth boundary and DEJ of the tooth using B-spline surfaces. These surfaces are respectively imported into SolidWorks via its application programming interface to create solid models. The solid models are then imported into Pro/MECHANICA Structure for finite element analysis (FEA). The major advantage of the proposed method is that it first generates smooth solid models, instead of finite element models in discretized form. As a result, a more advanced p-FEA can be employed for structural analysis, which usually provides superior results to traditional h-FEA. In addition, the solid model constructed is smooth and can be fabricated at various scales using solid freeform fabrication technology. This method is especially useful in supporting bioengineering applications, where the shape of the object is usually complicated. A human maxillary second molar is presented to illustrate and demonstrate the proposed method. Note that both the solid and p-FEA models of the molar are presented. However, comparison between p- and h-FEA models is outside the scope of the paper.
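A small sketch of the outline-fitting step, not the authors' code: SciPy can fit a smoothing parametric B-spline to traced contour points, mirroring the least-squares curve fitting described above (the contour here is a synthetic ellipse).

```python
import numpy as np
from scipy import interpolate

# Hypothetical traced outline points from one histological section (closed contour)
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 5 * np.cos(theta) + np.random.normal(scale=0.05, size=theta.size)
y = 3 * np.sin(theta) + np.random.normal(scale=0.05, size=theta.size)

# Smoothing periodic B-spline fitted through the discrete points
tck, u = interpolate.splprep([x, y], s=0.5, per=True)
xs, ys = interpolate.splev(np.linspace(0, 1, 200), tck)   # dense, smooth contour
print(xs[:3], ys[:3])
```

The fitted curves from successive sections would then be skinned into B-spline surfaces in a CAD package, as the abstract describes.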
Using multi-class queuing network to solve performance models of e-business sites.
Zheng, Xiao-ying; Chen, De-ren
2004-01-01
Due to e-business's variety of customers with different navigational patterns and demands, a multi-class queuing network is a natural performance model for it. Open multi-class queuing network (QN) models are based on the assumption that no service center is saturated as a result of the combined loads of all the classes. Several formulas are used to calculate performance measures, including throughput, residence time, queue length, response time and the average number of requests. The solution technique for closed multi-class QN models is an approximate mean value analysis (MVA) algorithm based on three key equations, because the exact algorithm has enormous time and space requirements. Since mixed multi-class QN models include both open and closed classes, the open classes should be eliminated to create a closed multi-class QN so that the closed-model algorithm can be applied. Corresponding examples are given to show how to apply the algorithms mentioned in this article. These examples indicate that a multi-class QN is a reasonably accurate model of e-business and can be solved efficiently.
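A compact sketch of one widely used approximate MVA scheme, the Bard-Schweitzer approximation; the article does not list its exact equations, so this is only an assumption about the flavour of algorithm meant. Arrival-instant queue lengths are approximated from the previous iterate, avoiding the exact algorithm's combinatorial recursion.

```python
import numpy as np

def approx_mva(D, N, tol=1e-6, max_iter=10000):
    """Bard-Schweitzer approximate MVA for a closed multi-class queuing network.
    D[i, r]: service demand of class r at queueing center i; N[r]: population of class r.
    Returns per-center residence times and class throughputs."""
    K, R = D.shape
    Q = np.tile(N / K, (K, 1))                   # initial queue-length guess
    for _ in range(max_iter):
        # Arrival theorem with the Schweitzer approximation: a class-r arrival
        # sees (N_r - 1)/N_r of its own queue plus the full queues of the other classes.
        A = Q.sum(axis=1, keepdims=True) - Q / np.maximum(N, 1.0)
        Rt = D * (1.0 + A)                       # residence time per center and class
        X = N / Rt.sum(axis=0)                   # class throughputs (Little's law)
        Q_new = Rt * X
        if np.abs(Q_new - Q).max() < tol:
            return Rt, X
        Q = Q_new
    return Rt, X

# Two service centers (CPU, disk) and two customer classes
D = np.array([[0.02, 0.05],
              [0.04, 0.03]])
N = np.array([10.0, 5.0])
Rt, X = approx_mva(D, N)
print("throughputs:", X, "response times:", Rt.sum(axis=0))
```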
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Kelly, Steven; Maini, Philip K
2013-01-01
The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.
Content Is King: Databases Preserve the Collective Information of Science.
Yates, John R
2018-04-01
Databases store sequence information experimentally gathered to create resources that further science. In the last 20 years databases have become critical components of fields like proteomics where they provide the basis for large-scale and high-throughput proteomic informatics. Amos Bairoch, winner of the Association of Biomolecular Resource Facilities Frederick Sanger Award, has created some of the important databases proteomic research depends upon for accurate interpretation of data.
No easy answers: issues in accounting for hospital acquisitions.
Bernstein, K R; Alexander, D E
1986-07-01
A real and often overlooked concern created by the Deficit Reduction Act of 1984 is the effect these amendments have on the cash flow generated by an acquisition. Financial managers, therefore, should be aware of the accounting issues created by the act--depreciation and its effect on appraised values, reevaluation to the present value of assumed indebtedness, and capitalization of acquisition costs--to accurately evaluate the feasibility of an acquisition.
Age determination of soft tissue hematomas.
Neumayer, Bernhard; Hassler, Eva; Petrovic, Andreas; Widek, Thomas; Ogris, Kathrin; Scheurer, Eva
2014-11-01
In clinical forensic medicine, the estimation of the age of injuries such as externally visible subcutaneous hematomas is important for the reconstruction of violent events, particularly to include or exclude potential suspects. Since the estimation of the time of origin based on external inspection is unreliable, the aim of this study was to use contrast in MRI to develop an easy-to-use model for hematoma age estimation. In a longitudinal study, artificially created subcutaneous hematomas were repetitively imaged using MRI over a period of two weeks. The hemorrhages were created by injecting autologous blood into the subcutaneous tissue of the thigh in 20 healthy volunteers. For MRI, standard commercially available sequences, namely proton-density-weighted, T2-weighted and inversion recovery sequences, were used. The hematomas' MRI data were analyzed regarding their contrast behavior using the most suitable sequences to derive a model allowing an objective estimation of the age of soft tissue hematomas. The Michelson contrast between hematoma and muscle in the proton-density-weighted sequence showed an exponentially decreasing behavior with a dynamic range of 0.6 and a maximum standard deviation of 0.1. The contrast of the inversion recovery sequences showed increasing characteristics and was hypointense for TI = 200 ms and hyperintense for TI = 1000 ms. These sequences were used to create a contrast model. The cross-validation of the model finally yielded limits of agreement for hematoma age determination (corresponding to ±1.96 SD) of ±38.7 h during the first three days and ±54 h for the entire investigation period. The developed model provides lookup tables which allow the estimation of a hematoma's age from a single contrast measurement by a radiologist or a forensic physician. This is a first step towards an accurate and objective dating method for subcutaneous hematomas, which will be particularly useful in cases of suspected child abuse. Copyright © 2014 John Wiley & Sons, Ltd.
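A hedged sketch of the two quantitative building blocks named above, with hypothetical signal and contrast values: the Michelson contrast between hematoma and muscle, and an exponential decay fitted to its time course.

```python
import numpy as np
from scipy.optimize import curve_fit

def michelson_contrast(s_hematoma, s_muscle):
    """Michelson contrast (S1 - S2) / (S1 + S2) between the two tissues."""
    return (s_hematoma - s_muscle) / (s_hematoma + s_muscle)

def decay(t, c0, tau, offset):
    return c0 * np.exp(-t / tau) + offset

# Hypothetical PD-weighted contrast values measured over two weeks (time in hours)
t = np.array([6.0, 24.0, 48.0, 96.0, 168.0, 336.0])
c = np.array([0.62, 0.55, 0.45, 0.30, 0.15, 0.05])
params, _ = curve_fit(decay, t, c, p0=(0.6, 100.0, 0.0))
print("fitted c0, tau [h], offset:", params)
# Inverting the fitted curve at a single measured contrast yields an age estimate with limits of agreement.
```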
Gomes, Anna; van der Wijk, Lars; Proost, Johannes H; Sinha, Bhanu; Touw, Daan J
2017-01-01
Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid potential underdosing of gentamicin in endocarditis patients.
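A rough one-compartment sketch of how a population model of this kind turns into a serum-level prediction; only the central parameter values quoted above are used, the Bayesian fitting and covariate handling of the published models are omitted, and the dosing scenario is hypothetical.

```python
import numpy as np

def predict_level(dose_mg, t_h, crcl_L_per_h, lbm_kg, weight_kg=70.0):
    """Concentration (mg/L) after an IV bolus under a one-compartment model."""
    cl = 0.277 * (weight_kg / 70.0) + 0.698 * crcl_L_per_h   # metabolic + renal clearance [L/h]
    vd = 0.312 * lbm_kg                                       # volume of distribution [L]
    k = cl / vd                                               # elimination rate constant [1/h]
    return dose_mg / vd * np.exp(-k * np.asarray(t_h, dtype=float))

def mdpe_mdape(predicted, observed):
    """Median prediction error and median absolute prediction error (percent)."""
    pe = 100.0 * (np.asarray(predicted) - np.asarray(observed)) / np.asarray(observed)
    return np.median(pe), np.median(np.abs(pe))

print(predict_level(320, [1, 8, 24], crcl_L_per_h=5.0, lbm_kg=60.0))
```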
Selective 4D modelling framework for spatial-temporal land information management system
NASA Astrophysics Data System (ADS)
Doulamis, Anastasios; Soile, Sofia; Doulamis, Nikolaos; Chrisouli, Christina; Grammalidis, Nikos; Dimitropoulos, Kosmas; Manesis, Charalambos; Potsiou, Chryssy; Ioannidis, Charalabos
2015-06-01
This paper introduces a predictive (selective) 4D modelling framework in which only the spatial 3D differences are modelled at forthcoming time instances, while regions with no significant spatial-temporal alterations remain intact. To accomplish this, spatial-temporal analysis is first applied between 3D digital models captured at different time instances, creating dynamic change history maps. Change history maps indicate the spatial probability of regions needing further 3D modelling at forthcoming instances. Thus, change history maps support a predictive assessment, that is, they localize surfaces within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 4D Land Information Management System (LIMS) is implemented using open interoperable standards based on the CityGML framework. CityGML allows the description of the semantic metadata information and the rights of the land resources. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 4D LIMS digital parcels and the respective semantic information. The open source 3DCityDB, incorporating a PostgreSQL geo-database, is used to manage and manipulate the 3D data and their semantics. The framework is applied to detect change through time in a 3D block of plots in an urban area of Athens, Greece. Starting with an accurate 3D model of the buildings in 1983, a change history map is created using automated dense image matching on aerial photos from 2010. For both time instances meshes are created, and through their comparison the changes are detected.
NASA Technical Reports Server (NTRS)
Chronis, Themis; Case, Jonathan L.; Papadopoulos, Anastasios; Anagnostou, Emmanouil N.; Mecikalski, John R.; Haines, Stephanie L.
2008-01-01
Forecasting atmospheric and oceanic circulations accurately over the Eastern Mediterranean has proved to be an exceptional challenge. The existence of fine-scale topographic variability (land/sea coverage) and seasonal dynamics variations can create strong spatial gradients in temperature, wind and other state variables, which numerical models may have difficulty capturing. The Hellenic Center for Marine Research (HCMR) is one of the main operational centers for wave forecasting in the eastern Mediterranean. Currently, HCMR's operational numerical weather/ocean prediction model is based on the coupled Eta/Princeton Ocean Model (POM). Since 1999, HCMR has also operated the POSEIDON floating buoys as a means of state-of-the-art, real-time observations of several oceanic and surface atmospheric variables. This study presents a first assessment of improving both atmospheric and oceanic prediction by initializing a regional Numerical Weather Prediction (NWP) model with high-resolution sea surface temperatures (SST) from remotely sensed platforms in order to capture the small-scale characteristics.
Transport and radiative impacts of atmospheric pollen using online, observation-based emissions
NASA Astrophysics Data System (ADS)
Wozniak, M. C.; Steiner, A. L.; Solmon, F.; Li, Y.
2015-12-01
Atmospheric pollen emitted from trees and grasses exhibits both high temporal variability and a highly localized spatial distribution that has been difficult to quantify in the atmosphere. Pollen's radiative impact is also not quantified because it is neglected in climate modeling studies. Here we couple an online, meteorologically driven pollen emissions model, guided by observations of airborne pollen, to understand the role of pollen in the atmosphere. We use existing pollen counts from 2003-2008 across the continental U.S. in conjunction with a tree database and historical meteorological data to create an observation-based phenological model that produces accurately scaled and timed emissions. The emitted pollen is transported within the regional climate model (RegCM4) and the direct radiative effect is calculated. Additionally, we simulate the rupture of coarse pollen grains into finer particles by adding a second size mode for pollen emissions, which contributes to the shortwave radiative forcing and also has an indirect effect on climate.
NASA Astrophysics Data System (ADS)
Babb, Grace
2017-11-01
This work aims to produce a higher fidelity model of the blades for NASA's X-57 all-electric propeller-driven experimental aircraft. This model will, in turn, allow for more accurate calculations of the thrust each propeller can generate. This work uses computational fluid dynamics (CFD) to first analyze the propeller blades as a series of 11 differently shaped airfoils and calculate, among other things, the coefficients of lift and drag associated with each airfoil at different angles of attack. OpenFOAM, a C++ library that can be used to create a series of applications for pre-processing, solving, and post-processing, is one of the primary tools utilized in these calculations. By comparing the data OpenFOAM generates for the NACA 23012 airfoil with existing experimental data for the NACA 23012 airfoil, the reliability of our model is measured and verified. A trustworthy model can then be used to generate more data to send to NASA to aid in the design of the actual aircraft.
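For context, the non-dimensional section coefficients extracted from such CFD runs are conventionally defined as follows (standard definitions, not specific to this work):

```latex
C_L = \frac{L}{\tfrac{1}{2}\rho V^{2} c}, \qquad
C_D = \frac{D}{\tfrac{1}{2}\rho V^{2} c},
```

where L and D are the lift and drag per unit span, ρ the air density, V the freestream speed and c the chord length; sweeping each of the 11 sections over a range of angles of attack builds up the propeller performance model.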
Spaeder, M C; Fackler, J C
2012-04-01
Respiratory syncytial virus (RSV) is the most common cause of documented viral respiratory infections, and the leading cause of hospitalization, in young children. We performed a retrospective time-series analysis of all patients aged <18 years with laboratory-confirmed RSV within a network of multiple affiliated academic medical institutions. Forecasting models of weekly RSV incidence for the local community, inpatient paediatric hospital and paediatric intensive-care unit (PICU) were created. Ninety-five percent confidence intervals calculated around our models' 2-week forecasts were accurate to ±9·3, ±7·5 and ±1·5 cases/week for the local community, inpatient hospital and PICU, respectively. Our results suggest that time-series models may be useful tools in forecasting the burden of RSV infection at the local and institutional levels, helping communities and institutions to optimize distribution of resources based on the changing burden and severity of illness in their respective communities.
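A hedged sketch of this kind of forecast; the paper does not state which time-series family it used, so the seasonal ARIMA below, fitted to synthetic weekly counts, is only illustrative of producing a 2-week-ahead forecast with a 95% confidence interval.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic weekly RSV case counts for one institution (three seasons)
rng = np.random.default_rng(0)
weeks = pd.date_range("2009-07-05", periods=156, freq="W")
seasonal_mean = 20 * np.clip(np.sin(2 * np.pi * (np.arange(156) - 20) / 52), 0, None)
cases = pd.Series(rng.poisson(seasonal_mean + 1), index=weeks)

model = SARIMAX(cases, order=(1, 0, 1), seasonal_order=(1, 0, 0, 52)).fit(disp=False)
forecast = model.get_forecast(steps=2)          # 2-week-ahead forecast
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))            # 95% confidence interval
```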
A SPECT system simulator built on the SolidWorks TM 3D-Design package.
Li, Xin; Furenlid, Lars R
2014-08-17
We have developed a GPU-accelerated SPECT system simulator that integrates into instrument-design workflow [1]. This simulator includes a gamma-ray tracing module that can rapidly propagate gamma-ray photons through arbitrary apertures modeled by SolidWorksTM-created stereolithography (.STL) representations with a full complement of physics cross sections [2, 3]. This software also contains a scintillation detector simulation module that can model a scintillation detector with arbitrary scintillation crystal shape and light-sensor arrangement. The gamma-ray tracing module enables us to efficiently model aperture and detector crystals in SolidWorksTM and save them in STL file format, then load the STL-format model into this module to generate list-mode results of interacted gamma-ray photon information (interaction positions and energies) inside the detector crystals. The Monte-Carlo scintillation detector simulation module enables us to simulate how scintillation photons get reflected, refracted and absorbed inside a scintillation detector, which contributes to more accurate simulation of a SPECT system.
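As a toy illustration of one core step in such a ray tracer, not the simulator's actual code: the distance a gamma-ray photon travels before interacting is sampled from an exponential distribution governed by the material's linear attenuation coefficient (the coefficient below is a hypothetical placeholder).

```python
import numpy as np

def sample_interaction_depth(mu_per_cm, n_photons, rng=None):
    """Free path lengths (cm) before interaction, drawn from p(d) = mu * exp(-mu * d)."""
    rng = np.random.default_rng() if rng is None else rng
    return -np.log(rng.random(n_photons)) / mu_per_cm

# Hypothetical attenuation coefficient for ~140 keV photons in a scintillation crystal
depths = sample_interaction_depth(mu_per_cm=2.2, n_photons=100000)
print(depths.mean())     # should be close to 1 / mu, about 0.45 cm
```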
Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective
Mench-Bressan, Nadja; McGregor, Carolyn; Pugh, James Edward
2015-01-01
The effective use of data within intensive care units (ICUs) has great potential to create new cloud-based health analytics solutions for disease prevention or earlier condition onset detection. The Artemis project aims to achieve the above goals in the area of neonatal ICUs (NICU). In this paper, we proposed an analytical model for the Artemis cloud project which will be deployed at McMaster Children’s Hospital in Hamilton. We collect not only physiological data but also the infusion pumps data that are attached to NICU beds. Using the proposed analytical model, we predict the amount of storage, memory, and computation power required for the system. Capacity planning and tradeoff analysis would be more accurate and systematic by applying the proposed analytical model in this paper. Numerical results are obtained using real inputs acquired from McMaster Children’s Hospital and a pilot deployment of the system at The Hospital for Sick Children (SickKids) in Toronto. PMID:27170907
Random forest models to predict aqueous solubility.
Palmer, David S; O'Boyle, Noel M; Glen, Robert C; Mitchell, John B O
2007-01-01
Random Forest regression (RF), Partial-Least-Squares (PLS) regression, Support Vector Machines (SVM), and Artificial Neural Networks (ANN) were used to develop QSPR models for the prediction of aqueous solubility, based on experimental data for 988 organic molecules. The Random Forest regression model predicted aqueous solubility more accurately than those created by PLS, SVM, and ANN and offered methods for automatic descriptor selection, an assessment of descriptor importance, and an in-parallel measure of predictive ability, all of which serve to recommend its use. The prediction of log molar solubility for an external test set of 330 molecules that are solid at 25 degrees C gave an r2 = 0.89 and RMSE = 0.69 log S units. For a standard data set selected from the literature, the model performed well with respect to other documented methods. Finally, the diversity of the training and test sets is compared to the chemical space occupied by molecules in the MDL drug data report, on the basis of molecular descriptors selected by the regression analysis.
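A minimal scikit-learn sketch of this kind of RF/QSPR workflow; the descriptors and solubility values below are synthetic placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for molecular descriptors and log molar solubility
rng = np.random.default_rng(1)
X = rng.normal(size=(988, 20))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=988)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "r2:", r2_score(y_te, pred))
print("OOB score:", rf.oob_score_)                         # in-parallel estimate of predictive ability
print("top descriptors:", np.argsort(rf.feature_importances_)[::-1][:5])
```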
Kolb, Joseph J
2015-09-01
America's multiethnic composition can create havoc in answering emergency calls and translating patient information on scene. It is incumbent upon EMS services to have a translation strategy and protocol in place to mitigate delays in providing emergency care. While digital translation programs may be of assistance, exercise caution in ensuring information is accurately downloaded to obtain an accurate translation.
NASA Astrophysics Data System (ADS)
Riboust, Philippe; Thirel, Guillaume; Le Moine, Nicolas; Ribstein, Pierre
2016-04-01
A better knowledge of the snow accumulated on watersheds will help flood forecasting centres and hydro-power companies predict the amount of water released during spring snowmelt. Since precipitation gauges are sparse at high elevations and integrative measurements of the snow accumulated on a watershed surface are hard to obtain, using snow models is an adequate way to estimate snow water equivalent (SWE) on watersheds. In addition to short-term prediction, simulating SWE accurately with snow models should have many advantages. Validating the snow module on both SWE and snowmelt should give a more reliable model for climate change studies or for regionalization to ungauged watersheds. The aim of this study is to create a new snow module with a structure that allows the use of measured snow data for calibration or assimilation. Energy balance modelling seems to be the logical choice for designing a model in which internal variables, such as SWE, can be compared to observations. Physical models are complex, needing high computational resources and many different types of inputs that are not widely measured at meteorological stations. By contrast, simple conceptual degree-day models simulate snowmelt using only temperature and precipitation as inputs, with fast computation. Their major drawback is that they are empirical, i.e. they do not take into account all of the processes of the energy balance, which makes this kind of model more difficult to use when one wishes to compare SWE to observed measurements. In order to reach our objectives, we created a snow model structured around a simplified energy balance in which each of the processes is empirically parameterized so that it can be calculated using only temperature, precipitation and cloud cover variables. This model's structure is similar to the one created by M.T. Walter (2005), where parameterizations from the literature were used to compute all of the processes of the energy balance. The conductive fluxes into the snowpack were modelled using analytical solutions to the heat equation that take phase change into account. This approach has the advantage of using few forcing variables and of taking into account all the processes of the energy balance. Indeed, the simulations should be quick enough to allow, for example, ensemble prediction or the simulation of numerous basins, more easily than physical snow models. The snow module formulation has been completed and is in its validation phase using data from the experimental station of Col de Porte, Alps, France. Data from the US SNOTEL product will be used to test the model structure on a larger scale and to test diverse calibration procedures, since the aim is to use the module at the basin scale for discharge modelling purposes.
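A toy sketch of the degree-day baseline that the abstract contrasts with its simplified energy-balance approach; the melt factor and threshold temperature below are illustrative values only.

```python
import numpy as np

def degree_day_swe(precip_mm, temp_c, ddf=4.0, t_thresh=0.0):
    """Track SWE with a degree-day model: snow accumulates when T <= threshold and
    melts at ddf [mm per degC per day] when T > threshold."""
    swe, history = 0.0, []
    for p, t in zip(precip_mm, temp_c):
        if t <= t_thresh:
            swe += p                                        # precipitation falls as snow
        melt = min(swe, ddf * max(t - t_thresh, 0.0))       # potential melt, capped by storage
        swe -= melt
        history.append(swe)
    return np.array(history)

# Six hypothetical days of precipitation (mm) and mean air temperature (degC)
print(degree_day_swe([10, 5, 0, 0, 2, 0], [-3, -1, 1, 4, -2, 6]))
```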
Certification of CFD heat transfer software for turbine blade analysis
NASA Technical Reports Server (NTRS)
Jordan, William A.
2004-01-01
Accurate modeling of heat transfer effects is a critical component of the work of the Turbine Branch of the Turbomachinery and Propulsion Systems Division. Being able to adequately predict and model heat flux, coolant flows, and peak temperatures is necessary for the analysis of high pressure turbine blades. To that end, the primary goal of my internship this summer will be to certify the reliability of the CFD program GlennHT for the purpose of turbine blade heat transfer analysis. GlennHT is currently in use by the engineers in the Turbine Branch, who use the FORTRAN 77 version of the code for analysis. The program, however, has been updated to a FORTRAN 90 version which is more robust than the older code. In order for the new code to be distributed for use, its reliability must first be certified. Over the course of my internship I will create and run test cases using the FORTRAN 90 version of GlennHT and compare the results to older cases which are known to be accurate. If the results of the new code match those of the sample cases then the newer version will be one step closer to certification for distribution. In order to complete these tasks it will first be necessary to become familiar with operating a number of other programs. Among them are GridPro, which is used to create a grid mesh around a blade geometry, and FieldView, whose purpose is to graphically display the results from the GlennHT program. Once enough familiarity is established with these programs to render them useful, the work of creating and running test scenarios will begin. The work is additionally complicated by a transition in computer hardware. Most of the working computers in the Turbine Branch are Silicon Graphics machines, which will soon be replaced by Linux PCs. My project is one of the first to make use of the new PCs. The change in system architecture, however, has created several software-related issues which have greatly increased the time and effort required by the project. Although complications with the project continue to arise, it is expected that the goal of my internship can still be achieved within the remaining time period. Critical steps have been achieved and test scenarios can now be designed and run. At the completion of my internship, the FORTRAN 90 version of GlennHT should be well on its way to certification.
Dutch population specific sex estimation formulae using the proximal femur.
Colman, K L; Janssen, M C L; Stull, K E; van Rijn, R R; Oostra, R J; de Boer, H H; van der Merwe, A E
2018-05-01
Sex estimation techniques are frequently applied in forensic anthropological analyses of unidentified human skeletal remains. While morphological sex estimation methods are able to endure population differences, the classification accuracy of metric sex estimation methods is population-specific. No metric sex estimation method currently exists for the Dutch population. The purpose of this study is to create Dutch population-specific sex estimation formulae by means of osteometric analyses of the proximal femur. Since the Netherlands lacks a representative contemporary skeletal reference population, 2D plane reconstructions derived from clinical computed tomography (CT) data were used as an alternative source for a representative reference sample. The first part of this study assesses the intra- and inter-observer error, or reliability, of twelve measurements of the proximal femur. The technical error of measurement (TEM) and relative TEM (%TEM) were calculated using 26 dry adult femora. In addition, the agreement, or accuracy, between the dry bone and CT-based measurements was determined by percent agreement. Only reliable and accurate measurements were retained for the logistic regression sex estimation formulae; a training set (n=86) was used to create the models while an independent testing set (n=28) was used to validate them. Due to high levels of multicollinearity, only single-variable models were created. Cross-validated classification accuracies ranged from 86% to 92%. The high cross-validated classification accuracies indicate that the developed formulae can contribute to the biological profile, and specifically to sex estimation, of unidentified human skeletal remains in the Netherlands. Furthermore, the results indicate that clinical CT data can be a valuable alternative source of data when representative skeletal collections are unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
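A small sketch of the observer-error statistics used to screen the measurements, with the formulas as commonly defined for two observers and hypothetical data.

```python
import numpy as np

def tem(obs1, obs2):
    """Technical error of measurement for paired observations (same units as the data)."""
    d = np.asarray(obs1, dtype=float) - np.asarray(obs2, dtype=float)
    return np.sqrt((d ** 2).sum() / (2 * len(d)))

def relative_tem(obs1, obs2):
    """%TEM: TEM expressed as a percentage of the grand mean."""
    grand_mean = np.mean(np.concatenate([np.asarray(obs1), np.asarray(obs2)]))
    return 100.0 * tem(obs1, obs2) / grand_mean

# Hypothetical repeated femoral head diameter measurements (mm) on six femora
first = np.array([44.1, 46.3, 41.8, 48.0, 43.5, 45.2])
second = np.array([44.3, 46.1, 42.0, 47.7, 43.6, 45.0])
print(tem(first, second), relative_tem(first, second))
```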
NASA Astrophysics Data System (ADS)
Paolini, P.; Forti, G.; Catalani, G.; Lucchetti, S.; Menghini, A.; Mirandola, A.; Pistacchio, S.; Porzia, U.; Roberti, M.
2016-04-01
High-quality survey models, realized by multiple low-cost methods and technologies and serving as a container for sharing cultural and archival heritage: this is the aim guiding our research, here described in its primary applications. The SAPIENZA building, a sixteenth-century masterpiece that was the first unified headquarters of the University in Rome, has served since 1936, when the University moved to its newly built campus, as the main venue for the State Archives. With the collaboration of a group of students from the Architecture Faculty, several integrated survey methods were successfully applied to the monument. The work began with a topographic survey, creating a reference on the ground and along the monument for the subsequent applications; a GNSS RTK survey followed, georeferencing points in the internal courtyard. Dense stereo matching photogrammetry is nowadays an accepted method for generating 3D survey models that are accurate and scalable; it often substitutes for 3D laser scanning because of its low cost, so it became our choice. Some 360° shots were planned for creating panoramic views of the double portico from the courtyard, plus additional single shots of some lateral spans and of the pillars facing the court, as a single operation with a double aim: to create linked panotours with hotspots to web-linked databases, and 3D textured and georeferenced surface models, allowing the study of the harmonic proportions of the classical architectural order. Free web GIS platforms were also used to load the work into Google Earth, and low-cost 3D prototypes of some representative parts were produced.
Integration of multiple theories for the simulation of laser interference lithography processes
NASA Astrophysics Data System (ADS)
Lin, Te-Hsun; Yang, Yin-Kuang; Fu, Chien-Chung
2017-11-01
Laser interference lithography (LIL) is superior to other lithography technologies for fabricating periodic structures. In contrast to traditional lithography, LIL has the advantages of being a simple optical system with no mask requirements, low cost, high depth of focus, and a large patterning area in a single exposure. Generally, a simulation pattern for the periodic structure is obtained through optical interference prior to its fabrication through LIL. However, the LIL process is complex and combines the fields of optics and polymer materials; thus, a single simulation theory cannot reflect the real situation. Therefore, this research integrates multiple theories, including those of optical interference, standing waves, and photoresist characteristics, to create a mathematical model for the LIL process. The mathematical model can accurately estimate the exposure time and reduce the time spent on trial and error in the LIL process.
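The paper's full model couples optical interference, standing waves, and photoresist response; none of its equations are reproduced here. The sketch below shows only the textbook two-beam interference term and a simple dose-threshold estimate of exposure time, with parameter values that are illustrative assumptions rather than values from the paper.

```python
# Illustrative sketch (assumed parameters, not the paper's model):
# two-beam interference intensity pattern and a dose-threshold
# estimate of the exposure time needed to clear the photoresist.
import numpy as np

wavelength = 325e-9        # laser wavelength (m), assumed
theta = np.radians(20.0)   # half-angle between the two beams, assumed
I0 = 5.0                   # irradiance of each beam (mW/cm^2), assumed
clearing_dose = 60.0       # photoresist clearing dose (mJ/cm^2), assumed

# Grating period of the two-beam fringe pattern: Lambda = lambda / (2 sin theta)
period = wavelength / (2 * np.sin(theta))

# Equal-intensity two-beam interference: I(x) = 2*I0*(1 + cos(2*pi*x/Lambda))
x = np.linspace(0, 2 * period, 400)
intensity = 2 * I0 * (1 + np.cos(2 * np.pi * x / period))

# Exposure time so the intensity maxima just reach the clearing dose
# (dose = irradiance * time; mJ/cm^2 = mW/cm^2 * s).
t_exposure = clearing_dose / intensity.max()
print(f"Fringe period: {period * 1e9:.0f} nm, exposure time ~ {t_exposure:.1f} s")
```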
A Tamarisk Habitat Suitability Map for the Continental US
NASA Technical Reports Server (NTRS)
Morisette, Jeffrey T.; Jernevich, Catherine S.; Ullah, Asad; Cai, Weijie; Pedelty, Jeffrey A.; Gentle, Jim; Stohlgren, Thomas J.; Schnase, John L.
2005-01-01
This paper presents a national-scale map of habitat suitability for a high-priority invasive species, tamarisk (Tamarix spp., salt cedar). We successfully integrate satellite data and tens of thousands of field sampling points through logistic regression modeling to create a habitat suitability map that is 90% accurate. This interagency effort uses field data collected and coordinated through the US Geological Survey and nation-wide environmental data layers derived from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS). We demonstrate the use of the map by ranking the lower 48 US states (and the District of Columbia) based upon their absolute, as well as proportional, areas of highly likely and moderately likely habitat for tamarisk. The interagency effort and modeling approach presented here could be applied to map other harmful species in the US and globally.
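The study's field points and MODIS-derived layers are not reproduced here; the following is a minimal synthetic sketch of the general approach the abstract describes: a logistic regression fitted to presence/absence points with environmental predictors, its probabilities binned into suitability classes, and regions ranked by the extent of highly likely habitat. All data, thresholds, and region names are hypothetical.

```python
# Synthetic sketch (not the study's data): logistic-regression habitat
# suitability followed by a ranking of regions by "highly likely" habitat.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical environmental predictors sampled at field points (stand-ins
# for MODIS-derived layers), plus a region label per point.
env = pd.DataFrame({
    "lst": rng.normal(25, 5, n),        # land surface temperature (deg C)
    "ndvi": rng.uniform(0, 1, n),       # vegetation index
    "region": rng.choice(["A", "B", "C"], n),
})

# Simulated presence/absence labels with a known (noisy) dependence on the predictors.
logit = 0.4 * (env["lst"] - 27) - 3.0 * (env["ndvi"] - 0.5)
presence = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(env[["lst", "ndvi"]], presence)
env["p_suitable"] = model.predict_proba(env[["lst", "ndvi"]])[:, 1]

# Bin probabilities into suitability classes and rank regions by the count
# (a stand-in for area) of highly likely habitat, analogous to ranking states.
env["class"] = pd.cut(env["p_suitable"], [0, 0.5, 0.75, 1.0],
                      labels=["unlikely", "moderately likely", "highly likely"])
ranking = (env[env["class"] == "highly likely"]
           .groupby("region").size().sort_values(ascending=False))
print(ranking)
```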
From experimental imaging techniques to virtual embryology.
Weninger, Wolfgang J; Tassy, Olivier; Darras, Sébastien; Geyer, Stefan H; Thieffry, Denis
2004-01-01
Modern embryology increasingly relies on descriptive and functional three-dimensional (3D) and four-dimensional (4D) analysis of physically, optically, or virtually sectioned specimens. To cope with the technical requirements, new methods for highly detailed in vivo imaging, as well as for the generation of high-resolution digital volume data sets for the accurate visualisation of transgene activity and gene product presence in the context of embryo morphology, have recently been developed or are still under development. These methods profoundly change the scientific applicability, appearance, and style of modern embryo representations. In this paper, we present an overview of the emerging techniques to create, visualise, and administrate embryo representations (databases, digital data sets, 3-4D embryo reconstructions, models, etc.), and discuss the implications of these new methods for the work of modern embryologists, including research, teaching, the selection of specific model organisms, and potential collaborators.