Sample records for developing accurate models

  1. Development of an Anatomically Accurate Finite Element Human Ocular Globe Model for Blast-Related Fluid-Structure Interaction Studies

    DTIC Science & Technology

    2017-02-01

    ARL-TR-7945 ● FEB 2017. US Army Research Laboratory. Development of an Anatomically Accurate Finite Element Human Ocular Globe Model for Blast-Related Fluid-Structure Interaction Studies.

  2. An accurate model for predicting high frequency noise of nanoscale NMOS SOI transistors

    NASA Astrophysics Data System (ADS)

    Shen, Yanfei; Cui, Jie; Mohammadi, Saeed

    2017-05-01

    A nonlinear and scalable model suitable for predicting high frequency noise of N-type Metal Oxide Semiconductor (NMOS) transistors is presented. The model is developed for a commercial 45 nm CMOS SOI technology and its accuracy is validated through comparison with measured performance of a microwave low noise amplifier. The model employs the virtual source nonlinear core and adds parasitic elements to accurately simulate the RF behavior of multi-finger NMOS transistors up to 40 GHz. For the first time, the traditional long-channel thermal noise model is supplemented with an injection noise model to accurately represent the noise behavior of these short-channel transistors up to 26 GHz. The developed model is simple and easy to extract, yet very accurate.

  3. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    NASA Technical Reports Server (NTRS)

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  4. Studies of HZE particle interactions and transport for space radiation protection purposes

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Schimmerling, Walter; Wong, Mervyn

    1987-01-01

    The main emphasis is on developing general methods for accurately predicting high-energy heavy ion (HZE) particle interactions and transport for use by researchers in mission planning studies, in evaluating astronaut self-shielding factors, and in spacecraft shield design and optimization studies. The two research tasks are: (1) to develop computationally fast and accurate solutions to the Boltzmann (transport) equation; and (2) to develop accurate HZE interaction models, from fundamental physical considerations, for use as inputs into these transport codes. Accurate solutions to the HZE transport problem have been formulated through a combination of analytical and numerical techniques. In addition, theoretical models for the input interaction parameters are under development: stopping powers, nuclear absorption cross sections, and fragmentation parameters.
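The two tasks above pair interaction inputs with a transport solve; the simplest building block is straight-ahead exponential attenuation of uncollided primaries. A minimal sketch with assumed round numbers (the cross section and density are illustrative, not values from this work):

```python
import math

# Uncollided-flux attenuation through a shield: phi(x) = phi0 * exp(-x / lam),
# with mean free path lam = 1 / (n * sigma) set by the absorption cross section.
sigma = 1.2e-24          # nuclear absorption cross section, cm^2 (assumed)
n = 6.0e22               # target atom number density, cm^-3 (roughly aluminum)
lam = 1.0 / (n * sigma)  # mean free path, cm

depth = 5.0                          # shield thickness, cm
surviving = math.exp(-depth / lam)   # fraction of primaries not yet interacted
```

A real HZE transport code also tracks the fragments produced in each interaction, which is why fragmentation parameters appear among the required inputs.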

  5. Investigation of in-body path loss in different human subjects for localization of capsule endoscope.

    PubMed

    Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk

    2015-01-01

    Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, to within several millimeters, is a challenging task, mainly because radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. An accurate path-loss model is therefore required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the measured loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modeling in-body RF propagation, since direct measurements are largely infeasible in capsule endoscopy subjects.
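The log-distance form commonly used for such path-loss models can be fitted by ordinary least squares; a minimal sketch with synthetic numbers (the reference loss, exponent, and noise level are assumptions, not the paper's in-body values):

```python
import numpy as np

# Log-distance path-loss fit, PL(d) = PL0 + 10*n*log10(d/d0), by least squares.
rng = np.random.default_rng(0)
d0 = 1.0                                   # reference distance, cm
d = np.linspace(2.0, 12.0, 30)             # capsule-to-receiver distances, cm
true_pl0, true_n = 35.0, 4.2               # in-body exponents well exceed 2
pl = true_pl0 + 10.0 * true_n * np.log10(d / d0)
pl += rng.normal(0.0, 1.0, d.size)         # 1 dB shadowing noise

# Linear in the unknowns [PL0, n]: solve the overdetermined system.
A = np.column_stack([np.ones_like(d), 10.0 * np.log10(d / d0)])
pl0_hat, n_hat = np.linalg.lstsq(A, pl, rcond=None)[0]
```

The recovered exponent and reference loss then parameterize the distance estimate used by a localization algorithm.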

  6. Accurate Induction Energies for Small Organic Molecules. 2. Development and Testing of Distributed Polarizability Models against SAPT(DFT) Energies.

    PubMed

    Misquitta, Alston J; Stone, Anthony J; Price, Sarah L

    2008-01-01

    In part 1 of this two-part investigation we set out the theoretical basis for constructing accurate models of the induction energy of clusters of moderately sized organic molecules. In this paper we use these techniques to develop a variety of accurate distributed polarizability models for a set of representative molecules that include formamide, N-methyl propanamide, benzene, and 3-azabicyclo[3.3.1]nonane-2,4-dione. We have also explored damping, penetration, and basis set effects. In particular, we have provided a way to treat the damping of the induction expansion. Different approximations to the induction energy are evaluated against accurate SAPT(DFT) energies, and we demonstrate the accuracy of our induction models on the formamide-water dimer.

  7. Modeling cotton (Gossypium spp) leaves and canopy using computer aided geometric design (CAGD)

    USDA-ARS's Scientific Manuscript database

    The goal of this research is to develop a geometrically accurate model of cotton crop canopies for exploring changes in canopy microenvironment and physiological function with leaf structure. We develop an accurate representation of the leaves, including changes in three-dimensional folding and orie...

  8. Sexing California gulls using morphometrics and discriminant function analysis

    USGS Publications Warehouse

    Herring, Garth; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Takekawa, John Y.

    2010-01-01

    A discriminant function analysis (DFA) model was developed with DNA sex verification so that external morphology could be used to sex 203 adult California Gulls (Larus californicus) in San Francisco Bay (SFB). The best model was 97% accurate and included head-to-bill length, culmen depth at the gonys, and wing length. Using an iterative process, the model was simplified to a single measurement (head-to-bill length) that still assigned sex correctly 94% of the time. A previous California Gull sex determination model developed for a population in Wyoming was then assessed by fitting SFB California Gull measurement data to the Wyoming model; this new model failed to converge on the same measurements as those originally used by the Wyoming model. Results from the SFB discriminant function model were compared to the Wyoming model results (by using SFB data with the Wyoming model); the SFB model was 7% more accurate for SFB California gulls. The simplified DFA model (head-to-bill length only) provided highly accurate results (94%) and minimized the measurements and time required to accurately sex California Gulls.
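The simplified single-measurement model amounts to a threshold on head-to-bill length at the midpoint of the class means; a minimal sketch on invented morphometrics (not the San Francisco Bay data):

```python
import numpy as np

# Single-measurement discriminant in miniature: threshold head-to-bill length
# at the midpoint of the class means. Means/spreads are invented for
# illustration only.
rng = np.random.default_rng(1)
males = rng.normal(118.0, 2.0, size=100)     # head-to-bill length, mm (assumed)
females = rng.normal(108.0, 2.0, size=100)

threshold = (males.mean() + females.mean()) / 2.0
lengths = np.concatenate([males, females])
truth = np.concatenate([np.ones(100, bool), np.zeros(100, bool)])  # True = male
accuracy = float(((lengths > threshold) == truth).mean())
```

With well-separated class means, a single threshold already classifies most birds correctly, which is why the one-measurement model retains 94% accuracy.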

  9. Importance of Nuclear Physics to NASA's Space Missions

    NASA Technical Reports Server (NTRS)

    Tripathi, R. K.; Wilson, J. W.; Cucinotta, F. A.

    2001-01-01

    We show that nuclear physics is extremely important for accurate risk assessments for space missions. Given the paucity of experimental radiation-interaction data, it is imperative to develop reliable, accurate models for the interaction of radiation with matter. State-of-the-art nuclear cross section models have been developed at the NASA Langley Research Center and are discussed.

  10. Past, present and prospect of an Artificial Intelligence (AI) based model for sediment transport prediction

    NASA Astrophysics Data System (ADS)

    Afan, Haitham Abdulmohsin; El-shafie, Ahmed; Mohtar, Wan Hanna Melini Wan; Yaseen, Zaher Mundher

    2016-10-01

    An accurate model for sediment prediction is a priority for all hydrological researchers. Many conventional methods have shown an inability to achieve an accurate prediction of suspended sediment. These methods are unable to capture the behaviour of sediment transport in rivers due to the complexity, noise, non-stationarity, and dynamism of the sediment pattern. In the past two decades, Artificial Intelligence (AI) and computational approaches have become a remarkable tool for developing accurate models. These approaches are considered a powerful tool for solving non-linear models, as they can easily handle large amounts of data and sophisticated model structures. This paper is a review of the AI approaches that have been applied in sediment modelling. The current research focuses on the development of AI applications in sediment transport. In addition, the review identifies major challenges and opportunities for prospective research. Throughout the literature, complementary models have proven superior to classical modelling.

  11. Aircraft Dynamic Modeling in Turbulence

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin

    2012-01-01

    A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.
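Multisine excitations of the kind described sum harmonically related sinusoids with phases chosen to keep the peak factor low; the sketch below uses classic Schroeder phases as a stand-in for the optimized phases:

```python
import numpy as np

# Multisine excitation with Schroeder phases -- a classic low-peak-factor
# design, used here as a stand-in for the optimized phases in the paper.
fs, T = 50.0, 10.0                    # sample rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)       # 500 samples
f0 = 1.0 / T                          # base frequency, 0.1 Hz
harmonics = np.arange(1, 11)          # 10 harmonic lines
phases = -np.pi * harmonics * (harmonics - 1) / len(harmonics)

u = sum(np.cos(2 * np.pi * k * f0 * t + p) for k, p in zip(harmonics, phases))
u /= np.abs(u).max()                  # scale to unit amplitude

# Peak factor (crest factor): low values spread energy evenly over the record.
peak_factor = np.abs(u).max() / np.sqrt(np.mean(u ** 2))
```

A low peak factor lets the input excite many frequencies at once without large amplitude excursions, which matters when flying through turbulence.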

  12. Tools for Accurate and Efficient Analysis of Complex Evolutionary Mechanisms in Microbial Genomes. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakhleh, Luay

    I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects" on the other. I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic and biological data. (3) Software development. I proposed that the final outcome be a suite of software tools implementing the mathematical models and the algorithms developed.

  13. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    PubMed

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, east of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. The multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than one dB). The neuro-fuzzy model (RMSE = 0.53 dB, R² = 0.88) slightly improved the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than the regression technique. The developed models based on fuzzy approaches are useful prediction tools that give professionals the opportunity to make an optimal decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  14. Accurate Treatment of Collision and Water-Delivery in Models of Terrestrial Planet Formation

    NASA Astrophysics Data System (ADS)

    Haghighipour, N.; Maindl, T. I.; Schaefer, C. M.; Wandel, O.

    2017-08-01

    We have developed a comprehensive approach in simulating collisions and growth of embryos to terrestrial planets where we use a combination of SPH and N-body codes to model collisions and the transfer of water and chemical compounds accurately.

  15. Establishment of a high accuracy geoid correction model and geodata edge match

    NASA Astrophysics Data System (ADS)

    Xi, Ruifeng

    This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter-level regional geoids and centimeter-level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method, and a procedure for a large GPS network like the state-level HRAN project. The research also developed an efficient and accurate methodology for joining soil coverages in GIS ARC/INFO. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations achieving a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from a sub-decimeter 10–20 cm to 5–10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.

  16. Validation of High-Fidelity CFD/CAA Framework for Launch Vehicle Acoustic Environment Simulation against Scale Model Test Data

    NASA Technical Reports Server (NTRS)

    Liever, Peter A.; West, Jeffrey S.

    2016-01-01

    A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.

  17. A physical multifield model predicts the development of volume and structure in the human brain

    NASA Astrophysics Data System (ADS)

    Rooij, Rijk de; Kuhl, Ellen

    2018-03-01

    The prenatal development of the human brain is characterized by a rapid increase in brain volume and the development of a highly folded cortex. At the cellular level, these events are enabled by symmetric and asymmetric cell division in the ventricular regions of the brain followed by an outward cell migration towards the peripheral regions. The role of mechanics during brain development has been suggested and acknowledged in past decades, but remains insufficiently understood. Here we propose a mechanistic model that couples cell division, cell migration, and brain volume growth to accurately model the developing brain between weeks 10 and 29 of gestation. Our model accurately predicts a 160-fold volume increase from 1.5 cm³ at week 10 to 235 cm³ at week 29 of gestation. In agreement with human brain development, the cortex begins to form around week 22 and accounts for about 30% of the total brain volume at week 29. Our results show that cell division and coupling between cell density and volume growth are essential to accurately model brain volume development, whereas cell migration and diffusion contribute mainly to the development of the cortex. We demonstrate that complex folding patterns, including sinusoidal folds and creases, emerge naturally as the cortex develops, even for low stiffness contrasts between the cortex and subcortex.
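As a quick consistency check on the quoted volumes, assuming purely exponential growth between the two endpoints (an illustration, not the paper's coupled multifield model):

```python
import math

# V(t) = V0 * exp(k * (t - t0)) between the abstract's two data points.
V0, V1 = 1.5, 235.0               # cm^3 at weeks 10 and 29
weeks = 29 - 10
fold = V1 / V0                    # ~157-fold, consistent with the quoted ~160
k = math.log(fold) / weeks        # implied weekly growth rate
doubling_weeks = math.log(2) / k  # implied volume doubling time
```

The implied doubling time of roughly two and a half weeks conveys how rapid this growth phase is.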

  18. The dynamics of turbulent premixed flames: Mechanisms and models for turbulence-flame interaction

    NASA Astrophysics Data System (ADS)

    Steinberg, Adam M.

    The use of turbulent premixed combustion in engines has been garnering renewed interest due to its potential to reduce NOx emissions. However, there are many aspects of turbulence-flame interaction that must be better understood before such flames can be accurately modeled. The focus of this dissertation is to develop an improved understanding of the manner in which turbulence interacts with a premixed flame in the 'thin flamelet regime'. To do so, two new diagnostics were developed and employed in a turbulent slot Bunsen flame. These diagnostics, Cinema-Stereoscopic Particle Image Velocimetry and Orthogonal-Plane Cinema-Stereoscopic Particle Image Velocimetry, provided temporally resolved velocity and flame surface measurements in two and three dimensions with rates of up to 3 kHz and spatial resolutions as low as 280 μm. Using these measurements, the mechanisms with which turbulence generates flame surface area were studied. It was found that the previous concept that flame stretch is characterized by counter-rotating vortex pairs does not accurately describe real turbulence-flame interactions. Analysis of the experimental data showed that the straining of the flame surface is determined by coherent structures of fluid dynamic strain rate, while the wrinkling is caused by vortical structures. Furthermore, it was shown that the canonical vortex pair configuration is not an accurate reflection of the real interaction geometry. Hence, models developed based on this geometry are unlikely to be accurate. Previous models for the strain rate, curvature stretch rate, and turbulent burning velocity were evaluated. It was found that the previous models did not accurately predict the measured data for a variety of reasons: the assumed interaction geometries did not encompass enough possibilities to describe the possible effects of real turbulence, the turbulence was not properly characterized, and the transport of flame surface area was not always considered.
New models therefore were developed that accurately reflect real turbulence-flame interactions and agree with the measured data. These can be implemented in Large Eddy Simulations to provide improved modeling of turbulence-flame interaction.

  19. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.

  20. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, development of, and validation process for such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
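As a minimal example of the "statistical approach" to risk prediction, a logistic regression fitted by plain gradient descent on synthetic risk-factor data (an assumed setup, not the reviewed studies' data):

```python
import numpy as np

# Logistic risk model: P(event) = sigmoid(w.x + b), fitted by gradient descent.
rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 2))                        # two standardized risk factors
true_w, true_b = np.array([1.5, -2.0]), 0.3
p = 1.0 / (1.0 + np.exp(-(X @ true_w + true_b)))
y = rng.random(n) < p                              # simulated binary outcomes

w, b = np.zeros(2), 0.0
for _ in range(2000):                              # plain gradient descent
    z = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (z - y) / n)
    b -= 0.5 * (z - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = float((pred == y).mean())
```

An artificial neural network replaces the single sigmoid with stacked nonlinear layers, which is the flexibility the reviewed studies credit for its higher accuracy.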

  1. A Thermo-Poromechanics Finite Element Model for Predicting Arterial Tissue Fusion

    NASA Astrophysics Data System (ADS)

    Fankell, Douglas P.

    This work provides modeling efforts and supplemental experimental work performed towards the ultimate goal of modeling heat transfer, mass transfer, and deformation occurring in biological tissue, in particular during arterial fusion and cutting. Developing accurate models of these processes accomplishes two goals. First, accurate models would enable engineers to design devices to be safer and less expensive. Second, the mechanisms behind tissue fusion and cutting are widely unknown; models with the ability to accurately predict physical phenomena occurring in the tissue will allow for insight into the underlying mechanisms of the processes. This work presents three aims and the efforts in achieving them, leading to an accurate model of tissue fusion and, more broadly, the thermo-poromechanics (TPM) occurring within biological tissue. Chapters 1 and 2 provide the motivation for developing accurate TPM models of biological tissue and an overview of previous modeling efforts. In Chapter 3, a coupled thermo-structural finite element (FE) model with the ability to predict arterial cutting is offered. From the work presented in Chapter 3, it became obvious a more detailed model was needed. Chapter 4 meets this need by presenting small strain TPM theory and its implementation in an FE code. The model is then used to simulate thermal tissue fusion. These simulations show the model's promise in predicting the water content and temperature of arterial wall tissue during the fusion process, but it is limited by its small deformation assumptions. Chapters 5-7 address this limitation by developing and implementing a thermodynamically consistent, large deformation TPM FE model and demonstrating its ability to simulate tissue fusion. Ultimately, this work provides several methods of simulating arterial tissue fusion and the thermo-poromechanics of biological tissue.
It is the first work, to the author's knowledge, to simulate the fully coupled TPM of biological tissue and the first to present a fully coupled large deformation TPM FE model. In doing so, a stepping stone for more advanced modeling of biological tissue has been laid.

  2. Developing a case-mix model for PPS.

    PubMed

    Goldberg, H B; Delargy, D

    2000-01-01

    Agencies are pinning hopes for success under PPS on an accurate case-mix adjustor. The Health Care Financing Administration (HCFA) tasked Abt Associates Inc. to develop a system to accurately predict the volume and type of home health services each patient requires, based on his or her characteristics (not the service actually received). HCFA wanted this system to be feasible, clinically logical, and valid and accurate. Authors Goldberg and Delargy explain how Abt approached this daunting task.

  3. Validation of High-Fidelity CFD/CAA Framework for Launch Vehicle Acoustic Environment Simulation against Scale Model Test Data

    NASA Technical Reports Server (NTRS)

    Liever, Peter A.; West, Jeffrey S.; Harris, Robert E.

    2016-01-01

    A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate Discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured mesh Discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.

  4. A physical-based gas-surface interaction model for rarefied gas flow simulation

    NASA Astrophysics Data System (ADS)

    Liang, Tengfei; Li, Qi; Ye, Wenjing

    2018-01-01

    Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as boundary conditions in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics (MD) simulations can accurately resolve the gas-surface interaction process at the atomic scale, and hence can predict macroscopic behavior accurately, but they are too computationally expensive to apply to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations of the boundary condition, is developed within the framework of the washboard model. By virtue of its physical basis, the new model is capable of capturing some important relations/trends that the classic empirical models fail to model correctly. As such, the new model is much more accurate than the classic models while remaining far more efficient than MD simulations. It can therefore serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
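The Maxwell model mentioned above can be sketched in a few lines: each molecule reflects specularly with probability 1 − α or is re-emitted diffusely from a Maxwellian at the wall temperature (α is the accommodation coefficient; the gas properties below are assumed argon values):

```python
import numpy as np

# Maxwell gas-surface boundary condition sketch.
kB = 1.380649e-23          # Boltzmann constant, J/K
m = 6.63e-26               # molecular mass of argon, kg (assumed gas)

def maxwell_reflect(v, T_wall, alpha, rng):
    """Reflect one molecule; wall normal points along +z, incident vz < 0."""
    if rng.random() >= alpha:                      # specular branch
        return np.array([v[0], v[1], -v[2]])
    s = np.sqrt(kB * T_wall / m)                   # wall thermal speed scale
    vx, vy = rng.normal(0.0, s, size=2)            # tangential components
    vz = s * np.sqrt(-2.0 * np.log(1.0 - rng.random()))  # flux-weighted normal
    return np.array([vx, vy, vz])

rng = np.random.default_rng(4)
v_in = np.array([50.0, 0.0, -300.0])               # incident velocity, m/s
out = np.array([maxwell_reflect(v_in, 300.0, 0.8, rng) for _ in range(4000)])
```

The washboard-based model in the paper replaces this two-branch lottery with a physically derived scattering kernel, which is what lets it capture trends the Maxwell model misses.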

  5. Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Brandon, Jay M.; Morelli, Eugene A.

    2014-01-01

    Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.

  6. System Identification and Verification of Rotorcraft UAVs

    NASA Astrophysics Data System (ADS)

    Carlton, Zachary M.

    The task of a controls engineer is to design and implement control logic. To complete this task, it helps tremendously to have an accurate model of the system to be controlled. Obtaining a very accurate system model is not a trivial task, as much time and money are usually associated with the development of such a model. A typical physics-based approach can require hundreds of hours of flight time. In an iterative process, the model is tuned in such a way that it accurately models the physical system's response. This process becomes even more complicated for unstable and highly non-linear systems such as rotorcraft. An alternate approach to solving this problem is to extract an accurate model by analyzing the frequency response of the system. This process involves recording the system's responses over a frequency range of input excitations. From this data, an accurate system model can then be deduced. Furthermore, it has been shown that with the use of the software package CIFER® (Comprehensive Identification from FrEquency Responses), this process can both greatly reduce the cost of modeling a dynamic system and produce very accurate results. The topic of this thesis is to apply CIFER® to a quadcopter to extract a system model for the flight condition of hover. The quadcopter itself is comprised of off-the-shelf components with a Pixhack flight controller board running open source Ardupilot controller logic. In this thesis, both the closed and open loop systems are identified. The model is next compared to dissimilar flight data and verified in the time domain. Additionally, the ESC (Electronic Speed Controller) motor/rotor subsystem, which is comprised of all the vehicle's actuators, is also identified. This process required the development of a test bench environment, which included a GUI (Graphical User Interface), data pre- and post-processing, as well as the augmentation of the flight controller source code.
This augmentation of code allowed for proper data logging rates of all needed parameters.
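The cross-spectral estimate at the heart of frequency-response identification can be sketched in a few lines. The following is a minimal illustration, not CIFER® itself: a hypothetical first-order plant stands in for the quadcopter, broadband noise stands in for the flight-test excitation, and the response H(f) = S_uy(f)/S_uu(f) is formed from averaged windowed spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 200.0          # sample rate, Hz (assumed)
n = 16384

# Stand-in "vehicle": a discrete first-order lag with a 2 Hz corner,
# y[k] = a*y[k-1] + (1 - a)*u[k]. Real flight data would replace u and y.
fc = 2.0
a = np.exp(-2 * np.pi * fc / fs)
u = rng.standard_normal(n)                  # broadband excitation
y = np.empty(n)
acc = 0.0
for k in range(n):
    acc = a * acc + (1 - a) * u[k]
    y[k] = acc
y += 0.01 * rng.standard_normal(n)          # sensor noise

# Averaged windowed cross/auto spectra over segments, then
# H(f) = S_uy(f) / S_uu(f) -- the core of frequency-response identification.
seg = 2048
win = np.hanning(seg)
S_uu = np.zeros(seg // 2 + 1)
S_uy = np.zeros(seg // 2 + 1, dtype=complex)
for i in range(n // seg):
    U = np.fft.rfft(win * u[i * seg:(i + 1) * seg])
    Y = np.fft.rfft(win * y[i * seg:(i + 1) * seg])
    S_uu += (U.conj() * U).real
    S_uy += U.conj() * Y
H = S_uy / S_uu
f = np.fft.rfftfreq(seg, int(1) / fs)

k = int(np.argmin(abs(f - 0.5)))
print(round(abs(H[k]), 2))                  # close to the true low-frequency gain (~0.97)
```

Tools like CIFER® additionally compute the coherence between input and output, which indicates the frequency range over which an estimate like this can be trusted, and then fit transfer-function or state-space models to the identified response.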

  7. Development of a 3D patient-specific planning platform for interstitial and transurethral ultrasound thermal therapy

    NASA Astrophysics Data System (ADS)

    Prakash, Punit; Diederich, Chris J.

    2010-03-01

Interstitial and transurethral catheter-based ultrasound devices are under development for treatment of prostate cancer and BPH, uterine fibroids, liver tumors and other soft tissue disease. Accurate 3D thermal modeling is essential for designing site-specific applicators, exploring treatment delivery strategies, and patient-specific treatment planning of thermal ablations. We are developing a comprehensive 3D modeling and treatment planning platform for ultrasound ablation of tissue using catheter-based applicators. We explored the applicability of assessing thermal effects in tissue using critical temperature, thermal dose and Arrhenius thermal damage thresholds and performed a comparative analysis of dynamic tissue properties critical to accurate modeling. We used the model to assess the feasibility of automatic feedback control with MR thermometry, and demonstrated the utility of the modeling platform for 3D patient-specific treatment planning. We have identified critical temperature, thermal dose and thermal damage thresholds for assessing treatment endpoint. Dynamic changes in tissue attenuation/absorption and perfusion must be included for accurate prediction of temperature profiles and extents of the ablation zone. Lastly, we demonstrated use of the modeling platform for patient-specific treatment planning.

  8. Estimating wildfire risk on a Mojave Desert landscape using remote sensing and field sampling

    USGS Publications Warehouse

    Van Linn, Peter F.; Nussear, Kenneth E.; Esque, Todd C.; DeFalco, Lesley A.; Inman, Richard D.; Abella, Scott R.

    2013-01-01

    Predicting wildfires that affect broad landscapes is important for allocating suppression resources and guiding land management. Wildfire prediction in the south-western United States is of specific concern because of the increasing prevalence and severe effects of fire on desert shrublands and the current lack of accurate fire prediction tools. We developed a fire risk model to predict fire occurrence in a north-eastern Mojave Desert landscape. First we developed a spatial model using remote sensing data to predict fuel loads based on field estimates of fuels. We then modelled fire risk (interactions of fuel characteristics and environmental conditions conducive to wildfire) using satellite imagery, our model of fuel loads, and spatial data on ignition potential (lightning strikes and distance to roads), topography (elevation and aspect) and climate (maximum and minimum temperatures). The risk model was developed during a fire year at our study landscape and validated at a nearby landscape; model performance was accurate and similar at both sites. This study demonstrates that remote sensing techniques used in combination with field surveys can accurately predict wildfire risk in the Mojave Desert and may be applicable to other arid and semiarid lands where wildfires are prevalent.

  9. Global Aerodynamic Modeling for Stall/Upset Recovery Training Using Efficient Piloted Flight Test Techniques

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.

    2013-01-01

    Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.

  10. Influence of the volume and density functions within geometric models for estimating trunk inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A

    2010-02-01

    The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).

  11. Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers

    PubMed Central

    Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas

    2016-01-01

    A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired by using the developed model and experimental results prove to be in good agreement. PMID:27713496

  12. Individual Differences in Boys' and Girls' Timing and Tempo of Puberty: Modeling Development with Nonlinear Growth Models

    ERIC Educational Resources Information Center

    Marceau, Kristine; Ram, Nilam; Houts, Renate M.; Grimm, Kevin J.; Susman, Elizabeth J.

    2011-01-01

    Pubertal development is a nonlinear process progressing from prepubescent beginnings through biological, physical, and psychological changes to full sexual maturity. To tether theoretical concepts of puberty with sophisticated longitudinal, analytical models capable of articulating pubertal development more accurately, we used nonlinear…

  13. Modeling Cometary Coma with a Three Dimensional, Anisotropic Multiple Scattering Distributed Processing Code

    NASA Technical Reports Server (NTRS)

    Luchini, Chris B.

    1997-01-01

    Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.

  14. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to achieve the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
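The parameter-as-state formulation described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's robust filter: only the AR part of the model is estimated, and the observation-noise variance is taken as known rather than estimated adaptively.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "gyro noise": an AR(2) process with known parameters, standing in
# for real gyro data. (The paper treats full ARMA models; the AR part keeps
# the sketch short.)
theta_true = np.array([0.6, -0.2])
n = 5000
y = np.zeros(n)
for k in range(2, n):
    y[k] = theta_true[0] * y[k - 1] + theta_true[1] * y[k - 2] \
        + 0.1 * rng.standard_normal()

# Kalman filter with the model parameters as the (constant) state:
#   state:       theta[k] = theta[k-1]
#   observation: y[k] = h[k] @ theta + e[k],  with h[k] = [y[k-1], y[k-2]]
theta = np.zeros(2)            # parameter estimate
P = np.eye(2) * 100.0          # parameter covariance
R = 0.1 ** 2                   # observation-noise variance; assumed known here,
                               # whereas the paper estimates it adaptively
for k in range(2, n):
    h = np.array([y[k - 1], y[k - 2]])
    S = h @ P @ h + R          # innovation variance
    K = P @ h / S              # Kalman gain
    theta = theta + K * (y[k] - h @ theta)
    P = P - np.outer(K, h) @ P

print(np.round(theta, 2))      # converges toward [0.6, -0.2]
```

Because the state is constant, this filter reduces to recursive least squares; the paper's contribution is the robust, adaptive handling of the unknown observation noise, which this sketch deliberately omits.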

  15. Reliability of Degree-Day Models to Predict the Development Time of Plutella xylostella (L.) under Field Conditions.

    PubMed

    Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A

    2015-12-01

The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and P. xylostella development rate, providing degree-day models for populations from different geographical regions. However, no data are available to date demonstrating the suitability of such models for making reliable projections of development time for this species under field conditions. In the present study, 19 models available in the literature were tested for their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Eleven of the 19 models tested accurately predicted the development time for the first cohort of P. xylostella, but only seven did so for the second cohort. Five models correctly predicted the development time for both cohorts evaluated. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely; such models should therefore be used with caution for pest management purposes.
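A generic degree-day calculation of the kind these models are built on can be sketched as follows. The base temperature and thermal constant below are illustrative placeholders, not values from any of the 19 models tested in the study.

```python
# Minimal degree-day sketch (simple averaging method). Tbase and K are
# illustrative values, not parameters from the models evaluated in the study.
Tbase = 7.0     # lower developmental threshold, deg C (assumed)
K = 290.0       # degree-days required to complete development (assumed)

def daily_dd(tmin, tmax, tbase=Tbase):
    """Degree-days accumulated in one day: mean temperature above threshold."""
    return max(0.0, (tmin + tmax) / 2.0 - tbase)

def days_to_development(daily_temps, k=K):
    """Number of days until accumulated degree-days reach the constant k."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(daily_temps, start=1):
        total += daily_dd(tmin, tmax)
        if total >= k:
            return day
    return None  # development not completed within the temperature record

# Constant 12-25 C days accumulate (12 + 25)/2 - 7 = 11.5 DD/day,
# so 290 DD is first reached on day 26 (26 * 11.5 = 299).
temps = [(12.0, 25.0)] * 40
print(days_to_development(temps))  # → 26
```

Published models differ mainly in the threshold, the thermal constant, and how the daily degree-days are computed from minimum and maximum temperatures (averaging, sine-wave, or triangulation methods), which is one reason their field accuracy varies.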

  16. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents a reduction of over 5% compared to the RSE model errors, and of at least 10% compared to the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.

  17. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
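The averaging idea mentioned above can be illustrated with an ideal buck converter: the state equations of the two switch positions are weighted by the duty ratio, giving a continuous model that is accurate well below the switching frequency. All component values here are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative component values, not taken from the paper.
Vin, d = 12.0, 0.5              # input voltage and duty ratio
L_, C_, R_ = 100e-6, 100e-6, 10.0

def derivs(x):
    """Averaged state equations of an ideal buck converter: the dynamics of
    the two switch positions weighted by the duty ratio d."""
    iL, vC = x
    diL = (d * Vin - vC) / L_   # L diL/dt = d*Vin - vC
    dvC = (iL - vC / R_) / C_   # C dvC/dt = iL - vC/R
    return np.array([diL, dvC])

# Forward-Euler integration of the averaged model to steady state.
x = np.zeros(2)
dt = 1e-6
for _ in range(20_000):         # 20 ms, several damping time constants
    x = x + dt * derivs(x)

print(round(x[1], 2))           # → 6.0, i.e. Vout = d * Vin
```

The averaged model above contains no switching ripple at all, which is exactly why its accuracy degrades as the modulation frequency approaches half the switching frequency; the paper's contribution is combining it with the discrete technique to recover accuracy in that regime.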

  18. On the Development of Parameterized Linear Analytical Longitudinal Airship Models

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.

    2008-01-01

    In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.

  19. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu

    2015-12-28

The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1-, 2-, and 3-site CG models for heptane, as well as 1- and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.

  20. A PHYSIOLOGICALLY-BASED PHARMACOKINETIC MODEL FOR TRICHLOROETHYLENE WITH SPECIFICITY FOR THE LONG EVANS RAT

    EPA Science Inventory

    A PBPK model for TCE with specificity for the male LE rat that accurately predicts TCE tissue time-course data has not been developed, although other PBPK models for TCE exist. Development of such a model was the present aim. The PBPK model consisted of 5 compartments: fat; slowl...

  1. Constructing high-accuracy intermolecular potential energy surface with multi-dimension Morse/Long-Range model

    NASA Astrophysics Data System (ADS)

    Zhai, Yu; Li, Hui; Le Roy, Robert J.

    2018-04-01

    Spectroscopically accurate Potential Energy Surfaces (PESs) are fundamental for explaining and making predictions of the infrared and microwave spectra of van der Waals (vdW) complexes, and the model used for the potential energy function is critically important for providing accurate, robust and portable analytical PESs. The Morse/Long-Range (MLR) model has proved to be one of the most general, flexible and accurate one-dimensional (1D) model potentials, as it has physically meaningful parameters, is flexible, smooth and differentiable everywhere, to all orders and extrapolates sensibly at both long and short ranges. The Multi-Dimensional Morse/Long-Range (mdMLR) potential energy model described herein is based on that 1D MLR model, and has proved to be effective and accurate in the potentiology of various types of vdW complexes. In this paper, we review the current status of development of the mdMLR model and its application to vdW complexes. The future of the mdMLR model is also discussed. This review can serve as a tutorial for the construction of an mdMLR PES.
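For reference, the 1D MLR potential on which the mdMLR model is built has the form below. The notation follows the standard MLR literature and is not reproduced from this abstract.

```latex
V_{\mathrm{MLR}}(r) \;=\; D_e\left[1 \;-\; \frac{u_{\mathrm{LR}}(r)}{u_{\mathrm{LR}}(r_e)}\,
  e^{-\beta(r)\,y_p(r)}\right]^2,
\qquad
y_p(r) \;=\; \frac{r^p - r_e^p}{r^p + r_e^p},
\qquad
u_{\mathrm{LR}}(r) \;=\; \sum_m \frac{C_m}{r^m}
```

Here $D_e$ is the well depth, $r_e$ the equilibrium distance, and $u_{\mathrm{LR}}(r)$ the theoretically known long-range tail. The exponent function $\beta(r)$ is constrained so that $\beta(r) \to \beta_\infty = \ln\!\left[2D_e/u_{\mathrm{LR}}(r_e)\right]$ as $r \to \infty$, which guarantees $V(r) \to D_e - u_{\mathrm{LR}}(r)$ at long range; this built-in limit is what the abstract means by the model "extrapolating sensibly at both long and short ranges."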

  2. What can formal methods offer to digital flight control systems design

    NASA Technical Reports Server (NTRS)

    Good, Donald I.

    1990-01-01

Formal methods research is beginning to produce methods that enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of extensive mathematical modeling which are common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.

  3. Simulating Aerosol Indirect Effects with Improved Aerosol-Cloud- Precipitation Representations in a Coupled Regional Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yang; Leung, L. Ruby; Fan, Jiwen

This is a collaborative project among North Carolina State University, Pacific Northwest National Laboratory, and Scripps Institution of Oceanography, University of California at San Diego to address the critical need for an accurate representation of aerosol indirect effect in climate and Earth system models. In this project, we propose to develop and improve parameterizations of aerosol-cloud-precipitation feedbacks in climate models and apply them to study the effect of aerosols and clouds on radiation and hydrologic cycle. Our overall objective is to develop, improve, and evaluate parameterizations to enable more accurate simulations of these feedbacks in high resolution regional and global climate models.

  4. 3-D direct current resistivity anisotropic modelling by goal-oriented adaptive finite element methods

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong

    2018-01-01

Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available even for complicated models with topography, reliable numerical solvers for the anisotropic case remain an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic case. Combining goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery-based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators, working together with goal-oriented concepts, can offer optimal mesh density distributions and highly accurate solutions.

  5. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Dini, Paolo; Maughmer, Mark D.

    1989-01-01

    The goal is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. Toward this end, a computational model of the separation bubble was developed and incorporated into the Eppler and Somers airfoil design and analysis program. Thus far, the focus of the research was limited to the development of a model which can accurately predict situations in which the interaction between the bubble and the inviscid velocity distribution is weak, the so-called short bubble. A summary of the research performed in the past nine months is presented. The bubble model in its present form is then described. Lastly, the performance of this model in predicting bubble characteristics is shown for a few cases.

  6. Trunk density profile estimates from dual X-ray absorptiometry.

    PubMed

    Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A

    2008-01-01

Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models but the accuracy of the geometric methods depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method and DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two of the participants (one female and one male) and the averages for their sex. It is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed. Further studies are needed to support this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models, thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
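The discrete-Fourier-transform step described above can be sketched as follows. The density profile here is synthetic and the values made up; the study derived real profiles from photogrammetry combined with DXA readings.

```python
import numpy as np

# Synthetic density profile along the trunk (g/cm^3); values are made up
# for illustration, not taken from the study's DXA data.
N = 50
z = np.linspace(0.0, 1.0, N, endpoint=False)   # normalized position
rho = 1.05 - 0.12 * np.cos(2 * np.pi * z) + 0.03 * np.sin(4 * np.pi * z)

# Discrete Fourier transform -> coefficients of a truncated series.
coeffs = np.fft.rfft(rho) / N
n_harm = 3

def density(zq):
    """Evaluate the truncated Fourier series at normalized positions zq."""
    out = np.full_like(zq, coeffs[0].real)
    for k in range(1, n_harm + 1):
        out += 2 * (coeffs[k].real * np.cos(2 * np.pi * k * zq)
                    - coeffs[k].imag * np.sin(2 * np.pi * k * zq))
    return out

fit = density(z)
# Exact here because the synthetic profile contains only two harmonics;
# real DXA-derived profiles would leave a small truncation residual.
print(bool(np.max(np.abs(fit - rho)) < 1e-9))   # → True
```

The resulting smooth function can then be evaluated at any position along the trunk, which is what lets a geometric model replace its uniform-density assumption with a continuous sex-specific density profile.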

  7. Verification of a 2 kWe Closed-Brayton-Cycle Power Conversion System Mechanical Dynamics Model

    NASA Technical Reports Server (NTRS)

    Ludwiczak, Damian R.; Le, Dzu K.; McNelis, Anne M.; Yu, Albert C.; Samorezov, Sergey; Hervol, Dave S.

    2005-01-01

    Vibration test data from an operating 2 kWe closed-Brayton-cycle (CBC) power conversion system (PCS) located at the NASA Glenn Research Center was used for a comparison with a dynamic disturbance model of the same unit. This effort was performed to show that a dynamic disturbance model of a CBC PCS can be developed that can accurately predict the torque and vibration disturbance fields of such class of rotating machinery. The ability to accurately predict these disturbance fields is required before such hardware can be confidently integrated onto a spacecraft mission. Accurate predictions of CBC disturbance fields will be used for spacecraft control/structure interaction analyses and for understanding the vibration disturbances affecting the scientific instrumentation onboard. This paper discusses how test cell data measurements for the 2 kWe CBC PCS were obtained, the development of a dynamic disturbance model used to predict the transient torque and steady state vibration fields of the same unit, and a comparison of the two sets of data.

  8. Test techniques for model development of repetitive service energy storage capacitors

    NASA Astrophysics Data System (ADS)

    Thompson, M. C.; Mauldin, G. H.

    1984-03-01

The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature, creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real-time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.

  9. A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever

    PubMed Central

    Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J.; Scott, Dana P.; Feldmann, Heinz; Ebihara, Hideki

    2016-01-01

    Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF. PMID:27976688

  10. A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever.

    PubMed

    Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J; Scott, Dana P; Feldmann, Heinz; Ebihara, Hideki

    2016-12-15

    Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF.

  11. Modeling fuel succession

    USGS Publications Warehouse

    Davis, Brett; Van Wagtendonk, Jan W.; Beck, Jen; van Wagtendonk, Kent A.

    2009-01-01

    Surface fuels data are of critical importance for supporting fire incident management, risk assessment, and fuel management planning, but the development of surface fuels data can be expensive and time consuming. The data development process is extensive, generally beginning with acquisition of remotely sensed spatial data such as aerial photography or satellite imagery (Keane and others 2001). The spatial vegetation data are then crosswalked to a set of fire behavior fuel models that describe the available fuels (the burnable portions of the vegetation) (Anderson 1982, Scott and Burgan 2005). Finally, spatial fuels data are used as input to tools such as FARSITE and FlamMap to model current and potential fire spread and behavior (Finney 1998, Finney 2006). The capture date of the remotely sensed data defines the period for which the vegetation, and, therefore, fuels, data are most accurate. The more time that passes after the capture date, the less accurate the data become due to vegetation growth and processes such as fire. Subsequently, the results of any fire simulation based on these data become less accurate as the data age. Because of the amount of labor and expense required to develop these data, keeping them updated may prove to be a challenge. In this article, we describe the Sierra Nevada Fuel Succession Model, a modeling tool that can quickly and easily update surface fuel models with a minimum of additional input data. Although it was developed for use by Yosemite, Sequoia, and Kings Canyon National Parks, it is applicable to much of the central and southern Sierra Nevada. Furthermore, the methods used to develop the model have national applicability.

  12. Resources and Commitment as Critical Factors in the Development of "Gifted" Athletes

    ERIC Educational Resources Information Center

    Baker, Joseph; Cote, Jean

    2003-01-01

    Several sport-specific talent detection models have been developed over the last 30 years (Durand-Bush & Salmela, 2001). However, these models have failed in at least one important standard of judgment--accurately predicting who will develop into an elite level athlete. The authors believe that the WICS model presented by Robert Sternberg also…

  13. The Use of Problem-Solving Techniques to Develop Semiotic Declarative Knowledge Models about Magnetism and Their Role in Learning for Prospective Science Teachers

    ERIC Educational Resources Information Center

    Ismail, Yilmaz

    2016-01-01

    This study aims to develop a semiotic declarative knowledge model, which is a positive constructive behavior model that systematically facilitates understanding in order to ensure that learners think accurately and ask the right questions about a topic. The data used to develop the experimental model were obtained using four measurement tools…

  14. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan

    2002-01-01

    Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand et al., 1995; Yang et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.

  15. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary afterwards to promote bud cell growth. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. They fall into two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break that varies from year to year. So far, one-phase models have been able to accurately predict tree budburst and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay or compromise dormancy break at the species' equatorward range limits, leading to a delay in, or even the impossibility of, flowering or setting new leaves. These models are classically parameterized with flowering or budburst dates only, with no information on the dormancy break date, because this information is very scarce. We evaluated the efficiency of a set of process-based phenological models in accurately predicting the dormancy break dates of four fruit trees. Our results show that models calibrated solely with flowering or budburst dates do not accurately predict the dormancy break date. Providing dormancy break dates for model parameterization results in much more accurate simulation of the latter, albeit with a higher error than that on flowering or budburst dates. Most importantly, we also show that models not calibrated with dormancy break dates can generate significant differences in forecasted flowering or budburst dates when using climate scenarios. Our results point to the urgent need for extensive measurements of dormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future.
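    As a toy illustration of the two-phase model family (a hedged sketch; the thresholds and accumulation rules below are invented for illustration, not fitted values from the study):

```python
def two_phase_budburst(daily_temps, chill_req=40.0, force_req=150.0,
                       chill_base=7.0, force_base=5.0):
    """Toy two-phase phenology model. Returns (dormancy_break_day, budburst_day).

    Phase 1 (endodormancy): accumulate one chilling unit per day with mean
    temperature below `chill_base` until `chill_req` units are reached.
    Phase 2 (ecodormancy): accumulate growing degree units above `force_base`
    until `force_req` is reached.
    """
    chill = force = 0.0
    dormancy_break = None
    for day, t in enumerate(daily_temps, start=1):
        if dormancy_break is None:
            if t < chill_base:
                chill += 1.0
            if chill >= chill_req:
                dormancy_break = day
        else:
            force += max(0.0, t - force_base)
            if force >= force_req:
                return dormancy_break, day
    return dormancy_break, None  # requirements not met within the series
```

A one-phase model would skip the chilling loop entirely, which is why a warm winter that never satisfies the chilling requirement delays or prevents budburst only in the two-phase formulation.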

  16. DEVELOPING SITE-SPECIFIC MODELS FOR FORECASTING BACTERIA LEVELS AT COASTAL BEACHES

    EPA Science Inventory

    The U.S.Beaches Environmental Assessment and Coastal Health Act of 2000 authorizes studies of pathogen indicators in coastal recreation waters that develop appropriate, accurate, expeditious, and cost-effective methods (including predictive models) for quantifying pathogens in co...

  17. GlobalSoilMap France: High-resolution spatial modelling the soils of France up to two meter depth.

    PubMed

    Mulder, V L; Lacoste, M; Richer-de-Forges, A C; Arrouays, D

    2016-12-15

    This work presents the first GlobalSoilMap (GSM) products for France. We developed an automatic procedure for mapping the primary soil properties (clay, silt, sand, coarse elements, pH, soil organic carbon (SOC), cation exchange capacity (CEC) and soil depth). The procedure employed a data-mining technique and a straightforward method for estimating the 90% confidence intervals (CIs). The most accurate models were obtained for pH, sand and silt. Next, CEC, clay and SOC were predicted reasonably accurately. Coarse elements and soil depth had the least accurate models of all. Overall, all models were considered robust; important indicators for this were 1) the small difference in model diagnostics between the calibration and cross-validation sets, 2) the unbiased mean predictions, 3) the smaller spatial structure of the prediction residuals in comparison to the observations and 4) the similar performance compared to other developed GlobalSoilMap products. Nevertheless, the CIs were rather wide for all soil properties. The median predictions became less reliable with increasing depth, as indicated by the widening of the CIs with depth. In addition, model accuracy and the corresponding CIs varied depending on the soil variable of interest, soil depth and geographic location. These findings indicate that the CIs are as informative as the model diagnostics. In conclusion, the presented method resulted in reasonably accurate predictions for the majority of the soil properties. End users can employ the products for different purposes, as was demonstrated with some practical examples. The mapping routine is flexible for cloud computing and provides ample opportunity to be further developed when desired by its users. This allows regional and international GSM partners with fewer resources to develop their own products or, otherwise, to improve the current routine and work together towards a robust high-resolution digital soil map of the world.
Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Investigation of representing hysteresis in macroscopic models of two-phase flow in porous media using intermediate scale experimental data

    NASA Astrophysics Data System (ADS)

    Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; Gonzalez-Nicolas, Ana; Illangasekare, Tissa

    2017-01-01

    Incorporating hysteresis into models is important to accurately capture two-phase flow behavior when porous media systems undergo cycles of drainage and imbibition, as in the injection and post-injection redistribution of CO2 during geological CO2 storage (GCS). In the traditional model of two-phase flow, existing constitutive models that parameterize the hysteresis associated with these processes are generally based on empirical relationships. This manuscript presents the development and testing of mathematical hysteretic capillary pressure-saturation-relative permeability models with the objective of more accurately representing the redistribution of the fluids after injection. The constitutive models are developed by relating macroscopic variables to the basic physics of two-phase capillary displacements at the pore scale and to void space distribution properties. The modeling approach, with the developed constitutive models with and without hysteresis as input, is tested against intermediate-scale flow cell experiments to assess the ability of the models to represent movement and capillary trapping of immiscible fluids under macroscopically homogeneous and heterogeneous conditions. The hysteretic two-phase flow model predicted the overall plume migration and distribution during and after injection reasonably well and represented the post-injection behavior of the plume more accurately than the nonhysteretic models. Based on the results of this study, neglecting hysteresis in the constitutive models of the traditional two-phase flow theory can seriously overpredict or underpredict the injected fluid distribution after injection under both homogeneous and heterogeneous conditions, depending on the selected value of the residual saturation in the nonhysteretic models.

  19. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact superseding most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.

  20. Tangible Models and Haptic Representations Aid Learning of Molecular Biology Concepts

    ERIC Educational Resources Information Center

    Johannes, Kristen; Powers, Jacklyn; Couper, Lisa; Silberglitt, Matt; Davenport, Jodi

    2016-01-01

    Can novel 3D models help students develop a deeper understanding of core concepts in molecular biology? We adapted 3D molecular models, developed by scientists, for use in high school science classrooms. The models accurately represent the structural and functional properties of complex DNA and Virus molecules, and provide visual and haptic…

  1. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. Calibration within the framework of thin plate theory undoubtedly has higher accuracy and broader scope than calibration within the well-established beam theory. However, accurate analytic determination of the constant based on thin plate theory has been perceived as an extremely difficult issue. In this paper, we implement thin plate theory-based analytic modeling of the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimensions, and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as a benchmark for accurate calibration of rectangular AFM cantilevers.
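    For context, the well-established beam-theory calibration that the plate-theory model refines reduces to a single formula, k = Ewt³/(4L³). A sketch with typical textbook cantilever values (not dimensions from the paper):

```python
def beam_spring_constant(E, w, t, L):
    """Normal spring constant of a rectangular cantilever from classical
    beam theory: k = E*w*t**3 / (4*L**3), in N/m for SI inputs. The
    plate-theory model discussed above refines this with Poisson-ratio
    and three-dimensional corrections (not reproduced here)."""
    return E * w * t**3 / (4.0 * L**3)

# Illustrative silicon cantilever: E = 169 GPa, w = 30 um, t = 2 um, L = 200 um.
k = beam_spring_constant(169e9, 30e-6, 2e-6, 200e-6)  # about 1.27 N/m
```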

  2. Two dimensional model for coherent synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Huang, Chengkun; Kwan, Thomas J. T.; Carlsten, Bruce E.

    2013-01-01

    Understanding coherent synchrotron radiation (CSR) effects in a bunch compressor requires an accurate model accounting for the realistic beam shape and parameters. We extend the well-known 1D CSR analytic model into two dimensions and develop a simple numerical model based on the Liénard-Wiechert formula for the CSR field of a coasting beam. This CSR numerical model includes the 2D spatial dependence of the field in the bending plane and is accurate for arbitrary beam energy. It also removes the singularity in the space charge field calculation present in a 1D model. Good agreement is obtained with the 1D CSR analytic result for free-electron laser (FEL) related beam parameters, but the model can also give more accurate results for low-energy/large-spot-size beams and off-axis/transient fields. This 2D CSR model can be used for understanding the limitations of various 1D models and for benchmarking fully electromagnetic multidimensional particle-in-cell simulations for self-consistent CSR modeling.

  3. Finite element solution to passive scalar transport behind line sources under neutral and unstable stratification

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Ho; Leung, Dennis Y. C.

    2006-02-01

    This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM) employing trilinear brick elements with equal-order interpolating polynomials that solved the momentum and continuity equations, together with conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution to the advection and diffusion terms, which were then integrated in time, respectively, by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification that agreed well with the laboratory and field measurements, as well as previous modelling results available in literature.
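    The segregated time integration described above (explicit third-order Runge-Kutta for advection, implicit Crank-Nicolson for diffusion) can be illustrated on a scalar stand-in problem. This is a hedged sketch of the splitting idea only, not the paper's finite element implementation, and both operators are reduced to scalar multiplications:

```python
def step(u, dt, a, nu):
    """One split time step for du/dt = -a*u + nu*u: SSP third-order
    Runge-Kutta (Shu-Osher form) for the explicit 'advection' part,
    Crank-Nicolson for the implicit 'diffusion' part."""
    f = lambda v: -a * v                    # explicit operator
    u1 = u + dt * f(u)                      # RK3 stage 1
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    u_adv = u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))
    # Crank-Nicolson for g(u) = nu*u:
    # (1 - 0.5*dt*nu) * u_new = (1 + 0.5*dt*nu) * u_adv
    return u_adv * (1.0 + 0.5 * dt * nu) / (1.0 - 0.5 * dt * nu)
```

For this linear scalar case the two operators commute, so the only error comes from the RK3 and Crank-Nicolson discretizations themselves; in the DNS they act on distinct spatial operators and the splitting is a genuine modeling choice.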

  4. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    PubMed

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject-specific model parameters is important. One approach to generating these parameters is to optimize the values such that the model output matches experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements, the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
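    The approach evaluated above can be sketched with a toy two-muscle model and a random search standing in for the genetic algorithm; all functional forms, parameter bounds, and values here are invented for illustration:

```python
import math
import random

def strength_curve(params, angles):
    """Toy two-muscle isometric model: each muscle contributes
    fmax * exp(-((angle - opt) / width)**2) to the summed torque."""
    return [sum(fmax * math.exp(-((a - opt) / width) ** 2)
                for fmax, opt, width in params)
            for a in angles]

def fit(target, angles, n_iter=20000, seed=0):
    """Random-search stand-in for the genetic algorithm: minimize the RMSE
    between the candidate's summed curve and the target curve."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_iter):
        cand = [(rng.uniform(0.0, 200.0), rng.uniform(30.0, 120.0),
                 rng.uniform(10.0, 60.0)) for _ in range(2)]
        pred = strength_curve(cand, angles)
        err = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target))
                        / len(target))
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

Because very different parameter sets can produce nearly identical summed curves, a low curve-fitting error here does not imply that the recovered per-muscle parameters are correct, which is the abstract's central point.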

  5. Development of a High Resolution 3D Infant Stomach Model for Surgical Planning

    NASA Astrophysics Data System (ADS)

    Chaudry, Qaiser; Raza, S. Hussain; Lee, Jeonggyu; Xu, Yan; Wulkan, Mark; Wang, May D.

    Medical surgical procedures have not changed much during the past century, due in part to the lack of an accurate, low-cost workbench for testing proposed improvements. Increasingly cheap and powerful computer technologies have made computer-based surgery planning and training feasible. In our work, we have developed an accurate 3D stomach model, which aims to improve the surgical procedure that treats pediatric and neonatal gastro-esophageal reflux disease (GERD). We generate the 3D infant stomach model based on in vivo computed tomography (CT) scans of an infant. CT is a widely used clinical imaging modality that is inexpensive but has low spatial resolution. To improve the model accuracy, we use the high resolution Visible Human Project (VHP) data in model building. Next, we add soft muscle material properties to make the 3D model deformable. Then we use virtual reality techniques such as haptic devices to make the 3D stomach model deform in response to touching force. This accurate 3D stomach model provides a workbench for testing new GERD treatment surgical procedures. It has the potential to reduce or eliminate the extensive cost associated with animal testing when improving any surgical procedure and, ultimately, to reduce the risk associated with infant GERD surgery.

  6. Developing a dengue forecast model using machine learning: A case study in China.

    PubMed

    Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-10-01

    In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. 
The findings can help the government and community respond early to dengue epidemics.
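    The two goodness-of-fit measures used above to rank the candidate models (RMSE and R-squared) are straightforward to compute; a minimal plain-Python sketch:

```python
def rmse(obs, pred):
    """Root-mean-square error between observed and predicted values."""
    n = len(obs)
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / n) ** 0.5

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

Lower RMSE and higher R-squared indicate a better fit; comparing these across candidate learners (SVR, GBM, NBM, LASSO, GAM, etc.) on held-out weeks is the model-selection logic the abstract describes.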

  7. Development of estrogen receptor beta binding prediction model using large sets of chemicals.

    PubMed

    Sakkiah, Sugunadevi; Selvaraj, Chandrabose; Gong, Ping; Zhang, Chaoyang; Tong, Weida; Hong, Huixiao

    2017-11-03

    We developed an ERβ binding prediction model that, together with our previously developed ERα binding model, facilitates identification of chemicals that specifically bind ERβ or ERα. Decision Forest was used to train the ERβ binding prediction model on a large set of compounds obtained from EADB. Model performance was estimated through 1000 iterations of 5-fold cross-validation. Prediction confidence was analyzed using predictions from the cross-validations. Informative chemical features for ERβ binding were identified by analyzing the frequency data of the chemical descriptors used in the models in the 5-fold cross-validations. 1000 permutations were conducted to assess chance correlation. The average accuracy of the 5-fold cross-validations was 93.14%, with a standard deviation of 0.64%. Prediction confidence analysis indicated that the higher the prediction confidence, the more accurate the predictions. Permutation testing revealed that the prediction model is unlikely to have been generated by chance. Eighteen informative descriptors were identified as important to ERβ binding prediction. Application of the prediction model to data from the ToxCast project yielded a very high sensitivity of 90-92%. Our results demonstrated that ERβ binding of chemicals can be accurately predicted using the developed model. Coupled with our previously developed ERα prediction model, this model is expected to facilitate drug development through identification of chemicals that specifically bind ERβ or ERα.

  8. High-throughput migration modelling for estimating exposure to chemicals in food packaging in screening and prioritization tools.

    PubMed

    Ernstoff, Alexi S; Fantke, Peter; Huang, Lei; Jolliet, Olivier

    2017-11-01

    Specialty software and simplified models are often used to estimate migration of potentially toxic chemicals from packaging into food. Current models, however, are not suitable for emerging applications in decision-support tools, e.g. in Life Cycle Assessment and risk-based screening and prioritization, which require rapid computation of accurate estimates for diverse scenarios. To fulfil this need, we develop an accurate and rapid (high-throughput) model that estimates the fraction of organic chemicals migrating from polymeric packaging materials into foods. Several hundred step-wise simulations optimised the model coefficients to cover a range of user-defined scenarios (e.g. temperature). The developed model, operationalised in a spreadsheet for future dissemination, nearly instantaneously estimates chemical migration, and has improved performance over commonly used model simplifications. When using measured diffusion coefficients the model accurately predicted (R² = 0.9, standard error Se = 0.5) hundreds of empirical data points for various scenarios. Diffusion coefficient modelling, which determines the speed of chemical transfer from package to food, was a major contributor to uncertainty and dramatically decreased model performance (R² = 0.4, Se = 1). In all, this study provides a rapid migration modelling approach to estimate exposure to chemicals in food packaging for emerging screening and prioritization approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Using sensors to measure activity in people with stroke.

    PubMed

    Fulk, George D; Sazonov, Edward

    2011-01-01

    The purpose of this study was to determine the ability of a novel shoe-based sensor that uses accelerometers, pressure sensors, and pattern recognition with a support vector machine (SVM) to accurately identify sitting, standing, and walking postures in people with stroke. Subjects with stroke wore the shoe-based sensor while randomly assuming 3 main postures: sitting, standing, and walking. A SVM classifier was used to train and validate the data to develop individual and group models, which were tested for accuracy, recall, and precision. Eight subjects participated. Both individual and group models were able to accurately identify the different postures (99.1% to 100% individual models and 76.9% to 100% group models). Recall and precision were also high for both individual (0.99 to 1.00) and group (0.82 to 0.99) models. The unique combination of accelerometer and pressure sensors built into the shoe was able to accurately identify postures. This shoe sensor could be used to provide accurate information on community performance of activities in people with stroke as well as provide behavioral enhancing feedback as part of a telerehabilitation intervention.
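    The three evaluation measures reported above (accuracy, recall, and precision) can be computed directly from labeled posture predictions. A minimal sketch with made-up posture labels, not the study's data:

```python
from collections import Counter

def classification_metrics(true, pred, positive):
    """Overall accuracy, plus recall and precision for one posture class."""
    pairs = Counter(zip(true, pred))
    accuracy = sum(n for (t, p), n in pairs.items() if t == p) / len(true)
    tp = pairs[(positive, positive)]
    fn = sum(n for (t, p), n in pairs.items() if t == positive and p != positive)
    fp = sum(n for (t, p), n in pairs.items() if p == positive and t != positive)
    recall = tp / (tp + fn) if tp + fn else 0.0       # of true positives, found
    precision = tp / (tp + fp) if tp + fp else 0.0    # of flagged, correct
    return accuracy, recall, precision
```

In the study these metrics would be computed once per posture class (sitting, standing, walking) for both the individual and group SVM models.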

  10. Stochastic Feedforward Control Technique

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1990-01-01

    Class of commanded trajectories modeled as stochastic process. Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increased airport capacity; safe and accurate flight in adverse weather conditions, including wind shear; avoidance of wake vortices; and reduced fuel consumption. Advances in techniques for design of modern controls and increased capabilities of digital flight computers coupled with accurate guidance information from Microwave Landing System (MLS). Stochastic feedforward control technique developed within context of ATOPS program.

  11. A Collective Study on Modeling and Simulation of Resistive Random Access Memory

    NASA Astrophysics Data System (ADS)

    Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen

    2018-01-01

    In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM), which, being a nascent technology, is heavily reliant on accurate models to develop efficient working designs and standardize its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models of RRAM devices. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from the memristive system have been addressed, and the solutions provided by the models to these problems have been shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail in this work. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively on various parameters. The workings and implementations of window functions such as Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts have been discussed which increase the applicability and accuracy of the models. The use of these concepts brings forth several improvements in the existing models, which have been enumerated in this work. Following the template presented, highly accurate models can be developed that will vastly help future model developers and the modeling community.
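    As an illustration of the window-function idea mentioned above, here is a minimal sketch of a linear ion-drift memristor state update using the Joglekar window; all parameter values are illustrative, not taken from any specific device or from the review:

```python
def joglekar_window(x, p):
    """Joglekar window f(x) = 1 - (2x - 1)**(2p): equals 1 at x = 0.5 and
    0 at the boundaries, suppressing state motion at x = 0 and x = 1."""
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

def simulate_memristor(current, dt=1e-6, x0=0.5, k=1e4, p=2,
                       r_on=100.0, r_off=16e3):
    """Euler integration of the state equation dx/dt = k * i(t) * f(x),
    with resistance interpolated as R = r_on*x + r_off*(1 - x)."""
    x, resistance = x0, []
    for i in current:
        x += dt * k * i * joglekar_window(x, p)  # Euler state update
        x = min(max(x, 0.0), 1.0)                # keep state in [0, 1]
        resistance.append(r_on * x + r_off * (1.0 - x))
    return x, resistance
```

Swapping in a Biolek or Prodromakis window changes only the `f(x)` term; the review's comparison of window functions amounts to comparing how each shapes this state derivative near the boundaries.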

  12. COMPARING THE IMPAIRMENT PROFILES OF OLDER DRIVERS AND NON-DRIVERS: TOWARD THE DEVELOPMENT OF A FITNESS-TO-DRIVE MODEL

    PubMed Central

    Antin, Jonathan F.; Stanley, Laura M.; Guo, Feng

    2011-01-01

    The purpose of this research effort was to compare older driver and non-driver functional impairment profiles across some 60 assessment metrics in an initial effort to contribute to the development of fitness-to-drive assessment models. Of the metrics evaluated, 21 showed statistically significant differences, almost all favoring the drivers. Also, it was shown that a logistic regression model comprised of five of the assessment scores could completely and accurately separate the two groups. The results of this study imply that older drivers are far less functionally impaired than non-drivers of similar ages, and that a parsimonious model can accurately assign individuals to either group. With such models, any driver classified or diagnosed as a non-driver would be a strong candidate for further investigation and intervention. PMID:22058607
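    As a sketch of the kind of classifier described above (logistic regression over assessment scores), the following plain-Python implementation is a hypothetical stand-in; the five actual assessment metrics and the study's fitted coefficients are not reproduced here:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Logistic regression trained by per-sample gradient descent on
    log-loss; rows of X are score vectors, y holds 0/1 group labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            z = max(min(z, 30.0), -30.0)   # guard math.exp overflow
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                   # gradient of the log-loss
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    return 1 if b + sum(wj * xj for wj, xj in zip(w, xi)) >= 0 else 0
```

On separable data such a model can classify every subject correctly, which is the "complete and accurate separation" property the abstract reports for its five-score model.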

  13. Parameterized reduced-order models using hyper-dual numbers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
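    The mechanism behind hyper-dual differentiation is easiest to see with ordinary dual numbers, which deliver exact first derivatives; hyper-dual numbers add further non-real parts to obtain exact second derivatives the same way. A minimal sketch, not the report's implementation:

```python
class Dual:
    """First-order dual number a + b*eps with eps**2 = 0. Propagating
    Duals through arithmetic yields exact first derivatives, free of
    finite-difference truncation and subtractive-cancellation error."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.real + o.real, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.real * o.real,
                    self.real * o.eps + self.eps * o.real)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact df/dx at x: seed the eps part with 1 and read it back out."""
    return f(Dual(x, 1.0)).eps
```

For example, `derivative(lambda x: x*x*x + 2*x, 2.0)` returns exactly 14.0, the value of 3x² + 2 at x = 2.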

  14. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been widely used in the medicine and health care sector. Within machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  15. Extending the diffuse layer model of surface acidity behavior: I. Model development

    EPA Science Inventory

    Considerable disenchantment exists within the environmental research community concerning our current ability to accurately model surface-complexation-mediated low-porewater-concentration ionic contaminant partitioning with natural surfaces. Several authors attribute this unaccep...

  16. Nondestructive pavement evaluation using ILLI-PAVE based artificial neural network models.

    DOT National Transportation Integrated Search

    2008-09-01

    The overall objective in this research project is to develop advanced pavement structural analysis models for more accurate solutions with fast computation schemes. Soft computing and modeling approaches, specifically the Artificial Neural Network (A...

  17. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters, such as winding flux linkages and voltages; average, cogging, and ripple torques; stator core flux densities; core losses; efficiencies; and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods.
These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.

  18. Computational Work to Support FAP/SRW Variable-Speed Power-Turbine Development

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2012-01-01

    The purpose of this report is to document the work done to enable a NASA CFD code to model the transition on a blade. The purpose of the present work is to down-select a transition model that would allow the flow simulation of a Variable-Speed Power-Turbine (VSPT) to be accurately performed. The modeling is to be ultimately performed to also account for the blade row interactions and effect on transition and therefore accurate accounting for losses. The present work is limited to steady flows. The low Reynolds number k-omega model of Wilcox and a modified version of same will be used for modeling of transition on experimentally measured blade pressure and heat transfer. It will be shown that the k-omega model and its modified variant fail to simulate the transition with any degree of accuracy. A case is therefore made for more accurate transition models. Three-equation models based on the work of Mayle on Laminar Kinetic Energy were explored and the Walters and Leylek model which was thought to be in a more mature state of development is introduced and implemented in the Glenn-HT code. Two-dimensional flat plate results and three-dimensional results for flow over turbine blades and the resulting heat transfer and its transitional behavior are reported. It is shown that the transition simulation is much improved over the baseline k-omega model.

  19. Investigation of tDCS volume conduction effects in a highly realistic head model

    NASA Astrophysics Data System (ADS)

    Wagner, S.; Rampersad, S. M.; Aydin, Ü.; Vorwerk, J.; Oostendorp, T. F.; Neuling, T.; Herrmann, C. S.; Stegeman, D. F.; Wolters, C. H.

    2014-02-01

    Objective. We investigate volume conduction effects in transcranial direct current stimulation (tDCS) and present a guideline for efficient and yet accurate volume conductor modeling in tDCS using our newly developed finite element (FE) approach. Approach. We developed a new, accurate and fast isoparametric FE approach for high-resolution geometry-adapted hexahedral meshes and tissue anisotropy. To attain a deeper insight into tDCS, we performed computer simulations, starting with a homogenized three-compartment head model and extending this step by step to a six-compartment anisotropic model. Main results. We are able to demonstrate important tDCS effects. First, we find channeling effects of the skin, the skull spongiosa and the cerebrospinal fluid compartments. Second, current vectors tend to be oriented towards the closest higher conducting region. Third, anisotropic WM conductivity causes current flow in directions more parallel to the WM fiber tracts. Fourth, the highest cortical current magnitudes are not only found close to the stimulation sites. Fifth, the median brain current density decreases with increasing distance from the electrodes. Significance. Our results allow us to formulate a guideline for volume conductor modeling in tDCS. We recommend accurately modeling the major tissues between the stimulating electrodes and the target areas, while for efficient yet accurate modeling, an exact representation of other tissues is less important. Because for the low-frequency regime in electrophysiology the quasi-static approach is justified, our results should also be valid for at least low-frequency (e.g., below 100 Hz) transcranial alternating current stimulation.

  20. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  1. Path generation algorithm for UML graphic modeling of aerospace test software

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditionally, aerospace software testing engineers rely on their own experience and on communication with software developers to describe the software under test and to write test cases by hand, a process that is time-consuming, inefficient, and prone to gaps in coverage. With the high-reliability model-based testing (MBT) tools developed by our company, a single modeling pass can automatically generate test case documents, efficiently and accurately. Accurately describing a process with a UML model depends on generating the paths through it, but existing path generation algorithms are either too simple, unable to combine branch paths and loops into a single path, or so elaborate that they generate meaningless path arrangements that are superfluous for aerospace software testing. Drawing on our ten years of aerospace experience, we developed a tailored path generation algorithm for UML graphic models of aerospace software.
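A path generation algorithm of the kind described, one that combines branch paths with a loop without unrolling the loop indefinitely, can be sketched as a depth-first search that traverses each edge at most once, so every loop is exercised a single time. The adjacency-list encoding below is a hypothetical stand-in for the UML activity model, not the paper's actual representation:

```python
def generate_paths(graph, start, end):
    """Enumerate start-to-end paths in a directed graph (e.g. a UML
    activity diagram), using each edge at most once so that each loop
    is taken a single time rather than unrolled forever.
    `graph` maps node -> list of successor nodes."""
    paths = []

    def dfs(node, used_edges, path):
        if node == end:
            paths.append(list(path))
            return
        for nxt in graph.get(node, []):
            edge = (node, nxt)
            if edge in used_edges:
                continue                  # this edge already on the path
            used_edges.add(edge)
            path.append(nxt)
            dfs(nxt, used_edges, path)
            path.pop()                    # backtrack
            used_edges.remove(edge)

    dfs(start, set(), [start])
    return paths

# Branch at A (to B or C) plus a loop C -> A:
g = {"S": ["A"], "A": ["B", "C"], "B": ["E"], "C": ["A", "E"]}
for p in generate_paths(g, "S", "E"):
    print(" -> ".join(p))
```

On this toy graph the search yields three paths, including one that follows the C -> A loop exactly once before branching to B.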

  2. Development and Application of Numerical Models for Reactive Flows

    DTIC Science & Technology

    1990-08-15

    Shear Layers: III. Effect of Convective Mach Number (Raafat H. Guirguis). Efforts at the Laboratory for Computational Physics (LCP) and Berkeley Research Associates have focused on developing mathematical and computational models which accurately and efficiently describe the ...

  3. Non-Markovian closure models for large eddy simulations using the Mori-Zwanzig formalism

    NASA Astrophysics Data System (ADS)

    Parish, Eric J.; Duraisamy, Karthik

    2017-01-01

    This work uses the Mori-Zwanzig (M-Z) formalism, a concept originating from nonequilibrium statistical mechanics, as a basis for the development of coarse-grained models of turbulence. The mechanics of the generalized Langevin equation (GLE) are considered, and insight gained from the orthogonal dynamics equation is used as a starting point for model development. A class of subgrid models is considered which represent nonlocal behavior via a finite memory approximation [Stinis, arXiv:1211.4285 (2012)], the length of which is determined using a heuristic that is related to the spectral radius of the Jacobian of the resolved variables. The resulting models are intimately tied to the underlying numerical resolution and are capable of approximating non-Markovian effects. Numerical experiments on the Burgers equation demonstrate that the M-Z-based models can accurately predict the temporal evolution of the total kinetic energy and the total dissipation rate at varying mesh resolutions. The trajectory of each resolved mode in phase space is accurately predicted for cases where the coarse graining is moderate. Large eddy simulations (LESs) of homogeneous isotropic turbulence and the Taylor-Green Vortex show that the M-Z-based models are able to provide excellent predictions, accurately capturing the subgrid contribution to energy transfer. Last, LESs of fully developed channel flow demonstrate the applicability of M-Z-based models to nondecaying problems. It is notable that the form of the closure is not imposed by the modeler, but is rather derived from the mathematics of the coarse graining, highlighting the potential of M-Z-based techniques to define LES closures.
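The structure the abstract builds on can be sketched with the standard Mori-Zwanzig identity; the notation below follows common presentations of the formalism and is not necessarily the paper's own:

```latex
% Generalized Langevin equation for a resolved mode u_k (standard form):
\frac{\partial}{\partial t} u_k(t)
  = \underbrace{e^{tL} P L\, u_k(0)}_{\text{Markovian}}
  + \underbrace{\int_0^t e^{(t-s)L} P L\, F_k(s)\, ds}_{\text{memory}}
  + \underbrace{F_k(t)}_{\text{orthogonal dynamics}},
\qquad F_k(t) = e^{tQL} Q L\, u_k(0),
```

with $P$ the projection onto the resolved variables and $Q = I - P$. The finite-memory approximation referenced above truncates the memory integral to a window $[t-\tau, t]$, with the memory length $\tau$ set by the resolution-dependent heuristic the abstract describes.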

  4. Detecting Responses of Loblolly Pine Stand Development to Site-Preparation Intensity: A Modeling Approach

    Treesearch

    Mingguang Xu; Timothy B. Harrington; M. Boyd Edwards

    1997-01-01

    Data from an existing site preparation experiment in the Georgia Piedmont were subjected to a modeling approach to analyze effects of site preparation intensity on stand development of loblolly pine (Pinus taeda L.) 5 to 12 years since treatment. An average stand height model that incorporated indicator variables for treatment provided an accurate...

  5. A Generic Nonlinear Aerodynamic Model for Aircraft

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2014-01-01

    A generic model of the aerodynamic coefficients was developed using wind tunnel databases for eight different aircraft and multivariate orthogonal functions. For each database and each coefficient, models were determined using polynomials expanded about the state and control variables, and an orthogonalization procedure. A predicted squared-error criterion was used to automatically select the model terms. Modeling terms picked in at least half of the analyses, which totalled 45 terms, were retained to form the generic nonlinear aerodynamic (GNA) model. Least squares was then used to estimate the model parameters and associated uncertainty that best fit the GNA model to each database. Nonlinear flight simulations were used to demonstrate that the GNA model produces accurate trim solutions, local behavior (modal frequencies and damping ratios), and global dynamic behavior (91% accurate state histories and 80% accurate aerodynamic coefficient histories) under large-amplitude excitation. This compact aerodynamics model can be used to decrease on-board memory storage requirements, quickly change conceptual aircraft models, provide smooth analytical functions for control and optimization applications, and facilitate real-time parametric system identification.
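The orthogonalization step described above can be sketched with classical Gram-Schmidt on the candidate regressor columns; once the regressors are orthogonal, each least-squares coefficient reduces to an independent projection, which is what makes term-by-term selection cheap. A toy illustration (not the wind tunnel databases):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(columns):
    """Orthogonalize candidate regressor columns, mirroring the
    orthogonalization procedure the abstract describes."""
    ortho = []
    for c in columns:
        v = list(c)
        for q in ortho:
            coef = dot(v, q) / dot(q, q)
            v = [vi - coef * qi for vi, qi in zip(v, q)]
        ortho.append(v)
    return ortho

def coefficients(ortho, y):
    """With orthogonal regressors each least-squares coefficient decouples
    to a simple projection. Note these are coefficients in the ORTHOGONAL
    basis (the first one is the mean response), not raw-model coefficients."""
    return [dot(q, y) / dot(q, q) for q in ortho]

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]            # toy data: y = 1 + 2x
cols = [[1.0] * len(x), x]          # candidate terms: constant, x
Q = gram_schmidt(cols)
print(coefficients(Q, y))           # [4.0, 2.0]: mean response, slope
```

Because the projections are independent, dropping or keeping a term (as the predicted squared-error criterion does) never forces the remaining coefficients to be re-estimated.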

  6. Computation of turbulent boundary layers on curved surfaces, 1 June 1975 - 31 January 1976

    NASA Technical Reports Server (NTRS)

    Wilcox, D. C.; Chambers, T. L.

    1976-01-01

    An accurate method was developed for predicting effects of streamline curvature and coordinate system rotation on turbulent boundary layers. A new two-equation model of turbulence was developed which serves as the basis of the study. In developing the new model, physical reasoning is combined with singular perturbation methods to develop a rational, physically-based set of equations which are, on the one hand, as accurate as mixing-length theory for equilibrium boundary layers and, on the other hand, suitable for computing effects of curvature and rotation. The equations are solved numerically for several boundary layer flows over plane and curved surfaces. For incompressible boundary layers, results of the computations are generally within 10% of corresponding experimental data. Somewhat larger discrepancies are noted for compressible applications.

  7. Human iPSC-derived cardiomyocytes and tissue engineering strategies for disease modeling and drug screening

    PubMed Central

    Smith, Alec S.T.; Macadangdang, Jesse; Leung, Winnie; Laflamme, Michael A.; Kim, Deok-Ho

    2016-01-01

    Improved methodologies for modeling cardiac disease phenotypes and accurately screening the efficacy and toxicity of potential therapeutic compounds are actively being sought to advance drug development and improve disease modeling capabilities. To that end, much recent effort has been devoted to the development of novel engineered biomimetic cardiac tissue platforms that accurately recapitulate the structure and function of the human myocardium. Within the field of cardiac engineering, induced pluripotent stem cells (iPSCs) are an exciting tool that offer the potential to advance the current state of the art, as they are derived from somatic cells, enabling the development of personalized medical strategies and patient specific disease models. Here we review different aspects of iPSC-based cardiac engineering technologies. We highlight methods for producing iPSC-derived cardiomyocytes (iPSC-CMs) and discuss their application to compound efficacy/toxicity screening and in vitro modeling of prevalent cardiac diseases. Special attention is paid to the application of micro- and nano-engineering techniques for the development of novel iPSC-CM based platforms and their potential to advance current preclinical screening modalities. PMID:28007615

  8. Atmospheric density models

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was studied. Required characteristics including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height were met by the new atmospheric density model.
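A "simple density model" in this spirit can be sketched as a piecewise-exponential fit, each altitude layer decaying with its own scale height. The layer constants below are rough standard-atmosphere values for illustration only, not the Jacchia-matched coefficients of the model described above (which also varies with solar and geomagnetic activity):

```python
import math

def density(h_km, layers):
    """Piecewise-exponential density: rho = rho0 * exp(-(h - h0) / H),
    using the highest layer whose base altitude is at or below h."""
    base_h, rho0, scale = max(l for l in layers if l[0] <= h_km)
    return rho0 * math.exp(-(h_km - base_h) / scale)

layers = [               # (base altitude km, base density kg/m^3, scale height km)
    (0.0, 1.225, 8.44),      # rough sea-level values
    (25.0, 3.899e-2, 6.49),
    (100.0, 5.297e-7, 5.88),
]
print(density(30.0, layers))   # ~1.8e-2 kg/m^3
```

The appeal for an onboard program is clear: a table of a few constants and one `exp` call per evaluation, versus the storage and execution time of the full Jacchia model.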

  9. On a generalized laminate theory with application to bending, vibration, and delamination buckling in composite laminates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbero, E.J.

    1989-01-01

    In this study, a computational model for accurate analysis of composite laminates, including laminates with delaminated interfaces, is developed. An accurate prediction of stress distributions, including interlaminar stresses, is obtained by using the Generalized Laminate Plate Theory of Reddy, in which a layer-wise linear approximation of the displacements through the thickness is used. Analytical as well as finite-element solutions of the theory are developed for bending and vibrations of laminated composite plates for the linear theory. Geometrical nonlinearity, including buckling and postbuckling, is included and used to perform stress analysis of laminated plates. A general two-dimensional theory of laminated cylindrical shells is also developed in this study. Geometrical nonlinearity and transverse compressibility are included. Delaminations between layers of composite plates are modelled by jump discontinuity conditions at the interfaces. The theory includes multiple delaminations through the thickness. Geometric nonlinearity is included to capture layer buckling. The strain energy release rate distribution along the boundary of delaminations is computed by a novel algorithm. The computational models presented herein are accurate for global behavior and particularly appropriate for the study of local effects.

  10. An automated method of tuning an attitude estimator

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1995-01-01

    Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce the accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
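The role of the tuned parameters can be seen even in a scalar Kalman filter. Below is a minimal random-walk filter in which the process-noise variance `q` and measurement-noise variance `r` are exactly the kind of quantities the paper's Process Noise Covariance Estimator selects automatically; here they are simply inputs, and the data are toy numbers, not magnetometer measurements:

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk process model.
    q: process-noise variance (the tunable parameter), r: measurement-noise
    variance. Returns the state estimate after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q                 # predict: random walk inflates uncertainty
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

zs = [1.1, 0.9, 1.05, 0.98, 1.02]        # noisy observations of a ~1.0 state
est = kalman_1d(zs, q=0.01, r=0.1)
print(est[-1])
```

If `q` or `r` is badly chosen, the same code either trusts noisy measurements too much or responds too sluggishly, which is precisely the degradation the paper's automatic tuning aims to avoid.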

  11. Procedure for the systematic orientation of digitised cranial models. Design and validation.

    PubMed

    Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L

    2015-12-01

    Comparison of bony pieces requires that they are oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner, and subsequently the positions of certain characteristic landmarks were determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieve results at least as accurate if not more accurate using the assisted method than those obtained using manual orientation, while inexperienced evaluators achieved more accurate orientation using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automatised spatial orientations of virtual cranial models relative to standardised anatomical planes, independent of the operator and operator experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Effect of species rarity on the accuracy of species distribution models for reptiles and amphibians in southern California

    USGS Publications Warehouse

    Franklin, J.; Wejnert, K.E.; Hathaway, S.A.; Rochester, C.J.; Fisher, R.N.

    2009-01-01

    Aim: Several studies have found that more accurate predictive models of species' occurrences can be developed for rarer species; however, one recent study found the relationship between range size and model performance to be an artefact of sample prevalence, that is, the proportion of presence versus absence observations in the data used to train the model. We examined the effect of model type, species rarity class, species' survey frequency, detectability and manipulated sample prevalence on the accuracy of distribution models developed for 30 reptile and amphibian species. Location: Coastal southern California, USA. Methods: Classification trees, generalized additive models and generalized linear models were developed using species presence and absence data from 420 locations. Model performance was measured using sensitivity, specificity and the area under the curve (AUC) of the receiver-operating characteristic (ROC) plot based on twofold cross-validation, or on bootstrapping. Predictors included climate, terrain, soil and vegetation variables. Species were assigned to rarity classes by experts. The data were sampled to generate subsets with varying ratios of presences and absences to test for the effect of sample prevalence. Join count statistics were used to characterize spatial dependence in the prediction errors. Results: Species in classes with higher rarity were more accurately predicted than common species, and this effect was independent of sample prevalence. Although positive spatial autocorrelation remained in the prediction errors, it was weaker than was observed in the species occurrence data. The differences in accuracy among model types were slight. Main conclusions: Using a variety of modelling methods, more accurate species distribution models were developed for rarer than for more common species. 
This was presumably because it is difficult to discriminate suitable from unsuitable habitat for habitat generalists, and not as an artefact of the effect of sample prevalence on model estimation. © 2008 The Authors.
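The AUC statistic used to compare these models can be computed directly from its rank interpretation: the probability that a randomly chosen presence receives a higher model score than a randomly chosen absence (ties counting half). A small sketch with toy scores, not the herpetofauna data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a random positive outscores a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                 # toy presences and absences
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]     # toy model scores
print(auc(labels, scores))                  # 8/9 ~ 0.889
```

Because this formulation only compares ranks across the two classes, AUC is insensitive to the ratio of presences to absences, which is what lets the study separate the rarity effect from sample prevalence.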

  13. Developing and testing temperature models for regulated systems: a case study on the Upper Delaware River

    USGS Publications Warehouse

    Cole, Jeffrey C.; Maloney, Kelly O.; Schmid, Matthias; McKenna, James E.

    2014-01-01

    Water temperature is an important driver of many processes in riverine ecosystems. If reservoirs are present, their releases can greatly influence downstream water temperatures. Models are important tools in understanding the influence these releases may have on the thermal regimes of downstream rivers. In this study, we developed and tested a suite of models to predict river temperature at a location downstream of two reservoirs in the Upper Delaware River (USA), a section of river that is managed to support a world-class coldwater fishery. Three empirical models were tested, including a Generalized Least Squares Model with a cosine trend (GLScos), AutoRegressive Integrated Moving Average (ARIMA), and Artificial Neural Network (ANN). We also tested one mechanistic Heat Flux Model (HFM) that was based on energy gain and loss. Predictor variables used in model development included climate data (e.g., solar radiation, wind speed, etc.) collected from a nearby weather station and temperature and hydrologic data from upstream U.S. Geological Survey gages. Models were developed with a training dataset that consisted of data from 2008 to 2011; they were then independently validated with a test dataset from 2012. Model accuracy was evaluated using root mean square error (RMSE), Nash Sutcliffe efficiency (NSE), percent bias (PBIAS), and index of agreement (d) statistics. Model forecast success was evaluated using baseline-modified prime index of agreement (md) at the one, three, and five day predictions. All five models accurately predicted daily mean river temperature across the entire training dataset (RMSE = 0.58–1.311, NSE = 0.99–0.97, d = 0.98–0.99); ARIMA was most accurate (RMSE = 0.57, NSE = 0.99), but each model, other than ARIMA, showed short periods of under- or over-predicting observed warmer temperatures. For the training dataset, all models besides ARIMA had overestimation bias (PBIAS = −0.10 to −1.30). 
Validation analyses showed all models performed well; the HFM model was the most accurate compared to other models (RMSE = 0.92, both NSE = 0.98, d = 0.99) and the ARIMA model was least accurate (RMSE = 2.06, NSE = 0.92, d = 0.98); however, all models had an overestimation bias (PBIAS = −4.1 to −10.20). Aside from the one day forecast ARIMA model (md = 0.53), all models forecasted fairly well at the one, three, and five day forecasts (md = 0.77–0.96). Overall, we were successful in developing models predicting daily mean temperature across a broad range of temperatures. These models, specifically the GLScos, ANN, and HFM, may serve as important tools for predicting conditions and managing thermal releases in regulated river systems such as the Delaware River. Further model development may be important in customizing predictions for particular biological or ecological needs, or for particular temporal or spatial scales.
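The four accuracy statistics used in this study have compact standard definitions; a sketch (toy temperature series, not the Delaware River data):

```python
import math

def rmse(obs, sim):
    """Root mean square error, in the units of the observations."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias; negative values indicate overestimation, matching the
    sign convention used in the abstract."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d: bounded [0, 1], 1 is perfect."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - mean_o) + abs(o - mean_o)) ** 2 for o, s in zip(obs, sim))
    return 1.0 - num / den

obs = [10.0, 12.0, 14.0, 16.0]        # toy observed daily means (deg C)
sim = [10.5, 11.5, 14.5, 16.5]        # toy simulated values
print(rmse(obs, sim), nse(obs, sim), pbias(obs, sim), index_of_agreement(obs, sim))
```

Note the complementary roles: RMSE carries units, NSE benchmarks against the observed mean, PBIAS isolates systematic over- or under-prediction, and d rewards agreement in pattern as well as magnitude.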

  14. Developing and testing temperature models for regulated systems: A case study on the Upper Delaware River

    NASA Astrophysics Data System (ADS)

    Cole, Jeffrey C.; Maloney, Kelly O.; Schmid, Matthias; McKenna, James E.

    2014-11-01

    Water temperature is an important driver of many processes in riverine ecosystems. If reservoirs are present, their releases can greatly influence downstream water temperatures. Models are important tools in understanding the influence these releases may have on the thermal regimes of downstream rivers. In this study, we developed and tested a suite of models to predict river temperature at a location downstream of two reservoirs in the Upper Delaware River (USA), a section of river that is managed to support a world-class coldwater fishery. Three empirical models were tested, including a Generalized Least Squares Model with a cosine trend (GLScos), AutoRegressive Integrated Moving Average (ARIMA), and Artificial Neural Network (ANN). We also tested one mechanistic Heat Flux Model (HFM) that was based on energy gain and loss. Predictor variables used in model development included climate data (e.g., solar radiation, wind speed, etc.) collected from a nearby weather station and temperature and hydrologic data from upstream U.S. Geological Survey gages. Models were developed with a training dataset that consisted of data from 2008 to 2011; they were then independently validated with a test dataset from 2012. Model accuracy was evaluated using root mean square error (RMSE), Nash Sutcliffe efficiency (NSE), percent bias (PBIAS), and index of agreement (d) statistics. Model forecast success was evaluated using baseline-modified prime index of agreement (md) at the one, three, and five day predictions. All five models accurately predicted daily mean river temperature across the entire training dataset (RMSE = 0.58-1.311, NSE = 0.99-0.97, d = 0.98-0.99); ARIMA was most accurate (RMSE = 0.57, NSE = 0.99), but each model, other than ARIMA, showed short periods of under- or over-predicting observed warmer temperatures. For the training dataset, all models besides ARIMA had overestimation bias (PBIAS = -0.10 to -1.30). 
Validation analyses showed all models performed well; the HFM model was the most accurate compared to other models (RMSE = 0.92, both NSE = 0.98, d = 0.99) and the ARIMA model was least accurate (RMSE = 2.06, NSE = 0.92, d = 0.98); however, all models had an overestimation bias (PBIAS = -4.1 to -10.20). Aside from the one day forecast ARIMA model (md = 0.53), all models forecasted fairly well at the one, three, and five day forecasts (md = 0.77-0.96). Overall, we were successful in developing models predicting daily mean temperature across a broad range of temperatures. These models, specifically the GLScos, ANN, and HFM, may serve as important tools for predicting conditions and managing thermal releases in regulated river systems such as the Delaware River. Further model development may be important in customizing predictions for particular biological or ecological needs, or for particular temporal or spatial scales.

  15. Parametric model of human body shape and ligaments for patient-specific epidural simulation.

    PubMed

    Vaughan, Neil; Dubey, Venketesh N; Wee, Michael Y K; Isaacs, Richard

    2014-10-01

    This work builds upon the concept of matching a person's weight, height and age to their overall body shape to create an adjustable three-dimensional model. A versatile and accurate predictor of body size, shape and ligament thickness is required to improve simulation for medical procedures. A model which is adjustable for any size, shape, body mass, age or height would provide the ability to simulate procedures on patients of various body compositions. Three methods are provided for estimating body circumferences and ligament thicknesses for each patient. The first method uses empirical relations from body shape and size. The second method loads a dataset from a magnetic resonance imaging (MRI) scan or ultrasound scan containing accurate ligament measurements. The third method is an artificial neural network (ANN) which uses the MRI dataset as a training set and improves its accuracy through error back-propagation, learning as more patient data are added. The ANN is trained and tested with clinical data from 23,088 patients. The ANN can predict subscapular skinfold thickness within 3.54 mm, waist circumference within 3.92 cm, thigh circumference within 2.00 cm, arm circumference within 1.21 cm, calf circumference within 1.40 cm, and triceps skinfold thickness within 3.43 mm. An alternative regression analysis method gave overall slightly less accurate predictions: subscapular skinfold thickness within 3.75 mm, waist circumference within 3.84 cm, thigh circumference within 2.16 cm, arm circumference within 1.34 cm, calf circumference within 1.46 cm, and triceps skinfold thickness within 3.89 mm. These calculations are used to display a 3D graphics model of the patient's body shape using OpenGL, adjusted by 3D mesh deformations. A patient-specific epidural simulator is presented using the developed body shape model, able to simulate needle insertion procedures on a 3D model of any patient size and shape. The developed ANN gave the most accurate results for body shape, size and ligament thickness.
The resulting simulator offers the experience of simulating needle insertions accurately whilst allowing for variation in patient body mass, height or age. Copyright © 2014 Elsevier B.V. All rights reserved.
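A minimal sketch of the error back-propagation idea behind the ANN described above, reduced to its simplest case: a single linear unit trained by stochastic gradient descent. The features, scaling, and target relation below are illustrative stand-ins, not the paper's 23,088-patient data:

```python
import random

def train_unit(samples, lr=0.1, epochs=500):
    """Train one linear unit by SGD -- the simplest back-propagation case.
    Inputs are assumed normalized to [0, 1]; names are illustrative."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y  # prediction error
            for i in range(len(w)):
                w[i] -= lr * err * x[i]  # propagate error back to each weight
            b -= lr * err
    return w, b

def predict(model, x):
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Synthetic patients: (normalized height, weight, age) -> a circumference proxy
random.seed(0)
data = [((h, wt, a), 0.5 * h + 0.3 * wt + 0.1 * a + 0.05)
        for h, wt, a in [(random.random(), random.random(), random.random())
                         for _ in range(30)]]
model = train_unit(data)
```

As more samples are added to `data`, the fit tightens, mirroring the paper's observation that the ANN improves with additional patient records.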

  16. Hydrologic modeling for water resource assessment in a developing country: the Rwanda case study

    Treesearch

    Steve McNulty; Erika Cohen Mack; Ge Sun; Peter Caldwell

    2016-01-01

    Accurate water resources assessment using hydrologic models can be a challenge anywhere, but particularly for developing countries with limited financial and technical resources. Developing countries could most benefit from the water resource planning capabilities that hydrologic models can provide, but these countries are least likely to have the data needed to run ...

  17. The New York Sepsis Severity Score: Development of a Risk-Adjusted Severity Model for Sepsis.

    PubMed

    Phillips, Gary S; Osborn, Tiffany M; Terry, Kathleen M; Gesten, Foster; Levy, Mitchell M; Lemeshow, Stanley

    2018-05-01

    In accordance with Rory's Regulations, hospitals across New York State developed and implemented protocols for sepsis recognition and treatment to reduce variations in evidence-informed care and preventable mortality. The New York Department of Health sought to develop a risk assessment model for accurate and standardized hospital mortality comparisons of adult septic patients across institutions using case-mix adjustment. Retrospective evaluation of prospectively collected data. Data from 43,204 severe sepsis and septic shock patients from 179 hospitals across New York State were evaluated. Prospective data were submitted to a database from January 1, 2015, to December 31, 2015. None. Maximum likelihood logistic regression was used to estimate model coefficients used in the New York State risk model. The mortality probability was estimated using a logistic regression model. Variables to be included in the model were determined as part of the model-building process. Interactions between variables were included if they made clinical sense and if their p values were less than 0.05. Model development used a random sample of 90% of available patients and was validated using the remaining 10%. Hosmer-Lemeshow goodness of fit p values were considerably greater than 0.05, suggesting good calibration. Areas under the receiver operator curve in the developmental and validation subsets were 0.770 (95% CI, 0.765-0.775) and 0.773 (95% CI, 0.758-0.787), respectively, indicating good discrimination. Development and validation datasets had similar distributions of estimated mortality probabilities. Mortality increased with rising age, comorbidities, and lactate. The New York Sepsis Severity Score accurately estimated the probability of hospital mortality in severe sepsis and septic shock patients. It performed well with respect to calibration and discrimination.
This sepsis-specific model provides an accurate, comprehensive method for standardized mortality comparison of adult patients with severe sepsis and septic shock.
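The estimation step of such a risk model can be sketched as maximum-likelihood logistic regression fit by gradient ascent. The single covariate and toy outcomes below are invented for illustration; none of the study's covariates, interaction terms, or case-mix adjustment are reproduced:

```python
import math

def prob(model, x):
    """Predicted mortality probability from a fitted logistic model."""
    w, b = model
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=200):
    """Maximum-likelihood logistic regression via gradient ascent --
    a toy stand-in for the study's coefficient estimation."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            g = yi - prob((w, b), xi)  # gradient of the log-likelihood
            for j in range(len(w)):
                w[j] += lr * g * xi[j]
            b += lr * g
    return w, b

# Toy cohort: one covariate (say, a normalized lactate value); outcome 1 = died
X = [[0.1], [0.2], [0.3], [0.6], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
model = fit_logistic(X, y)
```

A real workflow would, as in the study, fit on a random 90% of patients and check calibration (Hosmer-Lemeshow) and discrimination (area under the ROC curve) on the held-out 10%.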

  18. Airframe Icing Research Gaps: NASA Perspective

    NASA Technical Reports Server (NTRS)

    Potapczuk, Mark

    2009-01-01

    Current Airframe Icing Technology Gaps: Development of a full 3D ice accretion simulation model. Development of an improved simulation model for SLD conditions. CFD modeling of stall behavior for ice-contaminated wings/tails. Computational methods for simulation of stability and control parameters. Analysis of thermal ice protection system performance. Quantification of 3D ice shape geometric characteristics. Development of accurate ground-based simulation of SLD conditions. Development of scaling methods for SLD conditions. Development of advanced diagnostic techniques for assessment of tunnel cloud conditions. Identification of critical ice shapes for aerodynamic performance degradation. Aerodynamic scaling issues associated with testing scale model ice shape geometries. Development of altitude scaling methods for thermal ice protection systems. Development of accurate parameter identification methods. Measurement of stability and control parameters for an ice-contaminated swept wing aircraft. Creation of control law modifications to prevent loss of control during icing encounters. 3D ice shape geometries. Collection efficiency data for ice shape geometries. SLD ice shape data, in-flight and ground-based, for simulation verification. Aerodynamic performance data for 3D geometries and various icing conditions. Stability and control parameter data for iced aircraft configurations. Thermal ice protection system data for simulation validation.

  19. Modeling haplotype block variation using Markov chains.

    PubMed

    Greenspan, G; Geiger, D

    2006-04-01

    Models of background variation in genomic regions form the basis of linkage disequilibrium mapping methods. In this work we analyze a background model that groups SNPs into haplotype blocks and represents the dependencies between blocks by a Markov chain. We develop an error measure to compare the performance of this model against the common model that assumes that blocks are independent. By examining data from the International Haplotype Mapping project, we show how the Markov model over haplotype blocks is most accurate when representing blocks in strong linkage disequilibrium. This contrasts with the independent model, which is rendered less accurate by linkage disequilibrium. We provide a theoretical explanation for this surprising property of the Markov model and relate its behavior to allele diversity.
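The comparison between the two background models can be illustrated with toy block frequencies. The numbers below are invented to show how a Markov chain rewards a haplotype that follows the linkage-disequilibrium pattern, while the independent model cannot:

```python
import math

def loglik_independent(blocks, freq):
    """Log-probability of a haplotype under the independent-blocks model:
    each block's haplotype frequency enters on its own."""
    return sum(math.log(freq[i][h]) for i, h in enumerate(blocks))

def loglik_markov(blocks, start, trans):
    """Log-probability under a first-order Markov chain over blocks,
    which can capture dependencies (LD) between adjacent blocks."""
    lp = math.log(start[blocks[0]])
    for a, b in zip(blocks, blocks[1:]):
        lp += math.log(trans[(a, b)])
    return lp

# Toy example with strong LD: haplotype 'A' at one block is usually
# followed by 'A' at the next, though both are equally common marginally.
freq = [{"A": 0.5, "B": 0.5}, {"A": 0.5, "B": 0.5}]
start = {"A": 0.5, "B": 0.5}
trans = {("A", "A"): 0.9, ("A", "B"): 0.1, ("B", "A"): 0.1, ("B", "B"): 0.9}
```

Here the independent model assigns the LD-consistent haplotype ("A", "A") probability 0.25, while the Markov model assigns 0.45, matching the paper's finding that the Markov model is most accurate under strong LD.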

  20. Modeling Haplotype Block Variation Using Markov Chains

    PubMed Central

    Greenspan, G.; Geiger, D.

    2006-01-01

    Models of background variation in genomic regions form the basis of linkage disequilibrium mapping methods. In this work we analyze a background model that groups SNPs into haplotype blocks and represents the dependencies between blocks by a Markov chain. We develop an error measure to compare the performance of this model against the common model that assumes that blocks are independent. By examining data from the International Haplotype Mapping project, we show how the Markov model over haplotype blocks is most accurate when representing blocks in strong linkage disequilibrium. This contrasts with the independent model, which is rendered less accurate by linkage disequilibrium. We provide a theoretical explanation for this surprising property of the Markov model and relate its behavior to allele diversity. PMID:16361244

  1. CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang

    2014-06-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described using either the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems remain in most existing modeling programs; in particular, some are not accurate or are adapted only to specific CAD formats. To convert CAD models into GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is mediating between the boundary representation (B-REP) used by CAD models and the constructive solid geometry (CSG) used by GDML models. First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.
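The final Boolean-assembly step can be pictured with a few lines of XML generation. The element names follow the public GDML schema, but the solid names and sizes are invented, and the surrounding GDML file structure (defines, materials, structure) is omitted, so this is a fragment rather than a loadable file:

```python
import xml.etree.ElementTree as ET

def box_solid(name, x, y, z):
    """A GDML basic solid, of the kind generated for a convex shell."""
    return ET.Element("box", {"name": name, "x": str(x), "y": str(y),
                              "z": str(z), "lunit": "mm"})

def union_solid(name, first_ref, second_ref):
    """The GDML <union> element a CSG converter writes when it glues two
    previously generated solids back together by a Boolean operation."""
    u = ET.Element("union", {"name": name})
    ET.SubElement(u, "first", {"ref": first_ref})
    ET.SubElement(u, "second", {"ref": second_ref})
    return u

# Assemble a <solids> section: two convex pieces plus their Boolean union
solids = ET.Element("solids")
solids.append(box_solid("part1", 10, 10, 10))
solids.append(box_solid("part2", 5, 5, 20))
solids.append(union_solid("whole", "part1", "part2"))
gdml_fragment = ET.tostring(solids, encoding="unicode")
```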

  2. Predicting perturbation patterns from the topology of biological networks.

    PubMed

    Santolini, Marc; Barabási, Albert-László

    2018-06-20

    High-throughput technologies, offering an unprecedented wealth of quantitative data underlying the makeup of living systems, are changing biology. Notably, the systematic mapping of the relationships between biochemical entities has fueled the rapid development of network biology, offering a suitable framework to describe disease phenotypes and predict potential drug targets. However, our ability to develop accurate dynamical models remains limited, due in part to the limited knowledge of the kinetic parameters underlying these interactions. Here, we explore the degree to which we can make reasonably accurate predictions in the absence of the kinetic parameters. We find that simple dynamically agnostic models are sufficient to recover the strength and sign of the biochemical perturbation patterns observed in 87 biological models for which the underlying kinetics are known. Surprisingly, a simple distance-based model achieves 65% accuracy. We show that this predictive power is robust to topological and kinetic parameter perturbations, and we identify key network properties that can increase up to 80% the recovery rate of the true perturbation patterns. We validate our approach using experimental data on the chemotactic pathway in bacteria, finding that a network model of perturbation spreading predicts with ∼80% accuracy the directionality of gene expression and phenotype changes in knock-out and overproduction experiments. These findings show that the steady advances in mapping out the topology of biochemical interaction networks open avenues for accurate perturbation spread modeling, with direct implications for medicine and drug development.
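The distance-based model mentioned above can be sketched in a few lines: compute unweighted shortest-path distances from the perturbed node and let the predicted perturbation decay with distance. The decay constant and the toy network below are assumptions for illustration, not the paper's benchmark systems:

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from the perturbed node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def perturbation_pattern(adj, source, decay=0.5):
    """Toy distance-based model: predicted perturbation strength falls
    off as decay**distance from the perturbed node."""
    return {n: decay ** d for n, d in bfs_distances(adj, source).items()}

# Small signaling-network sketch (nodes and edges are illustrative):
# receptor R -> kinase K1 -> {kinase K2, transcription factor TF}
network = {"R": ["K1"], "K1": ["K2", "TF"], "K2": ["TF"], "TF": []}
```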

  3. A bio-optical model for integration into ecosystem models for the Ligurian Sea

    NASA Astrophysics Data System (ADS)

    Bengil, Fethi; McKee, David; Beşiktepe, Sükrü T.; Sanjuan Calzado, Violeta; Trees, Charles

    2016-12-01

    A bio-optical model has been developed for the Ligurian Sea which encompasses both deep, oceanic Case 1 waters and shallow, coastal Case 2 waters. The model builds on earlier Case 1 models for the region and uses field data collected on the BP09 research cruise to establish new relationships for non-biogenic particles and CDOM. The bio-optical model reproduces in situ IOPs accurately and is used to parameterize radiative transfer simulations which demonstrate its utility for modeling underwater light levels and above surface remote sensing reflectance. Prediction of euphotic depth is found to be accurate to within ∼3.2 m (RMSE). Previously published light field models work well for deep oceanic parts of the Ligurian Sea that fit the Case 1 classification. However, they are found to significantly over-estimate euphotic depth in optically complex coastal waters where the influence of non-biogenic materials is strongest. For these coastal waters, the combination of the bio-optical model proposed here and full radiative transfer simulations provides significantly more accurate predictions of euphotic depth.

  4. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  5. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
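The role barycentric coordinates play can be seen in the 2D special case, where three reference points determine the weights by a direct linear solve; the paper's linear-programming formulation generalizes this to high-dimensional phase space with explicit approximation errors. The points below are invented:

```python
def barycentric_weights(p, a, b, c):
    """Barycentric weights (wa, wb, wc), with wa + wb + wc = 1, expressing
    point p as a combination of triangle vertices a, b, c. This closed form
    exists only in the planar 3-point case; with more reference states in
    higher dimensions the weights are found by linear programming."""
    (x, y), (xa, ya), (xb, yb), (xc, yc) = p, a, b, c
    det = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
    wa = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / det
    wb = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / det
    return wa, wb, 1.0 - wa - wb

def reconstruct(weights, pts):
    """Prediction step in miniature: blend the reference points."""
    return tuple(sum(w * pt[k] for w, pt in zip(weights, pts)) for k in (0, 1))
```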

  6. Predicting intensity ranks of peptide fragment ions.

    PubMed

    Frank, Ari M

    2009-05-01

    Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.
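The discriminative ranking idea can be sketched with a pairwise ranking perceptron, a much simpler learner than the boosting used in the paper; the two "features" below are generic stand-ins for the sequence-based features:

```python
def score(w, x):
    """Rank score of one fragment ion's feature vector."""
    return sum(wj * xj for wj, xj in zip(w, x))

def train_ranker(pairs, n_features, epochs=100):
    """Pairwise ranking perceptron: each training pair (hi, lo) says that
    fragment `hi` should outrank fragment `lo` in the spectrum."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for hi, lo in pairs:
            if score(w, hi) <= score(w, lo):      # mis-ordered pair
                for j in range(n_features):
                    w[j] += hi[j] - lo[j]         # perceptron-style update
    return w

# Toy feature vectors, e.g. (is_b_ion, proline_adjacent); in this invented
# data, b-ions should outrank other ions with the same context.
pairs = [((1.0, 0.0), (0.0, 0.0)),
         ((1.0, 1.0), (0.0, 1.0)),
         ((0.0, 1.0), (0.0, 0.0))]
w = train_ranker(pairs, 2)
```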

  7. Predicting Intensity Ranks of Peptide Fragment Ions

    PubMed Central

    Frank, Ari M.

    2009-01-01

    Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal MRM transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html. PMID:19256476

  8. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.

  9. Comparison of Cluster, Slab, and Analytic Potential Models for the Dimethyl Methylphosphonate (DMMP)/TiO2 (110) Intermolecular Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Li; Tunega, Daniel; Xu, Lai

    2013-08-29

    In a previous study (J. Phys. Chem. C 2011, 115, 12403) cluster models for the TiO2 rutile (110) surface and MP2 calculations were used to develop an analytic potential energy function for dimethyl methylphosphonate (DMMP) interacting with this surface. In the work presented here, this analytic potential and MP2 cluster models are compared with DFT "slab" calculations for DMMP interacting with the TiO2 (110) surface and with DFT cluster models for the TiO2 (110) surface. The DFT slab calculations were performed with the PW91 and PBE functionals. The analytic potential gives DMMP/TiO2 (110) potential energy curves in excellent agreement with those obtained from the slab calculations. The cluster models for the TiO2 (110) surface, used for the MP2 calculations, were extended to DFT calculations with the B3LYP, PW91, and PBE functionals. These DFT calculations do not give DMMP/TiO2 (110) interaction energies that agree with those from the DFT slab calculations. Analyses of the wave functions for these cluster models show that they do not accurately represent the HOMO and LUMO for the surface, which should be 2p and 3d orbitals, respectively, and the models also do not give an accurate band gap. The MP2 cluster models likewise do not accurately represent the LUMO; that they nevertheless give accurate DMMP/TiO2 (110) interaction energies is apparently fortuitous, arising from their highly inaccurate band gaps. Accurate cluster models, consisting of 7, 10, and 15 Ti atoms and having the correct HOMO and LUMO properties, are proposed. The work presented here illustrates the care that must be taken in "constructing" cluster models which accurately model surfaces.

  10. New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiss, T.; Chaney, L.; Meyer, J.

    Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software Matlab/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.

  11. Accurate force field for molybdenum by machine learning large materials data

    NASA Astrophysics Data System (ADS)

    Chen, Chi; Deng, Zhi; Tran, Richard; Tang, Hanmei; Chu, Iek-Heng; Ong, Shyue Ping

    2017-09-01

    In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large and long-time scale simulations.
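The core of a SNAP-style fit is linear regression of energies on structural descriptors. Below is a stdlib-only sketch via regularized normal equations, with toy descriptor vectors standing in for bispectrum components; a real fit, as described above, also includes forces and stress tensors and far larger datasets:

```python
def fit_linear_energy(descriptors, energies, reg=1e-10):
    """Least-squares fit of coefficients c with E ≈ descriptors · c, the
    linear-regression core of a SNAP-style potential (toy features here)."""
    n = len(descriptors[0])
    # Normal equations: (A^T A + reg*I) c = A^T y
    M = [[sum(row[i] * row[j] for row in descriptors) + (reg if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    v = [sum(row[i] * e for row, e in zip(descriptors, energies)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        v[k], v[p] = v[p], v[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for col in range(k, n):
                M[r][col] -= f * M[k][col]
            v[r] -= f * v[k]
    coeff = [0.0] * n
    for k in range(n - 1, -1, -1):
        coeff[k] = (v[k] - sum(M[k][j] * coeff[j] for j in range(k + 1, n))) / M[k][k]
    return coeff
```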

  12. A photogrammetric technique for generation of an accurate multispectral optical flow dataset

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2017-06-01

    The availability of an accurate dataset is the key requirement for successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of an accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for generation of an accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser scanning based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
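The principle behind model-based ground truth can be shown with a pinhole camera: once scene geometry and motion are known, the flow at each point follows exactly from projection, with no estimation involved. The camera parameters and the pure-translation motion below are simplifying assumptions, not the paper's photogrammetric setup:

```python
def project(pt, f=1000.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D point (camera frame, z forward) to pixels.
    Focal length and principal point are made-up values."""
    x, y, z = pt
    return (f * x / z + cx, f * y / z + cy)

def ground_truth_flow(points, translation):
    """Exact per-point optical flow induced by a known rigid translation
    of the scene: project each point before and after the motion and
    take the pixel displacement."""
    flow = []
    for p in points:
        u0, v0 = project(p)
        u1, v1 = project(tuple(a + b for a, b in zip(p, translation)))
        flow.append((u1 - u0, v1 - v0))
    return flow
```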

  13. Mobility Research for Future Vehicles: A Methodology to Create a Unified Trade-Off Environment for Advanced Aerospace Vehicle

    DTIC Science & Technology

    2015-10-30

    accurately follow the development of the Black Hawk helicopters, a single main rotor model in NDARC that accurately represented the UH-60A is required. NDARC...Weight changes were based on results from Nixon's paper, which focused on modeling the structure of a composite rotor blade and using optimization to...conclude that improved composite design to further reduce weight needs to be achieved. An additionally interesting effect is how the rotor technology

  14. The Development of a New Model of Solar EUV Irradiance Variability

    NASA Technical Reports Server (NTRS)

    Warren, Harry; Wagner, William J. (Technical Monitor)

    2002-01-01

    The goal of this research project is the development of a new model of solar EUV (Extreme Ultraviolet) irradiance variability. The model is based on combining differential emission measure distributions derived from spatially and spectrally resolved observations of active regions, coronal holes, and the quiet Sun with full-disk solar images. An initial version of this model was developed with earlier funding from NASA. The new version of the model developed with this research grant will incorporate observations from SoHO as well as updated compilations of atomic data. These improvements will make the model calculations much more accurate.

  15. Sensorless Modeling of Varying Pulse Width Modulator Resolutions in Three-Phase Induction Motors

    PubMed Central

    Marko, Matthew David; Shevach, Glenn

    2017-01-01

    A sensorless algorithm was developed to predict rotor speeds in an electric three-phase induction motor. This sensorless model requires a measurement of the stator currents and voltages, and the rotor speed is predicted accurately without any mechanical measurement of the rotor speed. A model of an electric vehicle undergoing acceleration was built, and the sensorless prediction of the simulation rotor speed was determined to be robust even in the presence of fluctuating motor parameters and significant sensor errors. Studies were conducted for varying pulse width modulator resolutions, and the sensorless model was accurate for all resolutions of sinusoidal voltage functions. PMID:28076418

  16. Sensorless Modeling of Varying Pulse Width Modulator Resolutions in Three-Phase Induction Motors.

    PubMed

    Marko, Matthew David; Shevach, Glenn

    2017-01-01

    A sensorless algorithm was developed to predict rotor speeds in an electric three-phase induction motor. This sensorless model requires a measurement of the stator currents and voltages, and the rotor speed is predicted accurately without any mechanical measurement of the rotor speed. A model of an electric vehicle undergoing acceleration was built, and the sensorless prediction of the simulation rotor speed was determined to be robust even in the presence of fluctuating motor parameters and significant sensor errors. Studies were conducted for varying pulse width modulator resolutions, and the sensorless model was accurate for all resolutions of sinusoidal voltage functions.

  17. DEVELOPING A BETTER UNDERSTANDING OF REAL-WORLD AUTOMOBILE EMISSIONS

    EPA Science Inventory

    Emission inventories are needed by EPA for air dispersion modeling, regional strategy development, regulation setting, air toxics risk assessment, and trend tracking. Therefore, it is extremely important that inventories be accurate and be updated frequently. The development an...

  18. Mars Exploration Rover Terminal Descent Mission Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Behzad; Queen, Eric M.

    2004-01-01

    Because of NASA's added reliance on simulation for successful interplanetary missions, the MER mission has developed a detailed Entry, Descent, and Landing (EDL) trajectory modeling and simulation capability. This paper summarizes how the MER EDL sequence of events is modeled, how the methods used were verified, and the inputs. The simulation is built upon a multibody parachute trajectory simulation tool, developed in POST II, that accurately simulates the trajectory of multiple vehicles in flight with interacting forces. In this model the parachute and the suspended bodies are treated as six-degree-of-freedom (6 DOF) bodies. The terminal descent phase of the mission consists of several EDL events, such as parachute deployment, heatshield separation, deployment of the lander from the backshell, deployment of the airbags, RAD firings, and TIRS firings. For an accurate, reliable simulation these events need to be modeled seamlessly and robustly so that the simulations remain numerically stable during Monte Carlo runs. This paper also summarizes how the events have been modeled, the numerical issues, and the modeling challenges.

  19. Three-dimensional conceptual model for the Hanford Site unconfined aquifer system: FY 1994 status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorne, P.D.; Chamness, M.A.; Vermeul, V.R.

    This report documents work conducted during fiscal year 1994 to develop an improved three-dimensional conceptual model of ground-water flow in the unconfined aquifer system across the Hanford Site, in support of the Ground-Water Surveillance Project managed by Pacific Northwest Laboratory. The main objective of the ongoing effort to develop an improved conceptual model of ground-water flow is to provide the basis for improved numerical models capable of accurately predicting the movement of radioactive and chemical contaminant plumes in the aquifer beneath Hanford. More accurate ground-water flow models will also be useful in assessing the impacts of changes in facilities and operations. For example, decreasing volumes of operational waste-water discharge are resulting in a declining water table in parts of the unconfined aquifer. In addition to supporting numerical modeling, the conceptual model also provides a qualitative understanding of the movement of ground water and contaminants in the aquifer.

  20. Development of a detector model for generation of synthetic radiographs of cargo containers

    NASA Astrophysics Data System (ADS)

    White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.

    2008-05-01

    Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects, thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.

  1. Using structure to explore the sequence alignment space of remote homologs.

    PubMed

    Kuziemko, Andrew; Honig, Barry; Petrey, Donald

    2011-10-01

    Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.

  2. Unconditionally stable, second-order accurate schemes for solid state phase transformations driven by mechano-chemical spinodal decomposition

    DOE PAGES

    Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna

    2016-09-13

    Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large-scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.

  3. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. 
The most prominent approximations made in the analysis were addressed and fully investigated for their accuracy by using the three-dimensional electromagnetic simulation code MAFIA (Solution of Maxwell's Equations by the Finite Integration Algorithm) (refs. 3 and 4). We found that several approximations introduced significant error (ref. 5).

  4. Calibration of the ARID robot

    NASA Technical Reports Server (NTRS)

    Doty, Keith L

    1992-01-01

    The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.

  5. A comparative study of shallow groundwater level simulation with three time series models in a coastal aquifer of South China

    NASA Astrophysics Data System (ADS)

    Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.

    2017-05-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer of South China. Monthly groundwater table depth data collected over a long time series, from 2000 to 2011, are simulated with the three time series models and compared. Model accuracy is assessed using the coefficient of determination (R²), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error. The results indicate that all three models accurately reproduce the historical time series of groundwater levels. Comparison of the three models shows that the HW model predicts groundwater levels more accurately than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can in turn be used to facilitate the development and implementation of more effective and sustainable groundwater management strategies.
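
    The error criteria named in this record (R², E, RMSE) are standard and easy to reproduce. The sketch below uses made-up groundwater-depth numbers, not the study's data; the Nash-Sutcliffe efficiency E compares a model's squared error against the variance of the observations, so a model no better than the observed mean scores E ≤ 0.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root-mean-squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Hypothetical monthly water-table depths and two competing model outputs
obs = [2.1, 2.4, 2.9, 3.1, 2.8, 2.3]
m_a = [2.0, 2.5, 2.8, 3.2, 2.7, 2.4]   # a close fit (e.g. an HW-type model)
m_b = [2.5, 2.5, 2.5, 2.5, 2.5, 2.5]   # a constant baseline for comparison
```

    The same two criteria (plus R²) then rank the candidate models exactly as in the record: the model with lower RMSE and higher E is preferred.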

  6. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is derived. We investigated the validity range of this generalized model and analytically described the sufficient conditions for its validity. A practical example confirms the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  7. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

    An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
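
    The third-order Hermite interpolation polynomial at the core of this model is a standard construction: a cubic that matches the function value and first derivative at both endpoints of an interval. The sketch below shows the generic interpolant only, not the record's charge-versus-surface-potential formulation (which the abstract does not spell out); `hermite3` and its arguments are illustrative names.

```python
def hermite3(t, x0, x1, y0, y1, d0, d1):
    """Cubic Hermite interpolant on [x0, x1] matching values y0, y1 and
    first derivatives d0, d1 at the endpoints, evaluated at t."""
    h = x1 - x0
    s = (t - x0) / h                     # normalized coordinate in [0, 1]
    h00 = 2*s**3 - 3*s**2 + 1            # basis for value at x0
    h10 = s**3 - 2*s**2 + s              # basis for derivative at x0
    h01 = -2*s**3 + 3*s**2               # basis for value at x1
    h11 = s**3 - s**2                    # basis for derivative at x1
    return h00*y0 + h*h10*d0 + h01*y1 + h*h11*d1
```

    Because the cubic is pinned to both endpoint values and slopes, it avoids the crude linearization the record criticizes: the interior of the interval follows the prescribed curvature rather than a straight line.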

  8. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507

  9. A Simple Iterative Model Accurately Captures Complex Trapline Formation by Bumblebees Across Spatial Scales and Flower Arrangements

    PubMed Central

    Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars

    2013-01-01

    Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model accurately predicts: 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules differ markedly among individuals; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding of how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353

  10. SAMPLING PROTOCOLS TO SUPPORT DEVELOPMENT OF CONCEPTUAL SITE MODELS AND CLEANUP DECISIONS FOR CONTAMINANTS IN GROUND WATER

    EPA Science Inventory

    The ability to make reliable decisions about the extent of subsurface contamination and approaches to restoration of contaminated ground water is dependent on the development of an accurate conceptual site model (CSM). The accuracy of the CSM is dependent on the quality of site ...

  11. Longitudinal Models of Reading Achievement of Students with Learning Disabilities and without Disabilities

    ERIC Educational Resources Information Center

    Sullivan, Amanda L.; Kohli, Nidhi; Farnsworth, Elyse M.; Sadeh, Shanna; Jones, Leila

    2017-01-01

    Objective: Accurate estimation of developmental trajectories can inform instruction and intervention. We compared the fit of linear, quadratic, and piecewise mixed-effects models of reading development among students with learning disabilities relative to their typically developing peers. Method: We drew an analytic sample of 1,990 students from…

  12. Simulink based behavioural modelling of a pulse oximeter for deployment in rapid development, prototyping and verification.

    PubMed

    Shokouhian, M; Morling, R C S; Kale, I

    2012-01-01

    The pulse oximeter is a well-known device for measuring the level of oxygen in blood. Since their invention, pulse oximeters have been under constant development in both hardware and software; however, there are still unsolved problems that limit their performance [6], [7]. Many new algorithms and design techniques are suggested every year by industry and academic researchers, claiming improved measurement accuracy [8], [9]. In the absence of an accurate computer-based behavioural model for pulse oximeters, the only way to evaluate these newly developed systems and algorithms is through hardware implementation, which can be both expensive and time consuming. This paper presents an accurate Simulink based behavioural model for a pulse oximeter that can be used by industry and academia alike working in this area, as an exploration as well as productivity enhancement tool during their research and development process. The aim of this paper is to introduce a new computer-based behavioural model which provides a simulation environment from which new ideas can be rapidly evaluated long before the real implementation.

  13. Development of mapped stress-field boundary conditions based on a Hill-type muscle model.

    PubMed

    Cardiff, P; Karač, A; FitzPatrick, D; Flavin, R; Ivanković, A

    2014-09-01

    Forces generated in the muscles and tendons actuate the movement of the skeleton. Accurate estimation and application of these musculotendon forces in a continuum model is not a trivial matter. Frequently, musculotendon attachments are approximated as point forces; however, accurate estimation of local mechanics requires a more realistic application of musculotendon forces. This paper describes the development of mapped Hill-type muscle models as boundary conditions for a finite volume model of the hip joint, where the calculated muscle fibres map continuously between attachment sites. The applied muscle forces are calculated using active Hill-type models, where input electromyography signals are determined from gait analysis. Realistic muscle attachment sites are determined directly from tomography images. The mapped muscle boundary conditions, implemented in a finite volume structural OpenFOAM (ESI-OpenCFD, Bracknell, UK) solver, are employed to simulate the mid-stance phase of gait using a patient-specific natural hip joint, and a comparison is performed with the standard point load muscle approach. It is concluded that physiological joint loading is not accurately represented by simplistic muscle point loading conditions; however, when contact pressures are of sole interest, simplifying assumptions with regard to muscular forces may be valid. Copyright © 2014 John Wiley & Sons, Ltd.
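
    A Hill-type active muscle force is typically the product of an activation level, the maximum isometric force, and force-length and force-velocity scaling factors. The sketch below uses illustrative curve shapes and constants (Gaussian force-length, hyperbolic force-velocity), not the parameters of the paper's electromyography-driven model.

```python
import math

def hill_force(activation, f_max, l_norm, v_norm):
    """Simple Hill-type active force: activation x max isometric force x
    force-length x force-velocity factors. l_norm is fibre length and
    v_norm shortening velocity, both normalized (optimum l_norm = 1,
    v_norm = -1 at maximum shortening speed). Constants are illustrative."""
    f_l = math.exp(-((l_norm - 1.0) ** 2) / 0.45)   # Gaussian force-length curve
    if v_norm <= 0:                                  # concentric (shortening)
        f_v = (1 + v_norm) / (1 - v_norm / 0.25) if v_norm > -1 else 0.0
    else:                                            # eccentric (lengthening)
        f_v = 1.0 + 0.5 * v_norm / (v_norm + 0.25)
    return activation * f_max * f_l * f_v
```

    In a continuum setting such as the paper's, a force like this would then be distributed over the mapped attachment area as a traction boundary condition rather than applied at a single point.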

  14. Experimental Evaluation of Acoustic Engine Liner Models Developed with COMSOL Multiphysics

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Jones, Michael G.; Bertolucci, Brandon

    2017-01-01

    Accurate modeling tools are needed to design new engine liners capable of reducing aircraft noise. The purpose of this study is to determine if a commercially-available finite element package, COMSOL Multiphysics, can be used to accurately model a range of different acoustic engine liner designs, and in the process, collect and document a benchmark dataset that can be used in both current and future code evaluation activities. To achieve these goals, a variety of liner samples, ranging from conventional perforate-over-honeycomb to extended-reaction designs, were installed in one wall of the grazing flow impedance tube at the NASA Langley Research Center. The liners were exposed to high sound pressure levels and grazing flow, and the effect of the liner on the sound field in the flow duct was measured. These measurements were then compared with predictions. While this report only includes comparisons for a subset of the configurations, the full database of all measurements and predictions is available in electronic format upon request. The results demonstrate that both conventional perforate-over-honeycomb and extended-reaction liners can be accurately modeled using COMSOL. Therefore, this modeling tool can be used with confidence to supplement the current suite of acoustic propagation codes, and ultimately develop new acoustic engine liners designed to reduce aircraft noise.

  15. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  16. Estimating terrestrial snow depth with the Topex-Poseidon altimeter and radiometer

    USGS Publications Warehouse

    Papa, F.; Legresy, B.; Mognard, N.M.; Josberger, E.G.; Remy, F.

    2002-01-01

    Active and passive microwave measurements obtained by the dual-frequency Topex-Poseidon radar altimeter from the Northern Great Plains of the United States are used to develop a snow pack radar backscatter model. The model results are compared with daily time series of surface snow observations made by the U.S. National Weather Service. The model results show that Ku-band provides more accurate snow depth determinations than does C-band. Comparing the snow depth determinations derived from the Topex-Poseidon nadir-looking passive microwave radiometers with the oblique-looking Satellite Sensor Microwave Imager (SSM/I) passive microwave observations and surface observations shows that both instruments accurately portray the temporal characteristics of the snow depth time series. While both retrievals consistently underestimate the actual snow depths, the Topex-Poseidon results are more accurate.

  17. Prediction of retention times in comprehensive two-dimensional gas chromatography using thermodynamic models.

    PubMed

    McGinitie, Teague M; Harynuk, James J

    2012-09-14

    A method was developed to accurately predict both the primary and secondary retention times for a series of alkanes, ketones and alcohols in a flow-modulated GC×GC system. This was accomplished through the use of a three-parameter thermodynamic model in which ΔH, ΔS, and ΔCp for an analyte's interaction with the stationary phases in both dimensions are known. Coupling this thermodynamic model with a time summation calculation made it possible to accurately predict both ¹tr and ²tr for all analytes. The model was able to predict retention times regardless of the temperature ramp used, with an average error of only 0.64% for ¹tr and an average error of only 2.22% for ²tr. The model shows promise for the accurate prediction of retention times in GC×GC for a wide range of compounds and is able to utilize data collected from 1D experiments. Copyright © 2012 Elsevier B.V. All rights reserved.
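
    The combination of a three-parameter thermodynamic model (ΔH, ΔS, ΔCp) with a time-summation calculation can be sketched as follows: the retention factor k(T) follows from the free energy of transfer, and the solute's fractional migration along the column is integrated over the temperature ramp until it sums to one. All numeric values here (phase ratio, ramp, dead time, example ΔH/ΔS) are illustrative assumptions, not values from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def k_factor(T, dH, dS, dCp, T0=298.15, beta=250.0):
    """Retention factor from a three-parameter thermodynamic model.
    dH [J/mol] and dS [J/(mol K)] are taken at reference T0; dCp [J/(mol K)]
    gives their temperature dependence; beta is a hypothetical phase ratio."""
    dH_T = dH + dCp * (T - T0)
    dS_T = dS + dCp * math.log(T / T0)
    return math.exp(-(dH_T - T * dS_T) / (R * T)) / beta

def retention_time(dH, dS, dCp, T_start=313.15, ramp=0.1667, t_dead=5.0, dt=0.01):
    """Time summation: march along the temperature ramp [K/s], accumulating the
    solute's fractional migration dt / (t_dead * (1 + k)) until it reaches 1.
    t_dead is the column holdup time, assumed constant for simplicity."""
    t, frac = 0.0, 0.0
    while frac < 1.0:
        T = T_start + ramp * t
        frac += dt / (t_dead * (1.0 + k_factor(T, dH, dS, dCp)))
        t += dt
    return t
```

    A more strongly retained analyte (more negative ΔH) accumulates migration more slowly at low temperature and therefore elutes later in the ramp, which is the behavior the summation reproduces.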

  18. Just-in-Time Correntropy Soft Sensor with Noisy Data for Industrial Silicon Content Prediction.

    PubMed

    Chen, Kun; Liang, Yu; Gao, Zengliang; Liu, Yi

    2017-08-08

    Development of accurate data-driven quality prediction models for industrial blast furnaces encounters several challenges mainly because the collected data are nonlinear, non-Gaussian, and unevenly distributed. A just-in-time correntropy-based local soft sensing approach is presented to predict the silicon content in this work. Without cumbersome efforts for outlier detection, a correntropy support vector regression (CSVR) modeling framework is proposed to deal with the soft sensor development and outlier detection simultaneously. Moreover, with a continuous updating database and a clustering strategy, a just-in-time CSVR (JCSVR) method is developed. Consequently, more accurate prediction and efficient implementations of JCSVR can be achieved. Better prediction performance of JCSVR is validated on the online silicon content prediction, compared with traditional soft sensors.
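
    The correntropy idea behind CSVR can be illustrated with a much simpler estimator: a linear fit whose residuals are reweighted by a Gaussian kernel, so gross outliers lose influence without any explicit outlier-detection step. This half-quadratic sketch is not the authors' CSVR; the data and all names are hypothetical.

```python
import numpy as np

def correntropy_fit(X, y, sigma=2.0, iters=20):
    """Maximum-correntropy linear fit via half-quadratic iterative reweighting:
    each pass solves a weighted least-squares problem with weights
    w_i = exp(-e_i^2 / (2 sigma^2)), shrinking the influence of large residuals."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    y = np.asarray(y, float)
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        e = y - X @ beta
        w = np.exp(-e**2 / (2 * sigma**2))
    return beta  # [intercept, slope]

# Hypothetical process data: y = 2x + 1 with one gross outlier at x = 5
x = np.arange(10, dtype=float)
y = 2 * x + 1
y[5] += 50.0
b_mcc = correntropy_fit(x, y)
b_ols = np.polyfit(x, y, 1)  # ordinary least squares, for comparison
```

    The ordinary least-squares slope is pulled away from 2 by the outlier, while the correntropy-weighted fit effectively ignores it, which is the robustness property the record attributes to the correntropy criterion.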


  20. Rapid prediction of particulate, humus and resistant fractions of soil organic carbon in reforested lands using infrared spectroscopy.

    PubMed

    Madhavan, Dinesh B; Baldock, Jeff A; Read, Zoe J; Murphy, Simon C; Cunningham, Shaun C; Perring, Michael P; Herrmann, Tim; Lewis, Tom; Cavagnaro, Timothy R; England, Jacqueline R; Paul, Keryn I; Weston, Christopher J; Baker, Thomas G

    2017-05-15

    Reforestation of agricultural lands with mixed-species environmental plantings can effectively sequester C. While accurate and efficient methods for predicting soil organic C content and composition have recently been developed for soils under agricultural land uses, such methods under forested land uses are currently lacking. This study aimed to develop a method using infrared spectroscopy for accurately predicting total organic C (TOC) and its fractions (particulate, POC; humus, HOC; and resistant, ROC) in soils under environmental plantings. Soils were collected from 117 paired agricultural-reforestation sites across Australia. TOC fractions were determined in a subset of 38 reforested soils using physical fractionation by automated wet-sieving and ¹³C nuclear magnetic resonance (NMR) spectroscopy. Mid- and near-infrared spectra (MNIRS, 6000–450 cm⁻¹) were acquired from finely-ground soils from environmental plantings and agricultural land. Satisfactory prediction models based on MNIRS and partial least squares regression (PLSR) were developed for TOC and its fractions. Leave-one-out cross-validations of MNIRS-PLSR models indicated accurate predictions (R² > 0.90, negligible bias, ratio of performance to deviation > 3) and fraction-specific functional group contributions to beta coefficients in the models. TOC and its fractions were predicted using the cross-validated models and soil spectra for 3109 reforested and agricultural soils. The reliability of predictions, determined using the k-nearest neighbour score distance, indicated that >80% of predictions were within the satisfactory inlier limit. The study demonstrated the utility of infrared spectroscopy (MNIRS-PLSR) to rapidly and economically determine TOC and its fractions and thereby accurately describe the effects of land use change such as reforestation on agricultural soils. Copyright © 2017 Elsevier Ltd. All rights reserved.
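
    Partial least squares regression of a soil property on spectra can be sketched with a minimal single-response (PLS1) NIPALS implementation. The synthetic "spectra" below are hypothetical stand-ins built from two latent components (a target analyte and an interferent), not MNIRS data, and the function names are illustrative.

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal NIPALS PLS1 on mean-centered data. Returns (x_mean, y_mean, B)
    such that y_hat = (X - x_mean) @ B + y_mean."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                 # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                    # scores
        tt = t @ t
        p = Xc.T @ t / tt             # X loadings
        qk = yc @ t / tt              # y loading
        Xc = Xc - np.outer(t, p)      # deflate X
        yc = yc - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # regression vector in original space
    return xm, ym, B

def pls1_predict(model, X):
    xm, ym, B = model
    return (np.asarray(X, float) - xm) @ B + ym

# Synthetic spectra: 40 samples x 60 wavenumbers, two latent constituents
rng = np.random.default_rng(0)
spec1, spec2 = rng.normal(size=60), rng.normal(size=60)
toc = rng.uniform(0.0, 1.0, 40)      # hypothetical target (e.g. TOC content)
other = rng.uniform(0.0, 1.0, 40)    # hypothetical interfering constituent
X = np.outer(toc, spec1) + np.outer(other, spec2) + 0.01 * rng.normal(size=(40, 60))
pred = pls1_predict(pls1_fit(X, toc, n_comp=2), X)
```

    With two components, the model spans the two latent spectral directions and recovers the target almost exactly; in practice the component count would be chosen by cross-validation, as in the record.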

  1. Overview of aerothermodynamic loads definition study

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    1991-01-01

    The objective of the Aerothermodynamic Loads Definition Study is to develop methods of accurately predicting the operating environment in advanced Earth-to-Orbit (ETO) propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. Development of time averaged and time dependent three dimensional viscous computer codes as well as experimental verification and engine diagnostic testing are considered to be essential in achieving that objective. Time-averaged, nonsteady, and transient operating loads must all be well defined in order to accurately predict powerhead life. Described here is work in unsteady heat flow analysis, improved modeling of preburner flow, turbulence modeling for turbomachinery, computation of three dimensional flow with heat transfer, and unsteady viscous multi-blade row turbine analysis.

  2. Characterization of photomultiplier tubes with a realistic model through GPU-boosted simulation

    NASA Astrophysics Data System (ADS)

    Anthony, M.; Aprile, E.; Grandi, L.; Lin, Q.; Saldanha, R.

    2018-02-01

    The accurate characterization of a photomultiplier tube (PMT) is crucial in a wide variety of applications. However, current methods do not give fully accurate representations of the response of a PMT, especially at very low light levels. In this work, we present a new and more realistic model of the response of a PMT, called the cascade model, and use it to characterize two different PMTs at various voltages and light levels. The cascade model is shown to outperform the more common Gaussian model in almost all circumstances and to agree well with a newly introduced model-independent approach. The technical and computational challenges of this model are also presented along with the employed solution of developing a robust GPU-based analysis framework for this and other non-analytical models.
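
    A branching-Poisson cascade, in which every electron arriving at a dynode releases a Poisson-distributed number of secondaries, illustrates why single-photoelectron pulses are broader than Gaussian and can even vanish when an early stage produces zero secondaries. The stage count and gains below are illustrative assumptions, not the parameters of the PMTs characterized in the record.

```python
import numpy as np

def cascade_response(n_pe, gains, rng):
    """Monte Carlo of a dynode cascade: the sum of n independent Poisson(g)
    secondaries is Poisson(g * n), so each stage is one draw."""
    n = n_pe
    for g in gains:
        n = rng.poisson(g * n) if n > 0 else 0
    return n

# Single-photoelectron pulses through a hypothetical 4-stage cascade, mean gain 4 per stage
rng = np.random.default_rng(42)
gains = [4.0, 4.0, 4.0, 4.0]
pulses = np.array([cascade_response(1, gains, rng) for _ in range(20000)])
```

    The resulting charge distribution has mean 4⁴ = 256 but a relative width dominated by the first stage (roughly 1/g₁), far wider than a Poisson or Gaussian of the same mean, and a visible zero-output peak from cascades that die out; these are exactly the low-light features a Gaussian response model misses.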

  3. Developing a dengue forecast model using machine learning: A case study in China

    PubMed Central

    Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-01-01

    Background In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Methodology/Principal findings Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011–2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. Conclusion and significance The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. 
The findings can help the government and community respond early to dengue epidemics. PMID:29036169
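
    The candidate forecasting models in this record were compared by out-of-sample prediction error (RMSE). That rolling-origin (expanding-window) evaluation can be sketched generically; the ridge-regularized autoregression below is a simple stand-in for the study's SVR, and the seasonal series is synthetic, not dengue surveillance data.

```python
import numpy as np

def make_lagged(series, n_lags=4):
    """Design matrix of lagged values for one-step-ahead forecasting."""
    s = np.asarray(series, float)
    X = np.column_stack([s[i:len(s) - n_lags + i] for i in range(n_lags)])
    return X, s[n_lags:]

def rolling_origin_rmse(series, n_lags=4, start=30, ridge=1e-3):
    """Expanding-window evaluation: at each step, refit a ridge-regularized
    AR model on all data seen so far and forecast one step ahead."""
    X, y = make_lagged(series, n_lags)
    errs = []
    for t in range(start, len(y)):
        Xt, yt = X[:t], y[:t]
        beta = np.linalg.solve(Xt.T @ Xt + ridge * np.eye(n_lags), Xt.T @ yt)
        errs.append(y[t] - X[t] @ beta)
    return float(np.sqrt(np.mean(np.square(errs))))

# Hypothetical incidence proxy: 12-period seasonal signal plus noise
t = np.arange(150)
series = np.sin(2 * np.pi * t / 12) + 0.05 * np.random.default_rng(1).normal(size=t.size)
score = rolling_origin_rmse(series)
```

    Because each forecast only uses data available before the target week, this scheme mimics real-time use; comparing such scores across candidate models is how a winner like the record's SVR would be selected.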

  4. A comparison of major petroleum life cycle models

    EPA Science Inventory

    Many organizations have attempted to develop an accurate well-to-pump life cycle model of petroleum products in order to inform decision makers of the consequences of its use. Our paper studies five of these models, demonstrating the differences in their predictions and attemptin...

  5. Reactive decontamination of absorbing thin film polymer coatings: model development and parameter determination

    NASA Astrophysics Data System (ADS)

    Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew

    2014-03-01

    A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, and then performing a time-resolved, liquid-phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of the contaminant concentration profile in the material, which is necessary to assess exposure hazards.
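
    The parameter-determination step can be sketched as a Levenberg-Marquardt least-squares fit of a release model to time-resolved extracted-mass data. The single-exponential model and the "true" parameters below are illustrative stand-ins, not the paper's continuum model.

```python
# Hedged sketch: recover model parameters by Levenberg-Marquardt fitting
# of simulated extracted-mass measurements (first-order release model).
import numpy as np
from scipy.optimize import least_squares

def extracted_mass(t, m_inf, k):
    """Cumulative mass extracted by time t for a first-order release model."""
    return m_inf * (1.0 - np.exp(-k * t))

t_obs = np.linspace(0.5, 30.0, 15)   # sampling times (illustrative units)
true_params = (4.2, 0.18)            # (total extractable mass, rate constant)
rng = np.random.default_rng(1)
m_obs = extracted_mass(t_obs, *true_params) + rng.normal(0, 0.02, t_obs.size)

residuals = lambda p: extracted_mass(t_obs, *p) - m_obs
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")  # Levenberg-Marquardt
m_inf_hat, k_hat = fit.x
```

With low measurement noise, both parameters are recovered close to their generating values, mirroring the paper's claim of unique parameter determination.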

  6. A novel fully-humanised 3D skin equivalent to model early melanoma invasion

    PubMed Central

    Hill, David S; Robinson, Neil D P; Caley, Matthew P; Chen, Mei; O’Toole, Edel A; Armstrong, Jane L; Przyborski, Stefan; Lovat, Penny E

    2015-01-01

    Metastatic melanoma remains incurable, emphasising the acute need for improved research models to investigate the underlying biological mechanisms mediating tumour invasion and metastasis, and to develop more effective targeted therapies to improve clinical outcome. Available animal models of melanoma do not accurately reflect human disease and current in vitro human skin equivalent models incorporating melanoma cells are not fully representative of the human skin microenvironment. We have developed a robust and reproducible, fully-humanised 3D skin equivalent comprising a stratified, terminally differentiated epidermis and a dermal compartment consisting of fibroblast-generated extracellular matrix. Melanoma cells incorporated into the epidermis were able to invade through the basement membrane and into the dermis, mirroring early tumour invasion in vivo. Comparison of our novel 3D melanoma skin equivalent with melanoma in situ and metastatic melanoma indicates this model accurately recreates features of disease pathology, making it a physiologically representative model of early radial and vertical growth phase melanoma invasion. PMID:26330548

  7. Federal Highway Administration (FHWA) work zone driver model software

    DOT National Transportation Integrated Search

    2016-11-01

    FHWA and the U.S. Department of Transportation (USDOT) Volpe Center are developing a work zone car-following model and simulation software that interfaces with existing microsimulation tools, enabling more accurate simulation of car-following through...

  8. WILDFIRE EMISSION MODELING: INTEGRATING BLUESKY AND SMOKE

    EPA Science Inventory

    Atmospheric chemical transport models are used to simulate historic meteorological episodes for developing air quality management strategies. Wildland fire emissions need to be characterized accurately to achieve these air quality management goals. The temporal and spatial esti...

  9. Rotary Engine Friction Test Rig Development Report

    DTIC Science & Technology

    2011-12-01

    fundamental research is needed to understand the friction characteristics of the rotary engine that lead to accelerated wear and tear on the seals...that includes a turbocharger. Once the original GT-Suite model is validated, the turbocharger model will be more accurate. This validation will...prepare for turbocharger and fuel-injector testing, which will lead to further development and calibration of the model. Further details are beyond the

  10. An adaptive state of charge estimation approach for lithium-ion series-connected battery system

    NASA Astrophysics Data System (ADS)

    Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael

    2018-07-01

    Due to incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built based on the model parameter regulator that captures the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively estimate the noise statistics for the AUKF when its prior noise statistics are inaccurate or not exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF and UKF when model and measurement noise statistics are inaccurate, respectively. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
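
    The adaptive idea can be illustrated with a scalar analogue (not the paper's full AUKF): a Kalman filter whose measurement-noise variance R is re-estimated online from the innovation sequence, so estimation stays consistent when the assumed noise statistics are wrong. All parameters below are illustrative.

```python
# Illustrative scalar analogue of adaptive filtering (not the paper's full AUKF):
# the measurement-noise variance R is re-estimated from the innovation sequence.
import numpy as np

rng = np.random.default_rng(2)
n = 400
true_R = 0.5                                    # actual measurement-noise variance
x_true = np.cumsum(rng.normal(0, 0.05, n))      # slowly drifting state ("SOC")
z = x_true + rng.normal(0, np.sqrt(true_R), n)  # noisy measurements

Q = 0.05 ** 2        # process-noise variance (assumed known here)
R_hat = 5.0          # deliberately wrong initial measurement-noise guess
x_hat, P = 0.0, 1.0
alpha = 0.02         # forgetting factor for the noise-statistics estimator
estimates = np.empty(n)

for k in range(n):
    P = P + Q                    # time update (random-walk state model)
    innov = z[k] - x_hat         # innovation against the predicted state
    S = P + R_hat                # innovation covariance under current R_hat
    K = P / S                    # Kalman gain
    x_hat = x_hat + K * innov    # measurement update
    # Adaptive step: match R_hat to the innovation statistics (uses prior P)
    R_hat = (1 - alpha) * R_hat + alpha * max(innov ** 2 - P, 1e-6)
    P = (1.0 - K) * P
    estimates[k] = x_hat
```

Despite the tenfold-too-large initial guess, R_hat drifts toward the true measurement-noise variance as the innovations accumulate.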

  11. [Chest modelling and automotive accidents].

    PubMed

    Trosseille, Xavier

    2011-11-01

    Automobile development is increasingly based on mathematical modeling. Accurate models of the human body are now available and serve to develop new means of protection. These models used to consist of rigid, articulated bodies but are now made of several million finite elements. They are now capable of predicting some risks of injury. To develop these models, sophisticated tests were conducted on human cadavers. For example, chest modeling started with material characterization and led to complete validation in the automobile environment. Model personalization, based on medical imaging, will permit studies of the behavior and tolerances of the entire population.

  12. Human iPSC-derived cardiomyocytes and tissue engineering strategies for disease modeling and drug screening.

    PubMed

    Smith, Alec S T; Macadangdang, Jesse; Leung, Winnie; Laflamme, Michael A; Kim, Deok-Ho

    Improved methodologies for modeling cardiac disease phenotypes and accurately screening the efficacy and toxicity of potential therapeutic compounds are actively being sought to advance drug development and improve disease modeling capabilities. To that end, much recent effort has been devoted to the development of novel engineered biomimetic cardiac tissue platforms that accurately recapitulate the structure and function of the human myocardium. Within the field of cardiac engineering, induced pluripotent stem cells (iPSCs) are an exciting tool that offer the potential to advance the current state of the art, as they are derived from somatic cells, enabling the development of personalized medical strategies and patient specific disease models. Here we review different aspects of iPSC-based cardiac engineering technologies. We highlight methods for producing iPSC-derived cardiomyocytes (iPSC-CMs) and discuss their application to compound efficacy/toxicity screening and in vitro modeling of prevalent cardiac diseases. Special attention is paid to the application of micro- and nano-engineering techniques for the development of novel iPSC-CM based platforms and their potential to advance current preclinical screening modalities. Published by Elsevier Inc.

  13. Assessing patient risk of central line-associated bacteremia via machine learning.

    PubMed

    Beeler, Cole; Dbeibo, Lana; Kelley, Kristen; Thatcher, Levi; Webb, Douglas; Bah, Amadou; Monahan, Patrick; Fowler, Nicole R; Nicol, Spencer; Judy-Malcolm, Alisa; Azar, Jose

    2018-04-13

    Central line-associated bloodstream infections (CLABSIs) contribute to increased morbidity, length of hospital stay, and cost. Despite progress in understanding the risk factors, there remains a need to accurately predict the risk of CLABSIs and, in real time, prevent them from occurring. A predictive model was developed using retrospective data from a large academic healthcare system. Models were developed with machine learning via construction of random forests using validated input variables. Fifteen variables accounted for the most significant effect on CLABSI prediction based on a retrospective study of 70,218 unique patient encounters between January 1, 2013, and May 31, 2016. The area under the receiver operating characteristic curve for the best-performing model was 0.82 in production. This model has multiple applications for resource allocation for CLABSI prevention, including serving as a tool to target patients at highest risk for potentially cost-effective but otherwise time-limited interventions. Machine learning can be used to develop accurate models to predict the risk of CLABSI in real time prior to the development of infection. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
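
    A hedged sketch of the approach: a random forest trained on synthetic patient features and scored by area under the ROC curve. The features and the risk relationship are invented for illustration; the study used 15 validated input variables.

```python
# Hedged sketch: random forest risk model scored by ROC AUC on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
line_days = rng.exponential(7, n)       # hypothetical: central-line dwell time
comorbidity = rng.integers(0, 5, n)     # hypothetical: comorbidity score
tpn = rng.integers(0, 2, n)             # hypothetical: parenteral nutrition flag
logit = -4 + 0.15 * line_days + 0.4 * comorbidity + 0.8 * tpn
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([line_days, comorbidity, tpn])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

The held-out AUC plays the role of the 0.82 production figure reported in the abstract.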

  14. A New Model for Temperature Jump at a Fluid-Solid Interface

    PubMed Central

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-01-01

    The problem presented involves the development of a new analytical model for the general fluid-solid temperature jump. To the best of our knowledge, there are no analytical models that provide accurate predictions of the temperature jump for both gas and liquid systems. In this paper, a unified model for the fluid-solid temperature jump has been developed based on our adsorption model of the interfacial interactions. Results obtained from this model are validated against available results from the literature. PMID:27764230

  15. Vector fields in a tight laser focus: comparison of models.

    PubMed

    Peatross, Justin; Berrondo, Manuel; Smith, Dallas; Ware, Michael

    2017-06-26

    We assess several widely used vector models of a Gaussian laser beam in the context of more accurate vector diffraction integration. For the analysis, we present a streamlined derivation of the vector fields of a uniformly polarized beam reflected from an ideal parabolic mirror, both inside and outside of the resulting focus. This exact solution to Maxwell's equations, first developed in 1920 by V. S. Ignatovsky, is highly relevant to high-intensity laser experiments since the boundary conditions at a focusing optic dictate the form of the focus in a manner analogous to a physical experiment. In contrast, many models simply assume a field profile near the focus and develop the surrounding vector fields consistent with Maxwell's equations. In comparing the Ignatovsky result with popular closed-form analytic vector models of a Gaussian beam, we find that the relatively simple model developed by Erikson and Singh in 1994 provides good agreement in the paraxial limit. Models involving a Lax expansion introduce divergences outside of the focus while providing little if any improvement in the focal region. Extremely tight focusing produces a somewhat complicated structure in the focus, and requires the Ignatovsky model for accurate representation.

  16. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2015-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first-order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
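
    The core numerical step, approximating a fractional-order frequency response by a product of first-order sections with log-spaced corner frequencies, can be sketched as follows. This is an Oustaloup-style construction; the frequency band, section count, and exponent are illustrative, not the paper's.

```python
# Oustaloup-style sketch: approximate s**alpha with first-order zero/pole sections.
import numpy as np

def frac_approx(alpha, w_lo, w_hi, n_sec):
    """Product of first-order sections with log-spaced corners over [w_lo, w_hi]."""
    k = np.arange(1, n_sec + 1)
    zeros = w_lo * (w_hi / w_lo) ** ((k - 0.5 - alpha / 2) / n_sec)
    poles = w_lo * (w_hi / w_lo) ** ((k - 0.5 + alpha / 2) / n_sec)

    def H(w):
        s = 1j * np.asarray(w, dtype=float)
        num, den = np.ones_like(s), np.ones_like(s)
        for z, p in zip(zeros, poles):
            num, den = num * (s + z), den * (s + p)
        return num / den

    w_c = np.sqrt(w_lo * w_hi)                # normalize gain at the band centre
    gain = w_c ** alpha / np.abs(H(w_c))
    return lambda w: gain * H(w)

alpha = 5.0 / 6.0                             # von Karman-like exponent (illustrative)
H = frac_approx(alpha, 1e-2, 1e2, n_sec=10)
w = np.logspace(-1, 1, 100)                   # evaluate well inside the fitted band
err_db = np.max(np.abs(20 * np.log10(np.abs(H(w))) - 20 * np.log10(w ** alpha)))
```

Inside the fitted band, the rational approximation tracks the ideal fractional slope to within a small ripple, which is what makes time-domain simulation with ordinary transfer functions possible.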

  17. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2010-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first-order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  18. Rapid analysis of composition and reactivity in cellulosic biomass feedstocks with near-infrared spectroscopy.

    PubMed

    Payne, Courtney E; Wolfrum, Edward J

    2015-01-01

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.

  19. Gene expression models for prediction of longitudinal dispersion coefficient in streams

    NASA Astrophysics Data System (ADS)

    Sattar, Ahmed M. A.; Gharabaghi, Bahram

    2015-05-01

    Longitudinal dispersion is the key hydrologic process that governs transport of pollutants in natural streams. It is critical for spill action centers to be able to predict the pollutant travel time and break-through curves accurately following accidental spills in urban streams. This study presents a novel gene expression model for longitudinal dispersion developed using 150 published data sets of geometric and hydraulic parameters in natural streams in the United States, Canada, Europe, and New Zealand. The training and testing of the model were accomplished using randomly-selected 67% (100 data sets) and 33% (50 data sets) of the data sets, respectively. Gene expression programming (GEP) is used to develop empirical relations between the longitudinal dispersion coefficient and various control variables, including the Froude number which reflects the effect of reach slope, aspect ratio, and the bed material roughness on the dispersion coefficient. Two GEP models have been developed, and the prediction uncertainties of the developed GEP models are quantified and compared with those of existing models, showing improved prediction accuracy in favor of GEP models. Finally, a parametric analysis is performed for further verification of the developed GEP models. The main reason for the higher accuracy of the GEP models compared to the existing regression models is that exponents of the key variables (aspect ratio and bed material roughness) are not constants but a function of the Froude number. The proposed relations are both simple and accurate and can be effectively used to predict the longitudinal dispersion coefficients in natural streams.

  20. Initial Everglades Depth Estimation Network (EDEN) Digital Elevation Model Research and Development

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    Introduction The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). To produce historic and near-real time maps of water depths, the EDEN requires a system-wide digital elevation model (DEM) of the ground surface. Accurate Everglades wetland ground surface elevation data were non-existent before the U.S. Geological Survey (USGS) undertook the collection of highly accurate surface elevations at the regional scale. These form the foundation for EDEN DEM development. This development process is iterative as additional high accuracy elevation data (HAED) are collected, water surfacing algorithms improve, and additional ground-based ancillary data become available. Models are tested using withheld HAED and independently measured water depth data, and by using DEM data in EDEN adaptive management applications. Here the collection of HAED is briefly described before the approach to DEM development and the current EDEN DEM are detailed. Finally future research directions for continued model development, testing, and refinement are provided.

  1. Assessment of applications of transport models on regional scale solute transport

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Fogg, G. E.; Henri, C.; Pauloo, R.

    2017-12-01

    Regional scale transport models are needed to support the long-term evaluation of groundwater quality and to develop management strategies aiming to prevent serious groundwater degradation. The purpose of this study is to evaluate the capacity of previously developed upscaling approaches to accurately describe the main solute transport processes, including the capture of late-time tails, under changing boundary conditions. Advective-dispersive contaminant transport in a 3D heterogeneous domain was simulated and used as a reference solution. Equivalent transport under homogeneous flow conditions was then evaluated applying the Multi-Rate Mass Transfer (MRMT) model. The random walk particle tracking method was used for both heterogeneous and homogeneous-MRMT scenarios under steady state and transient conditions. The results indicate that the MRMT model can capture the tails satisfactorily for a plume transported in an ambient steady-state flow field. However, when boundary conditions change, the mass transfer model calibrated for transport under steady-state conditions cannot accurately reproduce the tailing effect observed for the heterogeneous scenario. The deteriorating impact of transient boundary conditions on the upscaled model is more significant for regions where flow fields are dramatically affected, highlighting the poor applicability of the MRMT approach for complex field settings. Accurately simulating mass in both mobile and immobile zones is critical to represent the transport process under transient flow conditions and will be the future focus of our study.
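
    The random walk particle tracking used in both scenarios can be illustrated in 1D: each particle advances by a deterministic advective drift plus a Gaussian dispersive step, and the ensemble moments can be checked against the analytical advection-dispersion solution. The parameters are illustrative.

```python
# 1D random walk particle tracking: drift (advection) + Gaussian step (dispersion).
import numpy as np

rng = np.random.default_rng(5)
v, D, dt, n_steps = 1.0, 0.1, 0.01, 500    # velocity, dispersion coeff., time step
x = np.zeros(10000)                        # all particles released at the origin

for _ in range(n_steps):
    x = x + v * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.size)

t = n_steps * dt                           # elapsed time = 5.0
mean_x, var_x = x.mean(), x.var()
# Analytical moments for comparison: mean = v * t, variance = 2 * D * t
```

The ensemble mean and variance converge on the analytical advection-dispersion moments as the particle count grows.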

  2. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    NASA Astrophysics Data System (ADS)

    He, K.; Zhu, W. D.

    2011-07-01

    A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistical function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
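
    The constrained-to-unconstrained conversion can be sketched as follows: damage extents must lie in [0, 1], so one optimizes an unconstrained variable u and maps it through a logistic function before evaluating the residuals, here with a made-up three-element "frequency" model and a trust-region least-squares solver standing in for the paper's Levenberg-Marquardt implementation.

```python
# Sketch of the logistic-transformation trick with a trust-region solver.
import numpy as np
from scipy.optimize import least_squares

def freqs(theta):
    """Toy forward model: 'natural frequencies' of a three-element system whose
    stiffnesses are reduced by damage extents theta in [0, 1]."""
    k = 1.0 - theta
    return np.sqrt(np.array([k[0] + k[1], k[1] + k[2], k[0] + k[2]]))

theta_true = np.array([0.3, 0.0, 0.6])
f_meas = freqs(theta_true)                     # "measured" frequencies (noise-free)

logistic = lambda u: 1.0 / (1.0 + np.exp(-u))  # maps unconstrained u to (0, 1)
res = least_squares(lambda u: freqs(logistic(u)) - f_meas,
                    x0=np.zeros(3), method="trf")  # trust-region solver
theta_hat = logistic(res.x)
```

The recovered extents stay inside [0, 1] by construction, which is the point of the transformation: the solver never needs explicit bound constraints.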

  3. Development of a One-Equation Eddy Viscosity Turbulence Model for Application to Complex Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Wray, Timothy J.

    Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and in many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy viscosity type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. It is accomplished by developing a new zonal one-equation turbulence model which relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry standard one-equation Spalart-Allmaras model and two-equation Shear-Stress-Transport k - o model and experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide the industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.

  4. A Simple and Accurate Rate-Driven Infiltration Model

    NASA Astrophysics Data System (ADS)

    Cui, G.; Zhu, J.

    2017-12-01

    In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including vertical and horizontal directions. Compared to the results from the Richards equation for both vertical infiltration and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Taking into account its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
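
    The rate-driven idea can be sketched as a single ODE: given a prescribed infiltration rate i(t), the wetting front z(t) advances as dz/dt = i(t)/dtheta, where dtheta is the moisture deficit. The rate function below is an illustrative stand-in, not RDIMOD's formulation.

```python
# Rate-driven sketch: advance the wetting front from a prescribed infiltration rate.
import numpy as np
from scipy.integrate import solve_ivp

dtheta = 0.3                                  # moisture deficit (illustrative)
i_rate = lambda t: 2.0 / np.sqrt(t + 1.0)     # declining infiltration rate

# dz/dt = i(t) / dtheta, z(0) = 0
sol = solve_ivp(lambda t, z: i_rate(t) / dtheta, (0.0, 4.0), [0.0], rtol=1e-8)
z_front = sol.y[0, -1]                        # wetting-front depth at t = 4

# Closed-form check: z(t) = (4 / dtheta) * (sqrt(t + 1) - 1)
z_exact = (4.0 / dtheta) * (np.sqrt(5.0) - 1.0)
```

Because the governing equation is an ODE rather than the Richards PDE, a standard initial-value solver handles it without the stiffness and convergence issues the abstract alludes to.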

  5. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models constrained by a single dataset, used for quick and efficient predictions) to the complex (multidimensional models constrained by several types of data, yielding more accurate predictions). While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  6. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Treesearch

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....

  7. Automated watershed subdivision for simulations using multi-objective optimization

    USDA-ARS?s Scientific Manuscript database

    The development of watershed management plans to evaluate placement of conservation practices typically involves application of watershed models. Incorporating spatially variable watershed characteristics into a model often requires subdividing the watershed into small areas to accurately account f...

  8. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
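
    The transition-rate estimation at the heart of studies 1-3 can be sketched by counting consecutive pairs in a categorical sequence. The three emotion labels, the "true" transition matrix, and the simulated sequence below are all synthetic stand-ins for the experience-sampling data.

```python
# Sketch: estimate a first-order emotion-transition matrix from a label sequence.
import numpy as np

rng = np.random.default_rng(6)
P_true = np.array([[0.80, 0.15, 0.05],   # synthetic "sticky" dynamics
                   [0.20, 0.70, 0.10],   # (0 = calm, 1 = happy, 2 = anxious)
                   [0.30, 0.10, 0.60]])

seq = [0]
for _ in range(20000):
    seq.append(rng.choice(3, p=P_true[seq[-1]]))

# Count consecutive pairs and row-normalize into transition probabilities
counts = np.zeros((3, 3))
for a, b in zip(seq[:-1], seq[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
```

Perceivers' rated transition likelihoods in the studies play the role of P_hat here: an internal estimate compared against the empirically observed matrix.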

  9. Advancing Understanding of Emissions from Oil and Natural Gas Production Operations to Support EPA’s Air Quality Modeling of Ozone Non-Attainment Areas; Final Summary Report

    EPA Science Inventory

    Executive Summary Environmentally responsible development of oil and gas assets requires well-developed emissions inventories and measurement techniques to verify emissions and the effectiveness of control strategies. To accurately model the oil and gas sector impacts on air qual...

  10. Studies of Trace Gas Chemical Cycles Using Observations, Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    2001-01-01

    For interpreting observational data, and in particular for use in inverse methods, accurate and realistic chemical transport models are essential. Toward this end we have, in recent years, helped develop and utilize a number of three-dimensional models including the Model for Atmospheric Transport and Chemistry (MATCH).

  11. Stage-structured matrix models for organisms with non-geometric development times

    Treesearch

    Andrew Birt; Richard M. Feldman; David M. Cairns; Robert N. Coulson; Maria Tchakerian; Weimin Xi; James M. Guldin

    2009-01-01

    Matrix models have been used to model population growth of organisms for many decades. They are popular because of both their conceptual simplicity and their computational efficiency. For some types of organisms they are relatively accurate in predicting population growth; however, for others the matrix approach does not adequately model...
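
    A stage-structured (Lefkovitch-style) matrix projection can be sketched as follows; the survival, transition, and fecundity entries are invented for illustration. The long-run growth rate is the dominant eigenvalue of the projection matrix.

```python
# Stage-structured matrix projection (Lefkovitch-style); entries are invented.
import numpy as np

A = np.array([[0.2, 0.0, 5.0],   # stage 1: retention + fecundity from stage 3
              [0.5, 0.6, 0.0],   # stage 1 -> 2 transition, stage 2 survival
              [0.0, 0.3, 0.7]])  # stage 2 -> 3 transition, stage 3 survival

n = np.array([100.0, 0.0, 0.0])  # initial abundance by stage
for _ in range(50):
    n = A @ n                    # project the population one step forward

lam = np.max(np.abs(np.linalg.eigvals(A)))  # long-run growth rate
```

The fixed per-step transition probabilities are exactly what makes the approach inaccurate for organisms with non-geometric development times, the limitation this record addresses.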

  12. New optical and radio frequency angular tropospheric refraction models for deep space applications

    NASA Technical Reports Server (NTRS)

    Berman, A. L.; Rockwell, S. T.

    1976-01-01

    The development of angular tropospheric refraction models for optical and radio frequency usage is presented. The models are compact analytic functions, finite over the entire domain of elevation angle, and accurate over large ranges of pressure, temperature, and relative humidity. Additionally, FORTRAN subroutines for each of the models are included.
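
    The report's own compact analytic models and FORTRAN subroutines are not reproduced in this summary. As a point of comparison, a widely used empirical optical refraction approximation at standard atmospheric conditions is Bennett's formula, sketched below (this is not one of the models developed in the report):

```python
import math

def bennett_refraction_arcmin(apparent_elevation_deg):
    """Optical angular refraction in arcminutes at standard conditions
    (approx. 1010 hPa, 10 C), Bennett's empirical formula."""
    h = apparent_elevation_deg
    return 1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))

# Near the horizon, refraction is roughly half a degree (~34 arcmin):
print(round(bennett_refraction_arcmin(0.0), 1))
print(round(bennett_refraction_arcmin(45.0), 2))  # much smaller at high elevation
```

    Like the report's models, this form is finite over the whole elevation-angle domain, which is why such analytic fits are preferred over simple cotangent laws that diverge at the horizon.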

  13. The Abdominal Aortic Aneurysm Statistically Corrected Operative Risk Evaluation (AAA SCORE) for predicting mortality after open and endovascular interventions.

    PubMed

    Ambler, Graeme K; Gohel, Manjit S; Mitchell, David C; Loftus, Ian M; Boyle, Jonathan R

    2015-01-01

    Accurate adjustment of surgical outcome data for risk is vital in an era of surgeon-level reporting. Current risk prediction models for abdominal aortic aneurysm (AAA) repair are suboptimal. We aimed to develop a reliable risk model for in-hospital mortality after intervention for AAA, using rigorous contemporary statistical techniques to handle missing data. Using data collected during a 15-month period in the United Kingdom National Vascular Database, we applied multiple imputation methodology together with stepwise model selection to generate preoperative and perioperative models of in-hospital mortality after AAA repair, using two thirds of the available data. Model performance was then assessed on the remaining third of the data by receiver operating characteristic curve analysis and compared with existing risk prediction models. Model calibration was assessed by Hosmer-Lemeshow analysis. A total of 8088 AAA repair operations were recorded in the National Vascular Database during the study period, of which 5870 (72.6%) were elective procedures. Both preoperative and perioperative models showed excellent discrimination, with areas under the receiver operating characteristic curve of .89 and .92, respectively. This was significantly better than any of the existing models (area under the receiver operating characteristic curve for best comparator model, .84 and .88; P < .001 and P = .001, respectively). Discrimination remained excellent when only elective procedures were considered. There was no evidence of miscalibration by Hosmer-Lemeshow analysis. We have developed accurate models to assess risk of in-hospital mortality after AAA repair. These models were carefully developed with rigorous statistical methodology and significantly outperform existing methods for both elective cases and overall AAA mortality. These models will be invaluable for both preoperative patient counseling and accurate risk adjustment of published outcome data. 
Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
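
    Discrimination in such risk models is summarized by the area under the receiver operating characteristic curve, which equals the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal sketch with hypothetical data (this is not the AAA SCORE model itself):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney interpretation:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted mortality risks for 6 patients (1 = died in hospital):
died = [1, 0, 1, 0, 0, 1]
risk = [0.9, 0.2, 0.7, 0.4, 0.1, 0.3]
print(round(roc_auc(died, risk), 3))  # → 0.889
```

    An AUC of 0.5 is chance-level discrimination; the reported 0.89 and 0.92 indicate excellent separation of survivors from non-survivors.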

  14. The inclusion of capillary distribution in the adiabatic tissue homogeneity model of blood flow

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zeman, V.; Darko, J.; Lee, T.-Y.; Milosevic, M. F.; Haider, M.; Warde, P.; Yeung, I. W. T.

    2001-05-01

    We have developed a non-invasive imaging tracer kinetic model for blood flow which takes into account the distribution of capillaries in tissue. Each individual capillary is assumed to follow the adiabatic tissue homogeneity model. The main strength of our new model is in its ability to quantify the functional distribution of capillaries by the standard deviation in the time taken by blood to pass through the tissue. We have applied our model to the human prostate and have tested two different types of distribution functions. Both distribution functions yielded very similar predictions for the various model parameters, and in particular for the standard deviation in transit time. Our motivation for developing this model is the fact that the capillary distribution in cancerous tissue is drastically different from that in normal tissue. We believe that there is great potential for our model to be used as a prognostic tool in cancer treatment. For example, an accurate knowledge of the distribution in transit times might result in an accurate estimate of the degree of tumour hypoxia, which is crucial to the success of radiation therapy.

  15. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active-vision-system-based reverse engineering approach to extract three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, improving the accuracy of 3D teeth models and, at the same time, the quality of the construction units to benefit patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for the digitization of different teeth models.

  16. Design, Modelling and Teleoperation of a 2 mm Diameter Compliant Instrument for the da Vinci Platform.

    PubMed

    Francis, P; Eastwood, K W; Bodani, V; Looi, T; Drake, J M

    2018-05-07

    This work explores the feasibility of creating and accurately controlling an instrument for robotic surgery with a 2 mm diameter and a three degree-of-freedom (DoF) wrist which is compatible with the da Vinci platform. The instrument's wrist is composed of a two DoF bending notched-nitinol tube pattern, for which a kinematic model has been developed. A base mechanism for controlling the wrist is designed for integration with the da Vinci Research Kit. A basic teleoperation task is successfully performed using two of the miniature instruments. The performance and accuracy of the instrument suggest that creating and accurately controlling a 2 mm diameter instrument is feasible and the design and modelling proposed in this work provide a basis for future miniature instrument development.

  17. An Investigation of Jogging Biomechanics using the Full-Body Lumbar Spine Model: Model Development and Validation

    PubMed Central

    Raabe, Margaret E.; Chaudhari, Ajit M.W.

    2016-01-01

    The ability of a biomechanical simulation to produce results that can translate to real-life situations is largely dependent on the physiological accuracy of the musculoskeletal model. There are a limited number of freely-available, full-body models that exist in OpenSim, and those that do exist are very limited in terms of trunk musculature and degrees of freedom in the spine. Properly modeling the motion and musculature of the trunk is necessary to most accurately estimate lower extremity and spinal loading. The objective of this study was to develop and validate a more physiologically accurate OpenSim full-body model. By building upon three previously developed OpenSim models, the Full-Body Lumbar Spine (FBLS) model, comprised of 21 segments, 30 degrees-of-freedom, and 324 musculotendon actuators, was developed. The five lumbar vertebrae were modeled as individual bodies, and coupled constraints were implemented to describe the net motion of the spine. The eight major muscle groups of the lumbar spine were modeled (rectus abdominis, external and internal obliques, erector spinae, multifidus, quadratus lumborum, psoas major, and latissimus dorsi), and many of these muscle groups were modeled as multiple fascicles allowing the large muscles to act in multiple directions. The resulting FBLS model's trunk muscle geometry, maximal isometric joint moments, and simulated muscle activations compare well to experimental data. The FBLS model will be made freely available (https://simtk.org/home/fullbodylumbar) for others to perform additional analyses and develop simulations investigating full-body dynamics and contributions of the trunk muscles to dynamic tasks. PMID:26947033

  18. Finite element analysis of drilling in carbon fiber reinforced polymer composites

    NASA Astrophysics Data System (ADS)

    Phadnis, V. A.; Roy, A.; Silberschmidt, V. V.

    2012-08-01

    Carbon fiber reinforced polymer (CFRP) composite laminates are attractive for many applications in the aerospace industry, especially as aircraft structural components, due to their superior properties. Drilling is usually an important final machining process for components made of composite laminates. In drilling of CFRP, it is imperative to determine the maximum critical thrust forces that trigger inter-laminar and intra-laminar damage modes in the highly anisotropic fibrous media and compromise the integrity of composite structures. In this paper, a 3D finite element (FE) model of drilling in a CFRP composite laminate is developed that takes into account the dynamic characteristics of the process along with accurate geometrical considerations. A user-defined material model is developed to capture the through-thickness response of the composite laminate accurately. The average critical thrust forces and torques obtained using FE analysis for a set of machining parameters are found to be in good agreement with experimental results from the literature.

  19. Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly

    NASA Astrophysics Data System (ADS)

    Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn

    To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies. Samsung Electronics.

  20. Organ-on-a-Chip Technology for Reproducing Multiorgan Physiology.

    PubMed

    Lee, Seung Hwan; Sung, Jong Hwan

    2018-01-01

    In the drug development process, the accurate prediction of drug efficacy and toxicity is important in order to reduce the cost, labor, and effort involved. For this purpose, conventional 2D cell culture models are used in the early phase of drug development. However, the differences between the in vitro and the in vivo systems have caused the failure of drugs in the later phase of the drug-development process. Therefore, there is a need for a novel in vitro model system that can provide accurate information for evaluating the drug efficacy and toxicity through a closer recapitulation of the in vivo system. Recently, the idea of using microtechnology for mimicking the microscale tissue environment has become widespread, leading to the development of "organ-on-a-chip." Furthermore, the system is further developed for realizing a multiorgan model for mimicking interactions between multiple organs. These advancements are still ongoing and are aimed at ultimately developing "body-on-a-chip" or "human-on-a-chip" devices for predicting the response of the whole body. This review summarizes recently developed organ-on-a-chip technologies, and their applications for reproducing multiorgan functions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. 2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Waters, Jiajia

    Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the modeling and of the software itself. We also continue to improve the physical modeling methods, developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or extend with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.

  2. Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model

    USGS Publications Warehouse

    Wu, J.; Shenk, G.W.; Raffensperger, Jeff P.; Moyer, D.; Linker, L.C.; ,

    2005-01-01

    Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity involved in land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay Program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing the change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration, and also provide some insightful information on sediment processes and behavior in the Chesapeake Bay watershed.

  3. Matching mice to malignancy: molecular subgroups and models of medulloblastoma

    PubMed Central

    Lau, Jasmine; Schmidt, Christin; Markant, Shirley L.; Taylor, Michael D.; Wechsler-Reya, Robert J.

    2012-01-01

    Introduction Medulloblastoma, the largest group of embryonal brain tumors, has historically been classified into five variants based on histopathology. More recently, epigenetic and transcriptional analyses of primary tumors have sub-classified medulloblastoma into four to six subgroups, most of which are incongruous with histopathological classification. Discussion Improved stratification is required for prognosis and development of targeted treatment strategies, to maximize cure and minimize adverse effects. Several mouse models of medulloblastoma have contributed both to an improved understanding of progression and to developmental therapeutics. In this review, we summarize the classification of human medulloblastoma subtypes based on histopathology and molecular features. We describe existing genetically engineered mouse models, compare these to human disease, and discuss the utility of mouse models for developmental therapeutics. Just as accurate knowledge of the correct molecular subtype of medulloblastoma is critical to the development of targeted therapy in patients, we propose that accurate modeling of each subtype of medulloblastoma in mice will be necessary for preclinical evaluation and optimization of those targeted therapies. PMID:22315164

  4. Combined electron beam imaging and ab initio modeling of T1 precipitates in Al-Li-Cu alloys

    NASA Astrophysics Data System (ADS)

    Dwyer, C.; Weyland, M.; Chang, L. Y.; Muddle, B. C.

    2011-05-01

    Among the many considerable challenges faced in developing a rational basis for advanced alloy design, establishing accurate atomistic models is one of the most fundamental. Here we demonstrate how advanced imaging techniques in a double-aberration-corrected transmission electron microscope, combined with ab initio modeling, have been used to determine the atomic structure of embedded 1 nm thick T1 precipitates in precipitation-hardened Al-Li-Cu aerospace alloys. The results provide an accurate determination of the controversial T1 structure, and demonstrate how next-generation techniques permit the characterization of embedded nanostructures in alloys and other nanostructured materials.

  5. Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei

    2017-10-01

    To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and state of charge (SOC) at every prediction point. Besides, a discrete wavelet transform technique is employed to capture the statistical information of past input-current dynamics, which is utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and that the predicted RDT can help address range anxiety.
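
    Once the state of charge and the future current are estimated, the RDT calculation itself is straightforward. The sketch below substitutes simple coulomb counting for the paper's recursive least-squares/unscented-Kalman-filter machinery and assumes a constant predicted discharge current; all parameter values are hypothetical:

```python
def update_soc(soc, current_a, dt_s, capacity_ah):
    """Coulomb-counting SOC update; a stand-in for the paper's
    unscented-Kalman-filter state estimator."""
    return soc - current_a * dt_s / (capacity_ah * 3600.0)

def remaining_dischargeable_time_s(soc, soc_cutoff, capacity_ah, predicted_current_a):
    """RDT under an assumed constant future discharge current."""
    usable_ah = (soc - soc_cutoff) * capacity_ah
    return usable_ah * 3600.0 / predicted_current_a

# Hypothetical 2.5 Ah cell, starting at 80% SOC, discharged at 2 A for 10 minutes:
soc = update_soc(0.80, current_a=2.0, dt_s=600.0, capacity_ah=2.5)
rdt = remaining_dischargeable_time_s(soc, soc_cutoff=0.05, capacity_ah=2.5,
                                     predicted_current_a=2.0)
print(round(soc, 4), round(rdt))  # ≈ 0.6667 SOC, ≈ 2775 s of discharge left
```

    The framework's value lies in replacing both crude pieces here: the filter tracks SOC despite sensor noise and model error, and the wavelet-based current forecast replaces the constant-current assumption.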

  6. Assessment of Erysiphe necator ascospore release models for use in the Mediterranean climate of western Oregon

    USDA-ARS?s Scientific Manuscript database

    Predictive models have been developed in several major grape growing regions to correlate environmental conditions to Erysiphe necator ascospore release; however, these models may not accurately predict ascospore release in other viticulture regions with differing climatic conditions. To assess asco...

  7. A Simple Model of Hox Genes: Bone Morphology Demonstration

    ERIC Educational Resources Information Center

    Shmaefsky, Brian

    2008-01-01

    Visual demonstrations of abstract scientific concepts are effective strategies for enhancing content retention (Shmaefsky 2004). The concepts associated with gene regulation of growth and development are particularly complex and are well suited for teaching with visual models. This demonstration provides a simple and accurate model of Hox gene…

  8. An Integrated Enrollment Forecast Model. IR Applications, Volume 15, January 18, 2008

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2008-01-01

    Enrollment forecasting is the central component of effective budget and program planning. The integrated enrollment forecast model is developed to achieve a better understanding of the variables affecting student enrollment and, ultimately, to perform accurate forecasts. The transfer function model of the autoregressive integrated moving average…

  9. Wheat mill stream properties for discrete element method modeling

    USDA-ARS?s Scientific Manuscript database

    A discrete phase approach based on individual wheat kernel characteristics is needed to overcome the limitations of previous statistical models and accurately predict the milling behavior of wheat. As a first step to develop a discrete element method (DEM) model for the wheat milling process, this s...

  10. Parameterization of norfolk sandy loam properties for stochastic modeling of light in-wheel motor UGV

    USDA-ARS?s Scientific Manuscript database

    To accurately develop a mathematical model for an In-Wheel Motor Unmanned Ground Vehicle (IWM UGV) on soft terrain, parameterization of terrain properties is essential to stochastically model tire-terrain interaction for each wheel independently. Operating in off-road conditions requires paying clos...

  11. Validation of Models Used to Inform Colorectal Cancer Screening Guidelines: Accuracy and Implications.

    PubMed

    Rutter, Carolyn M; Knudsen, Amy B; Marsh, Tracey L; Doria-Rose, V Paul; Johnson, Eric; Pabiniak, Chester; Kuntz, Karen M; van Ballegooijen, Marjolein; Zauber, Ann G; Lansdorp-Vogelaar, Iris

    2016-07-01

    Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications. 
© The Author(s) 2016.

  12. Application of a Third Order Upwind Scheme to Viscous Flow over Clean and Iced Wings

    NASA Technical Reports Server (NTRS)

    Bangalore, A.; Phaengsook, N.; Sankar, L. N.

    1994-01-01

    A 3-D compressible Navier-Stokes solver has been developed and applied to 3-D viscous flow over clean and iced wings. The method uses a third-order accurate finite volume scheme with flux difference splitting to model the inviscid fluxes, and second-order accurate symmetric differences to model the viscous terms. The effects of turbulence are modeled using a k-epsilon model; in the vicinity of the solid walls, the k and epsilon values are modeled using Gorski's algebraic model. Sample results are presented for surface pressure distributions on untapered, swept clean and iced wings made of NACA 0012 airfoil sections, with the leading edge of these sections modified using a simulated ice shape. Comparisons with experimental data are given.

  13. Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett

    2004-01-01

    Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities being developed to generate accurate approximation models for the BLISS procedure will be described. The benefits of using flexible approximation models such as Kriging will be demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design will be investigated by using the new flexible approximation models for the violated local constraints.

  14. Chemically reacting supersonic flow calculation using an assumed PDF model

    NASA Technical Reports Server (NTRS)

    Farshchi, M.

    1990-01-01

    This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.

  15. High-Frequency Sound Interaction with Ocean Sediments and with Objects in the Vicinity of the Water/Sediment Interface and Mid-Frequency Shallow Water Propagation and Scattering

    DTIC Science & Technology

    2007-09-30

    combined with measured sediment properties, to test the validity of sediment acoustic models, in particular the poroelastic (Biot) model. LONG-TERM GOALS: 1. Development of accurate models for acoustic scattering from, penetration into, and propagation within shallow water ocean sediments. 2. Development of reliable methods for modeling acoustic detection of buried objects at subcritical grazing angles. 3. Improving our...

  16. Integrating Growth Variability of the Ilium, Fifth Lumbar Vertebra, and Clavicle with Multivariate Adaptive Regression Splines Models for Subadult Age Estimation.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Telmon, Norbert; Chaumoitre, Kathia; Adalian, Pascal

    2018-05-31

    Subadult age estimation should rely on sampling and statistical protocols capturing developmental variability for more accurate age estimates. In this perspective, measurements were taken on the fifth lumbar vertebrae and/or clavicles of 534 French males and females aged 0-19 years and the ilia of 244 males and females aged 0-12 years. These variables were used to fit nonparametric multivariate adaptive regression splines (MARS) models with 95% prediction intervals (PIs) of age. The models were tested on two independent samples from Marseille and the Luis Lopes reference collection from Lisbon. Models using ilium width and module, maximum clavicle length, and lateral vertebral body heights were more than 92% accurate. Precision was lower for postpubertal individuals. By integrating local nonlinearities of the relationship between age and the variables, and by using dynamic prediction intervals, the models incorporate the normal increase in interindividual growth variability with age (heteroscedasticity of variance) for more biologically accurate predictions. © 2018 American Academy of Forensic Sciences.

  17. Numerical simulation of dune-flat bed transition and stage‐discharge relationship with hysteresis effect

    USGS Publications Warehouse

    Shimizu, Yasuyuki; Giri, Sanjay; Yamaguchi, Satomi; Nelson, Jonathan M.

    2009-01-01

    This work presents recent advances on morphodynamic modeling of bed forms under unsteady discharge. This paper includes further development of a morphodynamic model proposed earlier by Giri and Shimizu (2006a). This model reproduces the temporal development of river dunes and accurately replicates the physical properties associated with bed form evolution. Model results appear to provide accurate predictions of bed form geometry and form drag over bed forms for arbitrary steady flows. However, accurate predictions of temporal changes of form drag are key to the prediction of stage‐discharge relation during flood events. Herein, the model capability is extended to replicate the dune–flat bed transition, and in turn, the variation of form drag produced by the temporal growth or decay of bed forms under unsteady flow conditions. Some numerical experiments are performed to analyze hysteresis of the stage‐discharge relationship caused by the transition between dune and flat bed regimes during rising and falling stages of varying flows. The numerical model successfully simulates dune–flat bed transition and the associated hysteresis of the stage‐discharge relationship; this is in good agreement with physical observations but has been treated in the past only using empirical methods. A hypothetical relationship for a sediment parameter (the mean step length) is proposed to a first level of approximation that enables reproduction of the dune–flat bed transition. The proposed numerical model demonstrates its ability to address an important practical problem associated with bed form evolution and flow resistance in varying flows.

  18. Modeling of interaction between steel and concrete in continuously reinforced concrete pavements : final report.

    DOT National Transportation Integrated Search

    2016-01-01

    Continuously reinforced concrete pavement (CRCP) contains continuous longitudinal reinforcement with no transverse expansion joints; transverse cracks develop within the early life of the pavement, and the pavement can continue to develop cracks in the long term. The accurate modeling of CRCPs...

  19. A phenology model for Sparganothis fruitworm in Cranberries

    USDA-ARS?s Scientific Manuscript database

    Larvae of Sparganothis sulfureana Clemens frequently attack cranberries, often resulting in economic damage to the crop. Because temperature dictates insect growth rate, development can be accurately estimated based on daily temperature measurements. To better predict S. sulfureana development acro...
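
    Temperature-driven development of this kind is typically tracked with accumulated growing degree-days. A minimal sketch of the accumulation step (the base temperature and threshold below are illustrative, not the values of the Sparganothis model):

```python
def degree_days(daily_mean_temps_c, base_temp_c):
    """Accumulate growing degree-days above a base (lower developmental
    threshold) temperature, using the simple-average method."""
    return sum(max(0.0, t - base_temp_c) for t in daily_mean_temps_c)

# Hypothetical: a life stage completes once 50 degree-days (base 10 C) accumulate.
temps = [8.0, 12.0, 15.0, 18.0, 20.0, 21.0, 19.0]
total = degree_days(temps, base_temp_c=10.0)
print(total)  # → 45.0, so the threshold has not yet been reached
```

    A phenology model of this type is calibrated by fitting the base temperature and stage thresholds to observed development data.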

  20. Characterization of xenon ion and neutral interactions in a well-characterized experiment

    NASA Astrophysics Data System (ADS)

    Patino, Marlene I.; Wirz, Richard E.

    2018-06-01

    Interactions between fast ions and slow neutral atoms are commonly dominated by charge-exchange and momentum-exchange collisions, which are important to understanding and simulating the performance and behavior of many plasma devices. To investigate these interactions, this work developed a simple, well-characterized experiment that accurately measures the behavior of high-energy xenon ions incident on a background of xenon neutral atoms. By using well-defined operating conditions and a simple geometry, these results serve as canonical data for the development and validation of plasma models and models of neutral beam sources that need to ensure accurate treatment of angular scattering distributions of charge-exchange and momentum-exchange ions and neutrals. The energies used in this study (~1.5 keV) are relevant for electric propulsion devices and can be used to improve models of ion-neutral interactions in the plume. By comparing these results to both analytical and computational models of ion-neutral interactions, we discovered the importance of (1) accurately treating the differential cross-sections for momentum-exchange and charge-exchange collisions over a large range of neutral background pressures and (2) properly considering commonly overlooked interactions, such as ion-induced electron emission from nearby surfaces and neutral-neutral ionization collisions.

  1. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species equatorward range limits, leading to a delay in, or even the failure of, flowering or leaf set. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing endodormancy break dates for the model parameterization results in much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore the strongest after 2050 in the southernmost regions. Our results argue for the urgent need for extensive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. © 2016 John Wiley & Sons Ltd.
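The dual effect of temperature described here is often formalized as a sequential chilling-forcing model: chilling units accumulate until endodormancy breaks, then forcing units accumulate until budbreak. The sketch below uses hypothetical thresholds and requirements purely for illustration; it is not one of the calibrated models from the study.

```python
# Sequential chilling-forcing sketch. All thresholds and requirements
# are hypothetical illustration values, not calibrated parameters.

def simulate_budbreak(daily_mean_temps, chill_thresh=7.0, chill_req=60.0,
                      force_base=5.0, force_req=150.0):
    """Return (endodormancy_break_day, budbreak_day); either may be
    None if the corresponding requirement is never met."""
    chill, force, endo_day = 0.0, 0.0, None
    for day, t in enumerate(daily_mean_temps, start=1):
        if endo_day is None:
            # Chilling phase: one chill unit per day below the threshold.
            if t < chill_thresh:
                chill += 1.0
            if chill >= chill_req:
                endo_day = day
        else:
            # Forcing phase: degree-day accumulation above the base.
            force += max(t - force_base, 0.0)
            if force >= force_req:
                return endo_day, day
    return endo_day, None
```

Calibrating such a model on budbreak dates alone leaves the chilling parameters weakly constrained, which is the identifiability problem this record highlights.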

  2. Mortality Probability Model III and Simplified Acute Physiology Score II

    PubMed Central

    Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams

    2009-01-01

    Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
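The per-hospital observed/expected LOS ratios compared in this study can be computed with a simple aggregation. This is a generic sketch with hypothetical record fields, not the study's actual analysis pipeline.

```python
# Generic observed/expected (O/E) ratio aggregation per hospital.
# The (hospital_id, observed, predicted) record layout is assumed
# for illustration.

def observed_expected_ratios(records):
    """records: iterable of (hospital_id, observed_los, predicted_los).
    Returns {hospital_id: total observed LOS / total predicted LOS}."""
    obs, pred = {}, {}
    for hid, o, p in records:
        obs[hid] = obs.get(hid, 0.0) + o
        pred[hid] = pred.get(hid, 0.0) + p
    return {hid: obs[hid] / pred[hid] for hid in obs}
```

A ratio above 1 flags a hospital whose patients stay longer than the risk-adjusted model predicts; the threefold variation reported above corresponds to spread in these ratios.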

  3. Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints

    NASA Astrophysics Data System (ADS)

    Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent

    2016-10-01

    In order to support the design of such a complex process like Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) developing an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) properly modeling the thermo-mechanical complexity of the FSW process and (3) calibrating the F.E. model from accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool / material interface. FSW experiments require using a complex tool with scroll on shoulder, which is instrumented for providing sensitive thermal data close to the joint. Calibration of unknown material thermal coefficients, constitutive equations parameters and friction model from measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.

  4. Modelling and Manufacturing of a 3D Printed Trachea for Cricothyroidotomy Simulation.

    PubMed

    Doucet, Gregory; Ryan, Stephen; Bartellas, Michael; Parsons, Michael; Dubrowski, Adam; Renouf, Tia

    2017-08-18

    Cricothyroidotomy is a life-saving medical procedure that allows for tracheal intubation. Most current cricothyroidotomy simulation models are either expensive or not anatomically accurate and provide the learner with an unrealistic simulation experience. The goal of this project is to improve current simulation techniques by utilizing rapid prototyping using 3D printing technology and expert opinions to develop inexpensive and anatomically accurate trachea simulators. In doing so, emergency cricothyroidotomy simulation can be made accessible, accurate, cost-effective and reproducible. Three-dimensional modelling software was used in conjunction with a desktop three-dimensional (3D) printer to design and manufacture an anatomically accurate model of the cartilage within the trachea (thyroid cartilage, cricoid cartilage, and the tracheal rings). The initial design was based on dimensions found in studies of tracheal anatomical configuration. This ensured that the landmarking necessary for emergency cricothyroidotomies was designed appropriately. Several revisions of the original model were made based on informal opinion from medical professionals to establish appropriate anatomical accuracy of the model for use in rural/remote cricothyroidotomy simulation. Using an entry-level desktop 3D printer, a low cost tracheal model was successfully designed that can be printed in less than three hours for only $1.70 Canadian dollars (CAD). Due to its anatomical accuracy, flexibility and durability, this model is great for use in emergency medicine simulation training. Additionally, the model can be assembled in conjunction with a membrane to simulate tracheal ligaments. Skin has been simulated as well to enhance the realism of the model. The result is an accurate simulation that will provide users with an anatomically correct model to practice important skills used in emergency airway surgery, specifically landmarking, incision and intubation. 
This design is a novel, high-fidelity trachea model that is easy to manufacture and reproduce and can be used by educators with limited resources.

  5. Modelling and Manufacturing of a 3D Printed Trachea for Cricothyroidotomy Simulation

    PubMed Central

    Ryan, Stephen; Bartellas, Michael; Parsons, Michael; Dubrowski, Adam; Renouf, Tia

    2017-01-01

    Cricothyroidotomy is a life-saving medical procedure that allows for tracheal intubation. Most current cricothyroidotomy simulation models are either expensive or not anatomically accurate and provide the learner with an unrealistic simulation experience. The goal of this project is to improve current simulation techniques by utilizing rapid prototyping using 3D printing technology and expert opinions to develop inexpensive and anatomically accurate trachea simulators. In doing so, emergency cricothyroidotomy simulation can be made accessible, accurate, cost-effective and reproducible. Three-dimensional modelling software was used in conjunction with a desktop three-dimensional (3D) printer to design and manufacture an anatomically accurate model of the cartilage within the trachea (thyroid cartilage, cricoid cartilage, and the tracheal rings). The initial design was based on dimensions found in studies of tracheal anatomical configuration. This ensured that the landmarking necessary for emergency cricothyroidotomies was designed appropriately. Several revisions of the original model were made based on informal opinion from medical professionals to establish appropriate anatomical accuracy of the model for use in rural/remote cricothyroidotomy simulation. Using an entry-level desktop 3D printer, a low cost tracheal model was successfully designed that can be printed in less than three hours for only $1.70 Canadian dollars (CAD). Due to its anatomical accuracy, flexibility and durability, this model is great for use in emergency medicine simulation training. Additionally, the model can be assembled in conjunction with a membrane to simulate tracheal ligaments. Skin has been simulated as well to enhance the realism of the model. The result is an accurate simulation that will provide users with an anatomically correct model to practice important skills used in emergency airway surgery, specifically landmarking, incision and intubation. 
This design is a novel, high-fidelity trachea model that is easy to manufacture and reproduce and can be used by educators with limited resources. PMID:29057187

  6. Quasi-steady aerodynamic model of clap-and-fling flapping MAV and validation using free-flight data.

    PubMed

    Armanini, S F; Caetano, J V; Croon, G C H E de; Visser, C C de; Mulder, M

    2016-06-30

    Flapping-wing aerodynamic models that are accurate, computationally efficient and physically meaningful, are challenging to obtain. Such models are essential to design flapping-wing micro air vehicles and to develop advanced controllers enhancing the autonomy of such vehicles. In this work, a phenomenological model is developed for the time-resolved aerodynamic forces on clap-and-fling ornithopters. The model is based on quasi-steady theory and accounts for inertial, circulatory, added mass and viscous forces. It extends existing quasi-steady approaches by including a fling circulation factor to account for unsteady wing-wing interaction, by considering real platform-specific wing kinematics, and by covering different flight regimes. The model parameters are estimated from wind tunnel measurements conducted on a real test platform. Comparison to wind tunnel data shows that the model predicts the lift forces on the test platform accurately, and accounts for wing-wing interaction effectively. Additionally, validation tests with real free-flight data show that lift forces can be predicted with considerable accuracy in different flight regimes. The complete parameter-varying model represents a wide range of flight conditions, is computationally simple, physically meaningful and requires few measurements. It is therefore potentially useful for both control design and preliminary conceptual studies for developing new platforms.
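For orientation, the circulatory (translational) term of a quasi-steady model typically looks like the textbook expression below. The flat-plate-like lift-coefficient law and the cl_max value are generic assumptions, not the paper's fitted parameter set, which also includes inertial, added-mass and viscous terms.

```python
import math

RHO_AIR = 1.225  # kg/m^3, sea-level air density

def quasi_steady_lift(velocity, area, alpha, cl_max=1.8):
    """Quasi-steady translational (circulatory) lift in newtons, using
    the common flat-plate-like law C_L = cl_max * sin(alpha) * cos(alpha).
    cl_max here is an illustrative value, not a fitted parameter."""
    cl = cl_max * math.sin(alpha) * math.cos(alpha)
    return 0.5 * RHO_AIR * velocity**2 * area * cl
```

In a full flapping-wing model this term is evaluated per wing element at the instantaneous flapping velocity and summed with the non-circulatory contributions.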

  7. Assessment of driver stopping prediction models before and after the onset of yellow using two driving simulator datasets.

    PubMed

    Ghanipoor Machiani, Sahar; Abbas, Montasir

    2016-11-01

    Accurate modeling of driver decisions in dilemma zones (DZ), where drivers are not sure whether to stop or go at the onset of yellow, can be used to increase safety at signalized intersections. This study utilized data obtained from two different driving simulator studies (VT-SCORES and NADS datasets) to investigate the possibility of developing accurate driver-decision prediction/classification models in DZ. Canonical discriminant analysis was used to construct the prediction models, and two timeframes were considered. The first timeframe used data collected during green immediately before the onset of yellow, and the second timeframe used data collected during the first three seconds after the onset of yellow. Signal protection algorithms could use the results of the prediction model during the first timeframe to decide the best time for ending the green signal, and could use the results of the prediction model during the first three seconds of yellow to extend the clearance interval. It was found that the discriminant model using data collected during the first three seconds of yellow was the most accurate, at 99% accuracy. It was also found that data collection should focus on variables that are related to speed, acceleration, time, and distance to intersection, as opposed to secondary variables, such as pavement conditions, since secondary variables did not significantly change the accuracy of the prediction models. The results reveal a promising possibility for incorporating the developed models in traffic-signal controllers to improve DZ-protection strategies. Copyright © 2015 Elsevier Ltd. All rights reserved.
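Canonical discriminant analysis with two classes reduces to Fisher's linear discriminant. A from-scratch sketch on synthetic stop/go feature vectors (e.g. speed and distance to the intersection) is shown below; it is a generic illustration, not the study's fitted model.

```python
import numpy as np

def fit_fisher_discriminant(X_stop, X_go):
    """Two-class Fisher discriminant: returns (w, threshold) such that
    x @ w > threshold classifies a driver as stopping."""
    mu_s, mu_g = X_stop.mean(axis=0), X_go.mean(axis=0)
    # Within-class scatter; a small ridge keeps the solve well-posed.
    Sw = np.cov(X_stop.T) + np.cov(X_go.T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(mu_s)), mu_s - mu_g)
    threshold = w @ (mu_s + mu_g) / 2.0
    return w, threshold

def predict_stop(x, w, threshold):
    return x @ w > threshold
```

The discriminant direction w weights each feature (speed, distance, acceleration, time) by how well it separates the two decisions, which is why the study found speed- and distance-related variables dominant.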

  8. Combined electrochemical, heat generation, and thermal model for large prismatic lithium-ion batteries in real-time applications

    NASA Astrophysics Data System (ADS)

    Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid

    2017-08-01

    Real-time prediction of the battery's core temperature and terminal voltage is crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, heat generation model, and thermal model, which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures from -25 °C to 45 °C.
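The iterative coupling through temperature-dependent parameters can be illustrated with a heavily simplified lumped fixed-point loop. The ohmic heat source, the steady-state thermal balance, and every numerical value below are illustrative assumptions, far simpler than the paper's three coupled sub-models.

```python
# Toy fixed-point coupling of an electrical and a thermal sub-model.
# All parameter values are illustrative, not battery data.

def coupled_electro_thermal(current, r_ref, t_amb, h=0.5,
                            temp_coeff=-0.005, tol=1e-8, max_iter=100):
    """Iterate Q = I^2 * R(T) against a lumped balance T = T_amb + Q/h,
    with R(T) = r_ref * (1 + temp_coeff * (T - T_amb)), until the core
    temperature T converges. Returns the converged temperature."""
    t = t_amb
    for _ in range(max_iter):
        r = r_ref * (1 + temp_coeff * (t - t_amb))  # electrical sub-model
        q = current**2 * r                           # heat generation
        t_new = t_amb + q / h                        # thermal sub-model
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```

The real model iterates full electrochemical, heat-generation and thermal sub-models in the same fashion, but the convergence structure is the same: each pass updates the shared temperature-dependent parameters.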

  9. Predicting vapor-liquid phase equilibria with augmented ab initio interatomic potentials

    NASA Astrophysics Data System (ADS)

    Vlasiuk, Maryna; Sadus, Richard J.

    2017-06-01

    The ability of ab initio interatomic potentials to accurately predict vapor-liquid phase equilibria is investigated. Monte Carlo simulations are reported for the vapor-liquid equilibria of argon and krypton using recently developed accurate ab initio interatomic potentials. Seventeen interatomic potentials are studied, formulated from different combinations of two-body plus three-body terms. The simulation results are compared to either experimental or reference data for conditions ranging from the triple point to the critical point. It is demonstrated that the use of ab initio potentials enables systematic improvements to the accuracy of predictions via the addition of theoretically based terms. The contribution of three-body interactions is accounted for using the Axilrod-Teller-Muto plus other multipole contributions and the effective Marcelli-Wang-Sadus potentials. The results indicate that the predictive ability of recent interatomic potentials, obtained from quantum chemical calculations, is comparable to that of accurate empirical models. It is demonstrated that the Marcelli-Wang-Sadus potential can be used in combination with accurate two-body ab initio models for the computationally inexpensive and accurate estimation of vapor-liquid phase equilibria.
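The Axilrod-Teller-Muto triple-dipole term mentioned here has a closed form, E = ν(1 + 3 cos γ₁ cos γ₂ cos γ₃)/(r₁₂ r₁₃ r₂₃)³, where the γ are the interior angles of the atom triangle. A direct implementation (with an arbitrary coefficient ν, not argon's or krypton's actual value) is:

```python
import math

def atm_energy(r1, r2, r3, nu=1.0):
    """Axilrod-Teller-Muto triple-dipole energy for three atoms at
    positions r1, r2, r3 (3-tuples). nu is the species-specific
    coefficient; the default is a placeholder, not a physical value."""
    r12, r13, r23 = math.dist(r1, r2), math.dist(r1, r3), math.dist(r2, r3)
    # Interior-angle cosines via the law of cosines.
    c1 = (r12**2 + r13**2 - r23**2) / (2 * r12 * r13)
    c2 = (r12**2 + r23**2 - r13**2) / (2 * r12 * r23)
    c3 = (r13**2 + r23**2 - r12**2) / (2 * r13 * r23)
    return nu * (1 + 3 * c1 * c2 * c3) / (r12 * r13 * r23) ** 3
```

In a Monte Carlo simulation this term is summed over all atom triplets and added to the two-body ab initio energy, which is the "two-body plus three-body" decomposition the abstract describes.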

  10. Predicting vapor-liquid phase equilibria with augmented ab initio interatomic potentials.

    PubMed

    Vlasiuk, Maryna; Sadus, Richard J

    2017-06-28

    The ability of ab initio interatomic potentials to accurately predict vapor-liquid phase equilibria is investigated. Monte Carlo simulations are reported for the vapor-liquid equilibria of argon and krypton using recently developed accurate ab initio interatomic potentials. Seventeen interatomic potentials are studied, formulated from different combinations of two-body plus three-body terms. The simulation results are compared to either experimental or reference data for conditions ranging from the triple point to the critical point. It is demonstrated that the use of ab initio potentials enables systematic improvements to the accuracy of predictions via the addition of theoretically based terms. The contribution of three-body interactions is accounted for using the Axilrod-Teller-Muto plus other multipole contributions and the effective Marcelli-Wang-Sadus potentials. The results indicate that the predictive ability of recent interatomic potentials, obtained from quantum chemical calculations, is comparable to that of accurate empirical models. It is demonstrated that the Marcelli-Wang-Sadus potential can be used in combination with accurate two-body ab initio models for the computationally inexpensive and accurate estimation of vapor-liquid phase equilibria.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wosnik, Martin; Bachant, Pete; Neary, Vincent Sinclair

    CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical axis and horizontal axis wind turbines, and it has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines has yet to be tested. The present study addresses this problem by comparing the predicted performance curves derived from CACTUS simulations of the U.S. Department of Energy’s 1:6 scale reference model crossflow turbine to those derived by experimental measurements in a tow tank using the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns on its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause for poor model prediction. A comparison of several different NACA 0021 foil data sources, derived using both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are, therefore, advised to limit its application to higher tip speed ratios (lower AoA), and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil is the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.

  12. Variational asymptotic modeling of composite dimensionally reducible structures

    NASA Astrophysics Data System (ADS)

    Yu, Wenbin

    A general framework to construct accurate reduced models for composite dimensionally reducible structures (beams, plates and shells) was formulated based on two theoretical foundations: decomposition of the rotation tensor and the variational asymptotic method. Two engineering software systems, Variational Asymptotic Beam Sectional Analysis (VABS, new version) and Variational Asymptotic Plate and Shell Analysis (VAPAS), were developed. Several restrictions found in previous work on beam modeling were removed in the present effort. A general formulation of Timoshenko-like cross-sectional analysis was developed, through which the shear center coordinates and a consistent Vlasov model can be obtained. Recovery relations are given to recover the asymptotic approximations for the three-dimensional field variables. A new version of VABS has been developed, which is a much improved program in comparison to the old one. Numerous examples are given for validation. A Reissner-like model, as asymptotically correct as possible, was obtained for composite plates and shells. After formulating the three-dimensional elasticity problem in intrinsic form, the variational asymptotic method was used to systematically reduce the dimensionality of the problem by taking advantage of the smallness of the thickness. The through-the-thickness analysis is solved by a one-dimensional finite element method to provide the stiffnesses as input for the two-dimensional nonlinear plate or shell analysis as well as recovery relations to approximately express the three-dimensional results. The known fact that more than one theory can be asymptotically correct to a given order is exploited to cast the refined energy into a Reissner-like form. A two-dimensional nonlinear shell theory consistent with the present modeling process was developed. The engineering computer code VAPAS was developed and inserted into DYMORE to provide an efficient and accurate analysis of composite plates and shells. Numerical results are compared with the exact solutions, and the excellent agreement demonstrates that one can use VAPAS to analyze composite plates and shells efficiently and accurately. In conclusion, rigorous modeling approaches were developed for composite beams, plates and shells within a general framework. No such consistent and general treatment is found in the literature. The associated computer programs VABS and VAPAS are envisioned to have many applications in industry.

  13. Predicting post-fire tree mortality for 12 western US conifers using the First-Order Fire Effects Model (FOFEM)

    Treesearch

    Sharon Hood; Duncan Lutes

    2017-01-01

    Accurate prediction of fire-caused tree mortality is critical for making sound land management decisions such as developing burning prescriptions and post-fire management guidelines. To improve efforts to predict post-fire tree mortality, we developed 3-year post-fire mortality models for 12 Western conifer species - white fir (Abies concolor [Gord. &...

  14. Analysis of GALE (Genesis of Atlantic Lows Experiment) Data

    DTIC Science & Technology

    1989-12-01

    being developed to accurately simulate and study the development of extratropical cyclones, which rapidly develop off the east coast of the U.S. and the...the model for the simulation of GALE storms. SAIC has worked with the NRL staff in the development of initialization schemes, including a vertical...at the 6th Extratropical Cyclone Workshop of the American Meteorological Society in Monterey, CA, June, 1987, entitled "A Model for the Simulation of

  15. Three new models for evaluation of standard involute spur gear mesh stiffness

    NASA Astrophysics Data System (ADS)

    Liang, Xihui; Zhang, Hongsheng; Zuo, Ming J.; Qin, Yong

    2018-02-01

    Time-varying mesh stiffness is one of the main internal excitation sources of gear dynamics. Accurate evaluation of gear mesh stiffness is crucial for gear dynamic analysis. This study is devoted to developing new models for spur gear mesh stiffness evaluation. Three models are proposed. The proposed model 1 can give very accurate mesh stiffness results, but the gear bore surface must be assumed to be rigid. Informed by the proposed model 1, our research discovers that the angular deflection pattern of the gear bore surface of a pair of meshing gears under a constant torque basically follows a cosine curve. Based on this finding, two other models are proposed. The proposed model 2 evaluates gear mesh stiffness by using angular deflections at different circumferential angles of an end surface circle of the gear bore. The proposed model 3 requires using only the angular deflection at an arbitrary circumferential angle of an end surface circle of the gear bore, but this model can only be used for a gear with the same tooth profile among all teeth. The proposed models are accurate in gear mesh stiffness evaluation and easy to use. Finite element analysis is used to validate the accuracy of the proposed models.

  16. Rapid analysis of composition and reactivity in cellulosic biomass feedstocks with near-infrared spectroscopy

    DOE PAGES

    Payne, Courtney E.; Wolfrum, Edward J.

    2015-03-12

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. Here are the results: We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. In conclusion, it is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
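Partial least squares as used here projects spectra onto a few latent components before regressing. A minimal single-response NIPALS sketch (not the software used in the study) illustrates the mechanics:

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal NIPALS PLS1. Returns regression coefficients b such that
    y_hat = (X - X_mean) @ b + y_mean. Generic sketch for illustration."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                  # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xk @ w                     # scores
        tt = t @ t
        p = Xk.T @ t / tt              # X loadings
        qk = yk @ t / tt               # y loading
        Xk = Xk - np.outer(t, p)       # deflate X
        yk = yk - qk * t               # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)
```

With NIR data, X would hold the spectra and y a measured property such as glucan content; the number of components is chosen by cross-validation.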

  17. Rapid analysis of composition and reactivity in cellulosic biomass feedstocks with near-infrared spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Payne, Courtney E.; Wolfrum, Edward J.

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. Here are the results: We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. In conclusion, it is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.

  18. A crystal plasticity model for slip in hexagonal close packed metals based on discrete dislocation simulations

    NASA Astrophysics Data System (ADS)

    Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.

    2017-06-01

    This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower scale physics. The fitting procedure employs concepts of machine learning—feature selection by regularized regression and cross-validation—to develop a robust, physically accurate crystal model. The work also presents a method for ensuring the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflects the crystal symmetry and slip system geometry of the DD simulations.
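The regularized-regression-with-cross-validation fitting strategy can be sketched generically: ridge-penalized least squares, with the penalty strength chosen by k-fold CV. This is a schematic stand-in, not the authors' actual operator-fitting code.

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression coefficients."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

def select_alpha_cv(X, y, alphas, k=5):
    """Pick the ridge penalty minimizing k-fold cross-validated MSE."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    best_alpha, best_mse = None, np.inf
    for alpha in alphas:
        mse = 0.0
        for fold in folds:
            mask = np.ones(n, dtype=bool)
            mask[fold] = False  # hold out this fold
            b = ridge_fit(X[mask], y[mask], alpha)
            mse += np.mean((X[fold] @ b - y[fold]) ** 2)
        if mse < best_mse:
            best_alpha, best_mse = alpha, mse
    return best_alpha
```

In the calibration described above, the rows of X would be dislocation-density features from the DD simulations and y the observed hardening response, with the penalty pruning physically insignificant interaction terms.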

  19. Fundamental Algorithms of the Goddard Battery Model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1985-01-01

    The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict Nickel Cadmium (NiCd) performance in a Low Earth Orbit (LEO) cycling regime. The model proper is currently still in development, but the inherent, fundamental algorithms (or methodologies) of the model are defined. At present, the model is closely dependent on empirical data, and the data base currently used is of questionable accuracy. Even so, very good correlations have been determined between model predictions and actual cycling data. A more accurate and encompassing data base has been generated to serve dual functions: show the limitations of the current data base, and be embedded in the model proper for more accurate predictions. The fundamental algorithms of the model, and the present data base and its limitations, are described, and a brief preliminary analysis of the new data base and its verification of the model's methodology are presented.

  20. Numerical simulation of magmatic hydrothermal systems

    USGS Publications Warehouse

    Ingebritsen, S.E.; Geiger, S.; Hurwitz, S.; Driesner, T.

    2010-01-01

    The dynamic behavior of magmatic hydrothermal systems entails coupled and nonlinear multiphase flow, heat and solute transport, and deformation in highly heterogeneous media. Thus, quantitative analysis of these systems depends mainly on numerical solution of coupled partial differential equations and complementary equations of state (EOS). The past 2 decades have seen steady growth of computational power and the development of numerical models that have eliminated or minimized the need for various simplifying assumptions. Considerable heuristic insight has been gained from process-oriented numerical modeling. Recent modeling efforts employing relatively complete EOS and accurate transport calculations have revealed dynamic behavior that was damped by linearized, less accurate models, including fluid property control of hydrothermal plume temperatures and three-dimensional geometries. Other recent modeling results have further elucidated the controlling role of permeability structure and revealed the potential for significant hydrothermally driven deformation. Key areas for future research include incorporation of accurate EOS for the complete H2O-NaCl-CO2 system, more realistic treatment of material heterogeneity in space and time, realistic description of large-scale relative permeability behavior, and intercode benchmarking comparisons. Copyright 2010 by the American Geophysical Union.

  1. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using the Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of Ids-Vgs curves between the long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results. It is observed that the model matches quite well with TCAD device simulations.
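The overall solution structure of such a Green's-function/series approach can be sketched in outline; the notation below is generic (hypothetical symbols), not the paper's exact multi-zone formulation:

```latex
% 2-D Poisson in the fully depleted n-type silicon film (doping N_D):
\nabla^{2}\psi(x,y) = -\frac{q\,N_{D}}{\epsilon_{si}}
% Split off a 1-D particular solution, leaving a Laplace problem:
\psi(x,y) = \psi_{p}(y) + \psi_{h}(x,y), \qquad \nabla^{2}\psi_{h} = 0
% Expand the homogeneous part in the transverse eigenfunctions:
\psi_{h}(x,y) = \sum_{n=1}^{\infty}\left[a_{n}\,e^{-k_{n}x}
  + b_{n}\,e^{-k_{n}(L-x)}\right]\sin(k_{n}y), \qquad k_{n} = \frac{n\pi}{t_{si}}
% Fourier coefficients follow from the source-side boundary potential:
a_{n} \simeq \frac{2}{t_{si}}\int_{0}^{t_{si}}
  \bigl[\psi(0,y)-\psi_{p}(y)\bigr]\sin(k_{n}y)\,dy
```

Correctly evaluating coefficients of this kind is the "Fourier coefficients are calculated correctly" step the abstract emphasizes; the short-channel roll-off enters through the exponential decay lengths 1/k_n relative to the gate length L.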

  2. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is combined with user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
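A one-step global fit of this kind can be sketched with scipy: all isothermal curves enter a single residual vector, so primary-model and secondary-model parameters are estimated simultaneously. The logistic primary model and Ratkowsky square-root secondary model below are illustrative choices with made-up parameter values, not necessarily the combinations offered in IPMP-Global Fit.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Illustrative primary model (logistic in log10 counts) coupled to a Ratkowsky
# square-root secondary model; all names and values are hypothetical.
def growth(t, T, p):
    y0, ymax, b, T0 = p
    mu = (b * (T - T0)) ** 2                    # secondary: sqrt(mu) = b(T - T0)
    return ymax - np.log10(1.0 + (10.0 ** (ymax - y0) - 1.0) * np.exp(-mu * t))

true_p = (3.0, 9.0, 0.04, 5.0)
t = np.linspace(0.0, 48.0, 13)
data = [(t, T, growth(t, T, true_p) + 0.05 * rng.normal(size=t.size))
        for T in (10.0, 15.0, 20.0, 25.0)]      # noisy isothermal curves

# One-step global fit: a single residual vector spanning every curve at once
def residuals(p):
    return np.concatenate([y - growth(ti, T, p) for ti, T, y in data])

fit = least_squares(residuals, x0=(2.5, 8.5, 0.06, 4.0),
                    bounds=([0.0, 5.0, 0.0, 0.0], [6.0, 12.0, 0.5, 9.0]))
print(fit.x)   # should land near (3.0, 9.0, 0.04, 5.0)
```

Because every curve constrains the shared parameters at once, the one-step fit avoids the error propagation of the classical two-step (primary then secondary) procedure.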

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Michael L.

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  4. ERS-1 and Seasat scatterometer measurements of ocean winds: Model functions and the directional distribution of short waves

    NASA Technical Reports Server (NTRS)

    Freilich, Michael H.; Dunbar, R. Scott

    1993-01-01

    Calculation of accurate vector winds from scatterometers requires knowledge of the relationship between backscatter cross-section and the geophysical variable of interest. As the detailed dynamics of wind generation of centimetric waves and radar-sea surface scattering at moderate incidence angles are not well known, empirical scatterometer model functions relating backscatter to winds must be developed. Less well appreciated is the fact that, given an accurate model function and some knowledge of the dominant scattering mechanisms, significant information on the amplitudes and directional distributions of centimetric roughness elements on the sea surface can be inferred. Accurate scatterometer model functions can thus be used to investigate wind generation of short waves under realistic conditions. The present investigation involves developing an empirical model function for the C-band (5.3 GHz) ERS-1 scatterometer and comparing Ku-band model functions with the C-band model to infer information on the two-dimensional spectrum of centimetric roughness elements in the ocean. The C-band model function development is based on collocations of global backscatter measurements with operational surface analyses produced by meteorological agencies. Strengths and limitations of the method are discussed, and the resulting model function is validated in part through comparison with the actual distributions of backscatter cross-section triplets. Details of the directional modulation as well as the wind speed sensitivity at C-band are investigated. Analysis of persistent outliers in the data is used to infer the magnitudes of non-wind effects (such as atmospheric stratification, swell, etc.). The ERS-1 C-band instrument and the Seasat Ku-band (14.6 GHz) scatterometer both imaged waves of approximately 3.4 cm wavelength, assuming that Bragg scattering is the dominant mechanism. Comparisons of the C-band and Ku-band model functions are used both to test the validity of the postulated Bragg mechanism and to investigate the directional distribution of the imaged waves under a variety of conditions where Bragg scatter is dominant.
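The cross-frequency comparison rests on the Bragg resonance condition: the imaged water wavelength is the radar wavelength divided by twice the sine of the incidence angle. A quick check (the incidence angles below are illustrative mid-swath values, not instrument specifications):

```python
import math

# Bragg resonance: lambda_bragg = lambda_radar / (2 sin(theta_incidence))
def bragg_wavelength_cm(freq_ghz, incidence_deg):
    lam_radar_cm = 30.0 / freq_ghz          # c/f in cm (c ~ 3e10 cm/s)
    return lam_radar_cm / (2.0 * math.sin(math.radians(incidence_deg)))

# Both instruments resonate with ~3.4 cm water waves at suitable incidence:
print(bragg_wavelength_cm(5.3, 56.0))   # C-band at steep incidence
print(bragg_wavelength_cm(14.6, 17.5))  # Ku-band at shallow incidence
```

This is why two scatterometers at very different frequencies can probe the same centimetric roughness elements, enabling the model-function comparison described above.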

  5. Prediction of essential oil content of oregano by hand-held and Fourier transform NIR spectroscopy.

    PubMed

    Camps, Cédric; Gérard, Marianne; Quennoz, Mélanie; Brabant, Cécile; Oberson, Carine; Simonnet, Xavier

    2014-05-01

    In the framework of a breeding programme, the analysis of hundreds of oregano samples to determine their essential oil content (EOC) is time-consuming and expensive in terms of labour. Therefore, developing a new method that is rapid, accurate and less expensive to use would be an asset to breeders. The aim of the present study was to develop a method based on near-infrared (NIR) spectroscopy to determine the EOC of oregano dried powder. Two spectroscopic approaches were compared, the first using a hand-held NIR device and the second a Fourier transform (FT) NIR spectrometer. Hand-held NIR (1000-1800 nm) measurements and partial least squares regression allowed the determination of EOC with R² and SEP values of 0.58 and 0.81 mL per 100 g dry matter (DM) respectively. Measurements with FT-NIR (1000-2500 nm) allowed the determination of EOC with R² and SEP values of 0.91 and 0.68 mL per 100 g DM respectively. RPD, RER and RPIQ values for the model implemented with FT-NIR data were satisfactory for screening application, while those obtained with hand-held NIR data were below the level required to consider the model as sufficiently accurate for screening application. The FT-NIR approach allowed the development of an accurate model for EOC prediction. Although the hand-held NIR approach is promising, it needs additional development before it can be used in practice. © 2013 Society of Chemical Industry.

  6. An investigation of jogging biomechanics using the full-body lumbar spine model: Model development and validation.

    PubMed

    Raabe, Margaret E; Chaudhari, Ajit M W

    2016-05-03

    The ability of a biomechanical simulation to produce results that can translate to real-life situations is largely dependent on the physiological accuracy of the musculoskeletal model. There are a limited number of freely-available, full-body models that exist in OpenSim, and those that do exist are very limited in terms of trunk musculature and degrees of freedom in the spine. Properly modeling the motion and musculature of the trunk is necessary to most accurately estimate lower extremity and spinal loading. The objective of this study was to develop and validate a more physiologically accurate OpenSim full-body model. By building upon three previously developed OpenSim models, the full-body lumbar spine (FBLS) model, comprised of 21 segments, 30 degrees-of-freedom, and 324 musculotendon actuators, was developed. The five lumbar vertebrae were modeled as individual bodies, and coupled constraints were implemented to describe the net motion of the spine. The eight major muscle groups of the lumbar spine were modeled (rectus abdominis, external and internal obliques, erector spinae, multifidus, quadratus lumborum, psoas major, and latissimus dorsi), and many of these muscle groups were modeled as multiple fascicles allowing the large muscles to act in multiple directions. The resulting FBLS model's trunk muscle geometry, maximal isometric joint moments, and simulated muscle activations compare well to experimental data. The FBLS model will be made freely available (https://simtk.org/home/fullbodylumbar) for others to perform additional analyses and develop simulations investigating full-body dynamics and contributions of the trunk muscles to dynamic tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Verification of sub-grid filtered drag models for gas-particle fluidized beds with immersed cylinder arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2014-04-23

    The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.

  8. The use of machine learning for the identification of peripheral artery disease and future mortality risk.

    PubMed

    Ross, Elsie Gyang; Shah, Nigam H; Dalman, Ronald L; Nead, Kevin T; Cooke, John P; Leeper, Nicholas J

    2016-11-01

    A key aspect of the precision medicine effort is the development of informatics tools that can analyze and interpret "big data" sets in an automated and adaptive fashion while providing accurate and actionable clinical information. The aims of this study were to develop machine learning algorithms for the identification of disease and the prognostication of mortality risk and to determine whether such models perform better than classical statistical analyses. Focusing on peripheral artery disease (PAD), patient data were derived from a prospective, observational study of 1755 patients who presented for elective coronary angiography. We employed multiple supervised machine learning algorithms and used diverse clinical, demographic, imaging, and genomic information in a hypothesis-free manner to build models that could identify patients with PAD and predict future mortality. Comparison was made to standard stepwise logistic regression models. Our machine-learned models outperformed stepwise logistic regression models both for the identification of patients with PAD (area under the curve, 0.87 vs 0.76, respectively; P = .03) and for the prediction of future mortality (area under the curve, 0.76 vs 0.65, respectively; P = .10). Both machine-learned models were markedly better calibrated than the stepwise logistic regression models, thus providing more accurate disease and mortality risk estimates. Machine learning approaches can produce more accurate disease classification and prediction models. These tools may prove clinically useful for the automated identification of patients with highly morbid diseases for which aggressive risk factor management can improve outcomes. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
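The core comparison, a flexible learner versus logistic regression scored by area under the ROC curve, can be reproduced in miniature on synthetic data. The XOR-style interaction below is a contrived stand-in for the kind of nonlinearity a linear model misses; nothing here reflects the actual PAD cohort:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Contrived cohort: the outcome depends on an interaction of two risk factors
# (plus one linear term), which a linear model cannot represent.
n = 4000
X = rng.normal(size=(n, 6))
logit = 2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2]
y = (logit + rng.logistic(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
auc_ml = roc_auc_score(y_te, GradientBoostingClassifier(random_state=0)
                       .fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
auc_lr = roc_auc_score(y_te, LogisticRegression()
                       .fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
print(auc_ml, auc_lr)   # the flexible learner should score noticeably higher
```

The gap between the two AUCs here comes entirely from the interaction term, illustrating why hypothesis-free learners can outperform stepwise regression when predictors interact.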

  9. Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle

    NASA Astrophysics Data System (ADS)

    Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.

    2017-04-01

    Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design-of-experiments technique. A regression method is used to train the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
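The surrogate-from-DOE workflow can be sketched end to end: sample the parameter box, evaluate an "expensive" model, fit a cheap regressor, and check the within-±5 K fraction. The stand-in function below is invented for illustration and is not a validated ladle model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented stand-in for the CFD results: stratified temperature as a smooth
# function of initial temperature (K), slag thickness (m), holding time (min).
def cfd_like(T0, slag, hold):
    return T0 - 0.25 * hold * np.exp(-8.0 * slag) - 0.002 * hold * (T0 - 1800.0)

# "Design of experiments": sample the parameter box, run the expensive model
n = 400
T0 = rng.uniform(1850.0, 1950.0, n)
slag = rng.uniform(0.02, 0.15, n)
hold = rng.uniform(10.0, 120.0, n)
y = cfd_like(T0, slag, hold) + rng.normal(0.0, 1.0, n)   # with solver scatter

# Cheap quadratic-polynomial surrogate fitted by least squares
def features(T0, slag, hold):
    return np.stack([np.ones_like(T0), T0, slag, hold, T0 * slag, T0 * hold,
                     slag * hold, slag ** 2 * hold, T0 ** 2, slag ** 2,
                     hold ** 2], axis=1)

w, *_ = np.linalg.lstsq(features(T0, slag, hold), y, rcond=None)

# Instantaneous predictions on unseen parameter sets, scored against the model
T0t, st, ht = (rng.uniform(1850.0, 1950.0, 200), rng.uniform(0.02, 0.15, 200),
               rng.uniform(10.0, 120.0, 200))
err = features(T0t, st, ht) @ w - cfd_like(T0t, st, ht)
frac = np.mean(np.abs(err) <= 5.0)
print(frac)   # fraction of predictions within +/-5 K
```

Once the coefficients are fitted, each prediction is a single dot product, which is what makes the surrogate fast enough for real-time caster set-point decisions.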

  10. Growth and yield in Eucalyptus globulus

    Treesearch

    James A. Rinehart; Richard B. Standiford

    1983-01-01

    A study of the major Eucalyptus globulus stands throughout California conducted by Woodbridge Metcalf in 1924 provides a complete and accurate data set for generating variable site-density yield models. Two models were developed using linear regression techniques. Model I depicts a linear relationship between age and yield best used for stands between five and fifteen...

  11. Estimation of phosphorous loss from agricultural land in the Heartland region of the U.S.A. using the APEX model

    USDA-ARS?s Scientific Manuscript database

    Accurate phosphorus (P) loss estimation from agricultural land is important for development of best management practices and protection of water quality. The Agricultural Policy/Environmental Extender (APEX) model is a powerful simulation model designed to simulate edge-of-field water, sediment, an...

  12. Development of an Agricultural Fertilizer Modeling System for Bi-Directional Ammonia Fluxes in the Community Multiscale Air Quality (CMAQ) Model

    EPA Science Inventory

    Atmospheric ammonia (NH3) plays an important role in fine-mode aerosol formation. Accurate estimates of ammonia from both human and natural emissions can reduce uncertainties in air quality modeling. The majority of ammonia anthropogenic emissions come from the agricul...

  13. Fuel consumption models for pine flatwoods fuel types in the southeastern United States

    Treesearch

    Clinton S. Wright

    2013-01-01

    Modeling fire effects, including terrestrial and atmospheric carbon fluxes and pollutant emissions during wildland fires, requires accurate predictions of fuel consumption. Empirical models were developed for predicting fuel consumption from fuel and environmental measurements on a series of operational prescribed fires in pine flatwoods ecosystems in the southeastern...

  14. Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects

    NASA Astrophysics Data System (ADS)

    Jarndal, Anwar; Ghannouchi, Fadhel M.

    2016-09-01

    In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
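A genetic-algorithm parameter extraction of this general kind can be sketched in a few lines. The tanh-shaped current expression and all values below are toy stand-ins, not the paper's GaN HEMT model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy I-V target: a tanh-shaped drain-current curve whose three "true"
# parameters the GA must recover (illustrative only).
def ids(vgs, p):
    ipk, vpk, a = p
    return ipk * (1.0 + np.tanh(a * (vgs - vpk)))

vgs = np.linspace(-4.0, 0.0, 40)
target = ids(vgs, (0.4, -1.5, 1.2))

def fitness(p):                              # negative mean-squared fit error
    return -np.mean((ids(vgs, p) - target) ** 2)

# Minimal GA: tournament selection + Gaussian mutation + elitism
pop = rng.uniform([0.0, -4.0, 0.1], [1.0, 0.0, 3.0], size=(60, 3))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    best = pop[np.argmax(scores)].copy()
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    winners = np.where((scores[i] > scores[j])[:, None], pop[i], pop[j])
    pop = winners + rng.normal(0.0, 0.02, winners.shape)   # mutate
    pop[0] = best                                          # keep the elite
print(pop[0])   # should approach (0.4, -1.5, 1.2)
```

Because the GA only evaluates the model, never its gradients, the same loop works unchanged when the current expression includes non-smooth trapping and thermal terms.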

  15. Advanced Space Propulsion System Flowfield Modeling

    NASA Technical Reports Server (NTRS)

    Smith, Sheldon

    1998-01-01

    Solar thermal upper stage propulsion systems currently under development utilize small low-chamber-pressure/high-area-ratio nozzles. Consequently, the resulting flow in the nozzle is highly viscous, with the boundary layer flow comprising a significant fraction of the total nozzle flow area. Conventional uncoupled flow methods, which treat the nozzle boundary layer and inviscid flowfield separately by combining the two calculations via the influence of the boundary layer displacement thickness on the inviscid flowfield, are not accurate enough to adequately treat highly viscous nozzles. Navier-Stokes models such as VNAP2 can treat these flowfields but cannot perform a vacuum plume expansion for applications where the exhaust plume produces induced environments on adjacent structures. This study built upon recently developed artificial intelligence methods and user interface methodologies to couple the VNAP2 model for treating viscous nozzle flowfields with a vacuum plume flowfield model (RAMP2) that is currently a part of the Plume Environment Prediction (PEP) model. This study integrated the VNAP2 code into the PEP model to produce an accurate, practical, and user-friendly tool for calculating highly viscous nozzle and exhaust plume flowfields.

  16. Development of a Physiologically-Based Pharmacokinetic Model of the Rat Central Nervous System

    PubMed Central

    Badhan, Raj K. Singh; Chenel, Marylore; Penny, Jeffrey I.

    2014-01-01

    Central nervous system (CNS) drug disposition is dictated by a drug’s physicochemical properties and its ability to permeate physiological barriers. The blood–brain barrier (BBB), blood-cerebrospinal fluid barrier and centrally located drug transporter proteins influence drug disposition within the central nervous system. Attainment of adequate brain-to-plasma and cerebrospinal fluid-to-plasma partitioning is important in determining the efficacy of centrally acting therapeutics. We have developed a physiologically-based pharmacokinetic model of the rat CNS which incorporates brain interstitial fluid (ISF), choroidal epithelial and total cerebrospinal fluid (CSF) compartments and accurately predicts CNS pharmacokinetics. The model yielded reasonable predictions of unbound brain-to-plasma partition ratio (Kpuu,brain) and CSF:plasma ratio (CSF:Plasmau) using a series of in vitro permeability and unbound fraction parameters. When using in vitro permeability data obtained from L-mdr1a cells to estimate rat in vivo permeability, the model successfully predicted, to within 4-fold, Kpuu,brain and CSF:Plasmau for 81.5% of compounds simulated. The model presented allows for simultaneous simulation and analysis of both brain biophase and CSF to accurately predict CNS pharmacokinetics from preclinical drug parameters routinely available during discovery and development pathways. PMID:24647103

  17. Team deliberate practice in medicine and related domains: a consideration of the issues.

    PubMed

    Harris, Kevin R; Eccles, David W; Shatzer, John H

    2017-03-01

    A better understanding of the factors influencing medical team performance and accounting for expert medical team performance should benefit medical practice. Therefore, the aim here is to highlight key issues with using deliberate practice to improve medical team performance, especially given the success of deliberate practice for developing individual expert performance in medicine and other domains. Highlighting these issues will inform the development of training for medical teams. The authors first describe team coordination and its critical role in medical teams. Presented next are the cognitive mechanisms that allow expert performers to accurately interpret the current situation via the creation of an accurate mental "model" of the current situation, known as a situation model. Following this, the authors propose that effective team performance depends at least in part on team members having similar models of the situation, known as a shared situation model. The authors then propose guiding principles for implementing team deliberate practice in medicine and describe how team deliberate practice can be used in an attempt to reduce barriers inherent in medical teams to the development of shared situation models. The paper concludes with considerations of limitations, and future research directions, concerning the implementation of team deliberate practice within medicine.

  18. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  19. Prediction Models for 30-Day Mortality and Complications After Total Knee and Hip Arthroplasties for Veteran Health Administration Patients With Osteoarthritis.

    PubMed

    Harris, Alex Hs; Kuo, Alfred C; Bowe, Thomas; Gupta, Shalini; Nordin, David; Giori, Nicholas J

    2018-05-01

    Statistical models to preoperatively predict patients' risk of death and major complications after total joint arthroplasty (TJA) could improve the quality of preoperative management and informed consent. Although risk models for TJA exist, they have limitations including poor transparency and/or unknown or poor performance. Thus, it is currently impossible to know how well currently available models predict short-term complications after TJA, or if newly developed models are more accurate. We sought to develop and conduct cross-validation of predictive risk models, and report details and performance metrics as benchmarks. Over 90 preoperative variables were used as candidate predictors of death and major complications within 30 days for Veterans Health Administration patients with osteoarthritis who underwent TJA. Data were split into 3 samples-for selection of model tuning parameters, model development, and cross-validation. C-indexes (discrimination) and calibration plots were produced. A total of 70,569 patients diagnosed with osteoarthritis who received primary TJA were included. C-statistics and bootstrapped confidence intervals for the cross-validation of the boosted regression models were highest for cardiac complications (0.75; 0.71-0.79) and 30-day mortality (0.73; 0.66-0.79) and lowest for deep vein thrombosis (0.59; 0.55-0.64) and return to the operating room (0.60; 0.57-0.63). Moderately accurate predictive models of 30-day mortality and cardiac complications after TJA in Veterans Health Administration patients were developed and internally cross-validated. By reporting model coefficients and performance metrics, other model developers can test these models on new samples and have a procedure and indication-specific benchmark to surpass. Published by Elsevier Inc.
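Reporting a C-index with a bootstrapped confidence interval, as done above, can be sketched as follows; the event rate and score quality are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def c_index(y, score):
    # C-statistic (AUC) via the rank-sum identity; ties ignored in this sketch
    order = np.argsort(score)
    rank = np.empty(len(score)); rank[order] = np.arange(1, len(score) + 1)
    npos = (y == 1).sum(); nneg = len(y) - npos
    return (rank[y == 1].sum() - npos * (npos + 1) / 2) / (npos * nneg)

# Synthetic held-out predictions: a risk score that imperfectly separates
# events from non-events (rates and effect size invented for illustration)
n = 2000
y = (rng.random(n) < 0.1).astype(int)
score = 0.8 * y + rng.normal(size=n)

c = c_index(y, score)
boot = [c_index(y[i], score[i])                 # bootstrap resamples
        for i in (rng.integers(0, n, n) for _ in range(500))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(c, lo, hi)
```

Resampling patients with replacement and recomputing the C-index yields the percentile confidence intervals of the kind quoted in the abstract (e.g. 0.75; 0.71-0.79).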

  20. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  1. Development and Validation of a Disease Severity Scoring Model for Pediatric Sepsis.

    PubMed

    Hu, Li; Zhu, Yimin; Chen, Mengshi; Li, Xun; Lu, Xiulan; Liang, Ying; Tan, Hongzhuan

    2016-07-01

    Multiple severity scoring systems have been devised and evaluated in adult sepsis, but a simplified scoring model for pediatric sepsis has not yet been developed. This study aimed to develop and validate a new scoring model to stratify the severity of pediatric sepsis, thus assisting the treatment of sepsis in children. Data from 634 consecutive patients who presented with sepsis at the Children's Hospital of Hunan Province, China, in 2011-2013 were analyzed, with 476 patients placed in the training group and 158 patients in the validation group. Stepwise discriminant analysis was used to develop the accurate discriminant model. A simplified scoring model was generated using weightings defined by the discriminant coefficients. The discriminant ability of the model was tested by receiver operating characteristic (ROC) curves. The discriminant analysis showed that prothrombin time, D-dimer, total bilirubin, serum total protein, uric acid, PaO2/FiO2 ratio, and myoglobin were associated with severity of sepsis. These seven variables were assigned values of 4, 3, 3, 4, 3, 3, and 3 respectively, based on the standardized discriminant coefficients. Patients with higher scores had a higher risk of severe sepsis. The areas under the ROC curve (AROC) were 0.836 for the accurate discriminant model and 0.825 for the simplified scoring model in the validation group. The proposed disease severity scoring model for pediatric sepsis showed adequate discriminatory capacity and sufficient accuracy, which has important clinical significance in evaluating the severity of pediatric sepsis and predicting its progress.
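The scoring construction, binary indicators weighted by rounded discriminant coefficients and evaluated by ROC analysis, can be sketched on synthetic data. The weights follow the abstract's 4/3/3/4/3/3/3 assignment; the prevalence and indicator probabilities are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simplified severity score: seven binary indicators (abnormal prothrombin
# time, D-dimer, etc.) weighted by the rounded discriminant coefficients.
weights = np.array([4, 3, 3, 4, 3, 3, 3])

n = 1000
severe = rng.random(n) < 0.3                     # invented prevalence
p_abnormal = np.where(severe[:, None], 0.6, 0.2) # markers more often abnormal
indicators = (rng.random((n, 7)) < p_abnormal).astype(int)
score = indicators @ weights                     # higher score -> higher risk

def auc(y, s):                                   # rank-sum AUC (ties arbitrary)
    order = np.argsort(s)
    rank = np.empty(len(s)); rank[order] = np.arange(1, len(s) + 1)
    npos = y.sum(); nneg = len(y) - npos
    return (rank[y].sum() - npos * (npos + 1) / 2) / (npos * nneg)

a = auc(severe, score)
print(a)
```

Rounding the discriminant coefficients to small integers trades a little AUC for bedside usability, mirroring the 0.836-vs-0.825 difference between the accurate and simplified models in the abstract.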

  2. The Jupiter ONERA Electron (JOE) and Jupiter ONERA Proton (JOP) specification models

    NASA Astrophysics Data System (ADS)

    Bourdarie, S.; Sicard-Piet, A.

    2008-09-01

    The use of recent improvements in the understanding of the Jovian radiation belt structure has allowed the development of a more accurate engineering model of the Jovian electron and proton radiation belts. The basic idea was to combine the results of the Salammbô code when available (for proton and electron species) with the Divine and Garrett model of 1983 and/or with GIRE. The advantage of such an approach is that the resulting model is global in terms of spatial and energy coverage, is optimised inside Europa's orbit (the Divine and Garrett model is not accurate inside Io's orbit due to poor in-situ data there; note that the region inside Io's orbit is where ionizing radiation fluxes are maximum), and takes advantage of both models. The resulting JOE-JOP models will be presented, and their pros and cons will be listed and commented on. Finally, future plans to upgrade these models will be given.

  3. Dynamic modeling of sludge compaction and consolidation processes in wastewater secondary settling tanks.

    PubMed

    Abusam, A; Keesman, K J

    2009-01-01

    The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, it does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream facilitates the calibration process and can lead to correct estimates of kinetic parameters, particularly those related to biomass growth. Using principles of compaction and consolidation from soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirms the validity of the modified model.
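    For reference, the double exponential settling velocity referred to above is commonly written in the Takács form; a minimal sketch with typical literature parameter values (illustrative defaults, not the calibrated values from this plant):

```python
import math

def settling_velocity(X, X_min=10.0, v0=474.0, v0_max=250.0,
                      rh=5.76e-4, rp=2.86e-3):
    """Double exponential (Takacs) settling velocity [m/d] for solids
    concentration X [g/m^3]; X_min is the non-settleable concentration.
    Parameter values are typical textbook defaults, for illustration only."""
    Xs = max(X - X_min, 0.0)  # effective settleable concentration
    v = v0 * (math.exp(-rh * Xs) - math.exp(-rp * Xs))
    return max(0.0, min(v0_max, v))

print(settling_velocity(3000.0))  # hindered-settling regime, well below v0_max
```

    The two exponentials capture hindered settling of the bulk sludge and the slower settling of dispersed fine particles, respectively; the compression and consolidation behavior modeled in the paper acts on top of this baseline.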

  4. A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.

    2006-01-01

    The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements aboard the ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both environmental models and nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermoluminescent Detector (TLD) area monitors demonstrated that computational dosimetry requires environmental models with accurate anisotropic as well as dynamic behavior, detailed information on rack loading, and an accurate six-degree-of-freedom (DOF) description of ISS trajectory and orientation.

  5. Performance characterization of complex fuel port geometries for hybrid rocket fuel grains

    NASA Astrophysics Data System (ADS)

    Bath, Andrew

    This research investigated the 3D printing and burning of hybrid rocket fuel grains with complex port geometry, along with the development of software capable of modeling and predicting the regression of a cross-section of these grains. The software predicted the geometry with fair accuracy, especially when enhanced corner rounding was enabled. The model has some drawbacks: it is relatively slow and does not perfectly predict the regression. With corner rounding disabled, the model becomes much faster; although less accurate, it still predicts a reasonably accurate final burn geometry and is fast enough to be used for performance tuning or genetic algorithms. In addition to the modeling method, preliminary investigations into the burning behavior of fuel grains with a helical flow path were performed. The helical fuel grains exhibited a regression rate nearly three times that of any other fuel grain geometry, primarily due to the enhancement of the friction coefficient between the flow and the flow path.
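    The classical space-averaged regression rate law for hybrid fuels (a standard textbook correlation, not the cross-section marching model developed in this work) gives a feel for how oxidizer mass flux drives regression; the coefficients a and n below are illustrative order-of-magnitude values, not fitted constants:

```python
import math

def regression_rate(G_ox, a=2.0e-5, n=0.62):
    """Classical hybrid correlation r_dot = a * G^n, with G_ox the oxidizer
    mass flux [kg/m^2/s]; a and n are illustrative, fuel-specific values."""
    return a * G_ox ** n

def march_circular_port(r0, mdot_ox, dt, steps):
    """Explicitly march a circular port radius in time: as the port opens
    up, the mass flux (and hence the regression rate) drops."""
    r = r0
    for _ in range(steps):
        G = mdot_ox / (math.pi * r * r)  # oxidizer mass flux through the port
        r += regression_rate(G) * dt
    return r
```

    A geometry-resolving model like the one in this work replaces the circular-port assumption with a marched boundary, which is where corner-rounding behavior enters.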

  6. Accurate electromagnetic modeling of terahertz detectors

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; McGrath, William R.

    2004-01-01

    Twin slot antennas coupled to superconducting devices have been developed over the years as single-pixel detectors in the terahertz (THz) frequency range for space-based and astronomy applications. Used either for mixing or for direct detection, they have been the object of several investigations and are currently being developed for several missions funded or co-funded by NASA. Although they have shown promising performance in terms of noise and sensitivity, they have usually also shown considerable disagreement between calculated and measured performance, especially in center frequency and bandwidth. In this paper we present a thorough and accurate electromagnetic model of the complete detector and compare the results of calculations with measurements. Starting from a model of the embedding circuit, the effect of all other elements of the detector on the coupled power is analyzed. An extensive variety of measured and calculated data, presented in this paper, demonstrates the effectiveness and reliability of the electromagnetic model at frequencies between 600 GHz and 2.5 THz.

  7. The solidification velocity of nickel and titanium alloys

    NASA Astrophysics Data System (ADS)

    Altgilbers, Alex Sho

    2002-09-01

    The solidification velocities of several Ni-Ti, Ni-Sn, Ni-Si, Ti-Al and Ti-Ni alloys were measured as a function of undercooling. From these results, a model for alloy solidification was developed that can be used to predict the solidification velocity as a function of undercooling more accurately. During this investigation, a phenomenon was observed in the solidification velocity that is a direct result of the addition of the various alloying elements to nickel and titanium: the additions produced an additional solidification velocity plateau at intermediate undercoolings. Past work has shown that a solidification velocity plateau at high undercoolings can be attributed to residual oxygen. It is shown that a logistic growth model is a more accurate model for predicting the solidification of alloys. Additionally, a numerical model is developed from a simple description of the effect of solute on the solidification velocity, which uses a Boltzmann logistic function to predict the plateaus that occur at intermediate undercoolings.
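    The Boltzmann logistic form referred to above interpolates smoothly between two velocity plateaus; a minimal sketch with illustrative (not fitted) parameters:

```python
import math

def boltzmann_velocity(dT, v_low=2.0, v_high=40.0, dT_mid=150.0, width=15.0):
    """Logistic (Boltzmann) step between a low-undercooling plateau v_low
    and a high-undercooling plateau v_high [m/s], centered at undercooling
    dT_mid [K] with transition width `width` [K]. All values illustrative."""
    return v_low + (v_high - v_low) / (1.0 + math.exp(-(dT - dT_mid) / width))

print(boltzmann_velocity(150.0))  # 21.0 -- the midpoint of the two plateaus
```

    Summing two such terms with different midpoints would reproduce the intermediate plateau the abstract attributes to solute effects.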

  8. A generic efficient adaptive grid scheme for rocket propulsion modeling

    NASA Technical Reports Server (NTRS)

    Mo, J. D.; Chow, Alan S.

    1993-01-01

    The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for predicting internal one-dimensional, two-dimensional, and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with a Lower-Upper factorization scheme in the development of the ALUNS computer code. Calculations of one-dimensional shock tube wave propagation and of two-dimensional shock wave capture, wave-wave interactions, and shock wave-boundary interactions show that the developed scheme is stable, accurate, and extremely robust. The adaptive grid generator produced a very favorable grid network via a grid speed technique. This generic adaptive grid generator has also been applied in the PARC and FDNS codes, and the computational results from those codes for solid rocket nozzle flowfields and crystal growth modeling will also be presented at the conference. This research work is supported by NASA/MSFC.

  9. Nursing Assessment Tool for People With Liver Cirrhosis

    PubMed Central

    Reis, Renata Karina; da Silva, Patrícia Costa dos Santos; Silva, Ana Elisa Bauer de Camargo; Atila, Elisabeth

    2016-01-01

    The aim of this study was to describe the process of developing a nursing assessment tool for hospitalized adult patients with liver cirrhosis. A descriptive study was carried out in three stages. First, we conducted a literature review to develop a data collection tool on the basis of the Conceptual Model of Wanda Horta. Second, the data collection tool was assessed by an expert panel. Third, we conducted pilot testing in hospitalized patients. Most of the comments offered by the panel members were accepted to improve the tool. The final version took the form of a questionnaire with open and closed questions. The panel members concluded that the tool was useful for accurate nursing diagnosis. Horta's Conceptual Model assisted in the development of this data collection tool to help nurses identify accurate nursing diagnoses in hospitalized patients with liver cirrhosis. We hope that the tool can be used by all nurses in clinical practice. PMID:26425862

  10. A reflection model for eclipsing binary stars

    NASA Technical Reports Server (NTRS)

    Wood, D. B.

    1973-01-01

    A highly accurate reflection model has been developed that emphasizes efficiency of computer calculation. It is assumed that the heating of the irradiated star depends upon the following properties of the irradiating star: (1) effective temperature; (2) apparent area as seen from a point on the surface of the irradiated star; (3) limb darkening; and (4) zenith distance of the apparent centre as seen from a point on the surface of the irradiated star. The algorithm eliminates the need to integrate over the irradiating star while providing a highly accurate representation of the integrated bolometric flux, even for gravitationally distorted stars.
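    A back-of-the-envelope version of the dependence on properties (1), (2), and (4), treating the irradiating star as a uniform disc and so ignoring limb darkening and distortion, is sketched below; this is an illustrative approximation, not the paper's algorithm:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def incident_flux(T_eff, solid_angle, zenith):
    """Bolometric flux received at a surface point from an irradiating
    star of effective temperature T_eff [K] subtending `solid_angle` [sr]
    at zenith distance `zenith` [rad]; uniform-disc approximation."""
    if zenith >= math.pi / 2.0:
        return 0.0  # irradiating star below the local horizon
    mean_intensity = SIGMA * T_eff ** 4 / math.pi  # disc-averaged intensity
    return mean_intensity * solid_angle * math.cos(zenith)
```

    The paper's contribution is handling properties (3) and the distorted-star geometry accurately without a full surface integration.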

  11. Hypersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Chow, Chuen-Yen; Ryan, James S.

    1987-01-01

    While the zonal grid system of the Transonic Navier-Stokes (TNS) code provides excellent modeling of complex geometries, improved shock capturing and a higher Mach number range will be required if flows about hypersonic aircraft are to be modeled accurately. A computational fluid dynamics (CFD) code, the Compressible Navier-Stokes (CNS), is under development to combine the required high Mach number capability with the existing TNS geometry capability. One of several candidate flow solvers for inclusion in the CNS is the F3D solver. This upwind flow solver promises improved shock capturing and more accurate hypersonic solutions overall, compared to the solver currently used in TNS.

  12. Modeling of a Surface Acoustic Wave Strain Sensor

    NASA Technical Reports Server (NTRS)

    Wilson, W. C.; Atkinson, Gary M.

    2010-01-01

    NASA Langley Research Center is investigating Surface Acoustic Wave (SAW) sensor technology for harsh environments aimed at aerospace applications. To aid in the development of these sensors, a model of a SAW strain sensor has been developed. The new model extends the modified matrix method to include the response of Orthogonal Frequency Coded (OFC) reflectors and the response of SAW devices to strain. The results show that the model accurately captures the strain response of a SAW sensor on a Langasite substrate. The results of the model of a SAW strain sensor on Langasite are presented.

  13. Molecular Simulation of the Free Energy for the Accurate Determination of Phase Transition Properties of Molecular Solids

    NASA Astrophysics Data System (ADS)

    Sellers, Michael; Lisal, Martin; Brennan, John

    2015-06-01

    Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one such property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of difficulty or inaccuracy when simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to ``over-driving,'' and dual-phase simulations require many long-time runs to seek out what frequently results in an inexact location of phase-coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha to gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX, and compare its value to experiment and atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations that total 30-50 ns. Approved for public release. Distribution is unlimited.

  14. NPAC-Nozzle Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

    A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, state, and other relations, permitting fast and accurate calculation of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.
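    The gross-thrust bookkeeping such an analysis couples can be illustrated with the standard one-dimensional control-volume expression (a sketch of the textbook relation, not NPAC's internal formulation):

```python
def gross_thrust(mdot, v_exit, p_exit, p_amb, a_exit):
    """One-dimensional nozzle gross thrust [N]: exit momentum flux plus
    the pressure-area term, which vanishes at perfect expansion and goes
    negative when the nozzle is overexpanded (p_exit < p_amb)."""
    return mdot * v_exit + (p_exit - p_amb) * a_exit

# Perfectly expanded case: thrust is purely the momentum flux.
print(gross_thrust(100.0, 2000.0, 101325.0, 101325.0, 1.0))  # 200000.0
```

    Effects such as flow divergence, wall friction, and mass addition would appear as corrections to the momentum and pressure terms above.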

  15. On-line prediction of the glucose concentration of CHO cell cultivations by NIR and Raman spectroscopy: Comparative scalability test with a shake flask model system.

    PubMed

    Kozma, Bence; Hirsch, Edit; Gergely, Szilveszter; Párta, László; Pataki, Hajnalka; Salgó, András

    2017-10-25

    In this study, near-infrared (NIR) and Raman spectroscopy were compared in parallel to predict the glucose concentration of Chinese hamster ovary cell cultivations. A shake flask model system was used to quickly generate spectra similar to those of bioreactor cultivations, thereby accelerating the development of a working model prior to actual cultivations. Automated variable selection and several pre-processing methods were tested iteratively during model development using spectra from six shake flask cultivations. The target was to achieve the lowest prediction error for the glucose concentration in two independent shake flasks. The best model was then used to test the scalability of the two techniques by predicting spectra from a 10 l and a 100 l scale bioreactor cultivation. The NIR-based model could follow the trend of the glucose concentration but was not sufficiently accurate for bioreactor monitoring. The Raman-based model, on the other hand, predicted the glucose concentration at both cultivation scales sufficiently accurately, with an error of around 4 mM (0.72 g/l), which is satisfactory for the on-line bioreactor monitoring purposes of the biopharmaceutical industry. The shake flask model system was therefore shown to be suitable for scalable spectroscopic model development.

  16. Ultrasonic Motors (USM) - an emerging actuation technology for planetary applications

    NASA Technical Reports Server (NTRS)

    Bao, X.; Das, H.

    2000-01-01

    A hybrid model that addresses a complete ultrasonic motor as a system was developed. The model allows the use of a powerful commercial finite element (FE) package to express the dynamic characteristics of the stator and the rotor in engineering practice. An analog model couples the finite element models of the stator and rotor across the stator-interface layer-rotor system. The model provides reasonably accurate results for CAD.

  17. Wind Turbine Modeling Overview for Control Engineers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, P. J.; Butterfield, S. B.

    2009-01-01

    Accurate modeling of wind turbine systems is of paramount importance for controls engineers seeking to reduce loads and optimize energy capture of operating turbines in the field. When designing control systems, engineers often employ a series of models developed in the different disciplines of wind energy. The limitations and coupling of each of these models are explained to highlight how these models might influence control system design.

  18. Evaluation of a post-fire tree mortality model for western US conifers

    Treesearch

    Sharon M. Hood; Charles W McHugh; Kevin C. Ryan; Elizabeth Reinhardt; Sheri L. Smith

    2007-01-01

    Accurately predicting fire-caused mortality is essential to developing prescribed fire burn plans and post-fire salvage marking guidelines. The mortality model included in the commonly used USA fire behaviour and effects models, the First Order Fire Effects Model (FOFEM), BehavePlus, and the Fire and Fuels Extension to the Forest Vegetation Simulator (FFE-FVS), has not...

  19. Integrating satellite imagery with simulation modeling to improve burn severity mapping

    Treesearch

    Eva C. Karau; Pamela G. Sikkink; Robert E. Keane; Gregory K. Dillon

    2014-01-01

    Both satellite imagery and spatial fire effects models are valuable tools for generating burn severity maps that are useful to fire scientists and resource managers. The purpose of this study was to test a new mapping approach that integrates imagery and modeling to create more accurate burn severity maps. We developed and assessed a statistical model that combines the...

  20. A simple, mass balance model of carbon flow in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Garland, Jay L.

    1989-01-01

    Internal cycling of chemical elements is a fundamental aspect of a Controlled Ecological Life Support System (CELSS). Mathematical models are useful tools for evaluating fluxes and reservoirs of elements associated with potential CELSS configurations. A simple mass balance model of carbon flow in CELSS was developed based on data from the CELSS Breadboard project at Kennedy Space Center. All carbon reservoirs and fluxes were calculated based on steady state conditions and modelled using linear, donor-controlled transfer coefficients. The linear expression of photosynthetic flux was replaced with Michaelis-Menten kinetics based on dynamical analysis of the model which found that the latter produced more adequate model output. Sensitivity analysis of the model indicated that accurate determination of the maximum rate of gross primary production is critical to the development of an accurate model of carbon flow. Atmospheric carbon dioxide was particularly sensitive to changes in photosynthetic rate. The small reservoir of CO2 relative to large CO2 fluxes increases the potential for volatility in CO2 concentration. Feedback control mechanisms regulating CO2 concentration will probably be necessary in a CELSS to reduce this system instability.
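    The substitution described above, replacing a linear donor-controlled transfer with Michaelis-Menten kinetics for photosynthesis, can be sketched as follows (rate constants are hypothetical, not Breadboard values):

```python
def linear_flux(donor_pool, k):
    """Donor-controlled transfer: flux grows without bound with the pool."""
    return k * donor_pool

def michaelis_menten_flux(co2, v_max, k_m):
    """Saturating photosynthetic uptake: flux approaches v_max as CO2
    rises, so gross primary production is capped, unlike the linear form."""
    return v_max * co2 / (k_m + co2)

# Half-saturation: at co2 == k_m the flux is exactly v_max / 2.
print(michaelis_menten_flux(400.0, 10.0, 400.0))  # 5.0
```

    The saturating form is what makes the modeled CO2 reservoir sensitive to the maximum rate of gross primary production, as the sensitivity analysis found.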

  1. Developing a Crustal and Upper Mantle Velocity Model for the Brazilian Northeast

    NASA Astrophysics Data System (ADS)

    Julia, J.; Nascimento, R.

    2013-05-01

    Development of 3D models of the earth's crust and upper mantle is important for accurately predicting travel times for regional phases and for improving seismic event location. The Brazilian Northeast is a tectonically active area within stable South America and displays one of the highest levels of seismicity in Brazil, with earthquake swarms containing events up to mb 5.2. Since 2011, seismic activity has been routinely monitored through the Rede Sismográfica do Nordeste (RSisNE), a permanent network supported by the national oil company PETROBRAS and consisting of 15 broadband stations with an average spacing of ~200 km. Accurate event locations are required to correctly characterize and identify seismogenic areas in the region and assess seismic hazard. Yet no 3D model of crustal thickness and of crustal and upper mantle velocity variation exists. The first step in developing such models is to refine crustal thickness and depths to major seismic velocity boundaries in the crust and to improve seismic velocity estimates for the upper mantle and crustal layers. We present recent results on crustal and uppermost mantle structure in NE Brazil that will contribute to the development of a 3D model of velocity variation. Our approach has consisted of: (i) computing receiver functions to obtain point estimates of crustal thickness and Vp/Vs ratio and (ii) jointly inverting receiver functions and surface-wave dispersion velocities from an independent tomography study to obtain S-velocity profiles at each station. This approach has been applied at all the broadband stations of the monitoring network plus 15 temporary short-period stations that reduce the inter-station spacing to ~100 km. We expect our contributions will provide the basis for producing full 3D velocity models for the Brazilian Northeast and help determine accurate locations for seismic events in the region.

  2. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed that can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architectures has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset provides a means of identifying which input variables have the greatest effect on a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only the input variables that appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
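    One common way to map a dataset to two dimensions is principal component analysis; the abstract does not name a specific reduction technique, so PCA is used here as one representative choice, in a minimal NumPy sketch:

```python
import numpy as np

def project_to_2d(X):
    """Center the (n_samples, n_features) matrix and project it onto its
    first two principal components, obtained from the SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are PCs
    return Xc @ Vt[:2].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))  # toy stand-in for a high-dimensional dataset
Y = project_to_2d(X)
print(Y.shape)  # (100, 2)
```

    The resulting two columns can be scatter-plotted and colored by the outcome variable to spot which inputs dominate the structure.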

  3. Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model.

    PubMed

    Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin

    2013-11-01

    The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of the Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as with technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated the propagation of temperature uncertainties into parasite habitat suitability models by comparing the outcomes of published models; error estimates reached 36% of the respective annual measurements, depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models of parasite lifecycles.

  4. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2014-01-01

    The presentation covers a recently developed methodology to model atmospheric turbulence as a disturbance for aero vehicle gust loads and for controls development, such as flutter and inlet shock position control. The approach models atmospheric turbulence in its natural fractional order form, which provides more accuracy than traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles, followed by the motivation and the methodology used to develop the fractional order atmospheric turbulence modeling approach. Some examples of the application of this method are also provided, followed by concluding remarks.

  5. Performance Models for the Spike Banded Linear System Solver

    DOE PAGES

    Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...

    2011-01-01

    With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners compared to state-of-the-art ILU-family preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver; (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver; (iii) we show excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations.
All of our results are validated on diverse heterogeneous multiclusters, platforms for which performance prediction is particularly challenging. Finally, we predict the scalability of the Spike algorithm on up to 65,536 cores with our model. This paper extends the results presented at the Ninth International Symposium on Parallel and Distributed Computing.
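    The Spike algorithm partitions a banded system across processors and stitches the pieces together; its sequential building block, shown here for the narrowest (tridiagonal) band, is the classic Thomas algorithm (a generic kernel for illustration, not the Truncated Spike implementation):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system Ax = d by the Thomas algorithm, with
    sub-diagonal a (a[0] unused), diagonal b, super-diagonal c (c[-1]
    unused). This is the per-partition kernel a Spike-style scheme runs
    independently on each processor before the reduced-system solve."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    Its strictly data-dependent, fixed-count loops are what make such kernels so amenable to the analytical per-phase performance characterization the paper describes.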

  6. Tweaking the Four-Component Model

    ERIC Educational Resources Information Center

    Curzer, Howard J.

    2014-01-01

    By maintaining that moral functioning depends upon four components (sensitivity, judgment, motivation, and character), the Neo-Kohlbergian account of moral functioning allows for uneven moral development within individuals. However, I argue that the four-component model does not go far enough. I offer a more accurate account of moral functioning…

  7. COMPUTATIONAL TOXICOLOGY: AN IN SILICO DOSIMETRY MODEL FOR THE ASSESSMENT OF AIR POLLUTANTS

    EPA Science Inventory

    To accurately assess the threat to human health presented by airborne contaminants, it is necessary to know the deposition patterns of particulate matter (PM) within the respiratory system. To provide a foundation for computational toxicology, we have developed an in silico model...

  8. Mark Stock | NREL

    Science.gov Websites

    He started the Boston Virtual Reality Meetup group and develops physics plugins for games and demos for a physically accurate lighting model. Second Conference on Computational Semiotics for Games and New Media.

  9. 3D gut-liver chip with a PK model for prediction of first-pass metabolism.

    PubMed

    Lee, Dong Wook; Ha, Sang Keun; Choi, Inwook; Sung, Jong Hwan

    2017-11-07

    Accurate prediction of first-pass metabolism is essential for improving the time and cost efficiency of the drug development process. Here, we have developed a microfluidic gut-liver co-culture chip that aims to reproduce the first-pass metabolism of oral drugs. The chip consists of two separate layers for gut (Caco-2) and liver (HepG2) cell lines, where cells can be co-cultured in both 2D and 3D forms. Both cell lines were maintained well in the chip, as verified by confocal microscopy and measurement of hepatic enzyme activity. We investigated the PK profile of paracetamol in the chip, and a corresponding PK model was constructed and used to predict PK profiles for different chip design parameters. Simulation results implied that a larger absorption surface area and a higher metabolic capacity are required to reproduce the in vivo PK profile of paracetamol more accurately. Our study suggests the possibility of reproducing the human PK profile on a chip, contributing to accurate prediction of the pharmacological effects of drugs.
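    A compartmental PK model of the kind described can be sketched as a small ODE system, gut lumen to liver to systemic blood, with a competing hepatic metabolism path; all rate constants below are hypothetical, not fitted chip parameters:

```python
def simulate_first_pass(dose, ka=1.0, kt=2.0, km=1.0, kel=0.3,
                        dt=0.001, t_end=24.0):
    """Forward-Euler integration of a minimal first-pass model: the gut
    lumen empties at rate ka; the liver passes drug to blood at kt while
    metabolizing it at km (first-pass loss); blood eliminates at kel.
    Returns the peak systemic amount. All constants are hypothetical."""
    gut, liver, blood = dose, 0.0, 0.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        d_gut = -ka * gut
        d_liver = ka * gut - (kt + km) * liver
        d_blood = kt * liver - kel * blood
        gut += d_gut * dt
        liver += d_liver * dt
        blood += d_blood * dt
        peak = max(peak, blood)
    return peak
```

    Raising km (metabolic capacity) lowers the systemic peak, mirroring the chip-design trade-off discussed in the abstract.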

  10. Accurate atomistic potentials and training sets for boron-nitride nanostructures

    NASA Astrophysics Data System (ADS)

    Tamblyn, Isaac

    Boron nitride nanotubes (BNNTs) exhibit exceptional structural, mechanical, and thermal properties. They are optically transparent and have high thermal stability, suggesting a wide range of opportunities for structural reinforcement of materials. Modeling can play an important role in determining the optimal approach to integrating nanotubes into a supporting matrix. Developing accurate, atomistic-scale models of such nanoscale interfaces embedded within composites is challenging, however, due to the mismatch of length scales involved. Typical nanotube diameters range from 5 to 50 nm, with lengths as large as a micron (i.e., a relevant length scale for structural reinforcement). Unlike the case for their carbon-based counterparts, well-tested and transferable interatomic force fields are not common for BNNTs. In light of this, we have developed an extensive training database of BN-rich materials under conditions relevant to BNNT synthesis and composites, based on extensive first-principles molecular dynamics simulations. Using these data, we have produced an artificial neural network potential capable of reproducing the accuracy of first-principles data at significantly reduced computational cost, allowing accurate simulation at the much larger length scales needed for composite design.

  11. Fuel Combustion and Engine Performance | Transportation Research | NREL

    Science.gov Websites

    Through modeling, simulation, and experimental validation, researchers examine what happens to fuel inside combustion chambers. Combustion and engine research activities include: developing experimental and simulation research platforms to develop and refine accurate, efficient kinetic mechanisms for fuel ignition, and investigating low-speed pre-ignition.

  12. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.

  13. Population at risk: using areal interpolation and Twitter messages to create population models for burglaries and robberies

    PubMed Central

    2018-01-01

    ABSTRACT Population at risk of crime varies due to the characteristics of a population as well as the crime generator and attractor places where crime is located. This establishes different crime opportunities for different crimes. However, there have been very few modeling efforts that derive spatiotemporal population models to allow accurate assessment of population exposure to crime. This study develops population models to depict the spatial distribution of people who have a heightened crime risk for burglaries and robberies. The data used in the study include Census data as source data for the existing population; Twitter geo-located data and locations of schools as ancillary data to redistribute the source data more accurately in space; and gridded population and crime data to evaluate the derived population models. To create the models, a density-weighted areal interpolation technique was used that disaggregates the source data into smaller spatial units considering the spatial distribution of the ancillary data. The models were evaluated with validation data that assess the interpolation error and with spatial statistics that examine their relationship with the crime types. Our approach derived population models of a finer resolution that can assist in more precise spatial crime analyses and also provide accurate information about crime rates to the public. PMID:29887766
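    The density-weighted redistribution step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the zone and cell names and the plain-dictionary layout are hypothetical; the property preserved is that each source zone's total is split among its target cells in proportion to the ancillary weights (a volume-preserving disaggregation).

```python
def density_weighted_interpolate(source_pop, zone_weights):
    """Disaggregate each source zone's population to its target cells in
    proportion to ancillary weights (e.g. Twitter message density).

    source_pop:   {zone: total population}           (hypothetical names)
    zone_weights: {zone: {cell: ancillary weight}}
    Returns {cell: estimated population}; zone totals are preserved.
    """
    cell_pop = {}
    for zone, total in source_pop.items():
        weights = zone_weights[zone]
        wsum = sum(weights.values())
        for cell, w in weights.items():
            # fall back to a uniform split when no ancillary signal exists
            share = w / wsum if wsum > 0 else 1.0 / len(weights)
            cell_pop[cell] = cell_pop.get(cell, 0.0) + total * share
    return cell_pop
```

    Because the shares within a zone sum to one, the interpolation preserves each Census total, which is what keeps the derived crime rates comparable to the source data.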

  14. The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2001-01-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and is much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two point correlation technique was used to quantify the turbulent structures. Two point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 1/2 D(sub j). Two point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to 0.71 U(sub j).
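    The two-point space-time correlation used above to extract a convection velocity can be sketched as follows. This is an illustrative reconstruction, not the author's code: the probe signals are synthetic, and the convection velocity is recovered as the probe separation divided by the time lag that maximizes the cross-correlation.

```python
import numpy as np

def convection_velocity(u1, u2, dx, dt):
    """Estimate the convection velocity of turbulent structures from two
    probe signals separated by dx: find the time lag that maximizes the
    two-point space-time correlation, then Uc = dx / lag."""
    u1 = u1 - u1.mean()
    u2 = u2 - u2.mean()
    corr = np.correlate(u2, u1, mode="full")
    lag = (np.argmax(corr) - (len(u1) - 1)) * dt   # index offset -> seconds
    return dx / lag

# synthetic "frozen turbulence" check: a pattern advected at Uc = 0.6
dt, dx, Uc = 0.01, 0.3, 0.6
t = np.arange(0.0, 20.0, dt)
u = lambda tt: np.sin(2 * np.pi * 1.3 * tt) + 0.5 * np.sin(2 * np.pi * 2.7 * tt)
probe1, probe2 = u(t), u(t - dx / Uc)   # downstream probe sees the pattern later
uc_est = convection_velocity(probe1, probe2, dx, dt)
```

    With real LES probe data the correlation peak is broader and noisier, but the same separation-over-lag estimate applies.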

  15. Item Construction Using Reflective, Formative, or Rasch Measurement Models: Implications for Group Work

    ERIC Educational Resources Information Center

    Peterson, Christina Hamme; Gischlar, Karen L.; Peterson, N. Andrew

    2017-01-01

    Measures that accurately capture the phenomenon are critical to research and practice in group work. The vast majority of group-related measures were developed using the reflective measurement model rooted in classical test theory (CTT). Depending on the construct definition and the measure's purpose, the reflective model may not always be the…

  16. Comparison of digital elevation models for aquatic data development.

    Treesearch

    Sharon Clarke; Kelly Burnett

    2003-01-01

    Thirty-meter digital elevation models (DEMs) produced by the U.S. Geological Survey (USGS) are widely available and commonly used in analyzing aquatic systems. However, these DEMs are of relatively coarse resolution, were inconsistently produced (i.e., Level 1 versus Level 2 DEMs), and lack drainage enforcement. Such issues may hamper efforts to accurately model...

  17. Finite difference elastic wave modeling with an irregular free surface using ADER scheme

    NASA Astrophysics Data System (ADS)

    Almuhaidib, Abdulaziz M.; Nafi Toksöz, M.

    2015-06-01

    In numerical modeling of seismic wave propagation in the earth, we encounter two important issues: the free surface and the topography of the surface (i.e., irregularities). In this study, we develop a 2D finite difference solver for the elastic wave equation that combines a 4th-order ADER scheme (Arbitrary high-order accuracy using DERivatives), which is widely used in aeroacoustics, with the characteristic variable method at the free surface boundary. The idea is to treat the free surface boundary explicitly by using ghost values of the solution for points beyond the free surface to impose the physical boundary condition. The method is based on the velocity-stress formulation. The ultimate goal is to develop a numerical solver for the elastic wave equation that is stable, accurate and computationally efficient. The solver treats smooth arbitrary-shaped boundaries as simple plane boundaries. The computational cost added by treating the topography is negligible compared to flat free surface because only a small number of grid points near the boundary need to be computed. In the presence of topography, using 10 grid points per shortest shear-wavelength, the solver yields accurate results. Benchmark numerical tests using several complex models that are solved by our method and other independent accurate methods show an excellent agreement, confirming the validity of the method for modeling elastic waves with an irregular free surface.

  18. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
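    The hybrid scheme described in this abstract, firefly attraction refined by Differential Evolution moves, can be sketched as below. This is a simplified illustration under assumed settings, not the published algorithm: the parameter values (beta, alpha, F), the greedy selection rule, and the toy exponential-decay fitting problem are all choices made here for brevity.

```python
import math
import random

def hybrid_firefly_de(cost, bounds, n=20, iters=100, seed=1):
    """Sketch of a Firefly/Differential-Evolution hybrid (illustrative
    settings): fireflies drift toward brighter (lower-cost) neighbours,
    then each member is refined by a DE/rand/1 mutation; candidates are
    kept only when they improve the cost."""
    rng = random.Random(seed)

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    cst = [cost(p) for p in pop]
    for _ in range(iters):
        beta, alpha, F = 0.5, 0.05, 0.8   # assumed, not the paper's values
        for i in range(n):                 # firefly attraction phase
            for j in range(n):
                if cst[j] < cst[i]:
                    cand = clip([xi + beta * (xj - xi) + alpha * rng.uniform(-1, 1)
                                 for xi, xj in zip(pop[i], pop[j])])
                    c = cost(cand)
                    if c < cst[i]:
                        pop[i], cst[i] = cand, c
        for i in range(n):                 # DE/rand/1 refinement phase
            a, b, c3 = rng.sample([k for k in range(n) if k != i], 3)
            cand = clip([pa + F * (pb - pc)
                         for pa, pb, pc in zip(pop[a], pop[b], pop[c3])])
            cv = cost(cand)
            if cv < cst[i]:
                pop[i], cst[i] = cand, cv
    best = min(range(n), key=cst.__getitem__)
    return pop[best], cst[best]

# toy nonlinear estimation problem: recover (a, b) of y = a * exp(-b * t)
ts = [0.1 * k for k in range(20)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]
sse = lambda p: sum((p[0] * math.exp(-p[1] * t) - y) ** 2 for t, y in zip(ts, ys))
params, err = hybrid_firefly_de(sse, [(0.0, 5.0), (0.0, 2.0)])
```

    The greedy acceptance rule keeps the population monotonically improving, so the firefly noise term aids exploration without ever degrading the incumbent fit.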

  19. An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172

  20. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
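    The unscented transform at the heart of this prediction step can be illustrated in one dimension. In the paper the nonlinear map is the EOL simulation itself; here a simple analytic function stands in for it, and the scaling parameter kappa is an assumed choice.

```python
import math

def unscented_transform(mean, var, f, kappa=1.0):
    """1-D unscented transform: three deterministically chosen sigma
    points approximate the mean and variance of f(x) for x ~ N(mean, var)
    using only three evaluations of f (standing in here for the EOL
    simulation of the paper)."""
    n = 1                                  # state dimension
    sigma = math.sqrt((n + kappa) * var)   # sigma-point spread
    pts = [mean, mean + sigma, mean - sigma]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    ys = [f(p) for p in pts]
    y_mean = w0 * ys[0] + wi * (ys[1] + ys[2])
    y_var = (w0 * (ys[0] - y_mean) ** 2
             + wi * ((ys[1] - y_mean) ** 2 + (ys[2] - y_mean) ** 2))
    return y_mean, y_var
```

    For a linear map the result is exact, and for mildly nonlinear maps the mean is matched to second order, which is why a handful of EOL simulations can stand in for a full Monte Carlo run.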

  1. POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Ştefănescu, R.; Sandu, A.; Navon, I. M.

    2015-08-01

    This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time POD, tensorial POD, and discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
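    A POD basis of the kind used here is conventionally obtained from the SVD of a snapshot matrix. The sketch below is an illustration, not the authors' code; the energy-based truncation rule and the synthetic test data are assumptions made for the example.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Build a POD basis from a snapshot matrix (columns = states at
    different times): keep the left singular vectors, truncated once the
    captured energy (cumulative squared singular values) reaches 1 - tol."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

# snapshots that span a 2-dimensional subspace of R^50 (synthetic check)
rng = np.random.default_rng(0)
modes = rng.standard_normal((50, 2))
X = modes @ rng.standard_normal((2, 30))
Phi = pod_basis(X)                # reduced orthonormal basis
X_rec = Phi @ (Phi.T @ X)         # Galerkin-style projection back onto it
```

    Projecting the full-order operators onto Phi is what yields the reduced forward and adjoint models whose accuracy the paper shows to be decisive.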

  2. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with these four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constrained condition. GA was then used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  3. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with these four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constrained condition. GA was then used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  4. Advances in Scientific Balloon Thermal Modeling

    NASA Technical Reports Server (NTRS)

    Bohaboj, T.; Cathey, H. M., Jr.

    2004-01-01

    The National Aeronautics and Space Administration's Balloon Program office has long acknowledged that the accurate modeling of balloon performance and flight prediction is dependent on how well the balloon is thermally modeled. This ongoing effort is focused on developing accurate balloon thermal models that can be used to quickly predict balloon temperatures and balloon performance. The ability to model parametric changes is also a driver for this effort. This paper will present the most recent advances made in this area. This research effort continues to utilize the "Thermal Desktop" addition to AutoCAD for the modeling. Recent advances have been made by using this analytical tool. A number of analyses have been completed to test the applicability of this tool to the problem with very positive results. Progressively detailed models have been developed to explore the capabilities of the tool as well as to provide guidance in model formulation. A number of parametric studies have been completed. These studies have varied the shape of the structure, material properties, environmental inputs, and model geometry. These studies have concentrated on spherical "proxy models" for the initial development stages and then transitioned to the natural-shaped zero pressure and super pressure balloons. An assessment of required model resolution has also been determined. Model solutions have been cross-checked with known solutions via hand calculations. The comparison of these cases will also be presented. One goal is to develop analysis guidelines and an approach for modeling balloons for both simple first order estimates and detailed full models. This paper presents the step-by-step advances made as part of this effort, its capabilities and limitations, and the lessons learned. Also presented are the plans for further thermal modeling work.

  5. Development and validation of a stochastic model for potential growth of Listeria monocytogenes in naturally contaminated lightly preserved seafood.

    PubMed

    Mejlholm, Ole; Bøknæs, Niels; Dalgaard, Paw

    2015-02-01

    A new stochastic model for the simultaneous growth of Listeria monocytogenes and lactic acid bacteria (LAB) was developed and validated on data from naturally contaminated samples of cold-smoked Greenland halibut (CSGH) and cold-smoked salmon (CSS). During industrial processing, acetic and/or lactic acids were added to these samples. The stochastic model was developed from an existing deterministic model including the effect of 12 environmental parameters and microbial interaction (O. Mejlholm and P. Dalgaard, Food Microbiology, submitted for publication). Observed maximum population density (MPD) values of L. monocytogenes in naturally contaminated samples of CSGH and CSS were accurately predicted by the stochastic model based on measured variability in product characteristics and storage conditions. Results comparable to those from the stochastic model were obtained when product characteristics of the least and most preserved samples of CSGH and CSS were used as input for the existing deterministic model. For both modelling approaches, it was shown that lag time and the effect of microbial interaction need to be included to accurately predict MPD values of L. monocytogenes. Addition of organic acids to CSGH and CSS was confirmed as a suitable mitigation strategy against the risk of growth of L. monocytogenes, as both types of products were in compliance with the EU regulation on ready-to-eat foods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Thermal infrared data of active lava surfaces using a newly-developed camera system

    NASA Astrophysics Data System (ADS)

    Thompson, J. O.; Ramsey, M. S.

    2017-12-01

    Our ability to acquire accurate data during lava flow emplacement greatly improves models designed to predict their dynamics and down-flow hazard potential. For example, better constraint on the physical property of emissivity as a lava cools improves the accuracy of the derived temperature, a critical parameter for flow models that estimate at-vent eruption rate, flow length, and distribution. Thermal infrared (TIR) data are increasingly used as a tool to determine eruption styles and cooling regimes by measuring temperatures at high temporal resolutions. Factors that control the accurate measurement of surface temperatures include both material properties (e.g., emissivity and surface texture) as well as external factors (e.g., camera geometry and the intervening atmosphere). We present a newly-developed, field-portable miniature multispectral thermal infrared camera (MMT-Cam) to measure both temperature and emissivity of basaltic lava surfaces at up to 7 Hz. The MMT-Cam acquires emitted radiance in six wavelength channels in addition to the broadband temperature. The instrument was laboratory calibrated for systematic errors and fully field tested at the Overlook Crater lava lake (Kilauea, HI) in January 2017. The data show that the major emissivity absorption feature (around 8.5 to 9.0 µm) shifts to longer wavelengths and the depth of the feature decreases as a lava surface cools, forming a progressively thicker crust. This transition occurs over a temperature range of 758 to 518 K. Constraining the relationship between this spectral change and temperature derived from these data will provide more accurate temperatures and, therefore, more accurate modeling results. This is the first time that emissivity and its link to temperature has been measured in situ on active lava surfaces, which will improve input parameters of flow propagation models and possibly improve flow forecasting.

  7. Monitoring of Batch Industrial Crystallization with Growth, Nucleation, and Agglomeration. Part 1: Modeling with Method of Characteristics.

    PubMed

    Porru, Marcella; Özkan, Leyla

    2017-05-24

    This paper develops a new simulation model for crystal size distribution dynamics in industrial batch crystallization. The work is motivated by the necessity of accurate prediction models for online monitoring purposes. The proposed numerical scheme is able to handle growth, nucleation, and agglomeration kinetics by means of the population balance equation and the method of characteristics. The former offers a detailed description of the solid phase evolution, while the latter provides an accurate and efficient numerical solution. In particular, the accuracy of the prediction of the agglomeration kinetics, which cannot be ignored in industrial crystallization, has been assessed by comparing it with solutions in the literature. The efficiency of the solution has been tested on a simulation of a seeded flash cooling batch process. Since the proposed numerical scheme can accurately simulate the system behavior more than a hundred times faster than the batch duration, it is suitable for online applications such as process monitoring tools based on state estimators.
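    For the growth-and-nucleation part of the population balance, the method of characteristics reduces to moving each size-grid node along dL/dt = G while the number density stays constant on its characteristic. The sketch below illustrates only that idea for a size-independent growth rate; agglomeration, which the paper also handles, is deliberately omitted, and all names are illustrative.

```python
def advance_csd(sizes, numbers, growth_rate, dt, nucleation_rate=0.0,
                nucleus_size=0.0):
    """One time step of a growth + nucleation population balance via the
    method of characteristics (size-independent growth rate G): along each
    characteristic dL/dt = G the crystal count is unchanged, so the size
    grid simply shifts by G*dt; nucleation injects a new characteristic
    at the nucleus size. Agglomeration is omitted in this sketch."""
    new_sizes = [L + growth_rate * dt for L in sizes]
    new_numbers = list(numbers)
    if nucleation_rate > 0.0:
        new_sizes.insert(0, nucleus_size)
        new_numbers.insert(0, nucleation_rate * dt)
    return new_sizes, new_numbers
```

    Because each characteristic is tracked exactly, this avoids the numerical diffusion that a fixed-grid finite difference discretization of the population balance would introduce.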

  8. Monitoring of Batch Industrial Crystallization with Growth, Nucleation, and Agglomeration. Part 1: Modeling with Method of Characteristics

    PubMed Central

    2017-01-01

    This paper develops a new simulation model for crystal size distribution dynamics in industrial batch crystallization. The work is motivated by the necessity of accurate prediction models for online monitoring purposes. The proposed numerical scheme is able to handle growth, nucleation, and agglomeration kinetics by means of the population balance equation and the method of characteristics. The former offers a detailed description of the solid phase evolution, while the latter provides an accurate and efficient numerical solution. In particular, the accuracy of the prediction of the agglomeration kinetics, which cannot be ignored in industrial crystallization, has been assessed by comparing it with solutions in the literature. The efficiency of the solution has been tested on a simulation of a seeded flash cooling batch process. Since the proposed numerical scheme can accurately simulate the system behavior more than a hundred times faster than the batch duration, it is suitable for online applications such as process monitoring tools based on state estimators. PMID:28603342

  9. Modeling of edge effect in subaperture tool influence functions of computer controlled optical surfacing.

    PubMed

    Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min

    2016-12-20

    Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpieces, the TIF has a nonlinear removal behavior, which will cause a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on the finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate for the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.

  10. End-of-Discharge and End-of-Life Prediction in Lithium-Ion Batteries with Electrochemistry-Based Aging Models

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Kulkarni, Chetan S.

    2016-01-01

    As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.

  11. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    DOE PAGES

    Tao, Jianmin; Rappe, Andrew M.

    2016-01-20

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. As a result, inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.

  12. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model with several normal distribution functions, including the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
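    The branch-and-bound idea can be illustrated with a deliberately simplified 1-D variant: instead of the paper's linear-programming and interval-analysis bounds, a Lipschitz constant supplies the lower bound on each box, and boxes that cannot beat the incumbent are pruned. All parameter choices here are illustrative, not the published method.

```python
import heapq

def branch_and_bound(f, lo, hi, lipschitz, tol=1e-4):
    """Global 1-D minimization by branch and bound. On each box the
    midpoint value minus lipschitz * width / 2 is a valid lower bound;
    boxes whose bound cannot beat the incumbent by tol are pruned, and
    the rest are bisected."""
    best_x, best_f = lo, f(lo)
    for x in (hi, 0.5 * (lo + hi)):
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    heap = [(f(0.5 * (lo + hi)) - lipschitz * (hi - lo) / 2, lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound > best_f - tol:
            continue                      # prune: box cannot beat incumbent
        m = 0.5 * (a + b)
        for a2, b2 in ((a, m), (m, b)):   # branch: bisect the box
            m2 = 0.5 * (a2 + b2)
            fm = f(m2)
            if fm < best_f:
                best_x, best_f = m2, fm
            lb = fm - lipschitz * (b2 - a2) / 2
            if lb < best_f - tol:
                heapq.heappush(heap, (lb, a2, b2))
    return best_x, best_f
```

    Unlike a local optimizer, the result does not depend on an initial guess, which is exactly the property the paper exploits for nonlinear BRDF fitting.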

  13. Semiautomated model building for RNA crystallography using a directed rotameric approach.

    PubMed

    Keating, Kevin S; Pyle, Anna Marie

    2010-05-04

    Structured RNA molecules play essential roles in a variety of cellular processes; however, crystallographic studies of such RNA molecules present a large number of challenges. One notable complication arises from the low resolutions typical of RNA crystallography, which results in electron density maps that are imprecise and difficult to interpret. This problem is exacerbated by the lack of computational tools for RNA modeling, as many of the techniques commonly used in protein crystallography have no equivalents for RNA structure. This leads to difficulty and errors in the model building process, particularly in modeling of the RNA backbone, which is highly error prone due to the large number of variable torsion angles per nucleotide. To address this, we have developed a method for accurately building the RNA backbone into maps of intermediate or low resolution. This method is semiautomated, as it requires a crystallographer to first locate phosphates and bases in the electron density map. After this initial trace of the molecule, however, an accurate backbone structure can be built without further user intervention. To accomplish this, backbone conformers are first predicted using RNA pseudotorsions and the base-phosphate perpendicular distance. Detailed backbone coordinates are then calculated to conform both to the predicted conformer and to the previously located phosphates and bases. This technique is shown to produce accurate backbone structure even when starting from imprecise phosphate and base coordinates. A program implementing this methodology is currently available, and a plugin for the Coot model building program is under development.

  14. Thinking Like a Human

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Accurate Automation Corporation (AAC) of Chattanooga, TN, developed a neural network processor (NNP) for use onboard the NASA- and Air Force-sponsored LoFLYTE aircraft. The processor is modeled after connections in the brain.

  15. Accurate and self-consistent procedure for determining pH in seawater desalination brines and its manifestation in reverse osmosis modeling.

    PubMed

    Nir, Oded; Marvin, Esra; Lahav, Ori

    2014-11-01

    Measuring and modeling pH in concentrated aqueous solutions in an accurate and consistent manner is of paramount importance to many R&D and industrial applications, including RO desalination. Nevertheless, unified definitions and standard procedures have yet to be developed for solutions with ionic strength higher than ∼0.7 M, and implementation of conventional pH determination approaches may lead to significant errors. In this work a systematic yet simple methodology for measuring pH in concentrated solutions (dominated by Na(+)/Cl(-)) was developed and evaluated, with the aim of achieving consistency with the Pitzer ion-interaction approach. Results indicate that the addition of 0.75 M of NaCl to NIST buffers, followed by assigning a new standard pH (calculated based on the Pitzer approach), reduced measurement errors to below 0.03 pH units in seawater RO brines (ionic strength up to 2 M). To facilitate its use, the method was developed to be both conceptually and practically analogous to the conventional pH measurement procedure. The method was used to measure the pH of seawater RO retentates obtained at varying recovery ratios. The results showed better agreement with the pH values predicted by an accurate RO transport model. Calibrating the model with the measured pH values enabled better prediction of boron transport. A Donnan-induced phenomenon, affecting pH in both retentate and permeate streams, was identified and quantified. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Effects of soil moisture on the diurnal pattern of pesticide emission: Numerical simulation and sensitivity analysis

    USDA-ARS?s Scientific Manuscript database

    Accurate prediction of pesticide volatilization is important for the protection of human and environmental health. Due to the complexity of the volatilization process, sophisticated predictive models are needed, especially for dry soil conditions. A mathematical model was developed to allow simulati...

  17. Development of cropland management dataset to support U.S. SWAT assessments

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool (SWAT) is a widely used hydrologic/water quality simulation model in the U.S. Process-based models like SWAT require a great deal of data to accurately represent the natural world, including topography, landuse, soils, weather, and management. With the exception ...

  18. Methods for estimating population density in data-limited areas: evaluating regression and tree-based models in Peru.

    PubMed

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In the absence of direct population data, small area estimation models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.

  19. Multi-Innovation Gradient Iterative Locally Weighted Learning Identification for A Nonlinear Ship Maneuvering System

    NASA Astrophysics Data System (ADS)

    Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan

    2018-06-01

    This paper explores a highly accurate identification modeling approach for ship maneuvering motion using full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it can avoid the unmodeled dynamics and multicollinearity inherent to the conventional parametric model; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter value and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.

  20. Model improvements to simulate charging in SEM

    NASA Astrophysics Data System (ADS)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements to the charging models mainly focus on redistribution of the charge carriers in the material through electron-beam-induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  1. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  2. Methods for Estimating Population Density in Data-Limited Areas: Evaluating Regression and Tree-Based Models in Peru

    PubMed Central

    Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William

    2014-01-01

    Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In the absence of direct population data, small area estimation models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657

  3. New Equation of State Models for Hydrodynamic Applications

    NASA Astrophysics Data System (ADS)

    Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.

    1997-07-01

    Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.

  4. Development of a relationship between external measurements and reinforcement stress

    NASA Astrophysics Data System (ADS)

    Brault, Andre; Hoult, Neil A.; Lees, Janet M.

    2015-03-01

    As many countries around the world face an aging infrastructure crisis, there is an increasing need to develop more accurate monitoring and assessment techniques for reinforced concrete structures. One of the challenges associated with assessing existing infrastructure is correlating externally measured parameters such as crack widths and surface strains with reinforcement stresses, as this relationship depends on a number of variables. The current research investigates how the use of distributed fiber optic sensors to measure reinforcement strain can be correlated with digital image correlation measurements of crack widths to relate external crack width measurements to reinforcement stresses. An initial set of experiments was undertaken involving a series of small-scale beam specimens tested in three-point bending with variable reinforcement properties. Relationships between crack widths and internal reinforcement strains were observed, including that both the diameter and number of bars affected the measured maximum strain and crack width. A model that uses measured crack width to estimate reinforcement strain was presented and compared to the experimental results. The model was found to provide accurate estimates of load-carrying capacity for a given crack width; however, it was potentially less accurate when crack widths were used to estimate the experimental reinforcement strains. The need for more experimental data to validate the conclusions of this research was also highlighted.
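    As a rough illustration of the external-measurement-to-stress link the paper investigates (not its calibrated model), a crack width can be read as mean reinforcement strain times crack spacing, w ≈ s_r · ε_sm, so an assumed spacing converts a measured width into a steel stress estimate. All numbers below are hypothetical.

```python
# Back-of-envelope version of the crack-width/steel-stress link (not the
# paper's calibrated model): w ~ s_r * (eps_sm - eps_cm), so given a measured
# crack width and an assumed crack spacing, steel stress ~ E_s * strain.
E_S = 200_000.0           # steel elastic modulus, MPa

def steel_stress(w_mm, spacing_mm, eps_cm=0.0):
    eps_sm = w_mm / spacing_mm + eps_cm   # mean steel strain between cracks
    return E_S * eps_sm                   # stress in MPa

sigma = steel_stress(w_mm=0.20, spacing_mm=150.0)
print(sigma)  # roughly 267 MPa for these assumed inputs
```

    This is exactly the kind of one-line conversion whose hidden variables (crack spacing, concrete tension stiffening via eps_cm) make the inverse problem in the paper harder than it looks.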

  5. Computer Simulation of Microwave Devices

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1997-01-01

    The accurate simulation of cold-test results including dispersion, on-axis beam interaction impedance, and attenuation of a helix traveling-wave tube (TWT) slow-wave circuit using the three-dimensional code MAFIA (Maxwell's Equations Solved by the Finite Integration Algorithm) was demonstrated for the first time. Obtaining these results is a critical step in the design of TWT's. A well-established procedure to acquire these parameters is to actually build and test a model or a scale model of the circuit. However, this procedure is time-consuming and expensive, and it limits freedom to examine new variations to the basic circuit. These limitations make the need for computational methods crucial since they can lower costs, reduce tube development time, and lessen limitations on novel designs. Computer simulation has been used to accurately obtain cold-test parameters for several slow-wave circuits. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. A new computer modeling technique developed at the NASA Lewis Research Center overcomes these difficulties. The MAFIA three-dimensional mesh for a C-band helix slow-wave circuit is shown.

  6. AN INVERSE MODELING APPROACH FOR STRESS ESTIMATION IN MITRAL VALVE ANTERIOR LEAFLET VALVULOPLASTY FOR IN-VIVO VALVULAR BIOMATERIAL ASSESSMENT

    PubMed Central

    Lee, Chung-Hao; Amini, Rouzbeh; Gorman, Robert C.; Gorman, Joseph H.; Sacks, Michael S.

    2013-01-01

    Estimation of regional tissue stresses in the functioning heart valve remains an important goal in our understanding of normal valve function and in developing novel engineered tissue strategies for valvular repair and replacement. Methods to accurately estimate regional tissue stresses are thus needed for this purpose, and in particular to develop accurate, statistically informed means to validate computational models of valve function. Moreover, there exists no currently accepted method to evaluate engineered heart valve tissues and replacement heart valve biomaterials undergoing valvular stresses in blood contact. While we have utilized mitral valve anterior leaflet valvuloplasty as an experimental approach to address this limitation, robust computational techniques to estimate implant stresses are required. In the present study, we developed a novel numerical analysis approach for estimation of the in-vivo stresses of the central region of the mitral valve anterior leaflet (MVAL) delimited by a sonocrystal transducer array. The in-vivo material properties of the MVAL were simulated using an inverse FE modeling approach based on three pseudo-hyperelastic constitutive models: the neo-Hookean, exponential-type isotropic, and full collagen-fiber mapped transversely isotropic models. A series of numerical replications with varying structural configurations were developed by incorporating measured statistical variations in MVAL local preferred fiber directions and fiber splay. These model replications were then used to investigate how known variations in the valve tissue microstructure influence the estimated ROI stresses and its variation at each time point during a cardiac cycle. Simulations were also able to include estimates of the variation in tissue stresses for an individual specimen dataset over the cardiac cycle. 
Of the three material models, the transversely anisotropic model produced the most accurate results, with ROI averaged stresses at the fully-loaded state of 432.6±46.5 kPa and 241.4±40.5 kPa in the radial and circumferential directions, respectively. We conclude that the present approach can provide robust instantaneous mean and variation estimates of tissue stresses of the central regions of the MVAL. PMID:24275434

  7. A model of solar energetic particles for use in calculating LET spectra developed from ONR-604 data

    NASA Technical Reports Server (NTRS)

    Chen, J.; Chenette, D.; Guzik, T. G.; Garcia-Munoz, M.; Pyle, K. R.; Sang, Y.; Wefel, J. P.

    1994-01-01

    A model of solar energetic particles (SEP) has been developed and is applied to solar flares during the 1990/1991 CRRES mission using data measured by the University of Chicago instrument, ONR-604. The model includes the time-dependent behavior, heavy-ion content, energy spectrum and fluence, and can accurately represent the observed SEP events in the energy range between 40 and 500 MeV/nucleon. Results are presented for the March and June 1991 flare periods.

  8. Analytical prediction of digital signal crosstalk of FCC

    NASA Technical Reports Server (NTRS)

    Belleisle, A. P.

    1972-01-01

    The results are presented of a study effort aimed at developing accurate means of analyzing and predicting signal crosstalk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, the minimum set of parameter measurements required to apply the model is presented. Comparisons between crosstalk predicted by this model and actual measured crosstalk are shown for a six-conductor ribbon cable.

  9. Modeling of Pedestrian Flows Using Hybrid Models of Euler Equations and Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Bärwolff, Günter; Slawig, Thomas; Schwandt, Hartmut

    2007-09-01

    In recent years various systems have been developed for controlling, planning and predicting the traffic of persons and vehicles, in particular under security aspects. Going beyond pure counting and statistical models, approaches based on well-known concepts originally developed in very different research areas, namely continuum mechanics and computer science, were found to be very adequate and accurate. In the present paper, we outline a continuum mechanical approach for the description of pedestrian flow.

  10. Forecasting of municipal solid waste quantity in a developing country using multivariate grey models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Intharathirat, Rotchana, E-mail: rotchana.in@gmail.com; Abdul Salam, P., E-mail: salam@ait.ac.th; Kumar, S., E-mail: kumar@ait.ac.th

    Highlights: • Grey model can be used to forecast MSW quantity accurately with the limited data. • Prediction interval overcomes the uncertainty of MSW forecast effectively. • A multivariate model gives accuracy associated with factors affecting MSW quantity. • Population, urbanization, employment and household size play a role in MSW quantity. - Abstract: In order to plan, manage and use municipal solid waste (MSW) in a sustainable way, accurate forecasting of MSW generation and composition plays a key role. It is difficult to carry out reliable estimates using the existing models due to the limited data available in developing countries. This study aims to forecast the MSW collected in Thailand, with a prediction interval, over the long term by using an optimized multivariate grey model, a mathematical approach. For the multivariate models, the representative factors of the residential and commercial sectors affecting the waste collected are identified, classified and quantified based on statistics and the mathematics of grey system theory. Results show that GMC (1, 5), the grey model with convolution integral, is the most accurate, with the least error of 1.16% MAPE. MSW collected would increase by 1.40% per year, from 43,435–44,994 tonnes per day in 2013 to 55,177–56,735 tonnes per day in 2030. This model also illustrates that population density is the most important factor affecting MSW collected, followed by urbanization, employment proportion and household size, respectively. This suggests that the representative factors of the commercial sector may affect MSW collected more than those of the residential sector. The results can help decision makers to develop measures and policies for long-term waste management.
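    The paper's GMC (1, 5) model convolves four driving factors and is beyond a short sketch, but the underlying grey-forecasting recipe can be illustrated with the univariate GM(1,1): accumulate the series (1-AGO), fit the whitening equation by least squares, then invert. The series below is hypothetical, not the Thai MSW data.

```python
import math

# Hypothetical univariate sketch of grey forecasting (GM(1,1)); the paper's
# GMC(1,5) additionally convolves four driving factors, omitted here.
x0 = [40.0, 41.2, 42.4, 43.7, 45.0]   # assumed MSW series (kilotonnes/day)
n = len(x0)

# 1-AGO (accumulated generating operation) and background values
x1 = [sum(x0[: k + 1]) for k in range(n)]
z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]

# Least-squares estimate of (a, b) in x0[k] + a*z1[k] = b (2x2 normal equations)
s_zz = sum(z * z for z in z1); s_z = sum(z1)
s_zy = sum(z * y for z, y in zip(z1, x0[1:])); s_y = sum(x0[1:])
m = n - 1
det = s_zz * m - s_z * s_z
a = (s_z * s_y - m * s_zy) / det       # development coefficient (<0 => growth)
b = (s_zz * s_y - s_z * s_zy) / det    # grey input

# Whitening-equation solution plus inverse AGO gives fitted/forecast values
def x0_hat(k):
    if k == 0:
        return x0[0]
    f = lambda j: (x0[0] - b / a) * math.exp(-a * j) + b / a
    return f(k) - f(k - 1)

mape = 100 / n * sum(abs(x0_hat(k) - x0[k]) / x0[k] for k in range(n))
forecast = x0_hat(n)   # one step ahead
```

    On a near-exponential series like this one, GM(1,1) fits with a sub-percent MAPE; the paper's MAPE of 1.16% for GMC (1, 5) plays the same reporting role.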

  11. Automated MRI segmentation for individualized modeling of current flow in the human head.

    PubMed

    Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C

    2013-12-01

    High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. 
Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.

  12. Electrochemical carbon dioxide concentrator: Math model

    NASA Technical Reports Server (NTRS)

    Marshall, R. D.; Schubert, F. H.; Carlson, J. N.

    1973-01-01

    A steady state computer simulation model of an Electrochemical Depolarized Carbon Dioxide Concentrator (EDC) has been developed. The mathematical model combines EDC heat and mass balance equations with empirical correlations derived from experimental data to describe EDC performance as a function of the operating parameters involved. The model is capable of accurately predicting performance over EDC operating ranges. Model simulation results agree with the experimental data obtained over the prediction range.

  13. Risk Prediction Models for Acute Kidney Injury in Critically Ill Patients: Opus in Progressu.

    PubMed

    Neyra, Javier A; Leaf, David E

    2018-05-31

    Acute kidney injury (AKI) is a complex systemic syndrome associated with high morbidity and mortality. Among critically ill patients admitted to intensive care units (ICUs), the incidence of AKI is as high as 50% and is associated with dismal outcomes. Thus, the development and validation of clinical risk prediction tools that accurately identify patients at high risk for AKI in the ICU is of paramount importance. We provide a comprehensive review of 3 clinical risk prediction tools that have been developed for incident AKI occurring in the first few hours or days following admission to the ICU. We found substantial heterogeneity among the clinical variables that were examined and included as significant predictors of AKI in the final models. The area under the receiver operating characteristic curves was ∼0.8 for all 3 models, indicating satisfactory model performance, though positive predictive values ranged from only 23 to 38%. Hence, further research is needed to develop more accurate and reproducible clinical risk prediction tools. Strategies for improved assessment of AKI susceptibility in the ICU include the incorporation of dynamic (time-varying) clinical parameters, as well as biomarker, functional, imaging, and genomic data. © 2018 S. Karger AG, Basel.
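    The gap the review highlights (AUROC around 0.8 but PPV of only 23 to 38%) is largely a prevalence effect, which a two-line Bayes calculation makes concrete. The sensitivity, specificity, and prevalence values below are assumptions for illustration, not figures taken from the reviewed models.

```python
# Illustrative only: why a model with decent discrimination can still have a
# modest PPV. All operating-point numbers here are assumed, not from the review.
def ppv(sens, spec, prev):
    tp = sens * prev              # true-positive mass
    fp = (1 - spec) * (1 - prev)  # false-positive mass
    return tp / (tp + fp)

def npv(sens, spec, prev):
    tn = spec * (1 - prev)
    fn = (1 - sens) * prev
    return tn / (tn + fn)

low = ppv(0.75, 0.75, 0.15)    # same test at a 15% event rate
high = ppv(0.75, 0.75, 0.40)   # same test in a richer case mix
print(low, high)
```

    At 15% event prevalence, a test with 75% sensitivity and specificity yields a PPV near 0.35, squarely in the reported 23 to 38% range, while its NPV stays above 0.94; the same test at 40% prevalence has a PPV near 0.67.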

  14. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Since the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
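    A drastically simplified, hypothetical version of the idea (a scalar Kalman filter with a linear OCV curve, no RC branch and no PI correction) still shows why the voltage update matters: coulomb counting alone integrates any current-sensor bias, while the voltage innovation pulls the estimate back toward the truth.

```python
# Minimal scalar-Kalman sketch of voltage-corrected SOC estimation. The paper
# uses a first-order RC model plus an EKF with PI-based error adjustment; this
# toy uses an assumed linear OCV curve and a biased current sensor instead.
Q_CAP = 3600.0                        # cell capacity, A*s (assumed)
DT, I_TRUE, I_MEAS = 1.0, 1.0, 1.1    # 10% current-sensor bias
ocv = lambda soc: 3.0 + 1.0 * soc     # hypothetical linear OCV(SOC), volts

soc_true = soc_cc = soc_kf = 1.0
P, q_proc, r_meas, H = 1e-4, 1e-7, 1e-4, 1.0
for _ in range(1800):                 # discharge for half the capacity
    soc_true -= I_TRUE * DT / Q_CAP
    soc_cc   -= I_MEAS * DT / Q_CAP   # coulomb counting alone drifts
    # Kalman predict with the same biased current, then correct with voltage
    soc_kf   -= I_MEAS * DT / Q_CAP
    P += q_proc
    K = P * H / (H * H * P + r_meas)
    soc_kf += K * (ocv(soc_true) - ocv(soc_kf))  # noise-free voltage here
    P *= (1 - K * H)
```

    After 1800 s the true SOC is 0.5; pure coulomb counting is off by about 0.05 from the bias, while the voltage-corrected estimate stays within roughly a milli-SOC, which is the qualitative behavior the paper's PI-augmented EKF builds on.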

  15. Electrochemical carbon dioxide concentrator subsystem math model. [for manned space station

    NASA Technical Reports Server (NTRS)

    Marshall, R. D.; Carlson, J. N.; Schubert, F. H.

    1974-01-01

    A steady state computer simulation model has been developed to describe the performance of a total six man, self-contained electrochemical carbon dioxide concentrator subsystem built for the space station prototype. The math model combines expressions describing the performance of the electrochemical depolarized carbon dioxide concentrator cells and modules previously developed with expressions describing the performance of the other major CS-6 components. The model is capable of accurately predicting CS-6 performance over EDC operating ranges and the computer simulation results agree with experimental data obtained over the prediction range.

  16. Dynamics Simulation Model for Space Tethers

    NASA Technical Reports Server (NTRS)

    Levin, E. M.; Pearson, J.; Oldson, J. C.

    2006-01-01

    This document describes the development of an accurate model for the dynamics of the Momentum Exchange Electrodynamic Reboost (MXER) system. The MXER is a rotating tether about 100-km long in elliptical Earth orbit designed to catch payloads in low Earth orbit and throw them to geosynchronous orbit or to Earth escape. To ensure successful rendezvous between the MXER tip catcher and a payload, a high-fidelity model of the system dynamics is required. The model developed here quantifies the major environmental perturbations, and can predict the MXER tip position to within meters over one orbit.

  17. Modeling and control design of a wind tunnel model support

    NASA Technical Reports Server (NTRS)

    Howe, David A.

    1990-01-01

    The 12-Foot Pressure Wind Tunnel at Ames Research Center is being restored. A major part of the restoration is the complete redesign of the aircraft model supports and their associated control systems. An accurate trajectory control servo system capable of positioning a model (with no measurable overshoot) is needed. Extremely small errors in scaled-model pitch angle can increase airline fuel costs for the final aircraft configuration by millions of dollars. In order to make a mechanism sufficiently accurate in pitch, a detailed structural and control-system model must be created and then simulated on a digital computer. The model must contain linear representations of the mechanical system, including masses, springs, and damping in order to determine system modes. Electrical components, both analog and digital, linear and nonlinear must also be simulated. The model of the entire closed-loop system must then be tuned to control the modes of the flexible model-support structure. The development of a system model, the control modal analysis, and the control-system design are discussed.

  18. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    NASA Astrophysics Data System (ADS)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing the flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model was developed because current computationally low-cost bearing models, owing to their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally inexpensive and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  19. Millimeter wave satellite communication studies. Results of the 1981 propagation modeling effort

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Tsolakis, A.; Dishman, W. K.

    1982-01-01

    Theoretical modeling associated with rain effects on millimeter wave propagation is detailed. Three areas of work are discussed. A simple model for prediction of rain attenuation is developed and evaluated. A method for computing scattering from single rain drops is presented. A complete multiple scattering model is described which permits accurate calculation of the effects on dual polarized signals passing through rain.

  20. Computational Modeling System for Deformation and Failure in Polycrystalline Metals

    DTIC Science & Technology

    2009-03-29

    Report contents include: microstructure characterization (FIB/EBSD); the Voronoi Cell FEM for micromechanical modeling; VCFEM for microstructural damage modeling; and adaptive multiscale simulations. The work develops an accurate and efficient image-based micromechanical finite element model for crystal plasticity and damage, incorporating real morphological topology with evolving strain localization and damage, together with multi-scaling algorithms in the time domain for compression and localization.

  1. Estimating the spatial distribution of wintering little brown bat populations in the eastern United States

    USGS Publications Warehouse

    Russell, Robin E.; Tinsley, Karl; Erickson, Richard A.; Thogmartin, Wayne E.; Szymanski, Jennifer A.

    2014-01-01

    Depicting the spatial distribution of wildlife species is an important first step in developing management and conservation programs for particular species. Accurate representation of a species' distribution is important for predicting the effects of climate change, land-use change, management activities, disease, and other landscape-level processes on wildlife populations. We developed models to estimate the spatial distribution of little brown bat (Myotis lucifugus) wintering populations in the United States east of the 100th meridian, based on known hibernacula locations. From these data, we developed several scenarios of wintering population counts per county that incorporated uncertainty in the spatial distribution of the hibernacula as well as uncertainty in the size of the current little brown bat population. We assessed the variability in our results arising from these sources of uncertainty. Despite considerable uncertainty in the known locations of overwintering little brown bats in the eastern United States, we believe that models accurately depicting the effects of that uncertainty are useful for making management decisions, as these models are a coherent organization of the best available information.

  2. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
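A Weibull-type saccharification model of the kind the abstract describes can be sketched as a three-parameter curve fit. The exact parameterization used by the authors is not given here, so the functional form Y(t) = Y_max(1 - exp(-(t/λ)^n)) and all data below are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_saccharification(t, y_max, lam, n):
    """Weibull-type hydrolysis curve: yield rises toward y_max;
    lam is the characteristic time, n the shape parameter."""
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# Synthetic saccharification time course (hypothetical, for illustration)
t = np.array([2, 6, 12, 24, 48, 72, 96], dtype=float)  # hours
y_true = weibull_saccharification(t, 85.0, 20.0, 0.8)   # % glucose yield
rng = np.random.default_rng(0)
y_obs = y_true + rng.normal(0, 1.0, size=t.size)        # add measurement noise

popt, _ = curve_fit(weibull_saccharification, t, y_obs, p0=[80.0, 15.0, 1.0])
y_max_hat, lam_hat, n_hat = popt
print(f"lambda = {lam_hat:.1f} h (overall saccharification performance)")
```

A smaller fitted λ indicates a faster-performing enzyme/substrate system, which is the comparison the abstract proposes λ for.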

  3. Quantum mechanical study of solvent effects in a prototype S{sub N}2 reaction in solution: Cl{sup −} attack on CH{sub 3}Cl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuechler, Erich R.; Department of Chemistry, University of Minnesota, Minneapolis, Minnesota 55455-0431; York, Darrin M., E-mail: york@biomaps.rutgers.edu

    2014-02-07

    The nucleophilic attack of a chloride ion on methyl chloride is an important prototype S{sub N}2 reaction in organic chemistry that is known to be sensitive to the effects of the surrounding solvent. Herein, we develop a highly accurate Specific Reaction Parameter (SRP) model based on the Austin Model 1 Hamiltonian for chlorine to study the effects of solvation into an aqueous environment on the reaction mechanism. To accomplish this task, we apply high-level quantum mechanical calculations to study the reaction in the gas phase and combined quantum mechanical/molecular mechanical simulations with TIP3P and TIP4P-ew water models, and the resulting free energy profiles are compared with those determined from simulations using other fast semi-empirical quantum models. Both gas phase and solution results with the SRP model agree very well with experiment and provide insight into the specific role of solvent on the reaction coordinate. Overall, the newly parameterized SRP Hamiltonian is able to reproduce both the gas phase and solution phase barriers, suggesting it is an accurate and robust model for simulations in the aqueous phase at greatly reduced computational cost relative to comparably accurate ab initio and density functional models.

  4. Quantum mechanical study of solvent effects in a prototype SN2 reaction in solution: Cl- attack on CH3Cl

    NASA Astrophysics Data System (ADS)

    Kuechler, Erich R.; York, Darrin M.

    2014-02-01

    The nucleophilic attack of a chloride ion on methyl chloride is an important prototype SN2 reaction in organic chemistry that is known to be sensitive to the effects of the surrounding solvent. Herein, we develop a highly accurate Specific Reaction Parameter (SRP) model based on the Austin Model 1 Hamiltonian for chlorine to study the effects of solvation into an aqueous environment on the reaction mechanism. To accomplish this task, we apply high-level quantum mechanical calculations to study the reaction in the gas phase and combined quantum mechanical/molecular mechanical simulations with TIP3P and TIP4P-ew water models and the resulting free energy profiles are compared with those determined from simulations using other fast semi-empirical quantum models. Both gas phase and solution results with the SRP model agree very well with experiment and provide insight into the specific role of solvent on the reaction coordinate. Overall, the newly parameterized SRP Hamiltonian is able to reproduce both the gas phase and solution phase barriers, suggesting it is an accurate and robust model for simulations in the aqueous phase at greatly reduced computational cost relative to comparably accurate ab initio and density functional models.

  5. Quantum mechanical study of solvent effects in a prototype SN2 reaction in solution: Cl- attack on CH3Cl.

    PubMed

    Kuechler, Erich R; York, Darrin M

    2014-02-07

    The nucleophilic attack of a chloride ion on methyl chloride is an important prototype SN2 reaction in organic chemistry that is known to be sensitive to the effects of the surrounding solvent. Herein, we develop a highly accurate Specific Reaction Parameter (SRP) model based on the Austin Model 1 Hamiltonian for chlorine to study the effects of solvation into an aqueous environment on the reaction mechanism. To accomplish this task, we apply high-level quantum mechanical calculations to study the reaction in the gas phase and combined quantum mechanical/molecular mechanical simulations with TIP3P and TIP4P-ew water models and the resulting free energy profiles are compared with those determined from simulations using other fast semi-empirical quantum models. Both gas phase and solution results with the SRP model agree very well with experiment and provide insight into the specific role of solvent on the reaction coordinate. Overall, the newly parameterized SRP Hamiltonian is able to reproduce both the gas phase and solution phase barriers, suggesting it is an accurate and robust model for simulations in the aqueous phase at greatly reduced computational cost relative to comparably accurate ab initio and density functional models.

  6. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science with many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  7. Leaf optical system modeled as a stochastic process. [solar radiation interaction with terrestrial vegetation

    NASA Technical Reports Server (NTRS)

    Tucker, C. J.; Garratt, M. W.

    1977-01-01

    A stochastic leaf radiation model based upon physical and physiological properties of dicot leaves has been developed. The model accurately predicts the absorbed, reflected, and transmitted radiation of normal incidence as a function of wavelength resulting from the leaf-irradiance interaction over the spectral interval of 0.40-2.50 micron. The leaf optical system has been represented as a Markov process with a unique transition matrix at each 0.01-micron increment between 0.40 micron and 2.50 micron. Probabilities are calculated at every wavelength interval from leaf thickness, structure, pigment composition, and water content. Simulation results indicate that this approach gives accurate estimations of actual measured values for dicot leaf absorption, reflection, and transmission as a function of wavelength.
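The absorbing-Markov-chain machinery behind such a model can be illustrated at a single wavelength: the fundamental matrix of the transition matrix gives the long-run probabilities of each photon fate. The transition probabilities below are invented for illustration, not measured leaf properties:

```python
import numpy as np

# Toy single-wavelength transition matrix. States: 0 = photon still
# scattering inside the leaf (transient); 1 = absorbed, 2 = reflected,
# 3 = transmitted (absorbing states).
P = np.array([
    [0.40, 0.20, 0.25, 0.15],  # from "inside leaf"
    [0.00, 1.00, 0.00, 0.00],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])

Q = P[:1, :1]                       # transient-to-transient block
R = P[:1, 1:]                       # transient-to-absorbing block
N = np.linalg.inv(np.eye(1) - Q)    # fundamental matrix (I - Q)^-1
fates = (N @ R).ravel()             # long-run absorbed/reflected/transmitted

print(dict(zip(["absorbed", "reflected", "transmitted"], fates)))
```

With several transient states per tissue layer (and one such matrix per 0.01-micron increment, as in the paper), the same block computation applies unchanged.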

  8. Investigation into metastatic processes and the therapeutic effects of gemcitabine on human pancreatic cancer using an orthotopic SUIT-2 pancreatic cancer mouse model

    PubMed Central

    Higuchi, Tamami; Yokobori, Takehiko; Naito, Tomoharu; Kakinuma, Chihaya; Hagiwara, Shinji; Nishiyama, Masahiko; Asao, Takayuki

    2018-01-01

    Prognosis of pancreatic cancer is poor; thus the development of novel therapeutic drugs is necessary. During preclinical studies, appropriate models are essential for evaluating drug efficacy. The present study sought to determine the ideal pancreatic cancer mouse model for reliable preclinical testing. Such a model could accurately reflect human pancreatic cancer phenotypes and predict future clinical trial results. Systemic pathology analysis was performed in an orthotopic transplantation model to prepare model mice for use in preclinical studies, mimicking the progress of human pancreatic cancer. The location and the timing of inoculated cancer cell metastases, pathogenesis and cause of fatality were analyzed. Furthermore, the efficacy of gemcitabine, a key pancreatic cancer drug, was evaluated in this model where liver metastasis and peritoneal dissemination occur. Results indicated that the SUIT-2 orthotopic pancreatic cancer model was similar to the phenotypic sequential progression of human pancreatic cancer, with extra-pancreatic invasion, intra-peritoneal dissemination and other hematogenous organ metastases. Notably, survival was prolonged by administering gemcitabine to mice with metastasized pancreatic cancer. Furthermore, the detailed effects of gemcitabine on the primary tumor and metastatic tumor lesions were pathologically evaluated in mice. The present study indicated that the model accurately depicted pancreatic cancer development and metastasis, and permits detailed pathological evaluation of drug effects on both the primary tumor and metastatic lesions. We present this model as a potential new standard for new drug development in pancreatic cancer. PMID:29435042

  9. A prediction model for early death in non-small cell lung cancer patients following curative-intent chemoradiotherapy.

    PubMed

    Jochems, Arthur; El-Naqa, Issam; Kessler, Marc; Mayo, Charles S; Jolly, Shruti; Matuszak, Martha; Faivre-Finn, Corinne; Price, Gareth; Holloway, Lois; Vinod, Shalini; Field, Matthew; Barakat, Mohamed Samir; Thwaites, David; de Ruysscher, Dirk; Dekker, Andre; Lambin, Philippe

    2018-02-01

    Early death after a treatment can be seen as a therapeutic failure. Accurate prediction of patients at risk for early mortality is crucial to avoid unnecessary harm and to reduce costs. The goal of our work is two-fold: first, to evaluate the performance of a previously published model for early death in our cohorts; second, to develop a prognostic model for early death prediction following radiotherapy. Patients with NSCLC treated with chemoradiotherapy or radiotherapy alone were included in this study. Four different cohorts from different countries were available for this work (N = 1540). The previous model used age, gender, performance status, tumor stage, income deprivation, no previous treatment given (yes/no) and body mass index to make predictions. A random forest model was developed by learning on the Maastro cohort (N = 698). The new model used performance status, age, gender, T and N stage, total tumor volume (cc), total tumor dose (Gy) and chemotherapy timing (none, sequential, concurrent) to make predictions. Death within 4 months of receiving the first radiotherapy fraction was used as the outcome. Early death rates ranged from 6 to 11% within the four cohorts. The previous model performed with AUC values ranging from 0.54 to 0.64 on the validation cohorts. Our newly developed model had improved AUC values ranging from 0.62 to 0.71 on the validation cohorts. Using advanced machine learning methods and informative variables, prognostic models for early mortality can be developed. Development of accurate prognostic tools for early mortality is important to inform patients about treatment options and optimize care.
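The modeling step can be sketched as a random forest trained on the predictors named in the abstract and scored by AUC. Everything below is a synthetic stand-in: the feature encodings, outcome mechanism, and hyperparameters are assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1540  # total cohort size reported in the abstract

# Synthetic stand-ins for the abstract's predictors: performance status,
# age, sex, T stage, N stage, tumor volume (cc), total dose (Gy),
# chemotherapy timing (0=none, 1=sequential, 2=concurrent).
X = np.column_stack([
    rng.integers(0, 3, n),
    rng.normal(68, 9, n),
    rng.integers(0, 2, n),
    rng.integers(1, 5, n),
    rng.integers(0, 4, n),
    rng.lognormal(4.0, 0.8, n),
    rng.normal(60, 6, n),
    rng.integers(0, 3, n),
])
# Synthetic outcome loosely tied to age and tumor volume (illustration only);
# the offset targets the 6-11% early-death rate the abstract reports.
logit = 0.04 * (X[:, 1] - 68) + 0.5 * (np.log(X[:, 5]) - 4.0) - 2.3
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"validation AUC = {auc:.2f}")
```

In the study itself, validation was on three independent external cohorts rather than a random split, which is the harder and more informative test.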

  10. Developing a tuberculosis transmission model that accounts for changes in population health.

    PubMed

    Oxlade, Olivia; Schwartzman, Kevin; Benedetti, Andrea; Pai, Madhukar; Heymann, Jody; Menzies, Dick

    2011-01-01

    Simulation models are useful in policy planning for tuberculosis (TB) control. To accurately assess interventions, important modifiers of the epidemic should be accounted for in evaluative models. Improvements in population health were associated with the declining TB epidemic in the pre-antibiotic era and may be relevant today. The objective of this study was to develop and validate a TB transmission model that accounted for changes in population health. We developed a deterministic TB transmission model, using reported data from the pre-antibiotic era in England. Change in adjusted life expectancy, used as a proxy for general health, was used to determine the rate of change of key epidemiological parameters. Predicted outcomes included risk of TB infection and TB mortality. The model was validated in the setting of the Netherlands and then applied to modern Peru. The model, developed in the setting of England, predicted TB trends in the Netherlands very accurately. The R(2) value for correlation between observed and predicted data was 0.97 and 0.95 for TB infection and mortality, respectively. In Peru, the predicted decline in incidence prior to the expansion of "Directly Observed Treatment Short Course" (The DOTS strategy) was 3.7% per year (observed = 3.9% per year). After DOTS expansion, the predicted decline was very similar to the observed decline of 5.8% per year. We successfully developed and validated a TB model, which uses a proxy for population health to estimate changes in key epidemiology parameters. Population health contributed significantly to improvement in TB outcomes observed in Peru. Changing population health should be incorporated into evaluative models for global TB control.
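A deterministic transmission model with health-dependent parameters can be sketched as a small compartmental simulation in which the transmission rate declines as population health (proxied by life expectancy) improves. All rates and the 1%/year decline below are illustrative, not the paper's calibrated values:

```python
# Minimal deterministic TB sketch: susceptible / latently infected /
# active-disease fractions, integrated by forward Euler.
def simulate_tb(years=50, dt=0.1):
    S, L, A = 0.90, 0.09, 0.01   # population fractions
    history = []
    for step in range(int(years / dt)):
        t = step * dt
        beta = 8.0 * (1 - 0.01) ** t   # transmission falls ~1%/yr as health improves
        progression, recovery, mortality = 0.03, 0.2, 0.15  # per year
        new_inf = beta * S * A * dt
        new_active = progression * L * dt
        removed = (recovery + mortality) * A * dt
        S, L, A = S - new_inf, L + new_inf - new_active, A + new_active - removed
        history.append((t, A))
    return history

traj = simulate_tb()
print(f"active TB prevalence after 50 y: {traj[-1][1]:.4f}")
```

The paper's model additionally lets progression, recovery and mortality themselves vary with the life-expectancy proxy; adding that is a matter of making those three constants functions of t as well.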

  11. Steady-state and dynamic models for particle engulfment during solidification

    NASA Astrophysics Data System (ADS)

    Tao, Yutao; Yeckel, Andrew; Derby, Jeffrey J.

    2016-06-01

    Steady-state and dynamic models are developed to study the physical mechanisms that determine the pushing or engulfment of a solid particle at a moving solid-liquid interface. The mathematical model formulation rigorously accounts for energy and momentum conservation, while faithfully representing the interfacial phenomena affecting solidification phase change and particle motion. A numerical solution approach is developed using the Galerkin finite element method and elliptic mesh generation in an arbitrary Lagrangian-Eulerian implementation, thus allowing for a rigorous representation of forces and dynamics previously inaccessible by approaches using analytical approximations. We demonstrate that this model accurately computes the solidification interface shape while simultaneously resolving thin fluid layers around the particle that arise from premelting during particle engulfment. We reinterpret the significance of premelting via the definition of an unambiguous critical velocity for engulfment from steady-state analysis and bifurcation theory. We also explore the complicated transient behaviors that underlie the steady states of this system and posit the significance of dynamical behavior on engulfment events for many systems. We critically examine the onset of engulfment by comparing our computational predictions to those obtained using the analytical model of Rempel and Worster [29]. We assert that, while the accurate calculation of van der Waals repulsive forces remains an open issue, the computational model developed here provides a clear benefit over prior models for computing particle drag forces and other phenomena needed for the faithful simulation of particle engulfment.

  12. Prediction of porosity of food materials during drying: Current challenges and directions.

    PubMed

    Joardder, Mohammad U H; Kumar, C; Karim, M A

    2017-07-18

    Pore formation in food samples is a common physical phenomenon observed during dehydration processes. The pore evolution during drying significantly affects the physical properties and quality of dried foods. Therefore, it should be taken into consideration when predicting transport processes in the drying sample. Characteristics of pore formation depend on the drying process parameters, product properties and processing time. Understanding the physics of pore formation and evolution during drying will assist in accurately predicting the drying kinetics and quality of food materials. Researchers have been trying to develop mathematical models to describe the pore formation and evolution during drying. In this study, existing porosity models are critically analysed and limitations are identified. Better insight into the factors affecting porosity is provided, and suggestions are proposed to overcome the limitations. These include considerations of process parameters such as glass transition temperature, sample temperature, and variable material properties in the porosity models. Several researchers have proposed models for porosity prediction of food materials during drying. However, these models are either very simplistic or empirical in nature and fail to consider relevant significant factors that influence porosity. In-depth understanding of pore characteristics is required for developing a generic model of porosity. A micro-level analysis of pore formation is presented for better understanding, which will help in developing an accurate and generic porosity model.

  13. Tracking unaccounted water use in data sparse arid environment

    NASA Astrophysics Data System (ADS)

    Hafeez, M. M.; Edraki, M.; Ullah, M. K.; Chemin, Y.; Sixsmith, J.; Faux, R.

    2009-12-01

    Hydrological knowledge of irrigated farms within the inundation plains of the Murray-Darling Basin (MDB) is very limited, and the quality and reliability of the observation network have been declining rapidly over the past decade. This paper focuses on Land Surface Diversions (LSD), which encompass all forms of surface water diversion except the direct extraction of water from rivers, watercourses and lakes by farmers for the purposes of irrigation and stock and domestic supply. Accurate measurement of LSD is very challenging, due to the practical difficulties associated with separating its different components and estimating them accurately for a large catchment. The inadequacy of current methods of measuring and monitoring LSD poses severe limitations on existing and proposed policies for managing such diversions. It is commonly believed that LSD comprises 20-30% of total diversions from river valleys in the MDB, but scientific estimates of LSD do not exist, because they were considered unimportant prior to the onset of the recent drought in Australia. There is a need to develop hydrological water balance models, coupling hydrological variables derived from on-ground measurements and remote sensing techniques, to accurately model LSD. Typically, the hydrological water balance components for farm/catchment scale models include: irrigation inflow, outflow, rainfall, runoff, evapotranspiration, soil moisture change and deep percolation. The actual evapotranspiration (ETa) is the largest and single most important component of the hydrological water balance model. An accurate quantification of all components of the hydrological water balance model at farm/catchment scale is of prime importance to estimate the volume of LSD. A hydrological water balance model is developed to calculate LSD at 6 selected pilot farms.
    The catchment hydrological water balance model is being developed using selected parameters derived from the farm-scale water balance model. LSD results obtained through the modelling process have been compared with LSD estimates derived from ground-observed data at the 6 pilot farms. The differences between the values are between 3 and 5 percent of the water inputs, which is within the confidence limit expected from such analysis. Similarly, the LSD values at the catchment scale have been estimated with great confidence. The hydrological water balance models at farm and catchment scale provide reliable quantification of LSD. Improved LSD estimates can guide water management decisions at farm to catchment scale and could be instrumental in enhancing the integrity of the water allocation process and making it fairer and more equitable across stakeholders.
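The farm-scale balance described above can be closed as a simple residual. Treating LSD as that residual, and the component list and sign conventions below, are assumptions of this sketch, not the study's calibrated model:

```python
def land_surface_diversion(irrigation_inflow, rainfall, outflow,
                           et_actual, soil_moisture_change, deep_percolation):
    """Close the farm-scale water balance (all terms in ML per season)
    and return the residual attributed to land surface diversions (LSD):
    inputs minus accounted outputs and storage change."""
    return (irrigation_inflow + rainfall) - (
        outflow + et_actual + soil_moisture_change + deep_percolation)

# Hypothetical farm season, values in ML (illustration only):
lsd = land_surface_diversion(
    irrigation_inflow=1200, rainfall=300, outflow=150,
    et_actual=900, soil_moisture_change=40, deep_percolation=60)
print(f"estimated LSD residual: {lsd} ML")
```

In practice the ETa term would come from remote sensing and the residual carries all measurement error, which is why the study's 3-5% closure against ground observations matters.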

  14. A Simple Plasma Retinol Isotope Ratio Method for Estimating β-Carotene Relative Bioefficacy in Humans: Validation with the Use of Model-Based Compartmental Analysis.

    PubMed

    Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H

    2017-09-01

    Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. 
The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.

  15. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Initial work covered development of generalized forward model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on: completion of forward model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  16. Development and evaluation of height diameter at breast models for native Chinese Metasequoia.

    PubMed

    Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single-variable and multivariate, according to the number of independent variables. The results show that allometric equations of tree height with dbh as the independent variable can better reflect the change in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia.
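None of the 53 candidate models is specified in the abstract, so as a hedged illustration, here is one common nonlinear height-dbh form (Chapman-Richards) fitted by least squares; whether it was among the 46 nonlinear candidates, and all data below, are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(dbh, a, b, c):
    """Chapman-Richards height-diameter curve:
    h = 1.3 + a * (1 - exp(-b * dbh))**c, with 1.3 m = breast height."""
    return 1.3 + a * (1.0 - np.exp(-b * dbh)) ** c

# Hypothetical (dbh in cm, height in m) pairs for illustration
dbh = np.array([10, 20, 35, 50, 70, 90, 110], dtype=float)
h = np.array([8.1, 14.2, 21.0, 26.3, 31.5, 34.8, 36.9])

popt, _ = curve_fit(chapman_richards, dbh, h, p0=[40.0, 0.02, 1.2],
                    maxfev=10000)
print("fitted (a, b, c):", np.round(popt, 3))
```

A multivariate variant of the kind the abstract favors would add age, stand, or site covariates as further arguments to the fitted function.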

  17. Development and evaluation of height diameter at breast models for native Chinese Metasequoia

    PubMed Central

    Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single-variable and multivariate, according to the number of independent variables. The results show that allometric equations of tree height with dbh as the independent variable can better reflect the change in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50–485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia. PMID:28817600

  18. Project M: Scale Model of Lunar Landing Site of Apollo 17: Focus on Lighting Conditions and Analysis

    NASA Technical Reports Server (NTRS)

    Vanik, Christopher S.; Crain, Timothy P.

    2010-01-01

    This document captures the research and development of a scale model representation of the Apollo 17 landing site on the moon as part of the NASA INSPIRE program. Several key elements in this model were surface slope characteristics, crater sizes and locations, prominent rocks, and lighting conditions. This model supports development of Autonomous Landing and Hazard Avoidance Technology (ALHAT) and Project M for the GN&C Autonomous Flight Systems Branch. It will help project engineers visualize the landing site, and is housed in the building 16 Navigation Systems Technology Lab. The lead mentor was Dr. Timothy P. Crain. The purpose of this project was to develop an accurate scale representation of the Apollo 17 landing site on the moon. This was done on an 8'2.5"X10'1.375" reduced friction granite table, which can be restored to its previous condition if needed. The first step in this project was to research the best way to model and recreate the Apollo 17 landing site for the mockup. The project required a thorough plan, budget, and schedule, which was presented to the EG6 Branch for build approval. The final phase was to build the model. The project also required thorough research on the Apollo 17 landing site and the topography of the moon. This research was done on the internet and in person with Dean Eppler, a space scientist, from JSC KX. This data was used to analyze and calculate the scale of the mockup and the ratio of the sizes of the craters, ridges, etc. The final goal was to effectively communicate project status and demonstrate the multiple advantages of using our model. The conclusion of this project was that the mockup was completed as accurately as possible, and it successfully enables the Project M specialists to visualize and plan their goal on an accurate three dimensional surface representation.

  19. A reexamination of age-related variation in body weight and morphometry of Maryland nutria

    USGS Publications Warehouse

    Sherfy, M.H.; Mollett, T.A.; McGowan, K.R.; Daugherty, S.L.

    2006-01-01

    Age-related variation in morphometry has been documented for many species. Knowledge of growth patterns can be useful for modeling energetics, detecting physiological influences on populations, and predicting age. These benefits have shown value in understanding population dynamics of invasive species, particularly in developing efficient control and eradication programs. However, development and evaluation of descriptive and predictive models is a critical initial step in this process. Accordingly, we used data from necropsies of 1,544 nutria (Myocastor coypus) collected in Maryland, USA, to evaluate the accuracy of previously published models for prediction of nutria age from body weight. Published models underestimated body weights of our animals, especially for ages <3. We used cross-validation procedures to develop and evaluate models for describing nutria growth patterns and for predicting nutria age. We derived models from a randomly selected model-building data set (n = 192-193 M, 217-222 F) and evaluated them with the remaining animals (n = 487-488 M, 642-647 F). We used nonlinear regression to develop Gompertz growth-curve models relating morphometric variables to age. Predicted values of morphometric variables fell within the 95% confidence limits of their true values for most age classes. We also developed predictive models for estimating nutria age from morphometry, using linear regression of log-transformed age on morphometric variables. The evaluation data set corresponded with 95% prediction intervals from the new models. Predictive models for body weight and length provided greater accuracy and less bias than models for foot length and axillary girth. Our growth models accurately described age-related variation in nutria morphometry, and our predictive models provided accurate estimates of ages from morphometry that will be useful for live-captured individuals. 
Our models offer better accuracy and precision than previously published models, providing a capacity for modeling energetics and growth patterns of Maryland nutria as well as an empirical basis for determining population age structure from live-captured animals.
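
    The Gompertz growth-curve fitting described above can be sketched with nonlinear least squares: W(age) = A·exp(−b·exp(−k·age)). The data and parameter values below are synthetic illustrations, not the paper's nutria estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(age, A, b, k):
    # A: asymptotic body weight, b: displacement, k: growth-rate constant
    return A * np.exp(-b * np.exp(-k * age))

# Synthetic "body weight vs. age" data standing in for necropsy records
rng = np.random.default_rng(0)
ages = np.linspace(1, 36, 60)                      # months
weights = gompertz(ages, 7.5, 2.8, 0.15) + rng.normal(0, 0.2, ages.size)

# Nonlinear least-squares fit of the growth curve
(A_hat, b_hat, k_hat), _ = curve_fit(gompertz, ages, weights, p0=[8, 3, 0.1])
```

    Inverting the fitted curve (solving for age given a morphometric value) then yields the kind of predictive age model the authors evaluate with 95% prediction intervals.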

  20. Developing a method for estimating AADT on all Louisiana roads.

    DOT National Transportation Integrated Search

    2015-07-01

    Traffic flow volumes present key information needed for making transportation engineering and planning decisions. : Accurate traffic volume count has many applications including: roadway planning, design, air quality compliance, travel : model valida...

  1. Reverse engineering and analysis of large genome-scale gene networks

    PubMed Central

    Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas

    2013-01-01

    Reverse engineering the whole-genome networks of complex multicellular organisms remains a challenge. While simpler models easily scale to large numbers of genes and gene-expression datasets, more accurate models are compute-intensive, limiting the scale at which they can be applied. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
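
    The MI-plus-permutation-testing core of such methods can be sketched as follows. This is a simplification: plain histogram binning stands in for TINGe's B-spline soft binning, and the data are synthetic.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    # Histogram MI estimate in nats; hard binning stands in for
    # TINGe's B-spline soft binning.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def permutation_pvalue(x, y, n_perm=200, seed=0):
    # Direct permutation test: shuffle x to break the pairing and
    # count how often shuffled MI reaches the observed value.
    rng = np.random.default_rng(seed)
    obs = mutual_info(x, y)
    null = [mutual_info(rng.permutation(x), y) for _ in range(n_perm)]
    return (1 + sum(m >= obs for m in null)) / (1 + n_perm)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y_dep = x + rng.normal(scale=0.3, size=500)   # dependent expression pair
y_ind = rng.normal(size=500)                  # independent pair
```

    An edge is kept in the reconstructed network only when the permutation p-value for a gene pair falls below the chosen significance threshold.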

  2. Pelagic Habitat Analysis Module (PHAM) for GIS Based Fisheries Decision Support

    NASA Technical Reports Server (NTRS)

    Kiefer, D. A.; Armstrong, Edward M.; Harrison, D. P.; Hinton, M. G.; Kohin, S.; Snyder, S.; O'Brien, F. J.

    2011-01-01

    We have assembled a system that integrates satellite and model output with fisheries data, and we have developed tools that allow analysis of the interaction between species and key environmental variables. We demonstrated the capacity to accurately map habitat of the thresher sharks Alopias vulpinus and A. pelagicus. Their seasonal migration along the California Current is at least partly driven by the seasonal migration of sardine, key prey of the sharks.

  3. NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.

    PubMed

    Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N

    2016-11-01

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
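
    The maximum-entropy idea can be sketched as follows: synonymous codons are weighted by exp(beta · GC count), so a single parameter beta tunes GC content while the amino acid sequence is preserved exactly. The codon subset and parameter values below are illustrative; this is not the NullSeq package's API.

```python
import math
import random

# Illustrative subset of the standard genetic code
CODONS = {
    'L': ['TTA', 'TTG', 'CTT', 'CTC', 'CTA', 'CTG'],
    'A': ['GCT', 'GCC', 'GCA', 'GCG'],
    'K': ['AAA', 'AAG'],
    'G': ['GGT', 'GGC', 'GGA', 'GGG'],
}

def gc_count(codon):
    return sum(base in 'GC' for base in codon)

def sample_sequence(protein, beta, rng):
    # Maximum-entropy-style sampling: each synonymous codon is weighted
    # by exp(beta * GC count), so beta controls GC content while the
    # amino acid sequence is fixed exactly.
    seq = []
    for aa in protein:
        codons = CODONS[aa]
        weights = [math.exp(beta * gc_count(c)) for c in codons]
        seq.append(rng.choices(codons, weights=weights)[0])
    return ''.join(seq)

def gc_frac(seq):
    return sum(base in 'GC' for base in seq) / len(seq)

rng = random.Random(0)
protein = 'LAKG' * 50
low = sample_sequence(protein, beta=-2.0, rng=rng)    # GC-poor draw
high = sample_sequence(protein, beta=2.0, rng=rng)    # GC-rich draw
```

    In practice beta would be tuned (e.g. by bisection) until the expected GC fraction matches the user's target.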

  4. Skills Development Using Role-Play in a First-Year Pharmacy Practice Course

    PubMed Central

    2011-01-01

    Objectives. To evaluate the usefulness of a role-play model in developing students’ patient-care skills in a first-year undergraduate pharmacy practice course. Design. A role-play model was developed and implemented in workshops across 2 semesters of a year-long course. Students performed different roles, including that of a pharmacist and a patient, and documented case notes in a single interaction. Assessment. Student perceptions of the usefulness of the approach in acquiring skills were measured by surveying students during both semesters. All student assessments (N=130 in semester 1; N=129 in semester 2) also were analyzed for skills in verbal communication, information gathering, counselling and making recommendations, and accurately documenting information. A majority of students found the approach useful in developing skills. An analysis of student assessments revealed that role-playing was not as effective in building skills related to accurate documentation as it was in other areas of patient care. Conclusions. Role play is useful for developing patient-care skills in communication and information gathering but not for documentation of case notes. PMID:21829258

  5. Probability-based collaborative filtering model for predicting gene-disease associations.

    PubMed

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

    Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned models. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with the results of four state-of-the-art approaches. The results show that PCFM performs better than other advanced approaches. The PCFM can be leveraged for prediction of disease genes, especially for new human genes or diseases with no known relationships.
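
    A minimal latent-factorization sketch of the collaborative-filtering idea (not the paper's PCFM with heterogeneous regularization): genes and diseases get low-dimensional factor vectors fit on observed associations with plain L2 regularization, and unknown entries are scored by the reconstructed matrix.

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.05, reg=0.01, epochs=2000, seed=0):
    # Fit R ~= G @ D.T on observed entries (mask == 1) by gradient
    # descent with L2 regularization on the factors.
    rng = np.random.default_rng(seed)
    G = 0.1 * rng.standard_normal((R.shape[0], k))
    D = 0.1 * rng.standard_normal((R.shape[1], k))
    for _ in range(epochs):
        E = mask * (R - G @ D.T)       # residual on observed entries only
        G += lr * (E @ D - reg * G)
        D += lr * (E.T @ G - reg * D)
    return G, D

# Toy association matrix: 4 genes x 3 diseases; hide one known link
R = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [0., 0., 1.],
              [0., 0., 1.]])
mask = np.ones_like(R)
mask[0, 1] = 0.0                       # treat this association as unknown
G, D = factorize(R, mask)
scores = G @ D.T                       # reconstructed association scores
```

    Because gene 0 shares its observed pattern with gene 1, the hidden entry is scored high, which is the mechanism by which such models propose new gene-disease links.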

  6. A Common Core for Active Conceptual Modeling for Learning from Surprises

    NASA Astrophysics Data System (ADS)

    Liddle, Stephen W.; Embley, David W.

    The new field of active conceptual modeling for learning from surprises (ACM-L) may be helpful in preserving life, protecting property, and improving quality of life. The conceptual modeling community has developed sound theory and practices for conceptual modeling that, if properly applied, could help analysts model and predict more accurately. In particular, we need to associate more semantics with links, and we need fully reified high-level objects and relationships that have a clear, formal underlying semantics that follows a natural, ontological approach. We also need to capture more dynamic aspects in our conceptual models to more accurately model complex, dynamic systems. These concepts already exist, and the theory is well developed; what remains is to link them with the ideas needed to predict system evolution, thus enabling risk assessment and response planning. No single researcher or research group will be able to achieve this ambitious vision alone. As a starting point, we recommend that the nascent ACM-L community agree on a common core model that supports all aspects—static and dynamic—needed for active conceptual modeling in support of learning from surprises. A common core will more likely gain the traction needed to sustain the extended ACM-L research effort that will yield the advertised benefits of learning from surprises.

  7. Stiffness degradation-based damage model for RC members and structures using fiber-beam elements

    NASA Astrophysics Data System (ADS)

    Guo, Zongming; Zhang, Yaoting; Lu, Jiezhi; Fan, Jian

    2016-12-01

    To meet the demand for an accurate and highly efficient damage model with a distinct physical meaning for performance-based earthquake engineering applications, a stiffness degradation-based damage model for reinforced concrete (RC) members and structures was developed using fiber beam-column elements. In this model, damage indices for concrete and steel fibers were defined by the degradation of the initial reloading modulus and the low-cycle fatigue law. Then, section, member, story and structure damage was evaluated by the degradation of the sectional bending stiffness, rod-end bending stiffness, story lateral stiffness and structure lateral stiffness, respectively. The damage model was realized in Matlab by reading in the outputs of OpenSees. The application of the damage model to RC columns and an RC frame indicates that the damage model is capable of accurately predicting the magnitude, position, and evolutionary process of damage, and of estimating story damage more precisely than inter-story drift. Additionally, the damage model establishes a close connection between damage indices at various levels without introducing weighting coefficients or force-displacement relationships. The model extends the damage-assessment capabilities of OpenSees, laying a solid foundation for damage estimation at various levels of a large-scale structure subjected to seismic loading.
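
    The stiffness-degradation idea reduces to a single ratio at every level, D = 1 − K/K0. A minimal sketch (the numbers are illustrative, and the parallel-stiffness story combination is an assumption standing in for the paper's fiber-level formulation):

```python
def damage_index(k0, k):
    # D = 1 - K/K0: degradation of the initial (re)loading stiffness;
    # 0 = undamaged, 1 = total loss of stiffness.
    return 1.0 - k / k0

# Member lateral stiffnesses before and after loading (illustrative)
k0_members = [120.0, 100.0, 80.0]
k_members = [90.0, 100.0, 40.0]

D_members = [damage_index(a, b) for a, b in zip(k0_members, k_members)]
# Story damage from story lateral stiffness (members assumed in parallel)
D_story = damage_index(sum(k0_members), sum(k_members))
```

    Because the story index is computed from the degraded story stiffness itself, no weighting coefficients are needed to roll member damage up to the story level.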

  8. Real-Time Global Nonlinear Aerodynamic Modeling for Learn-To-Fly

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2016-01-01

    Flight testing and modeling techniques were developed to accurately identify global nonlinear aerodynamic models for aircraft in real time. The techniques were developed and demonstrated during flight testing of a remotely-piloted subscale propeller-driven fixed-wing aircraft using flight test maneuvers designed to simulate a Learn-To-Fly scenario. Prediction testing was used to evaluate the quality of the global models identified in real time. The real-time global nonlinear aerodynamic modeling algorithm will be integrated and further tested with learning adaptive control and guidance for NASA Learn-To-Fly concept flight demonstrations.

  9. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1993-01-01

    Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.

  10. Logic models as a tool for sexual violence prevention program development.

    PubMed

    Hawkins, Stephanie R; Clinton-Sherrod, A Monique; Irvin, Neil; Hart, Laurie; Russell, Sarah Jane

    2009-01-01

    Sexual violence is a growing public health problem, and there is an urgent need to develop sexual violence prevention programs. Logic models have emerged as a vital tool in program development. The Centers for Disease Control and Prevention funded an empowerment evaluation designed to work with programs focused on the prevention of first-time male perpetration of sexual violence, and it included, as one of its goals, the development of program logic models. Two case studies are presented that describe how significant positive changes can be made to programs as a result of developing logic models that accurately describe desired outcomes. The first case study describes how the logic model development process made an organization aware of the importance of a program's environmental context for program success; the second demonstrates how developing a program logic model can elucidate gaps in organizational programming and suggest ways to close those gaps.

  11. Space Flight Cable Model Development

    NASA Technical Reports Server (NTRS)

    Spak, Kaitlin

    2013-01-01

    This work continues the modeling efforts presented in last year's VSGC conference paper, "Model Development for Cable-Harnessed Beams." The focus is narrowed to modeling of space-flight cables only, as a reliable damped cable model is not yet readily available and is necessary to continue modeling cable-harnessed space structures. New experimental data is presented, eliminating the low-frequency noise that plagued the first year's efforts. The distributed transfer function method is applied to a single section of space flight cable for Euler-Bernoulli and shear beams. The work presented here will be developed into a damped cable model that can be incorporated into an interconnected beam-cable system. The overall goal of this work is to accurately predict natural frequencies and modal damping ratios for cabled space structures.

  12. Numerical modeling of the SNS H− ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A.; Beckwith, Kristian R. C.; Kundrapu, Madhusudhan

    Ion source rf antennas that produce H- ions can fail when plasma heating causes ablation of the insulating coating due to small structural defects such as cracks. Reducing antenna failures that reduce the operating capabilities of the Spallation Neutron Source (SNS) accelerator is one of the top priorities of the SNS H- Source Program at ORNL. Numerical modeling of ion sources can provide techniques for optimizing design in order to reduce antenna failures. There are a number of difficulties in developing accurate models of rf inductive plasmas. First, a large range of spatial and temporal scales must be resolved in order to accurately capture the physics of plasma motion, including the Debye length, rf frequencies on the order of tens of MHz, simulation time scales of many hundreds of rf periods, large device sizes of tens of cm, and ion motions that are thousands of times slower than electrons. This results in large simulation domains with many computational cells for solving plasma and electromagnetic equations, short time steps, and long-duration simulations. In order to reduce the computational requirements, one can develop implicit models for both fields and particle motions (e.g. divergence-preserving ADI methods), various electrostatic models, or magnetohydrodynamic models. We have performed simulations using all three of these methods and have found that fluid models have the greatest potential for giving accurate solutions while still being fast enough to perform long timescale simulations in a reasonable amount of time. We have implemented a number of fluid models with electromagnetics using the simulation tool USim and applied them to modeling the SNS H- ion source. We found that a reduced, single-fluid MHD model with an imposed magnetic field due to the rf antenna current and the confining multi-cusp field generated increased bulk plasma velocities of > 200 m/s in the region of the antenna where ablation is often observed in the SNS source.
We report here on comparisons of simulated plasma parameters and code performance using more accurate physical models, such as two-temperature extended MHD models, for both a related benchmark system describing an inductively coupled plasma reactor, and for the SNS ion source. We also present results from scaling studies for mesh generation and solvers in the USim simulation code.

  13. Child Psychopathy: Theories, Measurement, and Relations with the Development and Persistence of Conduct Problems

    ERIC Educational Resources Information Center

    Kotler, Julie S.; McMahon, Robert J.

    2005-01-01

    To develop more accurate explanatory and predictive models of child and adolescent conduct problems, interest has grown in examining psychopathic traits in youth. The presence or absence of these traits may help to identify unique etiological pathways in the development of antisocial behavior. The current review provides a detailed summary and…

  14. Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study

    USDA-ARS?s Scientific Manuscript database

    Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...

  15. Market Acceleration | Wind | NREL

    Science.gov Websites

    model of a shrouded wind turbine at the 2016 Collegiate Wind Competition. Workforce Development and accurate information that articulates the potential impacts and benefits of wind and water power on education, rural economic development, public power partnerships, and small wind systems.

  16. Predicting maize phenology: Intercomparison of functions for developmental response to temperature

    USDA-ARS?s Scientific Manuscript database

    Accurate prediction of phenological development in maize is fundamental to determining crop adaptation and yield potential. A number of thermal functions are used in crop models, but their relative precision in predicting maize development has not been quantified. The objectives of this study were t...

  17. Observations from the field: further developing linkages between soil C models with long-term bioenergy studies

    USDA-ARS?s Scientific Manuscript database

    Biofuel feedstocks are being developed and evaluated in the United States and Europe to partially offset petroleum transport fuels. Accurate accounting of upstream and downstream greenhouse gas (GHG) emissions is necessary to measure the overall carbon intensity of new biofuel feedstocks. Changes in...

  18. Kinetics and mechanism of soot formation in hydrocarbon combustion

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael

    1990-01-01

    The focus of this work was on kinetic modeling. The specific objectives were: detailed modeling of soot formation in premixed flames, elucidation of the effects of fuel structure on the pathway to soot, and the development of a numerical technique for accurate modeling of soot particle coagulation and surface growth. Those tasks were successfully completed and are briefly summarized.

  19. Calibration of a γ-Reθ transition model and its application in low-speed flows

    NASA Astrophysics Data System (ADS)

    Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song

    2014-12-01

    The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and separated flow in space. Unfortunately, the transition effect isn't included in today's major CFD tools because of the non-local calculations involved in transition modeling. In this paper, Menter's γ-Reθ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code — Trisonic Platform (TRIP) developed at the China Aerodynamic Research and Development Center (CARDC). Based on flat-plate experimental data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of low-speed flow over the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. It is indicated that the γ-Reθ transition model can accurately predict the location of separation-induced transition and natural transition in flow regions with moderate pressure gradient. The transition model effectively improves the simulation accuracy of the boundary layer and aerodynamic characteristics.

  20. Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics

    PubMed Central

    Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna

    2016-01-01

    Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always practical, since the calculations take time and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing. PMID:27904387
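
    As an example of the relatively simple analytical equations such reviews cover, a Plank-type slab estimate can be written directly. The property values below are assumed for illustration, and the form (with shape factors P and R) follows the classic freezing-time analysis adapted to thawing, not a specific model from the paper.

```python
def plank_thawing_time(rho, L, d, dT, h, k, P=0.5, R=0.125):
    # Plank-type estimate for a slab: t = (rho*L/dT) * (P*d/h + R*d^2/k)
    # rho [kg/m^3] density, L [J/kg] latent heat, d [m] thickness,
    # dT [K] driving temperature difference, h [W/(m^2 K)] surface
    # heat-transfer coefficient, k [W/(m K)] thawed-layer conductivity.
    # P = 1/2 and R = 1/8 are the standard slab shape factors.
    return (rho * L / dT) * (P * d / h + R * d * d / k)

# Illustrative values for a 5 cm slab of lean meat (assumed)
t_s = plank_thawing_time(rho=1050.0, L=250e3, d=0.05, dT=15.0, h=20.0, k=0.5)
t_hours = t_s / 3600.0
```

    The two terms separate surface-convection resistance from internal conduction through the already-thawed layer, which is why such estimates degrade when the assumptions (sharp phase-change front, constant properties) are violated.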

  1. Study on elevated-temperature flow behavior of Ni-Cr-Mo-B ultra-heavy-plate steel via experiment and modelling

    NASA Astrophysics Data System (ADS)

    Gao, Zhi-yu; Kang, Yu; Li, Yan-shuai; Meng, Chao; Pan, Tao

    2018-04-01

    The elevated-temperature flow behavior of a novel Ni-Cr-Mo-B ultra-heavy-plate steel was investigated by conducting hot compressive deformation tests on a Gleeble-3800 thermo-mechanical simulator over a temperature range of 1123-1423 K, with strain rates from 0.01 s−1 to 10 s−1 and a height reduction of 70%. Based on the experimental results, a classic strain-compensated Arrhenius-type model, a new revised strain-compensated Arrhenius-type model and a classic modified Johnson-Cook constitutive model were developed for predicting the high-temperature deformation behavior of the steel. The predictability of these models was comparatively evaluated in terms of statistical parameters including the correlation coefficient (R), average absolute relative error (AARE), root mean square error (RMSE), normalized mean bias error (NMBE) and relative error. The statistical results indicate that the new revised strain-compensated Arrhenius-type model accurately predicts the elevated-temperature flow stress of the steel over the entire range of process conditions. The predictions of the classic modified Johnson-Cook model, however, did not agree well with the experimental values, and the classic strain-compensated Arrhenius-type model tracked the deformation behavior more accurately than the modified Johnson-Cook model but less accurately than the new revised strain-compensated Arrhenius-type model. In addition, the reasons for the differences in predictability of these models are discussed in detail.
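
    The Arrhenius-type model family referenced above is built around the Zener-Hollomon parameter. A hedged sketch with assumed (not fitted) material constants:

```python
import math

def flow_stress(strain_rate, T, A, alpha, n, Q, R=8.314):
    # Hyperbolic-sine Arrhenius model via the Zener-Hollomon parameter:
    # Z = strain_rate * exp(Q/(R*T)),  sigma = (1/alpha) * asinh((Z/A)**(1/n))
    # (asinh written out as log(x + sqrt(x^2 + 1)))
    Z = strain_rate * math.exp(Q / (R * T))
    x = (Z / A) ** (1.0 / n)
    return (1.0 / alpha) * math.log(x + math.sqrt(x * x + 1.0))

# Assumed constants for illustration: A [1/s], alpha [1/MPa], Q [J/mol]
sigma_slow = flow_stress(0.01, 1273.0, A=1e13, alpha=0.012, n=5.0, Q=380e3)
sigma_fast = flow_stress(10.0, 1273.0, A=1e13, alpha=0.012, n=5.0, Q=380e3)
```

    Strain compensation then makes A, alpha, n and Q polynomial functions of strain, which is the step the paper's classic and revised models refine.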

  2. The Next Generation of High-Speed Dynamic Stability Wind Tunnel Testing (Invited)

    NASA Technical Reports Server (NTRS)

    Tomek, Deborah M.; Sewall, William G.; Mason, Stan E.; Szchur, Bill W. A.

    2006-01-01

    Throughout industry, accurate measurement and modeling of dynamic derivative data at high-speed conditions has been an ongoing challenge. The expansion of flight envelopes and non-conventional vehicle design has greatly increased the demand for accurate prediction and modeling of vehicle dynamic behavior. With these issues in mind, NASA Langley Research Center (LaRC) embarked on the development and shakedown of a high-speed dynamic stability test technique that addresses the longstanding problem of accurately measuring dynamic derivatives outside the low-speed regime. The new test technique was built upon legacy technology, replacing an antiquated forced oscillation system, and greatly expanding the capabilities beyond classic forced oscillation testing at both low and high speeds. The modern system is capable of providing a snapshot of dynamic behavior over a periodic cycle for varying frequencies, not just a damping derivative term at a single frequency.

  3. Humus and humility in ecosystem model design

    NASA Astrophysics Data System (ADS)

    Rowe, Ed

    2015-04-01

    Prediction is central to science. Empirical scientists couch their predictions as hypotheses and tend to deal with simple models such as regressions, but they are modellers just as much as those who combine mechanistic hypotheses into more complex models. There are two main challenges for both groups: to strive for accurate predictions, and to ensure that the work is relevant to wider society. There is a role for blue-sky research, but the multiple environmental changes that characterise the 21st century place an onus on ecosystem scientists to develop tools for understanding environmental change and planning responses. Authors such as Funtowicz and Ravetz (1990) have argued that this situation represents "post-normal" science and that scientists should see themselves more humbly as actors within a societal process rather than as arbiters of truth. Modellers aim for generality, e.g. to accurately simulate the responses of a variety of ecosystems to several different environmental drivers. More accurate predictions can usually be achieved by including more explanatory factors or mechanisms in a model, even though this often results in a less efficient, less parsimonious model. This drives models towards ever-increasing complexity, and many models grow until they are effectively unusable beyond their development team. An alternative way forward is to focus on developing component models. Technologies for integrating dynamic models emphasise the removal of the model engine (algorithms) from code which handles time-stepping and the user interface. Developing components also requires some humility on the part of modellers, since collaboration will be needed to represent the whole system, and also because the idea that a simple component can or should represent the entire understanding of a scientific discipline is often difficult to accept.
Policy-makers and land managers typically have questions different to those posed by scientists working within a specialism, and models that are developed in collaboration with stakeholders are much more likely to be used (Sterk et al., 2012). Rather than trying to re-frame the question to suit the model, modellers need the humility to accept that the model is inappropriate and should develop the capacity to model the question. In this study these issues are explored using the MADOC model (Rowe et al., 2014) as an example. MADOC was developed by integrating existing models of humus development, acid-base exchange, and organic matter dissolution to answer a particular policy question: how do acidifying pollutants affect pH in humic soils? Including the negative feedback whereby an increase in pH reduces the solubility of organic acids improved the predictive accuracy for pH and dissolved organic carbon flux in the peats and organomineral soils that are widespread in upland Britain. The model has been used to generate the UK response to data requests under the UN Convention on Long-Range Transboundary Air Pollution. References: Funtowicz, S.O. & Ravetz, J.R., 1990. Uncertainty and Quality in Science for Policy. Kluwer. Rowe, E.C., et al. 2014. Environmental Pollution 184, 271-282. Sterk, B., et al. 2012. Environmental Modelling & Software 26, 310-316.

  4. MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models

    NASA Astrophysics Data System (ADS)

    Ullrich, P.; Jablonowski, C.

    2011-12-01

    The desire for increasingly accurate predictions of the atmosphere has driven numerical models to smaller and smaller resolutions, while simultaneously exponentially driving up the cost of existing numerical models. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while only requiring a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume based approach in developing an atmospheric dynamical core on the cubed-sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse at future work with our model.

  5. Parameter Estimation of Spacecraft Fuel Slosh Model

    NASA Technical Reports Server (NTRS)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam, Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops, defined by the Nutation Time Constant (NTC), can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes, and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs, and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allows for a more accurate simulation of fuel slosh. The current research focuses on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
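As a toy illustration of the mass-spring-damper analogy (not the authors' 3-D model; the oscillator and helper names are invented), the sketch below represents slosh as a single damped oscillator and recovers its damping ratio from the logarithmic decrement of successive response peaks, the kind of parameter the estimation step must fit:

```python
import math

# A 1-DOF mass-spring-damper analog: the free response
# x(t) = exp(-zeta*wn*t) * cos(wd*t) mimics damped liquid motion.
# The logarithmic decrement of two successive peaks recovers zeta.

def free_response(t, wn, zeta):
    wd = wn * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    return math.exp(-zeta * wn * t) * math.cos(wd * t)

def damping_from_peaks(x1, x2):
    # Logarithmic decrement between two successive peak amplitudes
    delta = math.log(x1 / x2)
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
```

Sampling the response one damped period apart and feeding the two peak amplitudes to `damping_from_peaks` returns the damping ratio used to generate the signal.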

  6. Development and validation of a regional coupled forecasting system for S2S forecasts

    NASA Astrophysics Data System (ADS)

    Sun, R.; Subramanian, A. C.; Hoteit, I.; Miller, A. J.; Ralph, M.; Cornuelle, B. D.

    2017-12-01

    Accurate and efficient forecasting of oceanic and atmospheric circulation is essential for a wide variety of high-impact societal needs, including: weather extremes; environmental protection and coastal management; management of fisheries; marine conservation; water resources; and renewable energy. Effective forecasting relies on high model fidelity and accurate initialization of the models with the observed state of the ocean-atmosphere-land coupled system. A regional coupled ocean-atmosphere model, with the Weather Research and Forecasting (WRF) model and the MITgcm ocean model coupled using the ESMF (Earth System Modeling Framework) coupling framework, is developed to resolve mesoscale air-sea feedbacks. The regional coupled model allows oceanic mixed-layer heat and momentum to interact with the atmospheric boundary-layer dynamics in the mesoscale and submesoscale spatiotemporal regimes, thus leading to feedbacks that are otherwise not resolved in coarse-resolution global coupled forecasting systems or regional uncoupled forecasting systems. The model is tested in two scenarios: the mesoscale-eddy-rich Red Sea and Western Indian Ocean region, and the mesoscale eddies and fronts of the California Current System. Recent studies show evidence for air-sea interactions involving the oceanic mesoscale in these two regions which can enhance predictability on subseasonal timescales. We will present results from this newly developed regional coupled ocean-atmosphere model for forecasts over the Red Sea region as well as the California Current region. The forecasts will be validated against in situ observations in the region as well as reanalysis fields.

  7. A narrow-band k-distribution model with single mixture gas assumption for radiative flows

    NASA Astrophysics Data System (ADS)

    Jo, Sung Min; Kim, Jae Won; Kwon, Oh Joon

    2018-06-01

    In the present study, the narrow-band k-distribution (NBK) model parameters for mixtures of H2O, CO2, and CO are proposed by utilizing line-by-line (LBL) calculations with a single mixture gas assumption. For the application of the NBK model to radiative flows, a radiative transfer equation (RTE) solver based on a finite-volume method on unstructured meshes was developed. The NBK model and the RTE solver were verified by solving two benchmark problems: the spectral radiance distribution emitted from one-dimensional slabs, and the radiative heat transfer in a truncated conical enclosure. Comparison with available data showed the results to be accurate and physically reliable. To examine the applicability of the methods to realistic multi-dimensional problems in non-isothermal and non-homogeneous conditions, radiation in an axisymmetric combustion chamber was analyzed, and the infrared signature emitted from an aircraft exhaust plume was then predicted. For modeling the plume flow involving radiative cooling, a flow-radiation coupled procedure was devised in a loosely coupled manner by adopting a Navier-Stokes flow solver based on unstructured meshes. The predicted radiative cooling for the combustion chamber was shown to be physically more accurate than other predictions, and as accurate as that given by the LBL calculations. The infrared signature of the aircraft exhaust plume was also obtained with accuracy equivalent to the LBL calculations, at much improved numerical efficiency, by using the present narrow-band approach.

  8. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe's Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available, and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.
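The role of the slope-limiting procedure can be sketched with the classic minmod limiter (a hypothetical stand-in; the paper's limiter operates on the DG modal coefficients rather than on plain cell averages):

```python
# Minmod slope limiter: keeps a second-order reconstruction monotone
# near steep gradients such as elastic jumps, by suppressing the slope
# at a new extremum.

def minmod(a, b):
    # Argument of smaller magnitude if signs agree, else zero.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    # Cell-wise limited slopes from one-sided differences of cell averages.
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]
```

At a discontinuity the one-sided differences disagree in sign or magnitude and the reconstructed slope drops to zero, while in smooth monotone regions the full second-order slope is retained.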

  9. On the dimension of complex responses in nonlinear structural vibrations

    NASA Astrophysics Data System (ADS)

    Wiebe, R.; Spottswood, S. M.

    2016-07-01

    The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling, which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm, as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimensions of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2 to 10 as the system response moved from simple, low-level harmonic motion to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide for developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension reduction in numerical modeling.
The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.

  10. Accurate prediction of energy expenditure using a shoe-based activity monitor.

    PubMed

    Sazonova, Nadezhda; Browning, Raymond C; Sazonov, Edward

    2011-07-01

    The aim of this study was to develop and validate a method for predicting energy expenditure (EE) using a footwear-based system with integrated accelerometer and pressure sensors. We developed a footwear-based device with an embedded accelerometer and insole pressure sensors for the prediction of EE. The data from the device can be used to perform accurate recognition of major postures and activities and to estimate EE using the acceleration, pressure, and posture/activity classification information in a branched algorithm without the need for individual calibration. We measured EE via indirect calorimetry as 16 adults (body mass index = 19-39 kg·m⁻²) performed various low- to moderate-intensity activities, and compared measured versus predicted EE using several models based on the acceleration and pressure signals. Inclusion of pressure data resulted in better accuracy of EE prediction during static postures such as sitting and standing. The activity-based branched model that included predictors from the accelerometer and pressure sensors (BACC-PS) achieved the lowest error (root mean squared error (RMSE) = 0.69 METs), compared with the accelerometer-only branched model BACC (RMSE = 0.77 METs) and nonbranched models (RMSE = 0.94-0.99 METs). Comparison of EE prediction models using data from both legs versus models using data from a single leg indicates that only one shoe needs to be equipped with sensors. These results suggest that foot acceleration combined with insole pressure measurement, when used in an activity-specific branched model, can accurately estimate the EE associated with common daily postures and activities. The accuracy and unobtrusiveness of a footwear-based device may make it an effective physical activity monitoring tool.
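The structure of such a branched model can be sketched as follows; the thresholds, coefficients, and feature names here are invented for illustration and are not the fitted BACC-PS values:

```python
# Toy "branched" EE estimator: classify posture/activity first (here
# from a mean insole pressure and an acceleration-variance feature),
# then apply a branch-specific linear model to predict METs.

def classify_posture(mean_pressure, accel_var):
    if accel_var > 0.5:                  # high movement -> ambulation
        return "walk"
    return "stand" if mean_pressure > 0.6 else "sit"

BRANCH_COEF = {                          # hypothetical (intercept, slope)
    "sit":   (1.0, 0.2),
    "stand": (1.3, 0.3),
    "walk":  (2.0, 1.5),
}

def predict_met(mean_pressure, accel_var):
    intercept, slope = BRANCH_COEF[classify_posture(mean_pressure, accel_var)]
    return intercept + slope * accel_var
```

The branching is what lets static postures (where pressure carries the information) and dynamic activities (where acceleration dominates) use different regressions.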

  11. Effective prediction of biodiversity in tidal flat habitats using an artificial neural network.

    PubMed

    Yoo, Jae-Won; Lee, Yong-Woo; Lee, Chang-Gun; Kim, Chang-Soo

    2013-02-01

    Accurate predictions of benthic macrofaunal biodiversity greatly benefit the efficient planning and management of habitat restoration efforts in tidal flat habitats. Artificial neural network (ANN) prediction models for such biodiversity were developed and tested based on 13 biophysical variables collected from 50 tidal flat sites along the coast of Korea during 1991-2006. The developed model showed high predictive accuracy during training, cross-validation, and testing. Beyond the training and testing procedures, an independent dataset from a different time period (2007-2010) was used to test the robustness and practical usage of the model. High accuracy on the independent dataset (r = 0.84) validated the network's proper learning of the predictive relationship and its generality. Key influential variables identified by follow-up sensitivity analyses were related to topographic dimension, environmental heterogeneity, and water column properties. This study demonstrates the successful application of ANNs for the accurate prediction of benthic macrofaunal biodiversity and for understanding the dynamics of candidate variables. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. MULTISCALE ADAPTIVE SMOOTHING MODELS FOR THE HEMODYNAMIC RESPONSE FUNCTION IN FMRI*

    PubMed Central

    Wang, Jiaping; Zhu, Hongtu; Fan, Jianqing; Giovanello, Kelly; Lin, Weili

    2012-01-01

    In event-related functional magnetic resonance imaging (fMRI) data analysis, there is extensive interest in accurately and robustly estimating the hemodynamic response function (HRF) and its associated statistics (e.g., the magnitude and duration of the activation). Most methods to date are developed in the time domain and have utilized almost exclusively the temporal information of fMRI data, without accounting for the spatial information. The aim of this paper is to develop a multiscale adaptive smoothing model (MASM) in the frequency domain by integrating the spatial and temporal information to adaptively and accurately estimate HRFs pertaining to each stimulus sequence across all voxels in a three-dimensional (3D) volume. We use two sets of simulation studies and a real data set to examine the finite sample performance of MASM in estimating HRFs. Our real and simulated data analyses confirm that MASM outperforms several other state-of-the-art methods, such as the smooth finite impulse response (sFIR) model. PMID:24533041

  13. A model-updating procedure to simulate piezoelectric transducers accurately.

    PubMed

    Piranda, B; Ballandras, S; Steichen, W; Hecart, B

    2001-09-01

    The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of the material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predicting the structure's response accurately. An improvement of the proposed approach, consisting of updating the material coefficients using not only the admittance but also the impedance data, is finally discussed.

  14. Suomi NPP VIIRS solar diffuser screen transmittance model and its applications.

    PubMed

    Lei, Ning; Xiong, Xiaoxiong; Mcintire, Jeff

    2017-11-01

    The Visible Infrared Imaging Radiometer Suite on the Suomi National Polar-orbiting Partnership satellite calibrates its reflective solar bands through observations of a sunlit solar diffuser (SD) panel. Sunlight passes through a perforated plate, referred to as the SD screen, before reaching the SD. It is critical to know whether the SD screen transmittance measured prelaunch is accurate. Several factors, such as misalignments of the SD panel and the measurement apparatus, could lead to errors in the measured transmittance and thus adversely impact on-orbit calibration quality through the SD. We develop a mathematical model to describe the transmittance as a function of the angles that incident light makes with the SD screen, and apply the model to fit the prelaunch measured transmittance. The results reveal that the model does not reproduce the measured transmittance unless the size of the apertures in the SD screen is quite different from the design value. We attribute the difference to orientation alignment errors of the SD panel and the measurement apparatus. We model the alignment errors and apply our transmittance model to fit the prelaunch transmittance and retrieve the "true" transmittance. To use this model correctly, we also examine the effect of the finite source size on the transmittance. Furthermore, we compare the product of the retrieved "true" transmittance and the prelaunch SD bidirectional reflectance distribution function (BRDF) value with the value derived from on-orbit data, to determine whether the prelaunch SD BRDF value is relatively accurate. The model is significant in that it can evaluate whether the SD screen transmittance measured prelaunch is accurate and help retrieve the true transmittance from measurements containing errors, resulting in a correspondingly more accurate sensor data product.

  15. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Adamo, Giuseppe, E-mail: giuseppe.dadamo@sissa.it; Pelissetto, Andrea, E-mail: andrea.pelissetto@roma1.infn.it; Pierleoni, Carlo, E-mail: carlo.pierleoni@aquila.infn.it

    2014-12-28

    A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid, and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.

  16. Solid waste forecasting using modified ANFIS modeling.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; K N A, Maulud

    2015-10-01

    Solid waste prediction is crucial for sustainable solid waste management. Accurate waste generation records are usually a challenge in developing countries, which complicates the modelling process. Solid waste generation is related to demographic, economic, and social factors. However, these factors are highly varied due to population and economic growth. The objective of this research is to determine the most influential demographic and economic factors that affect solid waste generation using a systematic approach, and then to develop a model to forecast solid waste generation using a modified adaptive neuro-fuzzy inference system (MANFIS). The model evaluation was performed using the Root Mean Square Error (RMSE), the Mean Absolute Error (MAE), and the coefficient of determination (R²). The results show that the best input variables are the population age groups 0-14, 15-64, and above 65 years, and that the best model structure is 3 triangular fuzzy membership functions and 27 fuzzy rules. The model has been validated using testing data: the training RMSE, MAE, and R² were 0.2678, 0.045, and 0.99, respectively, while for the testing phase RMSE = 3.986, MAE = 0.673, and R² = 0.98. To date, few attempts have been made to predict annual solid waste generation in developing countries. This paper presents modeling of annual solid waste generation using a modified ANFIS: a systematic approach is used to search for the most influential factors, and the ANFIS structure is then modified to simplify the model. The proposed method can be used to forecast waste generation in developing countries where accurate, reliable data are not always available. Moreover, annual solid waste prediction is essential for sustainable planning.
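The three evaluation metrics reported above can be computed in the standard way from paired observed/predicted series:

```python
import math

# Standard regression metrics: root mean square error, mean absolute
# error, and the coefficient of determination R^2.

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))   # residual sum of squares
    ss_tot = sum((o - mean) ** 2 for o in obs)              # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives RMSE = MAE = 0 and R² = 1; predicting the constant mean gives R² = 0.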

  17. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    NASA Astrophysics Data System (ADS)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components, with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly.
The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.
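The progressive-sampling idea described above can be sketched as follows, with a one-dimensional linear fit standing in for the neural networks (all names are illustrative, and the stopping rule is deliberately simplified):

```python
# Progressive-sampling ensemble sketch: fit models on growing data
# chunks, stop when held-out error no longer improves, and average
# the retained models' predictions.

def fit_line(xs, ys):
    # Ordinary least-squares line through (xs, ys): returns (intercept, slope).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return (my - slope * mx, slope)

def mse(model, xs, ys):
    a, b = model
    return sum((a + b * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def progressive_ensemble(xs, ys, val_x, val_y, start=4, tol=1e-6):
    models, chunk, best = [], start, float("inf")
    while chunk <= len(xs):
        model = fit_line(xs[:chunk], ys[:chunk])
        err = mse(model, val_x, val_y)
        models.append(model)
        if best - err < tol:             # validation error stopped improving
            break
        best, chunk = err, chunk * 2     # double the chunk size
    return models

def ensemble_predict(models, x):
    return sum(a + b * x for a, b in models) / len(models)
```

On exactly linear data the first chunk already fits perfectly, so the loop halts after one doubling and the averaged ensemble reproduces the line.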

  18. Automated adaptive inference of phenomenological dynamical models.

    PubMed

    Daniels, Bryan C; Nemenman, Ilya

    2015-08-21

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.

  19. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  20. IBS for non-gaussian distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fedotov, A.; Sidorin, A.O.; Smirnov, A.V.

    In many situations the distribution can deviate significantly from Gaussian, which requires an accurate treatment of IBS. Our original interest in this problem was motivated by the need to have an accurate description of beam evolution due to IBS while the distribution is strongly affected by the external electron cooling force. A variety of models with various degrees of approximation were developed and implemented in BETACOOL in the past to address this topic. A more complete treatment based on the friction coefficient and the full 3-D diffusion tensor was introduced in BETACOOL at the end of 2007 under the name 'local IBS model'. Such a model allowed us to calculate IBS for an arbitrary beam distribution. The numerical benchmarking of this local IBS algorithm and its comparison with other models was reported before. In this paper, after briefly describing the model and its limitations, we present its comparison with available experimental data.

  1. A hybrid intelligent method for three-dimensional short-term prediction of dissolved oxygen content in aquaculture.

    PubMed

    Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang

    2018-01-01

    A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reducing risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means, and subtractive clustering was developed, named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, the K-means and subtractive clustering methods were employed to determine the hyperparameters required by the RBF neural network model. Comparison with the predicted results of traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and for future studies.

  2. A new powerful parameterization tool for managing groundwater resources and predicting land subsidence in Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Nunes, V. D.; Burbey, T. J.; Borggaard, J.

    2012-12-01

    More than 1.5 m of subsidence has been observed in Las Vegas Valley since 1935 as a result of groundwater pumping that commenced in 1905 (Bell, 2002). The compaction of the aquifer system has led to several large subsidence bowls and deleterious earth fissures. The highly heterogeneous aquifer system, with its variably thick interbeds, makes predicting the magnitude and location of subsidence extremely difficult. Several numerical groundwater flow models of the Las Vegas basin have been developed previously; however, none of them have been able to accurately simulate the observed subsidence patterns or magnitudes because of inadequate parameterization. To better manage groundwater resources and predict future subsidence, we have updated and developed a more accurate groundwater management model for Las Vegas Valley by developing a new adjoint parameter estimation (APE) package that is used in conjunction with UCODE, along with MODFLOW and the SUB (subsidence) and HFB (horizontal flow barrier) packages. The APE package is used with UCODE to automatically identify suitable parameter zonations and to inversely calculate parameter values from hydraulic head and subsidence measurements, which are highly sensitive to both the elastic (Ske) and inelastic (Skv) storage coefficients. With the advent of InSAR (interferometric synthetic aperture radar), distributed spatial and temporal subsidence measurements can be obtained, which greatly enhance the accuracy of parameter estimation. This automated process can remove user bias and provide a far more accurate and robust parameter zonation distribution. The outcome of this work is a more accurate and powerful tool than any available to date for managing groundwater resources in Las Vegas Valley.

  3. Development of a new global radiation belt model

    NASA Astrophysics Data System (ADS)

    Sicard, Angelica; Boscher, Daniel; Bourdarie, Sébastien; Lazaro, Didier; Maget, Vincent; Ecoffet, Robert; Rolland, Guy; Standarovski, Denis

    2017-04-01

    The well-known AP8 and AE8 NASA models are commonly used in industry to specify the radiation belt environment. Unfortunately, there are some limitations in the use of these models, first due to the covered energy range, but also because in some regions of space there are discrepancies between the predicted average values and the measurements. Therefore, our aim is to develop a radiation belt model covering a large region of space and energy, from LEO altitudes to GEO and above, and from plasma to relativistic particles. The aim for the first version of this new model is to correct the AP8 and AE8 models where they are deficient or not defined. For the geostationary orbit, we developed the IGE-2006 electron model ten years ago, which was proven to be more accurate than AE8 and is commonly used in industry, covering a broad energy range from 1 keV to 5 MeV. Since then, a proton model for the geostationary orbit was also developed for material applications, followed by the OZONE model, covering a narrower energy range but the whole outer electron belt, and a SLOT model to assess average electron values for 2

  4. Modeling of profilometry with laser focus sensors

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner

    2011-05-01

    Metrology is of paramount importance in submicron patterning. Particularly, line width and overlay have to be measured very accurately. Appropriate metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate and enables optical cross sections of layer stacks but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit determining the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.

  5. Human eyeball model reconstruction and quantitative analysis.

    PubMed

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-Spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the high resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface respectively. The experimental results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.
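
    A Sphere Distance Deviation metric of the kind proposed can be illustrated with an algebraic sphere fit. The sketch below is illustrative only, not the authors' implementation: it fits a sphere to surface points by linear least squares and reports the RMS deviation of point-to-center distances from the fitted radius.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.
    |p|^2 = 2 c.p + (r^2 - |c|^2) is linear in the unknowns (c, k)."""
    p = np.asarray(points, float)
    A = np.column_stack([2.0 * p, np.ones(len(p))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

def sphere_distance_deviation(points, center, radius):
    """RMS deviation of point-to-center distances from the fitted radius."""
    d = np.linalg.norm(np.asarray(points, float) - center, axis=1)
    return float(np.sqrt(np.mean((d - radius) ** 2)))

# synthetic eyeball-like surface: points on a sphere of radius 12 mm
rng = np.random.default_rng(1)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, -2.0, 3.0]) + 12.0 * dirs
c, r = fit_sphere(pts)
dev = sphere_distance_deviation(pts, c, r)
```

    For a perfectly spherical surface the deviation is zero; a real eyeball's departure from its best-fit sphere is exactly what such a metric would quantify.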

  6. Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data

    NASA Astrophysics Data System (ADS)

    Singh, Vishwajit

    2016-04-01

    This paper describes the challenges in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Various approaches are employed to build an automated algorithm that picks interval velocities for 1000-5000 consecutive normal moveout (NMO) corrected gathers, replacing the interpreter's effort of manually picking the coherent reflections. The detailed steps and pitfalls of picking interval velocities from seismic reflection time measurements are described for these approaches. Key ingredients of these approaches at the velocity analysis stage are the semblance grid and a starting model of interval velocity. Basin-hopping optimization is employed to drive the misfit function toward local minima, and a SLiding-Overlapping Window (SLOW) algorithm is designed to mitigate the non-linearity and ill-posedness of the root-mean-square velocity inversion. Synthetic case studies assess the performance of the velocity picker, generating models that closely fit the semblance peaks. A similar linear relationship between average depth and reflection time for the synthetic and estimated models suggests that the picked interval velocities can serve as the starting model for full waveform inversion to recover a more accurate velocity structure of the subsurface. The remaining challenges can be categorized as (1) building an accurate starting model for recovering a more accurate velocity structure of the subsurface, and (2) improving the computational cost of the algorithm by pre-calculating the semblance grid to make auto-picking more feasible.
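
    The semblance-grid ingredient can be sketched in a few lines. The toy below (array sizes and the single spike event are invented for illustration, not taken from the paper) scans trial stacking velocities along NMO hyperbolae and scores each by semblance; the true velocity of a synthetic reflection maximizes the score.

```python
import numpy as np

def semblance_scan(gather, offsets, dt, t0, velocities, win=5):
    """Semblance along NMO hyperbolae t(x) = sqrt(t0^2 + (x/v)^2)
    for a set of trial stacking velocities (one zero-offset time t0)."""
    n_samples, n_traces = gather.shape
    scores = []
    for v in velocities:
        t = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
        idx = np.round(t / dt).astype(int)
        ok = idx < n_samples - win
        num = den = 0.0
        for k in range(win):  # small vertical window around the event
            amps = gather[idx[ok] + k, np.nonzero(ok)[0]]
            num += amps.sum() ** 2
            den += (amps ** 2).sum()
        scores.append(num / (n_traces * den + 1e-12))
    return np.array(scores)

# synthetic CMP gather: one hyperbolic reflection at v = 2000 m/s, t0 = 0.8 s
dt, n_samples = 0.004, 500
offsets = np.arange(10) * 100.0            # 0 .. 900 m
gather = np.zeros((n_samples, offsets.size))
true_t = np.sqrt(0.8 ** 2 + (offsets / 2000.0) ** 2)
gather[np.round(true_t / dt).astype(int), np.arange(offsets.size)] = 1.0

velocities = np.arange(1500.0, 2600.0, 100.0)
scores = semblance_scan(gather, offsets, dt, 0.8, velocities)
best_v = velocities[int(np.argmax(scores))]
```

    An automated picker repeats this scan over a grid of zero-offset times, then converts the picked stacking velocities to interval velocities (e.g. via the Dix equation).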

  7. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
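
    The idea of fitting a continuous-time transfer function to Fourier-transformed data can be illustrated with a plain equation-error least-squares fit. This is a simplified stand-in for the paper's orthogonal-function method, and the first-order model and its coefficients are invented for illustration.

```python
import numpy as np

# assumed true system (illustrative): G(s) = b0 / (s + a0), a0 = 2, b0 = 3
w = np.logspace(-1, 2, 200)            # frequency grid, rad/s
H = 3.0 / (1j * w + 2.0)               # "measured" frequency response

# equation error: H(jw)(jw + a0) = b0  =>  jw*H = b0 - a0*H,
# which is linear in the unknowns (b0, a0)
A = np.column_stack([np.ones_like(H), -H])
y = 1j * w * H
A_ri = np.vstack([A.real, A.imag])     # stack real/imag parts for a real LS
y_ri = np.concatenate([y.real, y.imag])
(b0, a0), *_ = np.linalg.lstsq(A_ri, y_ri, rcond=None)
```

    With noise-free data the estimates are exact; the orthogonal-function formulation of the paper additionally decouples the regressors, which improves conditioning and supports model structure determination on noisy data.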

  8. A Generalized Orthotropic Elasto-Plastic Material Model for Impact Analysis

    NASA Astrophysics Data System (ADS)

    Hoffarth, Canio

    Composite materials are now being used in applications hitherto reserved for metals: structural systems such as airframes and engine containment systems, wraps for repair and rehabilitation, and ballistic/blast mitigation systems. These structural systems are often subjected to impact loads and there is a pressing need for accurate prediction of deformation, damage and failure. There are numerous material models that have been developed to analyze the dynamic impact response of polymer matrix composites. However, there are key features missing in those models that prevent them from providing accurate predictive capabilities. In this dissertation, a general purpose orthotropic elasto-plastic computational constitutive material model has been developed to predict the response of composites subjected to high velocity impacts. The constitutive model is divided into three components - deformation model, damage model and failure model, with failure to be added at a later date. The deformation model generalizes the Tsai-Wu failure criterion and extends it using a strain-hardening-based orthotropic yield function with a non-associative flow rule. A strain equivalent formulation is utilized in the damage model that permits plastic and damage calculations to be uncoupled and captures the nonlinear unloading and local softening of the stress-strain response. A diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions.
The overall framework is driven by experimental tabulated temperature- and rate-dependent stress-strain data as well as data that characterize the damage matrix and failure. The developed theory has been implemented in a commercial explicit finite element analysis code, LS-DYNA(R), as MAT213. Several verification and validation tests using a commonly available carbon-fiber composite, Toyobo's T800/F3900, have been carried out, and the results show that the theory and implementation are efficient, robust and accurate.

  9. The utility of bathymetric echo sounding data in modelling benthic impacts using NewDEPOMOD driven by an FVCOM model.

    NASA Astrophysics Data System (ADS)

    Rochford, Meghan; Black, Kenneth; Aleynik, Dmitry; Carpenter, Trevor

    2017-04-01

    The Scottish Environmental Protection Agency (SEPA) is currently implementing new regulations for consenting developments at new and pre-existing fish farms. Currently, a 15-day current record from multiple depths at one location near the site is required to run DEPOMOD, a depositional model used to determine the depositional footprint of waste material from fish farms, developed by Cromey et al. (2002). The present project involves modifying DEPOMOD to accept data from 3D hydrodynamic models to allow for a more accurate representation of the currents around the farms. Bathymetric data are key boundary conditions for accurate modelling of current velocity data. The aim of the project is to create a script that will use the outputs from FVCOM, a 3D hydrodynamic model developed by Chen et al. (2003), and input them into NewDEPOMOD (a new version of DEPOMOD with more accurately parameterised sediment transport processes) to determine the effect of a fish farm on the surrounding environment. This study compares current velocity data under two scenarios: the first using interpolated bathymetric data, and the second using bathymetric data collected during an echo sounding survey of the site. Theoretically, if the hydrodynamic model is of high enough resolution, the two scenarios should yield relatively similar results. However, the expected result is that the survey data will be of much higher resolution and therefore of better quality, producing more realistic velocity results. The improvement of bathymetric data will also improve sediment transport predictions in NewDEPOMOD. This work will determine the sensitivity of model predictions to bathymetric data accuracy at a range of sites with varying bathymetric complexity and thus give information on the potential costs and benefits of echo sounding survey data inputs. Chen, C., Liu, H. and Beardsley, R.C., 2003. 
An unstructured grid, finite-volume, three-dimensional, primitive equations ocean model: application to coastal ocean and estuaries. Journal of atmospheric and oceanic technology, 20(1), pp.159-186. Cromey, C.J., Nickell, T.D. and Black, K.D., 2002. DEPOMOD—modelling the deposition and biological effects of waste solids from marine cage farms. Aquaculture, 214(1), pp.211-239.

  10. Testing DRAINMOD-FOREST for predicting evapotranspiration in a mid-rotation pine plantation

    Treesearch

    Shiying Tian; Mohamed A. Youssef; Ge Sun; George M. Chescheir; Asko Noormets; Devendra M. Amatya; R. Wayne Skaggs; John S. King; Steve McNulty; Michael Gavazzi; Guofang Miao; Jean-Christophe Domec

    2015-01-01

    Evapotranspiration (ET) is a key component of the hydrologic cycle in terrestrial ecosystems and accurate description of ET processes is essential for developing reliable ecohydrological models. This study investigated the accuracy of ET prediction by the DRAINMOD-FOREST after its calibration/validation for predicting commonly measured hydrological variables. The model...

  11. Sampling and modeling riparian forest structure and riparian microclimate

    Treesearch

    Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam Temesgen

    2013-01-01

    Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...

  12. Demonstrating the Relationship between School Nurse Workload and Student Outcomes

    ERIC Educational Resources Information Center

    Daughtry, Donna; Engelke, Martha Keehner

    2018-01-01

    This article describes how one very large, diverse school district developed a Student Acuity Tool for School Nurse Assignment and used a logic model to successfully advocate for additional school nurse positions. The logic model included three student outcomes that were evaluated: provide medications and procedures safely and accurately, increase…

  13. Perceptual Veridicality in Esthetic Communication: A Model, General Procedure, and Illustration.

    ERIC Educational Resources Information Center

    Holbrook, Morris B.; Bertges, Stephen A.

    1981-01-01

    Developed a model and tested the following hypotheses: (1) esthetic features--tempo, rhythm, dynamics, and phrasing in piano performance--are accurately perceived by audience members and (2) such perceptual veridicality does not depend upon one's degree of education/training and is therefore shared by critics and audience members. (PD)

  14. A Conceptual Model of the World of Work.

    ERIC Educational Resources Information Center

    VanRooy, William H.

    The conceptual model described in this paper resulted from the need to organize a body of knowledge related to the world of work which would enable curriculum developers to prepare accurate, realistic instructional materials. The world of work is described by applying Malinowski's scientific study of the structural components of culture. It is…

  15. Rocket Fuel R and D at AFRL: Recent Activities and Future Direction

    DTIC Science & Technology

    2017-04-12

    Briefing excerpts (Clearance Number 17163): rocket cycles and environments (SpaceX Merlin 1D, 190 klbf; Russian RD-180, 860 klbf; gas generator cycle; ox-rich staged combustion); affordability and reusability; modeling and simulation as a key to development, requiring accurate models ("CFD simulations... shorten the test-fail-fix loop" - SpaceX).

  16. Organoids as Models for Neoplastic Transformation | Office of Cancer Genomics

    Cancer.gov

    Cancer models strive to recapitulate the incredible diversity inherent in human tumors. A key challenge in accurate tumor modeling lies in capturing the panoply of homo- and heterotypic cellular interactions within the context of a three-dimensional tissue microenvironment. To address this challenge, researchers have developed organotypic cancer models (organoids) that combine the 3D architecture of in vivo tissues with the experimental facility of 2D cell lines.

  17. Predicting the Dynamic Crushing Response of a Composite Honeycomb Energy Absorber Using Solid-Element-Based Models in LS-DYNA

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.

    2010-01-01

    This paper describes an analytical study that was performed as part of the development of an externally deployable energy absorber (DEA) concept. The concept consists of a composite honeycomb structure that can be stowed until needed to provide energy attenuation during a crash event, much like an external airbag system. One goal of the DEA development project was to generate a robust and reliable Finite Element Model (FEM) of the DEA that could be used to accurately predict its crush response under dynamic loading. The results of dynamic crush tests of 50-, 104-, and 68-cell DEA components are presented, and compared with simulation results from a solid-element FEM. Simulations of the FEM were performed in LS-DYNA(R) to compare the capabilities of three different material models: MAT 63 (crushable foam), MAT 26 (honeycomb), and MAT 126 (modified honeycomb). These material models are evaluated to determine if they can be used to accurately predict both the uniform crushing and final compaction phases of the DEA for normal and off-axis loading conditions.

  18. Decision-Tree Analysis for Predicting First-Time Pass/Fail Rates for the NCLEX-RN® in Associate Degree Nursing Students.

    PubMed

    Chen, Hsiu-Chin; Bennett, Sean

    2016-08-01

    Little evidence shows the use of decision-tree algorithms in identifying predictors and analyzing their associations with pass rates for the NCLEX-RN® in associate degree nursing students. This longitudinal and retrospective cohort study investigated whether a decision-tree algorithm could be used to develop an accurate prediction model for the students' passing or failing the NCLEX-RN. This study used archived data from 453 associate degree nursing students in a selected program. The chi-squared automatic interaction detection analysis of the decision trees module was used to examine the effect of the collected predictors on passing/failing the NCLEX-RN. The actual percentage scores of Assessment Technologies Institute®'s RN Comprehensive Predictor® accurately identified students at risk of failing. The classification model correctly classified 92.7% of the students for passing. This study applied the decision-tree model to analyze a sequence database for developing a prediction model for early remediation in preparation for the NCLEX-RN. [J Nurs Educ. 2016;55(8):454-457.]. Copyright 2016, SLACK Incorporated.
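
    The root split of a chi-squared automatic interaction detection (CHAID) style tree can be sketched as a threshold search that maximizes the chi-squared statistic of the resulting 2x2 pass/fail table. The scores and outcomes below are invented for illustration; this is not the study's data or its full algorithm.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 contingency table
    [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if denom == 0 else n * (a * d - b * c) ** 2 / denom

def best_split(scores, passed):
    """Find the score threshold whose pass/fail split maximizes chi-squared,
    as a CHAID-style decision tree would at its root node."""
    pairs = sorted(zip(scores, passed))
    best = (0.0, None)
    for (s1, _), (s2, _) in zip(pairs, pairs[1:]):
        if s1 == s2:
            continue
        thr = (s1 + s2) / 2.0
        a = sum(1 for s, p in pairs if s < thr and not p)   # low, fail
        b = sum(1 for s, p in pairs if s < thr and p)       # low, pass
        c = sum(1 for s, p in pairs if s >= thr and not p)  # high, fail
        d = sum(1 for s, p in pairs if s >= thr and p)      # high, pass
        stat = chi2_2x2(a, b, c, d)
        if stat > best[0]:
            best = (stat, thr)
    return best  # (chi-squared, threshold)

# invented predictor scores (e.g. comprehensive-exam percentages) and outcomes
scores = [60, 62, 65, 68, 72, 75, 78, 80, 85, 90]
passed = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
stat, thr = best_split(scores, passed)
```

    A full CHAID implementation recurses on each branch and applies a significance test to stop splitting; the single split above shows the core selection criterion.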

  19. A Comprehensive X-Ray Absorption Model for Atomic Oxygen

    NASA Technical Reports Server (NTRS)

    Gorczyca, T. W.; Bautista, M. A.; Hasoglu, M. F.; Garcia, J.; Gatuzz, E.; Kaastra, J. S.; Kallman, T. R.; Manson, S. T.; Mendoza, C.; Raassen, A. J. J.; hide

    2013-01-01

    An analytical formula is developed to accurately represent the photoabsorption cross section of atomic Oxygen for all energies of interest in X-ray spectral modeling. In the vicinity of the K edge, a Rydberg series expression is used to fit R-matrix results, including important orbital relaxation effects, that accurately predict the absorption oscillator strengths below threshold and merge consistently and continuously to the above-threshold cross section. Further, minor adjustments are made to the threshold energies in order to reliably align the atomic Rydberg resonances after consideration of both experimental and observed line positions. At energies far below or above the K-edge region, the formulation is based on both outer- and inner-shell direct photoionization, including significant shake-up and shake-off processes that result in photoionization-excitation and double-photoionization contributions to the total cross section. The ultimate purpose for developing a definitive model for oxygen absorption is to resolve standing discrepancies between the astronomically observed and laboratory-measured line positions, and between the inferred atomic and molecular oxygen abundances in the interstellar medium from XSTAR and SPEX spectral models.

  20. Mathematical modeling and experimental testing of three bioreactor configurations based on windkessel models

    PubMed Central

    Ruel, Jean; Lachance, Geneviève

    2010-01-01

    This paper presents an experimental study of three bioreactor configurations. The bioreactor is intended to be used for the development of tissue-engineered heart valve substitutes. Therefore it must be able to reproduce physiological flow and pressure waveforms accurately. A detailed analysis of three bioreactor arrangements is presented using mathematical models based on the windkessel (WK) approach. First, a review of the many applications of this approach in medical studies enhances its fundamental nature and its usefulness. Then the models are developed with reference to the actual components of the bioreactor. This study emphasizes different conflicting issues arising in the design process of a bioreactor for biomedical purposes, where an optimization process is essential to reach a compromise satisfying all conditions. Two important aspects are the need for a simple system providing ease of use and long-term sterility, opposed to the need for an advanced (thus more complex) architecture capable of a more accurate reproduction of the physiological environment. Three classic WK architectures are analyzed, and experimental results enhance the advantages and limitations of each one. PMID:21977286
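
    The simplest member of the windkessel family reduces to a single ODE, C dP/dt = Q(t) - P/R, whose mean pressure settles to R times the mean flow. The sketch below is a minimal Euler integration with illustrative parameter values, not the bioreactor's calibrated three-architecture models.

```python
import math

def windkessel2(Q, R=1.0, C=1.5, dt=1e-3, t_end=20.0, P0=0.0):
    """Forward-Euler integration of the 2-element windkessel:
    C dP/dt = Q(t) - P/R  (P in mmHg, Q in mL/s, R in mmHg*s/mL)."""
    P, t, trace = P0, 0.0, []
    while t < t_end:
        P += dt * (Q(t) - P / R) / C
        t += dt
        trace.append((t, P))
    return trace

# illustrative pulsatile inflow with mean 90 mL/s and 1 Hz beat
Q = lambda t: 90.0 * (1.0 + math.sin(2.0 * math.pi * t))
trace = windkessel2(Q)

# after the RC transient, the mean pressure approaches R * mean(Q) = 90 mmHg
last = [p for (t, p) in trace if t > 19.0]
mean_P = sum(last) / len(last)
```

    The three architectures analyzed in the paper add elements (e.g. a characteristic impedance or inertance) to this core balance, trading simplicity and sterility for waveform fidelity.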

  1. Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive

    NASA Technical Reports Server (NTRS)

    Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)

    2001-01-01

    Recently, a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion that is used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component. For many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed. These models have ranged from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. This paper shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
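
    A linear damage-accumulation model of this kind is commonly written as Miner's rule, D = sum(t_i / t_f(sigma_i)), with failure predicted at D >= 1. The sketch below uses an invented power-law time-to-failure curve; the paper's models were fit to measured short-term failure data instead.

```python
def time_to_failure(stress, A=1.0e6, n=4.0):
    """Invented power-law creep-failure curve: t_f = A * stress**(-n) hours."""
    return A * stress ** (-n)

def cumulative_damage(history):
    """Miner's rule: sum duration / time-to-failure over load segments.
    history: list of (stress, duration_hours) pairs. Failure when D >= 1."""
    return sum(dt / time_to_failure(s) for s, dt in history)

# constant stress of 10 units gives t_f = 1e6 / 10**4 = 100 h,
# so 50 h at that stress consumes half the life
D_half = cumulative_damage([(10.0, 50.0)])
# adding 2 h at stress 20 (t_f = 6.25 h) consumes a further 0.32
D_mixed = cumulative_damage([(10.0, 50.0), (20.0, 2.0)])
```

    Because the rule is linear, damage fractions from short-term tests at several stress levels add directly, which is what allows long-term predictions from short-term data.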

  2. WILLIAMSBURG BROOKLYN ASTHMA AND ENVIRONMENT CONSORTIUM

    EPA Science Inventory

    The Consortium expects to develop a family health promotion model in which organized residents have access to easily understood, scientifically accurate, community-specific information about their health, their environment, and the relationship between the two,...

  3. High-Temperature Cast Aluminum for Efficient Engines

    NASA Astrophysics Data System (ADS)

    Bobel, Andrew C.

    Accurate thermodynamic databases are the foundation of predictive microstructure and property models. An initial assessment of the commercially available Thermo-Calc TCAL2 database and the proprietary aluminum database of QuesTek demonstrated a large degree of deviation with respect to equilibrium precipitate phase prediction in the compositional region of interest when compared to 3-D atom probe tomography (3DAPT) and transmission electron microscopy (TEM) experimental results. New compositional measurements of the Q-phase (Al-Cu-Mg-Si phase) led to a remodeling of the Q-phase thermodynamic description in the CALPHAD databases which has produced significant improvements in the phase prediction capabilities of the thermodynamic model. Due to the unique morphologies of strengthening precipitate phases commonly utilized in high-strength cast aluminum alloys, the development of new microstructural evolution models to describe both rod and plate particle growth was critical for accurate mechanistic strength models which rely heavily on precipitate size and shape. Particle size measurements through both 3DAPT and TEM experiments were used in conjunction with literature results of many alloy compositions to develop a physical growth model for the independent prediction of rod radii and rod length evolution. In addition a machine learning (ML) model was developed for the independent prediction of plate thickness and plate diameter evolution as a function of alloy composition, aging temperature, and aging time. The developed models are then compared with physical growth laws developed for spheres and modified for ellipsoidal morphology effects. Analysis of the effect of particle morphology on strength enhancement has been undertaken by modification of the Orowan-Ashby equation for 〈110〉 alpha-Al oriented finite rods in addition to an appropriate version for similarly oriented plates. 
A mechanistic strengthening model was developed for cast aluminum alloys containing both rod and plate-like precipitates. The model accurately accounts for the temperature dependence of particle nucleation and growth, solid solution strengthening, Si eutectic strength, and base aluminum yield strength. Strengthening model predictions of tensile yield strength are in excellent agreement with experimental observations over a wide range of aluminum alloy systems, aging temperatures, and test conditions. The developed models enable the prediction of the required particle morphology and volume fraction necessary to achieve target property goals in the design of future aluminum alloys. The effect of partitioning elements to the Q-phase was also considered for the potential to control the nucleation rate, reduce coarsening, and control the evolution of particle morphology. Elements were selected based on density functional theory (DFT) calculations showing the prevalence of certain elements to partition to the Q-phase. 3DAPT experiments were performed on Q-phase containing wrought alloys with these additions and show segregation of certain elements to the Q-phase with relative agreement to DFT predictions.

  4. Construction and completion of flux balance models from pathway databases.

    PubMed

    Latendresse, Mario; Krummenacker, Markus; Trupp, Miles; Karp, Peter D

    2012-02-01

    Flux balance analysis (FBA) is a well-known technique for genome-scale modeling of metabolic flux. Typically, an FBA formulation requires the accurate specification of four sets: biochemical reactions, biomass metabolites, nutrients and secreted metabolites. The development of FBA models can be time consuming and tedious because of the difficulty in assembling completely accurate descriptions of these sets, and in identifying errors in the composition of these sets. For example, the presence of a single non-producible metabolite in the biomass will make the entire model infeasible. Other difficulties in FBA modeling are that model distributions, and predicted fluxes, can be cryptic and difficult to understand. We present a multiple gap-filling method to accelerate the development of FBA models using a new tool, called MetaFlux, based on mixed integer linear programming (MILP). The method suggests corrections to the sets of reactions, biomass metabolites, nutrients and secretions. The method generates FBA models directly from Pathway/Genome Databases. Thus, FBA models developed in this framework are easily queried and visualized using the Pathway Tools software. Predicted fluxes are more easily comprehended by visualizing them on diagrams of individual metabolic pathways or of metabolic maps. MetaFlux can also remove redundant high-flux loops, solve FBA models once they are generated and model the effects of gene knockouts. MetaFlux has been validated through construction of FBA models for Escherichia coli and Homo sapiens. Pathway Tools with MetaFlux is freely available to academic users, and for a fee to commercial users. Download from: biocyc.org/download.shtml. mario.latendresse@sri.com Supplementary data are available at Bioinformatics online.
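
    The core FBA computation is a linear program: maximize a flux objective c.v subject to steady state S v = 0 and flux bounds. The toy three-reaction network below is invented for illustration (it is not from MetaFlux) and is solved with SciPy's linprog, assuming SciPy is available.

```python
import numpy as np
from scipy.optimize import linprog

# toy network: v1 (uptake -> A), v2 (A -> B), v3 (B -> biomass)
# stoichiometric matrix S: rows are metabolites A, B; columns are v1..v3
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0.0, 10.0),     # nutrient uptake capped at 10 units
          (0.0, 1000.0),
          (0.0, 1000.0)]
c = [0.0, 0.0, -1.0]       # linprog minimizes, so negate the biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
fluxes = res.x             # optimal steady-state flux distribution
```

    An infeasible LP here is the analogue of a non-producible biomass metabolite in a real model; MetaFlux's MILP gap-filling extends this formulation with binary variables that propose additions to the reaction, nutrient and secretion sets.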

  5. Development and application of an acceptance testing model

    NASA Technical Reports Server (NTRS)

    Pendley, Rex D.; Noonan, Caroline H.; Hall, Kenneth R.

    1992-01-01

    The process of acceptance testing large software systems for NASA has been analyzed, and an empirical planning model of the process constructed. This model gives managers accurate predictions of the staffing needed, the productivity of a test team, and the rate at which the system will pass. Applying the model to a new system shows a high level of agreement between the model and actual performance. The model also gives managers an objective measure of process improvement.

  6. Non-integer viscoelastic constitutive law to model soft biological tissues to in-vivo indentation.

    PubMed

    Demirci, Nagehan; Tönük, Ergin

    2014-01-01

    During the last decades, derivatives and integrals of non-integer order have become more commonly used to describe the constitutive behavior of various viscoelastic materials, including soft biological tissues. Compared to integer order constitutive relations, non-integer order viscoelastic material models of soft biological tissues are capable of capturing a wider range of viscoelastic behavior obtained from experiments. Although integer order models may yield comparably accurate results, non-integer order material models have fewer parameters to be identified and, in addition, describe an intermediate material that can be monotonically and continuously adjusted between an ideal elastic solid and an ideal viscous fluid. In this work, starting with some preliminaries on non-integer (fractional) calculus, the "spring-pot" (an intermediate mechanical element between a solid and a fluid) and the non-integer order three element (Zener) solid model are introduced; finally, a user-defined large strain non-integer order viscoelastic constitutive model was constructed to be used in finite element simulations. Using the constitutive equation developed, soft tissue material identification was performed by means of the inverse finite element method and in vivo indentation experiments. The results indicate that material coefficients obtained from relaxation experiments, when optimized with creep experimental data, could simulate relaxation, creep and cyclic loading and unloading experiments accurately. Non-integer calculus viscoelastic constitutive models, which have a physical interpretation and model experimental data accurately, are a good alternative to classical phenomenological viscoelastic constitutive equations.
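
    In common textbook notation (generic symbols, not necessarily the paper's exact parameterization), the spring-pot and the fractional Zener solid can be written as:

```latex
% spring-pot with Caputo fractional derivative of order 0 <= alpha <= 1
\sigma(t) = E\,\tau^{\alpha}\,\frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}},
\qquad
\frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}}
  = \frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}
    \frac{\dot{\varepsilon}(s)}{(t-s)^{\alpha}}\,ds

% fractional Zener (three-element) solid
\sigma(t) + \tau_{\sigma}^{\alpha}\,\frac{d^{\alpha}\sigma(t)}{dt^{\alpha}}
  = E_{0}\left[\varepsilon(t)
    + \tau_{\varepsilon}^{\alpha}\,\frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}}\right]
```

    Setting alpha = 0 recovers a purely elastic element and alpha = 1 a Newtonian dashpot, which is the monotone solid-to-fluid interpolation the abstract describes.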

  7. Modeling apple surface temperature dynamics based on weather data.

    PubMed

    Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng

    2014-10-27

    The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00-18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of "Fuji" apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management.
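
    A steady-state energy balance of the kind described, absorbed shortwave equal to convective plus longwave-radiative losses, can be solved for the surface temperature by bisection. All coefficients below are illustrative assumptions, not the paper's calibrated values, and latent heat exchange is neglected.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def fruit_surface_temp(S, Ta, absorptivity=0.5, h=15.0, emissivity=0.95):
    """Solve absorptivity*S = h*(Ts - Ta) + emissivity*SIGMA*(Ts^4 - Ta^4)
    for the surface temperature Ts [K] by bisection."""
    def residual(Ts):
        return (absorptivity * S
                - h * (Ts - Ta)
                - emissivity * SIGMA * (Ts ** 4 - Ta ** 4))
    lo, hi = Ta, Ta + 60.0
    for _ in range(60):            # residual is monotone decreasing in Ts
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# full sun (S = 800 W/m^2) with 30 C air: the sunlit surface runs well
# above air temperature, which is why air temperature alone underestimates FST
Ts = fruit_surface_temp(S=800.0, Ta=303.15)
```

    With these assumed coefficients the surface sits roughly 18-19 K above the air, illustrating why an energy-balance FST estimate differs substantially from the air temperature alone.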

  8. Modeling Apple Surface Temperature Dynamics Based on Weather Data

    PubMed Central

    Li, Lei; Peters, Troy; Zhang, Qin; Zhang, Jingjin; Huang, Danfeng

    2014-01-01

    The exposure of fruit surfaces to direct sunlight during the summer months can result in sunburn damage. Losses due to sunburn damage are a major economic problem when marketing fresh apples. The objective of this study was to develop and validate a model for simulating fruit surface temperature (FST) dynamics based on energy balance and measured weather data. A series of weather data (air temperature, humidity, solar radiation, and wind speed) was recorded for seven hours between 11:00–18:00 for two months at fifteen minute intervals. To validate the model, the FSTs of “Fuji” apples were monitored using an infrared camera in a natural orchard environment. The FST dynamics were measured using a series of thermal images. For the apples that were completely exposed to the sun, the RMSE of the model for estimating FST was less than 2.0 °C. A sensitivity analysis of the emissivity of the apple surface and the conductance of the fruit surface to water vapour showed that accurate estimations of the apple surface emissivity were important for the model. The validation results showed that the model was capable of accurately describing the thermal performances of apples under different solar radiation intensities. Thus, this model could be used to more accurately estimate the FST relative to estimates that only consider the air temperature. In addition, this model provides useful information for sunburn protection management. PMID:25350507

  9. Forecasting of municipal solid waste quantity in a developing country using multivariate grey models.

    PubMed

    Intharathirat, Rotchana; Abdul Salam, P; Kumar, S; Untong, Akarapong

    2015-05-01

    In order to plan, manage and use municipal solid waste (MSW) in a sustainable way, accurate forecasting of MSW generation and composition plays a key role. It is difficult to obtain reliable estimates with existing models because of the limited data available in developing countries. This study aims to forecast the MSW collected in Thailand, with prediction intervals, over the long term by using an optimized multivariate grey model, a mathematical approach. For the multivariate models, the representative factors of the residential and commercial sectors affecting waste collected are identified, classified and quantified based on the statistics and mathematics of grey system theory. Results show that GMC (1, 5), the grey model with convolution integral, is the most accurate, with the least error of 1.16% MAPE. MSW collected would increase by 1.40% per year, from 43,435-44,994 tonnes per day in 2013 to 55,177-56,735 tonnes per day in 2030. The model also illustrates that population density is the most important factor affecting MSW collected, followed by urbanization, employment proportion and household size, respectively. This suggests that the representative factors of the commercial sector may affect MSW collected more than those of the residential sector. The results can help decision makers develop long-term waste management measures and policies. Copyright © 2015 Elsevier Ltd. All rights reserved.
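    The study's GMC (1, 5) is a multivariate grey model with a convolution integral. As a hedged illustration of the grey-modeling idea only, the sketch below implements the simplest member of the family, the univariate GM(1,1), which fits an exponential trend to the accumulated (cumulative-sum) series:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to sequence x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # mean sequence of x1
    # Least-squares estimate of development coefficient a and grey input b:
    #   x0[k] = -a * z1[k] + b
    B = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(B, x0[1:], rcond=None)
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)       # restore the original scale
    return x0_hat[n:]                           # out-of-sample forecasts
```

On a near-geometric series the recovered trend closely tracks the underlying growth rate, which is why grey models suit short, smooth records like annual waste totals.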

  10. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  11. A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions

    NASA Astrophysics Data System (ADS)

    Hagerty, S.; Ellis, H., Jr.

    2016-09-01

    Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution resulting in accurate radiometry even when the RSO is a point source. 
The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and 4-layer atmospheric turbulence model for ground based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of 3-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.
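    As a hedged sketch of the kind of sensor chain described (not PROXOR™'s implementation), the following applies a PSF blur, shot noise, read noise, and quantization to a radiance map; the parameter values are illustrative assumptions:

```python
import numpy as np

def sense_image(radiance, psf, read_noise=5.0, full_well=50000, bits=12, seed=0):
    """Simplified sensor chain: PSF blur (FFT convolution), Poisson shot
    noise, Gaussian read noise, and quantization to digital counts.
    `psf` must be the same shape as `radiance`, centered at index [0, 0]."""
    rng = np.random.default_rng(seed)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(radiance) * np.fft.fft2(psf)))
    electrons = rng.poisson(np.clip(blurred, 0, full_well))     # shot noise
    electrons = electrons + rng.normal(0, read_noise, electrons.shape)
    adu = np.clip(np.round(electrons / full_well * (2**bits - 1)),
                  0, 2**bits - 1)                               # quantization
    return adu
```

Each stage mirrors one item in the abstract's list of optical, detector, and noise effects, though the real tool models many more (blooming, fixed pattern noise, persistence, atmospheric turbulence).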

  12. Electricity Markets, Smart Grids and Smart Buildings

    NASA Astrophysics Data System (ADS)

    Falcey, Jonathan M.

    A smart grid is an electricity network that accommodates two-way power flows, and utilizes two-way communications and increased measurement, in order to provide more information to customers and aid in the development of a more efficient electricity market. The current electrical network is outdated and has many shortcomings relating to power flows, inefficient electricity markets, generation/supply balance, a lack of information for the consumer and insufficient consumer interaction with electricity markets. Many of these challenges can be addressed with a smart grid, but there remain significant barriers to its implementation. This paper proposes a novel method for the development of a smart grid using a bottom-up approach (starting with smart buildings/campuses), with the goal of providing the framework and infrastructure necessary for a smart grid, instead of the more traditional approach (installing many smart meters and hoping a smart grid emerges). The approach combines deterministic and statistical methods in order to accurately estimate building electricity use down to the device level. It provides model users with a cheaper alternative to energy audits and extensive sensor networks (the current methods of quantifying electrical use at this level), which increases their ability to modify energy consumption and respond to price signals. The results of this method are promising, but they are still preliminary, so there is still room for improvement. On days with no missing or inaccurate data, this approach has an R² of about 0.84, sometimes as high as 0.94, when compared to measured results. However, on many days missing data brought overall accuracy down significantly. In addition, the development and implementation of the calibration process is still underway, and some functional additions must be made in order to maximize accuracy.
The calibration process must be completed before a reliable accuracy can be determined. While this work shows that a combination of deterministic and statistical methods can accurately forecast building energy usage, the ability to produce accurate results is heavily dependent upon software availability, accurate data and proper calibration of the model. Creating the software required for a smart building model is time consuming and expensive. Bad or missing data have significant negative impacts on the accuracy of the results and can be caused by a hodgepodge of equipment and communication protocols. Proper calibration of the model is essential to ensure that the device-level estimates are sufficiently accurate. Any building model which is to succeed at creating a smart building must be able to overcome these challenges.

  13. Optimization and real-time control for laser treatment of heterogeneous soft tissues.

    PubMed

    Feng, Yusheng; Fuentes, David; Hawkins, Andrea; Bass, Jon M; Rylander, Marissa Nichole

    2009-01-01

    Predicting the outcome of thermotherapies in cancer treatment requires an accurate characterization of the bioheat transfer processes in soft tissues. Due to the biological and structural complexity of tumor (soft tissue) composition and vasculature, it is often very difficult to obtain reliable tissue properties, which are among the key factors for accurate prediction of treatment outcome. Efficient algorithms that employ in vivo thermal measurements to determine heterogeneous thermal tissue properties, in conjunction with a detailed sensitivity analysis, can produce essential information for model development and optimal control. The goals of this paper are to present a general formulation of the bioheat transfer equation for heterogeneous soft tissues; review models and algorithms developed for cell damage, heat shock proteins, and soft tissues with nanoparticle inclusions; and demonstrate an overall computational strategy for developing a laser treatment framework with the ability to perform real-time robust calibration and optimal control. This computational strategy can be applied to other thermotherapies that use heat sources such as radio frequency or high-intensity focused ultrasound.
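    The general formulation the authors build on is the Pennes bioheat equation; a rough 1-D explicit finite-difference step, with assumed (not paper-specific) tissue properties, looks like this:

```python
import numpy as np

def pennes_step(T, dx, dt, k=0.5, rho_c=4e6, w_b_c_b=2e4, t_art=37.0, q=None):
    """One explicit finite-difference step of the 1-D Pennes bioheat equation:
        rho*c dT/dt = k d2T/dx2 + w_b*c_b (T_arterial - T) + Q_source
    Parameter values are illustrative; `q` is an optional heat source array."""
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    lap[0] = lap[-1] = 0.0                    # crude no-flux ends
    src = np.zeros_like(T) if q is None else q
    return T + dt / rho_c * (k * lap + w_b_c_b * (t_art - T) + src)
```

Tissue at arterial temperature with no source is a fixed point of this update, while a localized laser-like source term raises the temperature where it is applied.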

  14. A forecasting model for dengue incidence in the District of Gampaha, Sri Lanka.

    PubMed

    Withanage, Gayan P; Viswakula, Sameera D; Nilmini Silva Gunawardena, Y I; Hapugoda, Menaka D

    2018-04-24

    Dengue is one of the major health problems in Sri Lanka, causing an enormous social and economic burden to the country. An accurate early warning system can enhance the efficiency of preventive measures. The aim of the study was to develop and validate a simple, accurate forecasting model for the District of Gampaha, Sri Lanka. Three time-series regression models were developed using monthly rainfall, rainy days, temperature, humidity, wind speed and retrospective dengue incidence over the period January 2012 to November 2015 for the District of Gampaha, Sri Lanka. Various lag times were analyzed to identify optimum forecasting periods, including interactions of multiple lags. The models were validated using epidemiological data from December 2015 to November 2017, and compared based on Akaike's information criterion, Bayesian information criterion and residual analysis. The selected model forecasted correctly with mean absolute errors of 0.07 and 0.22, and root mean squared errors of 0.09 and 0.28, for the training and validation periods, respectively. There were no dengue epidemics observed in the district during the training period, and nine outbreaks occurred during the forecasting period. The proposed model captured five outbreaks and correctly rejected 14 within the testing period of 24 months. The Pierce skill score of the model was 0.49, with a receiver operating characteristic of 86% and 92% sensitivity. The developed weather-based forecasting model provides warning of impending dengue outbreaks and epidemics one month in advance with high accuracy. Depending upon climatic factors, the previous month's dengue cases had a significant effect on the dengue incidence of the current month. The simple, precise and understandable forecasting model developed here could be used to manage limited public health resources effectively for patient management, vector surveillance and intervention programmes in the district.
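    A minimal stand-in for such a lagged time-series regression is sketched below; the lag choices and coefficients are hypothetical, not the study's actual lag structure or covariates:

```python
import numpy as np

def fit_lagged_model(cases, rainfall, case_lag=1, rain_lag=2):
    """Ordinary least squares fit of
        cases[t] = b0 + b1*cases[t-case_lag] + b2*rainfall[t-rain_lag]
    as a toy version of a weather-driven dengue regression.
    `cases` and `rainfall` are equal-length 1-D NumPy arrays."""
    start = max(case_lag, rain_lag)
    y = cases[start:]
    X = np.column_stack([
        np.ones_like(y),                                   # intercept
        cases[start - case_lag:len(cases) - case_lag],     # autoregressive term
        rainfall[start - rain_lag:len(rainfall) - rain_lag],
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

The autoregressive term mirrors the abstract's finding that the previous month's cases significantly affect the current month, while the lagged rainfall term stands in for the weather covariates.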

  15. The development of a classification system for maternity models of care.

    PubMed

    Donnolley, Natasha; Butler-Henderson, Kerryn; Chapman, Michael; Sullivan, Elizabeth

    2016-08-01

    A lack of standard terminology or means to identify and define models of maternity care in Australia has prevented accurate evaluations of outcomes for mothers and babies in different models of maternity care. As part of the Commonwealth-funded National Maternity Data Development Project, a classification system was developed utilising a data set specification that defines characteristics of models of maternity care. The Maternity Care Classification System or MaCCS was developed using a participatory action research design that built upon the published and grey literature. The study identified the characteristics that differentiate models of care and classifies models into eleven different Major Model Categories. The MaCCS will enable individual health services, local health districts (networks), jurisdictional and national health authorities to make better informed decisions for planning, policy development and delivery of maternity services in Australia. © The Author(s) 2016.

  16. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
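    The blending scheme above can be sketched with ordinary least squares standing in for PLS (an assumption for brevity) and two overlapping submodels instead of three; the 0-50 and 30-100 wt.% ranges are hypothetical boundaries in the spirit of the ones quoted:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with intercept (a stand-in for PLS here)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def blended_predict(X, full, low, high, overlap=(30.0, 50.0)):
    """Route each sample by the full-model estimate: below the overlap use
    the 'low' submodel, above it the 'high' submodel, and blend the two
    with a linear weight inside the overlap region."""
    y_full = predict(full, X)
    y_low, y_high = predict(low, X), predict(high, X)
    bot, top = overlap
    out = np.where(y_full < bot, y_low, y_high)
    mask = (y_full >= bot) & (y_full <= top)
    w = (y_full[mask] - bot) / (top - bot)      # weight on the 'high' model
    out[mask] = (1 - w) * y_low[mask] + w * y_high[mask]
    return out
```

Because each submodel only has to fit its own composition range, it can absorb range-specific matrix effects that defeat a single global fit, which is the abstract's central point.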

  17. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower-surface acreage tiles, and on the structures beneath the damaged locations, on a Space Shuttle Orbiter. The 3D model builder creates both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower-surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage and the correct tile type, tile thickness, structure thickness, and SIP thickness at the damage site, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.
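    The radiation-conductor idea behind the TRASYS/SINDA workflow can be shown with a single thermal node tied to a sink by one radiation conductor (a toy stand-in, with made-up values, for the multi-node models the builder generates):

```python
def cool_node(t0, t_sink, area, emissivity, capacitance, dt=0.1, steps=20000):
    """Explicit transient solve of one thermal node coupled to a sink by a
    radiation conductor, Q = sigma*eps*A*(T_sink^4 - T^4): a SINDA-style
    update loop in miniature. Temperatures in kelvin; values illustrative."""
    sigma = 5.67e-8                      # Stefan-Boltzmann, W m^-2 K^-4
    t = t0
    for _ in range(steps):
        q = sigma * emissivity * area * (t_sink**4 - t**4)
        t += dt * q / capacitance        # forward-Euler node update
    return t
```

A full TMM is just many such nodes, with the GMM-derived radiation conductors and conduction links filling in the couplings between them.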

  18. A Sub-Millimetric 3-DOF Force Sensing Instrument with Integrated Fiber Bragg Grating for Retinal Microsurgery

    PubMed Central

    He, Xingchi; Handa, James; Gehlbach, Peter; Taylor, Russell; Iordachita, Iulian

    2013-01-01

    Vitreoretinal surgery requires very fine motor control to perform precise manipulation of the delicate tissue in the interior of the eye. Besides physiological hand tremor, fatigue, poor kinesthetic feedback, and patient movement, the absence of force sensing is one of the main technical challenges. Previous two-degrees-of-freedom (DOF) force sensing instruments have demonstrated robust force measuring performance. The main design challenge is to incorporate high-sensitivity axial force sensing. This paper reports the development of a sub-millimetric 3-DOF force sensing pick instrument based on fiber Bragg grating (FBG) sensors. The configuration of the four FBG sensors is arranged to maximize the decoupling between axial and transverse force sensing. A super-elastic nitinol flexure is designed to achieve high axial force sensitivity. An automated calibration system was developed for repeatability testing, calibration, and validation. Experimental results demonstrate an FBG sensor repeatability of 1.3 pm. The linear model for calculating the transverse forces provides an accurate global estimate. While the linear model for axial force is only locally accurate within a conical region with a 30° vertex angle, a second-order polynomial model can provide a useful global estimate for axial force. Combining the linear model for transverse forces and the nonlinear model for axial force, the 3-DOF force sensing instrument provides sub-millinewton resolution for axial force and quarter-millinewton resolution for transverse forces. Validation with random samples shows that the force sensor can provide consistent and accurate measurement of three-dimensional forces. PMID:24108455
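    As a generic sketch of the paper's two-part calibration idea (linear for transverse forces, second-order polynomial for axial force), the code below fits a quadratic map from a common-mode wavelength shift to axial force; the common-mode feature and the data are assumptions, not the paper's calibration procedure:

```python
import numpy as np

def calibrate_axial(shifts, forces):
    """Least-squares fit of a second-order polynomial from the common-mode
    FBG wavelength shift to axial force.
    `shifts` is (n_samples, n_fbgs); `forces` is (n_samples,)."""
    s = shifts.mean(axis=1)                        # common-mode component
    A = np.column_stack([np.ones_like(s), s, s**2])
    coef, *_ = np.linalg.lstsq(A, forces, rcond=None)
    return coef                                    # [c0, c1, c2]

def predict_axial(coef, shifts):
    s = shifts.mean(axis=1)
    return coef[0] + coef[1] * s + coef[2] * s**2
```

The transverse channels would use the same machinery without the quadratic column, matching the abstract's globally accurate linear transverse model.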

  19. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to a substantial loss of information by generating negative values, a background correction method has been developed that models the observed intensities as the sum of an exponentially distributed signal and normally distributed noise. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to erroneous estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures that represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution, as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
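    A hedged sketch of the gamma-signal-plus-normal-noise decomposition follows, using moment matching rather than the likelihood-based fit in the NormalGamma package: with the background parameters known from negative controls, the gamma parameters follow from the observed mean and variance:

```python
import numpy as np

def gamma_signal_moments(observed, bg_mean, bg_sd):
    """Method-of-moments estimates of the gamma signal's shape and scale
    under the additive model  observed = signal (gamma) + noise (normal).
    Independence gives E[S] = E[X] - E[B] and Var[S] = Var[X] - Var[B]."""
    s_mean = observed.mean() - bg_mean
    s_var = observed.var() - bg_sd**2
    shape = s_mean**2 / s_var              # gamma: mean = shape*scale
    scale = s_var / s_mean                 #        var  = shape*scale^2
    return shape, scale
```

Because the corrected signal is constrained to a gamma distribution, this kind of decomposition cannot produce the negative intensities that plain background subtraction does.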

  20. SAE for the prediction of road traffic status from taxicab operating data and bus smart card data

    NASA Astrophysics Data System (ADS)

    Zhengfeng, Huang; Pengjun, Zheng; Wenjun, Xu; Gang, Ren

    Road traffic status is important for trip decisions and traffic management, and thus should be predicted accurately. A contribution of this work is the use of multi-modal data for traffic status prediction, rather than data from a single source. With substantial data from the Ningbo Passenger Transport Management Sector (NPTMS), we wished to determine whether it was possible to develop Stacked Autoencoders (SAEs) for accurately predicting road traffic status from taxicab operating data and bus smart card data. We show that the SAE performed better than a linear regression model and a Back Propagation (BP) neural network in determining the relationship between road traffic status and those factors. In a 26-month data experiment using the SAE, we show that it is possible to develop highly accurate predictions (91% test accuracy) of road traffic status from daily taxicab operating data and bus smart card data.
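    A stacked autoencoder is trained layer by layer from building blocks like the toy single-layer autoencoder below (pure NumPy, toy scale; the architecture and hyperparameters are illustrative, not those of the study):

```python
import numpy as np

def train_autoencoder(X, hidden=3, lr=0.1, epochs=500, seed=0):
    """Train one autoencoder layer (tanh encoder, linear decoder) by
    gradient descent on mean squared reconstruction error. A stacked
    autoencoder would feed the learned codes H into the next layer."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)             # encoder
        Xr = H @ W2 + b2                     # linear decoder
        err = Xr - X
        losses.append(float((err**2).mean()))
        # Backpropagation of the reconstruction loss
        gXr = 2 * err / err.size
        gW2 = H.T @ gXr; gb2 = gXr.sum(0)
        gH = gXr @ W2.T
        gZ = gH * (1 - H**2)                 # tanh derivative
        gW1 = X.T @ gZ; gb1 = gZ.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return losses
```

In the traffic setting, the inputs would be daily taxicab and smart-card feature vectors, with a supervised output layer added after the unsupervised stacking.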

  1. Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.

    PubMed

    Tao, Jianmin; Mo, Yuxiang

    2016-08-12

    Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.

  2. Development of Tripropellant CFD Design Code

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Cheng, Gary C.; Anderson, Peter G.

    1998-01-01

    A tripropellant (e.g., GO2/H2/RP-1) CFD design code has been developed to predict the local mixing of multiple propellant streams as they are injected into a rocket motor. The code utilizes real fluid properties to account for the mixing and finite-rate combustion processes which occur near an injector faceplate; thus the analysis serves as a multi-phase homogeneous spray combustion model. Proper accounting of the combustion allows accurate gas-side temperature predictions, which are essential for accurate wall heating analyses. The complex secondary flows which are predicted to occur near a faceplate cannot be quantitatively predicted by less accurate methodology. Test cases have been simulated to describe an axisymmetric tripropellant coaxial injector and a 3-dimensional RP-1/LO2 impinger injector system. The analysis has been shown to realistically describe such injector combustion flowfields. The code is also valuable for designing meaningful future experiments by determining the critical location and type of measurements needed.

  3. An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-06-01

    The lens of the eye is a radiosensitive tissue with cataract formation being the major concern. Recently reduced recommended dose limits to the lens of the eye have made understanding the dose to this tissue of increased importance. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too large to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  4. A novel phenomenological multi-physics model of Li-ion battery cells

    NASA Astrophysics Data System (ADS)

    Oh, Ki-Yong; Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; Stefanopoulou, Anna G.; Epureanu, Bogdan I.

    2016-09-01

    A novel phenomenological multi-physics model of Lithium-ion battery cells is developed for control and state estimation purposes. The model can capture the electrical, thermal, and mechanical behaviors of battery cells under constrained conditions, e.g., battery pack conditions. Specifically, the proposed model predicts the core and surface temperatures and the reaction force induced by the volume change of battery cells due to electrochemically- and thermally-induced swelling. Moreover, the model incorporates the influences of changes in preload and ambient temperature on the force, considering the severe environmental conditions that electrified vehicles face. Intensive experimental validation demonstrates that the proposed multi-physics model accurately predicts the surface temperature and reaction force over a wide operational range of preload and ambient temperature. This high-fidelity model can be useful for more accurate and robust state-of-charge estimation considering the complex dynamic behaviors of the battery cell. Furthermore, the inherent simplicity of the mechanical measurements offers distinct advantages for improving existing power and thermal management strategies for battery management.
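    The thermal side of such a cell model is commonly a lumped two-state (core/surface) network; the sketch below uses illustrative parameter values (not the paper's identified ones) and plain Euler integration:

```python
def simulate_cell_temps(current, r_int=0.01, c_core=62.7, c_surf=4.5,
                        r_core=1.94, r_out=3.19, t_amb=25.0,
                        dt=1.0, steps=36000):
    """Lumped two-state core/surface thermal model of a cell under constant
    current: ohmic heat I^2*R enters the core, conducts to the surface
    through r_core, and leaves to ambient through r_out. All parameter
    values are illustrative assumptions. Returns (t_core, t_surf) in deg C."""
    tc = ts = t_amb
    q = current**2 * r_int                          # ohmic heat generation, W
    for _ in range(steps):
        dtc = (q + (ts - tc) / r_core) / c_core     # core node
        dts = ((tc - ts) / r_core + (t_amb - ts) / r_out) / c_surf
        tc += dt * dtc
        ts += dt * dts
    return tc, ts
```

At steady state the temperature drops reduce to q*r_core across the cell and q*r_out to ambient, which is a quick sanity check on any identified parameter set.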

  5. High Accuracy Transistor Compact Model Calibrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  6. Charge-dependent non-bonded interaction methods for use in quantum mechanical modeling of condensed phase reactions

    NASA Astrophysics Data System (ADS)

    Kuechler, Erich R.

    Molecular modeling and computer simulation techniques can provide detailed insight into biochemical phenomena. This dissertation describes the development, implementation and parameterization of two methods for the accurate modeling of chemical reactions in aqueous environments, with a concerted scientific effort towards the inclusion of charge-dependent non-bonded, non-electrostatic interactions into currently used computational frameworks. The first of these models, QXD, modifies interactions in a hybrid quantum mechanical/molecular mechanical (QM/MM) framework to overcome the current limitations of 'atom typing' QM atoms, an inaccurate and non-intuitive practice for chemically active species, as these static atom types are dictated by the local bonding and electrostatic environment of the atoms they represent, which will change over the course of the simulation. The efficacy of the QXD model is demonstrated using a specific reaction parameterization (SRP) of the Austin Model 1 (AM1) Hamiltonian by simultaneously capturing the reaction barrier for chloride ion attack on methylchloride in solution and the solvation free energies of a series of compounds, including the reagents of the reaction. The second, VRSCOSMO, is an implicit solvation model for use with the DFTB3/3OB Hamiltonian for biochemical reactions, allowing for accurate modeling of ionic compound solvation properties while overcoming the discontinuous nature of conventional PCM models along chemical reaction coordinates. The VRSCOSMO model is shown to accurately model the solvation properties of over 200 chemical compounds while also providing smooth, continuous reaction surfaces for a series of biologically motivated phosphoryl transesterification reactions. Both of these methods incorporate charge-dependent behavior into the non-bonded interactions variationally, allowing the 'size' of atoms to change in meaningful ways with respect to changes in local charge state, so as to provide accurate, predictive and transferable models for the interactions between the quantum mechanical system and its solvated surroundings.

  7. MCAT to XCAT: The Evolution of 4-D Computerized Phantoms for Imaging Research: Computer models that take account of body movements promise to provide evaluation and improvement of medical imaging devices and technology.

    PubMed

    Paul Segars, W; Tsui, Benjamin M W

    2009-12-01

    Recent work in the development of computerized phantoms has focused on the creation of ideal "hybrid" models that seek to combine the realism of a patient-based voxelized phantom with the flexibility of a mathematical or stylized phantom. We have been leading the development of such computerized phantoms for use in medical imaging research. This paper will summarize our developments dating from the original four-dimensional (4-D) Mathematical Cardiac-Torso (MCAT) phantom, a stylized model based on geometric primitives, to the current 4-D extended Cardiac-Torso (XCAT) and Mouse Whole-Body (MOBY) phantoms, hybrid models of the human and laboratory mouse based on state-of-the-art computer graphics techniques. This paper illustrates the evolution of computerized phantoms toward more accurate models of anatomy and physiology. This evolution was catalyzed through the introduction of nonuniform rational b-spline (NURBS) and subdivision (SD) surfaces, tools widely used in computer graphics, as modeling primitives to define a more ideal hybrid phantom. With NURBS and SD surfaces as a basis, we progressed from a simple geometrically based model of the male torso (MCAT) containing only a handful of structures to detailed, whole-body models of the male and female (XCAT) anatomies (at different ages from newborn to adult), each containing more than 9000 structures. The techniques we applied for modeling the human body were similarly used in the creation of the 4-D MOBY phantom, a whole-body model for the mouse designed for small animal imaging research. From our work, we have found the NURBS and SD surface modeling techniques to be an efficient and flexible way to describe the anatomy and physiology for realistic phantoms. Based on imaging data, the surfaces can accurately model the complex organs and structures in the body, providing a level of realism comparable to that of a voxelized phantom. In addition, they are very flexible. 
Like stylized models, they can easily be manipulated to model anatomical variations and patient motion. With the vast improvement in realism, the phantoms developed in our lab can be combined with accurate models of the imaging process (SPECT, PET, CT, magnetic resonance imaging, and ultrasound) to generate simulated imaging data close to that from actual human or animal subjects. As such, they can provide vital tools to generate predictive imaging data from many different subjects under various scanning parameters from which to quantitatively evaluate and improve imaging devices and techniques. From the MCAT to XCAT, we will demonstrate how NURBS and SD surface modeling have resulted in a major evolutionary advance in the development of computerized phantoms for imaging research.
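A NURBS primitive of the kind used in the XCAT and MOBY phantoms can be illustrated with a minimal curve evaluation based on the Cox-de Boor recursion; the quadratic example, control points, weights and knot vector below are illustrative, not taken from the phantoms (requires NumPy):

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    # Cox-de Boor recursion for the B-spline basis function N_{i,p}(u).
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    # Rational combination: weighted basis functions normalized by total weight.
    basis = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    wb = basis * weights
    return (wb @ ctrl) / wb.sum()

# A clamped quadratic NURBS curve with unit weights (reduces to a Bezier arc).
ctrl = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
weights = np.array([1.0, 1.0, 1.0])
knots = [0, 0, 0, 1, 1, 1]
print(nurbs_point(0.5, ctrl, weights, knots))
```

Varying the weights bends the curve toward or away from individual control points without moving them, which is the flexibility the hybrid phantoms exploit to model anatomical variation.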

  8. Development of a statistical model for cervical cancer cell death with irreversible electroporation in vitro.

    PubMed

    Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing

    2018-01-01

    The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature, using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in the description of the ablation zone and its applicability across different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built. The values of the curve fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in the representation of the ablation zone in IRE.
This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells, and was more accurate compared with the electric field model; (2) the proposed statistical model was able to estimate the value of electric field threshold for the computer simulation of IRE in the treatment of cervical cancer; and (3) the proposed statistical model was able to express the change in ablation zone with the change in pulse-setting parameters.
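The Peleg-Fermi dose-response relationship used in this study can be sketched with a simple fit; the survival function below is the standard Peleg-Fermi form, but the pulse counts, viability values, parameter names and grid-search ranges are illustrative assumptions, not the study's data (requires NumPy):

```python
import numpy as np

def peleg_fermi(n, n_c, A):
    # Peleg-Fermi survival fraction S(n) = 1 / (1 + exp((n - n_c)/A)),
    # where n_c is the critical pulse number (50% survival) and A the
    # kinetic constant controlling the steepness of the transition.
    return 1.0 / (1.0 + np.exp((n - n_c) / A))

# Hypothetical viability measurements (fraction of live cells) at increasing
# pulse counts, standing in for trypan-blue assay data.
pulses = np.array([10, 20, 40, 60, 80, 100], dtype=float)
viability = np.array([0.95, 0.85, 0.55, 0.30, 0.12, 0.05])

# Simple grid search over (n_c, A) in place of a full nonlinear fit.
best = None
for n_c in np.linspace(20, 80, 121):
    for A in np.linspace(2, 40, 77):
        sse = np.sum((peleg_fermi(pulses, n_c, A) - viability) ** 2)
        if best is None or sse < best[0]:
            best = (sse, n_c, A)
sse, n_c_fit, A_fit = best
print(f"critical pulse number n_c ~ {n_c_fit:.1f}, kinetic constant A ~ {A_fit:.1f}")
```

The fitted curve can then be evaluated on a simulated electric field distribution to map predicted cell-kill probability across the ablation zone.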

  9. Analysing the accuracy of machine learning techniques to develop an integrated influent time series model: case study of a sewage treatment plant, Malaysia.

    PubMed

    Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed

    2018-04-01

    The function of a sewage treatment plant is to treat the sewage to acceptable standards before being discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also show that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
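The integrated-model idea, switching to whichever predictor performs best in each flow regime, can be sketched as follows; the observed and predicted flow values and the peak-flow cutoff are illustrative, not the study's data (requires NumPy):

```python
import numpy as np

def rmse(y, yhat):
    # Root mean square error between observations and predictions.
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1 - ss_res / ss_tot)

# Hypothetical influent-flow observations and two candidate forecasts,
# standing in for the NAR and SVM predictions.
observed = np.array([120., 135., 150., 400., 380., 140., 130.])
pred_nar = np.array([118., 134., 152., 350., 345., 141., 129.])
pred_svm = np.array([110., 128., 160., 395., 382., 150., 138.])

# Integrated model: use the SVM forecast when peak flow is indicated,
# the NAR forecast elsewhere. The cutoff is an illustrative assumption.
peak_threshold = 300.0
integrated = np.where(pred_nar >= peak_threshold, pred_svm, pred_nar)

print("NAR RMSE:", rmse(observed, pred_nar))
print("integrated RMSE:", rmse(observed, integrated))
```

Because each constituent model is used only in the regime where it fits well, the integrated series can beat either model's overall error.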

  10. Dynamics modeling and adaptive control of flexible manipulators

    NASA Technical Reports Server (NTRS)

    Sasiadek, J. Z.

    1991-01-01

    An application of Model Reference Adaptive Control (MRAC) to the position and force control of flexible manipulators and robots is presented. A single-link flexible manipulator is analyzed. The problem was to develop an accurate mathematical model of a flexible robot. The objective is to show that adaptive control works better than 'conventional' systems and is suitable for flexible structure control.

  11. Characterization of structural connections using free and forced response test data

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Huckelbridge, Arthur A.

    1989-01-01

    The accurate prediction of system dynamic response often has been limited by deficiencies in existing capabilities to characterize connections adequately. Connections between structural components are often mechanically complex and difficult to model accurately analytically. Improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed, then verified utilizing a system having both a linear and a nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can be better understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree-of-freedom systems and does not require that the test data be measured directly at the connection locations.

  12. Minimizing Segregation During the Controlled Directional Solidification of Dendritic Alloys Publication: Metallurgical and Materials Transactions

    NASA Technical Reports Server (NTRS)

    Grugel, R. N.; Fedoseyev, A. I.; Kim, S.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    Gravity-driven thermosolutal convection that arises during controlled directional solidification (DS) of dendritic alloys promotes detrimental macro-segregation (e.g., freckles and steepling) in products such as turbine blades. Considerable time and effort has been spent to experimentally and theoretically investigate this phenomenon; although our knowledge has advanced to the point where convection can be modeled and accurately compared to experimental results, little has been done to minimize its onset and deleterious effects. The experimental work demonstrates that segregation can be minimized and microstructural uniformity promoted when a slow axial rotation is applied to the sample crucible during controlled directional solidification processing. Numerical modeling utilizing continuation and bifurcation methods has been employed to develop accurate physical and mathematical models with the intent of identifying and optimizing processing parameters.

  13. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.

    PubMed

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
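A minimal K-nearest-neighbor classifier over windowed accelerometry features can be sketched without any ML library; the synthetic feature clusters, class labels and value of k below are illustrative stand-ins for the labeled eagle data (requires NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(mean, var, label, n=30):
    # Hypothetical per-window features: mean and variance of acceleration.
    feats = np.column_stack([rng.normal(mean, 0.1, n), rng.normal(var, 0.05, n)])
    return feats, np.full(n, label)

# Three synthetic behavior classes standing in for labeled flight windows.
X0, y0 = make_windows(0.0, 0.05, 0)   # "sitting": low mean, low variance
X1, y1 = make_windows(0.5, 0.10, 1)   # "soaring"
X2, y2 = make_windows(1.0, 0.40, 2)   # "flapping": high mean, high variance
X = np.vstack([X0, X1, X2])
y = np.concatenate([y0, y1, y2])

def knn_predict(X_train, y_train, x, k=5):
    # Majority vote among the k nearest training windows.
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

preds = np.array([knn_predict(X, y, x) for x in X])
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

In practice the features would be summary statistics computed over fixed-length windows of the 140 Hz signal, and accuracy would be assessed on held-out windows rather than the training set.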

  14. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    PubMed Central

    Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159

  15. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    USGS Publications Warehouse

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classification, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%), with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequences of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.

  16. Quantification of observed flare parameters in relation to a shear-index and verification of MHD models for flare prediction

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    1987-01-01

    The goal for the SAMEX magnetograph's optical system is to accurately measure the polarization state of sunlight in a narrow spectral bandwidth over the field of view of an active region to make an accurate determination of the magnetic field in that region. The instrumental polarization is characterized. The optics and coatings were designed to minimize this spurious polarization introduced by foreoptics. The method developed to calculate the instrumental polarization of the SAMEX optics is described.

  17. The Georgia Tech High Sensitivity Microwave Measurement System

    NASA Technical Reports Server (NTRS)

    Deboer, David R.; Steffes, Paul G.

    1996-01-01

    As observations and models of the planets become increasingly more accurate and sophisticated, the need for highly accurate laboratory measurements of the microwave properties of the component gases present in their atmospheres becomes ever more critical. This paper describes the system developed at Georgia Tech to make these measurements at wavelengths ranging from 13.3 cm to 1.38 cm, with a sensitivity of 0.05 dB/km at the longest wavelength and 0.6 dB/km at the shortest.

  18. POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ştefănescu, R., E-mail: rstefane@vt.edu; Sandu, A., E-mail: sandu@cs.vt.edu; Navon, I.M., E-mail: inavon@fsu.edu

    2015-08-15

    This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that the reduced order Karush–Kuhn–Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov–Galerkin projections. For the first time POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov–Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
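The POD basis construction at the heart of such ROM approaches can be sketched with a truncated SVD of a snapshot matrix; the synthetic two-mode field below is an illustrative stand-in for shallow water solution snapshots (requires NumPy):

```python
import numpy as np

# Minimal POD sketch: snapshots of a scalar field are collected as columns,
# and a truncated SVD yields the reduced basis.
x = np.linspace(0, 1, 200)
# Synthetic snapshot matrix spanned by exactly two spatial modes.
snapshots = np.column_stack([
    np.sin(np.pi * x) * np.cos(0.3 * t)
    + 0.5 * np.sin(2 * np.pi * x) * np.sin(0.3 * t)
    for t in range(50)
])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1  # modes capturing 99.99% of energy
basis = U[:, :r]

# Galerkin-style projection: reduced coordinates, then reconstruction.
reduced = basis.T @ snapshots
reconstruction = basis @ reduced
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"rank-{r} basis, relative reconstruction error {err:.2e}")
```

In a data assimilation setting the same basis would also be used to project the adjoint model and the gradient, which is the consistency requirement the paper emphasizes.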

  19. Modeling Interfacial Glass-Water Reactions: Recent Advances and Current Limitations

    DOE PAGES

    Pierce, Eric M.; Frugier, Pierre; Criscenti, Louise J.; ...

    2014-07-12

    Describing the reactions that occur at the glass-water interface and control the development of the altered layer constitutes one of the main scientific challenges impeding existing models from providing accurate radionuclide release estimates. Radionuclide release estimates are a critical component of the safety basis for geologic repositories. The altered layer (i.e., amorphous hydrated surface layer and crystalline reaction products) represents a complex region, both physically and chemically, sandwiched between two distinct boundaries: the pristine glass surface at the innermost interface and the aqueous solution at the outermost interface. Computational models, spanning different length and time scales, are currently being developed to improve our understanding of this complex and dynamic process, with the goal of accurately describing the pore-scale changes that occur as the system evolves. These modeling approaches include geochemical simulations [i.e., classical reaction path simulations and glass reactivity in allowance for alteration layer (GRAAL) simulations], Monte Carlo simulations, and molecular dynamics methods. Finally, in this manuscript, we discuss the advances and limitations of each modeling approach, placed in the context of the glass-water reaction, and how collectively these approaches provide insights into the mechanisms that control the formation and evolution of altered layers.

  20. Development and Testing of Building Energy Model Using Non-Linear Auto Regression Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Arida, Maya Ahmad

    The concept of sustainable development emerged in 1972 and over the years became one of the most important approaches to conserving natural resources and energy; with rising energy costs and increasing awareness of global warming, the development of building energy-saving methods and models has become ever more necessary for a sustainable future. According to the U.S. Energy Information Administration (EIA), buildings in the U.S. today consume 72 percent of the electricity produced and use 55 percent of U.S. natural gas. Buildings account for about 40 percent of the energy consumed in the United States, more than industry or transportation, and of this energy, heating and cooling systems use about 55 percent. If energy-use trends continue, buildings will become the largest consumer of global energy by 2025. This thesis proposes procedures and analysis techniques for building energy modeling and optimization using time series autoregressive artificial neural networks. The model predicts whole-building energy consumption as a function of four input variables: dry-bulb and wet-bulb outdoor air temperatures, hour of day, and type of day. The proposed model and the optimization process are tested using data collected from an existing building located in Greensboro, NC. The testing results show that the model captures the system performance very well. An optimization method was also developed to automate the search for the model structure producing the most accurate predictions against the actual data. The results show that the developed model can provide results sufficiently accurate for use in various energy-efficiency and savings-estimation applications.
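A nonlinear autoregressive model with exogenous inputs of the kind described can be sketched in its simplest linear-in-parameters form; the synthetic temperature and consumption series, lag order and regressors below are illustrative assumptions, not the Greensboro data (requires NumPy):

```python
import numpy as np

# NARX-style sketch: predict energy use from its own one-step lag plus
# exogenous inputs (outdoor temperature). All data here is synthetic.
rng = np.random.default_rng(2)
hours = np.arange(24 * 30)                        # one hypothetical month
temp = 15 + 10 * np.sin(2 * np.pi * hours / 24)   # daily temperature swing
energy = (50 + 2.0 * temp
          + 5 * np.sin(2 * np.pi * hours / 24)
          + rng.normal(0, 1, hours.size))         # consumption + noise

# Build the regression matrix: lagged consumption plus exogenous terms.
lag = 1
y = energy[lag:]
X = np.column_stack([
    energy[:-lag],        # autoregressive term
    temp[lag:],           # exogenous input: outdoor temperature
    np.ones(y.size),      # bias
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"one-step-ahead RMSE: {rmse:.2f}")
```

A true NARX network replaces the linear least-squares map with a trained neural network, and the thesis's optimization step amounts to searching over lag orders and network structures for the lowest validation error.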

  1. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1982-01-01

    The development of an accurate and efficient algorithm for analyzing the structure of MSS data, the application of the Akaike information criterion to mixture models, and a research plan to delineate some of the technical issues and associated tasks in the area of rice scene radiation characterization are discussed. The AMOEBA clustering algorithm is refined and documented.

  2. A model of solar energetic particles for use in calculating LET spectra developed from ONR-604 data

    NASA Technical Reports Server (NTRS)

    Chen, J.; Chenette, D.; Guzik, T. G.; Garcia-Munoz, M.; Pyle, K. R.; Sang, Y.; Wefel, J. P.

    1994-01-01

    A model of Solar Energetic Particles (SEP) has been developed and is applied to solar flares during the 1990/1991 Combined Release and Radiation Effects Satellite (CRRES) mission using data measured by the University of Chicago instrument, ONR-604. The model includes the time-dependent behavior, heavy-ion content, energy spectrum and fluence, and can accurately represent the observed SEP events in the energy range between 40 and 500 MeV/nucleon. Results are presented for the March and June 1991 flare periods.

  3. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  4. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation.

    PubMed

    Bardhan, Jaydeep P; Knepley, Matthew G; Anitescu, Mihai

    2009-03-14

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.
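As context for the generalized-Born models discussed above, the standard GB pairwise energy with Still's effective interaction distance f_GB can be sketched as follows; the charges, coordinates, Born radii and dielectric constants are illustrative, and physical unit conversion factors are omitted (requires NumPy):

```python
import numpy as np

def gb_energy(coords, charges, born_radii, eps_in=1.0, eps_out=80.0):
    # Generalized-Born solvation energy:
    #   dG = -1/2 (1/eps_in - 1/eps_out) sum_ij q_i q_j / f_GB(r_ij)
    # with Still's f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j))).
    n = len(charges)
    prefactor = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            r2 = np.sum((coords[i] - coords[j]) ** 2)
            RiRj = born_radii[i] * born_radii[j]
            f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
            energy += prefactor * charges[i] * charges[j] / f_gb
    return energy  # reduced units (charge^2 / length); conversion omitted

# Hypothetical two-atom dipole: partial charges, positions, Born radii.
coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
charges = np.array([0.5, -0.5])
radii = np.array([1.5, 1.7])
print("GB solvation energy (reduced units):", gb_energy(coords, charges, radii))
```

The i = j terms reduce to the Born self-energies, and f_GB interpolates smoothly between the Born limit at short range and Coulomb behavior at long range, which is why GB reproduces PDE-based results so cheaply.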

  5. Bounding the electrostatic free energies associated with linear continuum models of molecular solvation

    NASA Astrophysics Data System (ADS)

    Bardhan, Jaydeep P.; Knepley, Matthew G.; Anitescu, Mihai

    2009-03-01

    The importance of electrostatic interactions in molecular biology has driven extensive research toward the development of accurate and efficient theoretical and computational models. Linear continuum electrostatic theory has been surprisingly successful, but the computational costs associated with solving the associated partial differential equations (PDEs) preclude the theory's use in most dynamical simulations. Modern generalized-Born models for electrostatics can reproduce PDE-based calculations to within a few percent and are extremely computationally efficient but do not always faithfully reproduce interactions between chemical groups. Recent work has shown that a boundary-integral-equation formulation of the PDE problem leads naturally to a new approach called boundary-integral-based electrostatics estimation (BIBEE) to approximate electrostatic interactions. In the present paper, we prove that the BIBEE method can be used to rigorously bound the actual continuum-theory electrostatic free energy. The bounds are validated using a set of more than 600 proteins. Detailed numerical results are presented for structures of the peptide met-enkephalin taken from a molecular-dynamics simulation. These bounds, in combination with our demonstration that the BIBEE methods accurately reproduce pairwise interactions, suggest a new approach toward building a highly accurate yet computationally tractable electrostatic model.

  6. Toward structure prediction of cyclic peptides.

    PubMed

    Yu, Hongtao; Lin, Yu-Shan

    2015-02-14

    Cyclic peptides are a promising class of molecules that can be used to target specific protein-protein interactions. A computational method to accurately predict their structures would substantially advance the development of cyclic peptides as modulators of protein-protein interactions. Here, we develop a computational method that integrates bias-exchange metadynamics simulations, a Boltzmann reweighting scheme, dihedral principal component analysis and a modified density peak-based cluster analysis to provide a converged structural description for cyclic peptides. Using this method, we evaluate the performance of a number of popular protein force fields on a model cyclic peptide. All the tested force fields seem to over-stabilize the α-helix and PPII/β regions in the Ramachandran plot, commonly populated by linear peptides and proteins. Our findings suggest that re-parameterization of a force field that well describes the full Ramachandran plot is necessary to accurately model cyclic peptides.

  7. Spectra of Full 3-D PIC Simulations of Finite Meteor Trails

    NASA Astrophysics Data System (ADS)

    Tarnecki, L. K.; Oppenheim, M. M.

    2016-12-01

    Radars detect plasma trails created by the billions of small meteors that impact the Earth's atmosphere daily, returning data used to infer characteristics of the meteoroid population and upper atmosphere. Researchers use models to investigate the dynamic evolution of the trails. Previously, all models assumed a trail of infinite length, due to the constraints of simulation techniques. We present the first simulations of 3D meteor trails of finite length. This change more accurately captures the physics of the trails. We characterize the turbulence that develops as the trail evolves and study the effects of varying the external electric field, altitude, and initial density. The simulations show that turbulence develops in all cases, and that trails travel with the neutral wind rather than electric field. Our results will allow us to draw more detailed and accurate information from non-specular radar observations of meteors.

  8. SnowyOwl: accurate prediction of fungal genes by using RNA-Seq and homology information to select among ab initio models

    PubMed Central

    2014-01-01

    Background Locating the protein-coding genes in novel genomes is essential to understanding and exploiting the genomic information but it is still difficult to accurately predict all the genes. The recent availability of detailed information about transcript structure from high-throughput sequencing of messenger RNA (RNA-Seq) delineates many expressed genes and promises increased accuracy in gene prediction. Computational gene predictors have been intensively developed for and tested in well-studied animal genomes. Hundreds of fungal genomes are now or will soon be sequenced. The differences of fungal genomes from animal genomes and the phylogenetic sparsity of well-studied fungi call for gene-prediction tools tailored to them. Results SnowyOwl is a new gene prediction pipeline that uses RNA-Seq data to train and provide hints for the generation of Hidden Markov Model (HMM)-based gene predictions and to evaluate the resulting models. The pipeline has been developed and streamlined by comparing its predictions to manually curated gene models in three fungal genomes and validated against the high-quality gene annotation of Neurospora crassa; SnowyOwl predicted N. crassa genes with 83% sensitivity and 65% specificity. SnowyOwl gains sensitivity by repeatedly running the HMM gene predictor Augustus with varied input parameters and selectivity by choosing the models with best homology to known proteins and best agreement with the RNA-Seq data. Conclusions SnowyOwl efficiently uses RNA-Seq data to produce accurate gene models in both well-studied and novel fungal genomes. The source code for the SnowyOwl pipeline (in Python) and a web interface (in PHP) is freely available from http://sourceforge.net/projects/snowyowl/. PMID:24980894

  9. Rapid Model Fabrication and Testing for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    2000-01-01

    Advanced methods for rapid fabrication and instrumentation of hypersonic wind tunnel models are being developed and evaluated at NASA Langley Research Center. Rapid aeroheating model fabrication and measurement techniques using investment casting of ceramic test models and thermographic phosphors are reviewed. More accurate model casting techniques for fabrication of benchmark metal and ceramic test models are being developed using a combination of rapid prototype patterns and investment casting. White light optical scanning is used for coordinate measurements to evaluate the fabrication process and verify model accuracy to +/- 0.002 inches. Higher-temperature (<210C) luminescent coatings are also being developed for simultaneous pressure and temperature mapping, providing global pressure as well as global aeroheating measurements. Together these techniques will provide a more rapid and complete experimental aerodynamic and aerothermodynamic database for future aerospace vehicles.

  10. The generation and use of numerical shape models for irregular Solar System objects

    NASA Technical Reports Server (NTRS)

    Simonelli, Damon P.; Thomas, Peter C.; Carcich, Brian T.; Veverka, Joseph

    1993-01-01

    We describe a procedure that allows the efficient generation of numerical shape models for irregular Solar System objects, where a numerical model is simply a table of evenly spaced body-centered latitudes and longitudes and their associated radii. This modeling technique uses a combination of data from limbs, terminators, and control points, and produces shape models that have some important advantages over analytical shape models. Accurate numerical shape models make it feasible to study irregular objects with a wide range of standard scientific analysis techniques. These applications include the determination of moments of inertia and surface gravity, the mapping of surface locations and structural orientations, photometric measurement and analysis, the reprojection and mosaicking of digital images, and the generation of albedo maps. The capabilities of our modeling procedure are illustrated through the development of an accurate numerical shape model for Phobos and the production of a global, high-resolution, high-pass-filtered digital image mosaic of this Martian moon. Other irregular objects that have been modeled, or are being modeled, include the asteroid Gaspra and the satellites Deimos, Amalthea, Epimetheus, Janus, Hyperion, and Proteus.
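As an illustration of what such a numerical model enables, the volume of a body can be approximated directly from the table of evenly spaced latitudes, longitudes, and radii. The midpoint-rule discretization below is a standard textbook approach, not the authors' specific code:

```python
import math

def shape_model_volume(radius_fn, dlat_deg=5.0, dlon_deg=5.0):
    """Approximate body volume from a numerical shape model.

    radius_fn(lat_deg, lon_deg) returns the body-centred radius; each
    grid cell contributes a spherical wedge of volume
    (1/3) * r^3 * cos(lat) * dlat * dlon (angles in radians)."""
    dlat, dlon = math.radians(dlat_deg), math.radians(dlon_deg)
    nlat = int(round(180.0 / dlat_deg))
    nlon = int(round(360.0 / dlon_deg))
    volume = 0.0
    for i in range(nlat):
        lat = -90.0 + (i + 0.5) * dlat_deg   # cell-centred latitude
        for j in range(nlon):
            lon = (j + 0.5) * dlon_deg       # cell-centred longitude
            r = radius_fn(lat, lon)
            volume += (r ** 3 / 3.0) * math.cos(math.radians(lat)) * dlat * dlon
    return volume
```

The same table supports the other applications listed (moments of inertia, surface gravity) by summing different integrands over the grid.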

  11. Thermal performance modeling of NASA's scientific balloons

    NASA Astrophysics Data System (ADS)

    Franco, H.; Cathey, H.

    The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment. The balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach, or by simplified representations of the balloon. These approaches to date have provided reasonable, but not very accurate results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" add-on to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the required model resolution to produce usable data. The first stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine required model resolution. These models were then modified to include additional details such as load tapes. The second stage analyses looked at natural shaped Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin shaped balloons.
The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first order estimates and detailed full models. The development of the radiative environment and program input files, the development of the modeling techniques for balloons, and the development of appropriate data output handling techniques for both the raw data and data plots will be discussed. A general guideline to match predicted balloon performance with known flight data will also be presented. One long-term goal of this effort is to develop simplified approaches and techniques to include results in performance codes being developed.

  12. Accurately tracking single-cell movement trajectories in microfluidic cell sorting devices.

    PubMed

    Jeong, Jenny; Frohberg, Nicholas J; Zhou, Enlu; Sulchek, Todd; Qiu, Peng

    2018-01-01

    Microfluidics are routinely used to study cellular properties, including the efficient quantification of single-cell biomechanics and label-free cell sorting based on the biomechanical properties, such as elasticity, viscosity, stiffness, and adhesion. Both quantification and sorting applications require optimal design of the microfluidic devices and mathematical modeling of the interactions between cells, fluid, and the channel of the device. As a first step toward building such a mathematical model, we collected video recordings of cells moving through a ridged microfluidic channel designed to compress and redirect cells according to cell biomechanics. We developed an efficient algorithm that automatically and accurately tracked the cell trajectories in the recordings. We tested the algorithm on recordings of cells with different stiffness, and showed the correlation between cell stiffness and the tracked trajectories. Moreover, the tracking algorithm successfully picked up subtle differences of cell motion when passing through consecutive ridges. The algorithm for accurately tracking cell trajectories paves the way for future efforts of modeling the flow, forces, and dynamics of cell properties in microfluidics applications.
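Frame-to-frame linking of detected cell centroids is the core of any such tracking algorithm. The greedy nearest-neighbour sketch below is a simplified stand-in for the authors' algorithm; the `max_jump` gating threshold is an illustrative assumption:

```python
import math

def link_trajectories(frames, max_jump=50.0):
    """Greedy nearest-neighbour linking of cell centroids across frames.

    frames: list of per-frame detection lists, each detection an (x, y) tuple.
    Returns a list of trajectories, each a list of (frame_index, x, y)."""
    trajectories = []
    active = []  # (trajectory, last position) pairs still being extended
    for t, detections in enumerate(frames):
        unmatched = list(detections)
        next_active = []
        for traj, (px, py) in active:
            if not unmatched:
                continue  # trajectory ends; no detections left this frame
            # pick the detection closest to the trajectory's last position
            best = min(unmatched, key=lambda p: math.hypot(p[0] - px, p[1] - py))
            if math.hypot(best[0] - px, best[1] - py) <= max_jump:
                unmatched.remove(best)
                traj.append((t, best[0], best[1]))
                next_active.append((traj, best))
        for p in unmatched:  # any leftover detection starts a new trajectory
            traj = [(t, p[0], p[1])]
            trajectories.append(traj)
            next_active.append((traj, p))
        active = next_active
    return trajectories
```

Greedy matching is adequate when cells are well separated between frames; dense scenes typically need globally optimal assignment instead.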

  13. MCAT to XCAT: The Evolution of 4-D Computerized Phantoms for Imaging Research

    PubMed Central

    Paul Segars, W.; Tsui, Benjamin M. W.

    2012-01-01

    Recent work in the development of computerized phantoms has focused on the creation of ideal “hybrid” models that seek to combine the realism of a patient-based voxelized phantom with the flexibility of a mathematical or stylized phantom. We have been leading the development of such computerized phantoms for use in medical imaging research. This paper will summarize our developments dating from the original four-dimensional (4-D) Mathematical Cardiac-Torso (MCAT) phantom, a stylized model based on geometric primitives, to the current 4-D extended Cardiac-Torso (XCAT) and Mouse Whole-Body (MOBY) phantoms, hybrid models of the human and laboratory mouse based on state-of-the-art computer graphics techniques. This paper illustrates the evolution of computerized phantoms toward more accurate models of anatomy and physiology. This evolution was catalyzed through the introduction of nonuniform rational b-spline (NURBS) and subdivision (SD) surfaces, tools widely used in computer graphics, as modeling primitives to define a more ideal hybrid phantom. With NURBS and SD surfaces as a basis, we progressed from a simple geometrically based model of the male torso (MCAT) containing only a handful of structures to detailed, whole-body models of the male and female (XCAT) anatomies (at different ages from newborn to adult), each containing more than 9000 structures. The techniques we applied for modeling the human body were similarly used in the creation of the 4-D MOBY phantom, a whole-body model for the mouse designed for small animal imaging research. From our work, we have found the NURBS and SD surface modeling techniques to be an efficient and flexible way to describe the anatomy and physiology for realistic phantoms. Based on imaging data, the surfaces can accurately model the complex organs and structures in the body, providing a level of realism comparable to that of a voxelized phantom. In addition, they are very flexible. 
Like stylized models, they can easily be manipulated to model anatomical variations and patient motion. With the vast improvement in realism, the phantoms developed in our lab can be combined with accurate models of the imaging process (SPECT, PET, CT, magnetic resonance imaging, and ultrasound) to generate simulated imaging data close to that from actual human or animal subjects. As such, they can provide vital tools to generate predictive imaging data from many different subjects under various scanning parameters from which to quantitatively evaluate and improve imaging devices and techniques. From the MCAT to XCAT, we will demonstrate how NURBS and SD surface modeling have resulted in a major evolutionary advance in the development of computerized phantoms for imaging research. PMID:26472880
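The basis-function idea behind NURBS can be shown with the simpler non-rational case. The sketch below evaluates a point on a uniform cubic B-spline curve segment; the XCAT phantom's actual surfaces add rational weights and a second surface parameter:

```python
def bspline_point(ctrl, t):
    """Evaluate a point on a uniform cubic B-spline segment.

    ctrl: four (x, y) control points; t in [0, 1]. The four basis
    functions below sum to 1, so the curve stays in the control
    points' convex hull -- the property that makes spline surfaces
    easy to deform for anatomical variation."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    x = b0 * ctrl[0][0] + b1 * ctrl[1][0] + b2 * ctrl[2][0] + b3 * ctrl[3][0]
    y = b0 * ctrl[0][1] + b1 * ctrl[1][1] + b2 * ctrl[2][1] + b3 * ctrl[3][1]
    return x, y
```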

  14. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    NASA Astrophysics Data System (ADS)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aiming to improve on the predictive accuracy of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (Northwestern Iran) were used for development and validation of the applied techniques. To assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. The k-fold test proved to be a practical, though computationally costly, technique for completely scanning the applied data and avoiding over-fitting.
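The k-fold test mentioned above partitions the data so that every record serves for validation exactly once. A minimal index-splitting sketch (not the authors' implementation):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold validation.

    Every sample index appears in exactly one test fold; fold sizes
    differ by at most one when k does not divide n_samples."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size
```

The cost the abstract notes comes from retraining the model k times, once per split.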

  15. Comparison of CdZnTe neutron detector models using MCNP6 and Geant4

    NASA Astrophysics Data System (ADS)

    Wilson, Emma; Anderson, Mike; Prendergasty, David; Cheneler, David

    2018-01-01

    The production of accurate detector models is of high importance in the development and use of detectors. Initially, MCNP and Geant were developed to specialise in neutral particle models and accelerator models, respectively; there is now a greater overlap of the capabilities of both, and it is therefore useful to produce comparative models to evaluate detector characteristics. In a collaboration between Lancaster University, UK, and Innovative Physics Ltd., UK, models have been developed in both MCNP6 and Geant4 of Cadmium Zinc Telluride (CdZnTe) detectors developed by Innovative Physics Ltd. Herein, a comparison is made of the relative strengths of MCNP6 and Geant4 for modelling neutron flux and secondary γ-ray emission. Given the increasing overlap of the modelling capabilities of MCNP6 and Geant4, it is worthwhile to comment on differences in results for simulations which have similarities in terms of geometries and source configurations.

  16. Progress in hypersonic turbulence modeling

    NASA Technical Reports Server (NTRS)

    Wilcox, David C.

    1991-01-01

    A compressibility modification is developed for k-omega (Wilcox, 1988) and k-epsilon (Jones and Launder, 1972) models, that is similar to those of Sarkar et al. (1989) and Zeman (1990). Results of the perturbation solution for the compressible wall layer demonstrate why the Sarkar and Zeman terms yield inaccurate skin friction for the flat-plate boundary layer. A new compressibility term is developed which permits accurate predictions of the compressible mixing layer, flat-plate boundary layer, and shock separated flows.

  17. Development of Methods to Evaluate Safer Flight Characteristics

    NASA Technical Reports Server (NTRS)

    Basciano, Thomas E., Jr.; Erickson, Jon D.

    1997-01-01

    The goal of the proposed research is to begin development of a simulation that models the flight characteristics of the Simplified Aid For EVA Rescue (SAFER) pack. Development of such a simulation was initiated to ultimately study the effect an Orbital Replacement Unit (ORU) has on SAFER dynamics. A major function of this program will be to calculate fuel consumption for many ORUs with different masses and locations. This will ultimately determine the maximum ORU mass an astronaut can carry and still perform a self-rescue without jettisoning the unit. A second primary goal is to eventually simulate relative motion (vibration) between the ORU and astronaut. After relative motion is accurately modeled it will be possible to evaluate the robustness of the control system and optimize performance as needed. The first stage in developing the simulation is the ability to model a standardized, total, self-rescue scenario, making it possible to accurately compare different program runs. In orbit an astronaut has only limited data and will not be able to follow the most fuel efficient trajectory; therefore, it is important to correctly model the procedures an astronaut would use in orbit so that good fuel consumption data can be obtained. Once this part of the program is well tested and verified, the vibration (relative motion) of the ORU with respect to the astronaut can be studied.

  18. Eigenspace techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Garrard, W. L.

    1982-01-01

    Mathematical models to be used in the control system design were developed. A computer program, which takes aerodynamic and structural data for the ARW-2 aircraft and converts these data into state space models suitable for use in modern control synthesis procedures, was developed. Reduced order models of inboard and outboard control surface actuator dynamics and a second order vertical wind gust model were developed. An analysis of the rigid body motion of the ARW-2 was conducted. The deletion of the aerodynamic lag states in the rigid body modes resulted in more accurate values for the eigenvalues associated with the plunge and pitch modes than were obtainable if the lag states were retained.

  19. Laboratory Investigations and Numerical Modeling of Loss Mechanisms in Sound Propagation In Sandy Sediments

    DTIC Science & Technology

    2008-09-30

    meeting of the Acoustical Society of America in Providence, RI [7]. In this model, the scalar form of Biot's poroelastic equations could be used since...Laboratory Investigations and Numerical Modeling of Loss Mechanisms in Sound Propagation in Sandy Sediments. Brian T. Hefner Applied...Number: N00014-05-1-0225 http://www.apl.washington.edu LONG-TERM GOALS To develop accurate models for high frequency sound propagation within

  20. Using a prescribed fire to test custom and standard fuel models for fire behaviour prediction in a non-native, grass-invaded tropical dry shrubland

    Treesearch

    Andrew D. Pierce; Sierra McDaniel; Mark Wasser; Alison Ainsworth; Creighton M. Litton; Christian P. Giardina; Susan Cordell; Ralf Ohlemuller

    2014-01-01

    Questions: Do fuel models developed for North American fuel types accurately represent fuel beds found in grass-invaded tropical shrublands? Do standard or custom fuel models for fire-behavior models with in situ or RAWS measured fuel moistures affect the accuracy of predicted fire behavior in grass-invaded tropical shrublands? Location: Hawai’i Volcanoes National...

  1. Testing Software Development Project Productivity Model

    NASA Astrophysics Data System (ADS)

    Lipkin, Ilya

    Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. No accurate model or measure is available to guide an organization's software development estimates, and existing estimation models often underestimate software development effort by as much as 500 to 600 percent. Existing models are usually calibrated using local data with a small sample size, and the resulting estimates do not offer improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and a theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a solution to bridge organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practically, this study allows practitioners to concentrate on the specific constructs of interest that provide the best value for the least amount of time. This study outlines the key contributing constructs that are unique for Software Size E-SLOC, Man-hours Spent, and Quality of the Product, those constructs having the largest contribution to project productivity. This study discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for the customers and suppliers.
    Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains such as IT, Command and Control, and Simulation. This research validates findings from previous work concerning software project productivity and builds on those results. The hypothesized project productivity model provides statistical support and validation of expert opinions used by practitioners in the field of software project estimation.

  2. Modelling and fabrication of high-efficiency silicon solar cells

    NASA Astrophysics Data System (ADS)

    Rohatgi, A.; Smith, A. W.; Salami, J.

    1991-10-01

    This report covers the research conducted on modelling and development of high efficiency silicon solar cells during the period May 1989 to August 1990. First, considerable effort was devoted toward developing a ray tracing program for the photovoltaic community to quantify and optimize surface texturing for solar cells. Second, attempts were made to develop a hydrodynamic model for device simulation. Such a model is somewhat slower than drift-diffusion type models like PC-1D, but it can account for more physical phenomena in the device, such as hot carrier effects, temperature gradients, thermal diffusion, and lattice heat flow. In addition, Fermi-Dirac statistics have been incorporated into the model to deal with heavy doping effects more accurately. The third and final component of the research includes development of silicon cell fabrication capabilities and fabrication of high efficiency silicon cells.

  3. Quantitative volumetric imaging of normal, neoplastic and hyperplastic mouse prostate using ultrasound.

    PubMed

    Singh, Shalini; Pan, Chunliu; Wood, Ronald; Yeh, Chiuan-Ren; Yeh, Shuyuan; Sha, Kai; Krolewski, John J; Nastiuk, Kent L

    2015-09-21

    Genetically engineered mouse models are essential to the investigation of the molecular mechanisms underlying human prostate pathology and the effects of therapy on the diseased prostate. Serial in vivo volumetric imaging expands the scope and accuracy of experimental investigations of models of normal prostate physiology, benign prostatic hyperplasia and prostate cancer, which are otherwise limited by the anatomy of the mouse prostate. Moreover, accurate imaging of hyperplastic and tumorigenic prostates is now recognized as essential to rigorous pre-clinical trials of new therapies. Bioluminescent imaging has been widely used to determine prostate tumor size, but is semi-quantitative at best. Magnetic resonance imaging can determine prostate volume very accurately, but is expensive and has low throughput. We therefore sought to develop and implement a high throughput, low cost, and accurate serial imaging protocol for the mouse prostate. We developed a high frequency ultrasound imaging technique employing 3D reconstruction that allows rapid and precise assessment of mouse prostate volume. Wild-type mouse prostates (n = 4) were examined to establish reproducible baseline imaging, treatment effects on volume were compared, and blinded data were analyzed for intra- and inter-operator reproducibility by correlation and by Bland-Altman analysis. Examples of a benign prostatic hyperplasia mouse model prostate (n = 2) and of orthotopic implantation and growth of a human prostate cancer tumor (n =  ) are also demonstrated. Serial volume measurement of the mouse prostate revealed that high frequency ultrasound was very precise. Following endocrine manipulation, regression and regrowth of the prostate could be monitored with very low intra- and interobserver variability. This technique was also valuable to monitor the development of prostate growth in a model of benign prostatic hyperplasia.
    Additionally, we demonstrate accurate ultrasound image-guided implantation of orthotopic tumor xenografts and monitoring of subsequent tumor growth from ~10 to ~750 mm³ volume. High frequency ultrasound imaging allows precise determination of normal, neoplastic and hyperplastic mouse prostate. Low cost and small image size allows incorporation of this imaging modality inside clean animal facilities, and thereby imaging of immunocompromised models. 3D reconstruction for volume determination is easily mastered, and both small and large relative changes in volume are accurately visualized. Ultrasound imaging does not rely on penetration of exogenous imaging agents, and may therefore better measure poorly vascularized or necrotic diseased tissue, relative to bioluminescent imaging (IVIS). Our method is precise and reproducible with very low inter- and intra-observer variability. Because it is non-invasive, mouse models of prostatic disease states can be imaged serially, reducing inter-animal variability, and enhancing the power to detect small volume changes following therapeutic intervention.
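A first approximation to the 3D volume reconstruction described above is to integrate traced cross-sectional areas along the scan axis. The trapezoidal-rule sketch below is illustrative only; dedicated ultrasound software performs the actual reconstruction:

```python
def volume_from_slices(slice_areas_mm2, spacing_mm):
    """Estimate organ volume (mm^3) from serial 2D cross-sections.

    slice_areas_mm2: traced area of each parallel ultrasound slice, in mm^2.
    spacing_mm: distance between consecutive slices along the scan axis.
    Uses the trapezoidal rule between adjacent slices."""
    if len(slice_areas_mm2) < 2:
        raise ValueError("need at least two slices")
    total = 0.0
    for a, b in zip(slice_areas_mm2, slice_areas_mm2[1:]):
        total += 0.5 * (a + b) * spacing_mm
    return total
```

Finer slice spacing reduces the discretization error, which is one reason high-frequency (high-resolution) transducers help volume precision.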

  4. An adaptive discontinuous Galerkin solver for aerodynamic flows

    NASA Astrophysics Data System (ADS)

    Burgess, Nicholas K.

    This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows.
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yield superior accuracy, as well as enhanced robustness and efficiency for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.

  5. Validation of Solar Sail Simulations for the NASA Solar Sail Demonstration Project

    NASA Technical Reports Server (NTRS)

    Braafladt, Alexander C.; Artusio-Glimpse, Alexandra B.; Heaton, Andrew F.

    2014-01-01

    NASA's Solar Sail Demonstration project partner L'Garde is currently assembling a flight-like sail assembly for a series of ground demonstration tests beginning in 2015. For future missions of this sail that might validate solar sail technology, it is necessary to have an accurate sail thrust model. One of the primary requirements of a proposed potential technology validation mission will be to demonstrate solar sail thrust over a set time period, which for this project is nominally 30 days. This requirement would be met by comparing a L'Garde-developed trajectory simulation to the as-flown trajectory. The current sail simulation baseline for L'Garde is a Systems Tool Kit (STK) plug-in that includes a custom-designed model of the L'Garde sail. The STK simulation has been verified for a flat plate model by comparing it to the NASA-developed Solar Sail Spaceflight Simulation Software (S5). S5 matched STK with a high degree of accuracy and the results of the validation indicate that the L'Garde STK model is accurate enough to meet the potential future mission requirements. Additionally, since the L'Garde sail deviates considerably from a flat plate, a force model for a non-flat sail provided by L'Garde was also tested and compared to a flat plate model in S5. This result will be used in the future as a basis of comparison to the non-flat sail model being developed for STK.
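For context, the flat-plate model against which the non-flat sail is compared has a simple closed form. This sketch uses the ideal perfectly reflecting approximation; the sail area and angle values are illustrative, not mission parameters:

```python
import math

SOLAR_PRESSURE_1AU = 4.56e-6  # N/m^2, radiation pressure on a perfect absorber at 1 AU

def flat_plate_sail_force(area_m2, cone_angle_rad):
    """Thrust magnitude of an ideal, perfectly reflecting flat-plate sail.

    F = 2 * P * A * cos^2(theta), where theta is the angle between the
    sail normal and the sun line. Real, non-flat sails such as the
    L'Garde design deviate from this, which is why dedicated force
    models are needed."""
    return 2.0 * SOLAR_PRESSURE_1AU * area_m2 * math.cos(cone_angle_rad) ** 2
```

The cos² falloff is the reason sail attitude matters so much to trajectory simulation.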

  6. Inhibition: Mental Control Process or Mental Resource?

    ERIC Educational Resources Information Center

    Im-Bolter, Nancie; Johnson, Janice; Ling, Daphne; Pascual-Leone, Juan

    2015-01-01

    The current study tested 2 models of inhibition in 45 children with language impairment and 45 children with normally developing language; children were aged 7 to 12 years. Of interest was whether a model of inhibition as a mental-control process (i.e., executive function) or as a mental resource would more accurately reflect the relations among…

  7. Mapping ecological systems with a random forest model: tradeoffs between errors and bias

    Treesearch

    Emilie Grossmann; Janet Ohmann; James Kagan; Heather May; Matthew Gregory

    2010-01-01

    New methods for predictive vegetation mapping allow improved estimations of plant community composition across large regions. Random Forest (RF) models limit over-fitting problems of other methods, and are known for making accurate classification predictions from noisy, non-normal data, but can be biased when plot samples are unbalanced. We developed two contrasting...
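One common mitigation for the unbalanced-sample bias noted above is to balance the bootstrap draw that each tree in the forest trains on. The sketch below is a generic illustration of that idea, not the authors' method:

```python
import random

def balanced_bootstrap(samples, labels, seed=0):
    """Draw a bootstrap sample with equal counts per class.

    Resamples (with replacement) the same number of records from every
    class, sized to the rarest class, so over-represented classes do
    not dominate a tree's training set."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    n = min(len(members) for members in by_class.values())
    out_x, out_y = [], []
    for y, members in by_class.items():
        for _ in range(n):
            out_x.append(rng.choice(members))  # sample with replacement
            out_y.append(y)
    return out_x, out_y
```

Each tree in the forest would receive its own balanced draw before the usual random-feature splitting.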

  8. Development of a station based climate database for SWAT and APEX assessments in the U.S.

    USDA-ARS?s Scientific Manuscript database

    Water quality simulation models such as the Soil and Water Assessment Tool (SWAT) and Agricultural Policy EXtender (APEX) are widely used in the U.S. These models require large amounts of spatial and tabular data to simulate the natural world. Accurate and seamless daily climatic data are critical...

  9. Modeling small cell lung cancer (SCLC) biology through deterministic and stochastic mathematical models.

    PubMed

    Salgia, Ravi; Mambetsariev, Isa; Hewelt, Blake; Achuthan, Srisairam; Li, Haiqing; Poroyko, Valeriy; Wang, Yingyu; Sattler, Martin

    2018-05-25

    Mathematical cancer models are immensely powerful tools that are based in part on the fractal nature of biological structures, such as the geometry of the lung. Cancers of the lung provide an opportune model to develop and apply algorithms that capture changes and disease phenotypes. We reviewed mathematical models that have been developed for biological sciences and applied them in the context of small cell lung cancer (SCLC) growth, mutational heterogeneity, and mechanisms of metastasis. The ultimate goal is to characterize the stochastic and deterministic nature of this disease, to link this comprehensive set of tools back to the fractal nature of the disease, and to provide a platform for accurate biomarker development. These techniques may be particularly useful in the context of drug development research, such as in combination with existing omics approaches. The integration of these tools will be important to further understand the biology of SCLC and ultimately develop novel therapeutics.
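As one concrete example of the deterministic model families such a review covers, tumor growth is often described by a Gompertz curve, which saturates at a carrying capacity. The parameter values below are illustrative, not fitted SCLC data:

```python
import math

def gompertz_volume(v0, carrying_capacity, growth_rate, t):
    """Deterministic Gompertz tumour-growth curve.

    V(t) = K * exp(ln(V0 / K) * exp(-a * t)): starts at V0 at t = 0 and
    saturates at the carrying capacity K as t grows. Stochastic variants
    add noise terms to capture cell-level variability."""
    return carrying_capacity * math.exp(
        math.log(v0 / carrying_capacity) * math.exp(-growth_rate * t))
```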

  10. Propellant Chemistry for CFD Applications

    NASA Technical Reports Server (NTRS)

    Farmer, R. C.; Anderson, P. G.; Cheng, Gary C.

    1996-01-01

    Current concepts for reusable launch vehicle design have created renewed interest in the use of RP-1 fuels for high pressure and tri-propellant propulsion systems. Such designs require the use of an analytical technology that accurately accounts for the effects of real fluid properties, combustion of large hydrocarbon fuel molecules, and the possibility of soot formation. These effects are inadequately treated in current computational fluid dynamic (CFD) codes used for propulsion system analyses. The objective of this investigation is to provide an accurate analytical description of hydrocarbon combustion thermodynamics and kinetics that is sufficiently computationally efficient to be a practical design tool when used with CFD codes such as the FDNS code. A rigorous description of real fluid properties for RP-1 and its combustion products will be derived from the literature and from experiments conducted in this investigation. Upon the establishment of such a description, the fluid description will be simplified by using the minimum of empiricism necessary to maintain accurate combustion analyses, and the resulting empirical models will be incorporated into an appropriate CFD code. An additional benefit of this approach is that the real fluid properties analysis simplifies the introduction of the effects of droplet sprays into the combustion model. Typical species compositions of RP-1 have been identified, surrogate fuels have been established for analyses, and combustion and sooting reaction kinetics models have been developed. Methods for predicting the necessary real fluid properties have been developed and essential experiments have been designed. Verification studies are in progress, and preliminary results from these studies will be presented. The approach has been determined to be feasible, and upon its completion the required methodology for accurate performance and heat transfer CFD analyses for high pressure, tri-propellant propulsion systems will be available.

  11. Automatic segmentation of bones from digital hand radiographs

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Taira, Ricky K.; Shim, Hyeonjoon; Keaton, Patricia

    1995-05-01

    The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The algorithm uses an object-oriented approach comprising several stages, beginning with the most general objects to be segmented, such as the outline of the hand against the background, and proceeding in a succession of stages to the most specific object, such as a specific phalangeal bone from a digit of the hand. Each stage uses custom operators tailored to its specific needs, which aids accuracy. The method is further aided by a knowledge base in which all model contours and other information, such as age, race, and sex, are stored. Shape models, 1-D wrist profiles, and an interpretation tree are used to map model and data contour segments. Shape analysis is performed using an arc-length orientation transform. The method is tested on close to 340 phalangeal and epiphyseal objects to be segmented from 17 cases of pediatric hand images obtained from our clinical PACS. Patient ages range from 2 to 16 years. A pediatric radiologist preliminarily assessed the object contours, which were found to be accurate to within 95% for cases with non-fused bones and to within 85% for cases with fused bones. With accurate and robust results, the method can be applied toward areas such as the determination of bone age, the development of a normal hand atlas, and the characterization of many congenital and acquired growth diseases. Furthermore, this method's architecture can be applied to other image segmentation problems.

  12. Towards Assessing the Human Trajectory Planning Horizon

    PubMed Central

    Carton, Daniel; Nitsch, Verena; Meinzer, Dominik; Wollherr, Dirk

    2016-01-01

    Mobile robots are envisioned to cooperate closely with humans and to integrate seamlessly into a shared environment. For locomotion, these environments resemble traversable areas which are shared between multiple agents like humans and robots. The seamless integration of mobile robots into these environments requires accurate predictions of human locomotion. This work considers optimal control and model predictive control approaches for accurate trajectory prediction and proposes to integrate aspects of human behavior to improve their performance. Recently developed models are not able to accurately reproduce trajectories that result from sudden avoidance maneuvers. In particular, the human locomotion behavior when handling disturbances from other agents poses a problem. The goal of this work is to investigate whether humans alter their trajectory planning horizon in order to resolve abruptly emerging collision situations. By modeling humans as model predictive controllers, the influence of the planning horizon is investigated in simulations. Based on these results, an experiment is designed to identify whether humans initiate a change in their locomotion planning behavior while moving in a complex environment. The results support the hypothesis that humans employ a shorter planning horizon to avoid collisions that are triggered by unexpected disturbances. Observations presented in this work are expected to further improve the generalizability and accuracy of prediction methods based on dynamic models. PMID:27936015

  14. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. The COBRA Camera Response Model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
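
Response-curve fitting of this kind can be illustrated with a closed-form least-squares fit. The log-linear form below is an assumed stand-in, not the actual COBRA expressions; it treats digitized output D as linear in the log of the effective exposure product x = E·G·T (irradiance × gain × exposure):

```python
import math

def fit_log_linear(exposures, outputs):
    """Least-squares fit of D = a + b*log(x), where x = E*G*T is the
    effective exposure product. Closed-form simple linear regression
    on (log x, D). Assumed functional form, not the COBRA model."""
    lx = [math.log(x) for x in exposures]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(outputs) / n
    sxx = sum((u - mx) ** 2 for u in lx)
    sxy = sum((u - mx) * (v - my) for u, v in zip(lx, outputs))
    b = sxy / sxx          # slope: counts per decade of exposure
    a = my - b * mx        # offset
    return a, b
```

A model of this shape is trivially invertible (x = exp((D - a)/b)), which is one of the properties the abstract highlights.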

  15. Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.

    PubMed

    Fu, Michael J; Cavuşoğlu, M Cenk

    2012-12-01

    Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. Grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. In addition, variability due to human subjects and grip-force variation was modeled as both structured and unstructured uncertainty. For both forms of variability, the maximum variation and the 95% and 67% confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.
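
Frequency-domain arm models with force as input and position as output are often approximated as a second-order mass-spring-damper, H(jω) = 1/(k − mω² + jbω). The sketch below uses that textbook approximation with illustrative parameters, not the identified models or values from the study:

```python
def arm_response(m, b, k, omega):
    """Magnitude of H(jw) = 1 / (k - m*w^2 + j*b*w) for a
    mass-spring-damper arm approximation (force in, position out).
    m: mass [kg], b: damping [N*s/m], k: stiffness [N/m], omega [rad/s]."""
    h = 1.0 / complex(k - m * omega ** 2, b * omega)
    return abs(h)
```

At ω = 0 the magnitude reduces to the static compliance 1/k, and it rolls off at high frequency, which is the qualitative behavior such identified models capture.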

  16. Modeling and forecasting US presidential election using learning algorithms

    NASA Astrophysics Data System (ADS)

    Zolghadr, Mohammad; Niaki, Seyed Armin Akhavan; Niaki, S. T. A.

    2017-09-01

    The primary objective of this research is to obtain an accurate forecasting model for the US presidential election. To identify a reliable model, artificial neural network (ANN) and support vector regression (SVR) models are compared based on specified performance measures. Moreover, six independent variables, such as GDP, the unemployment rate, and the president's approval rate, are considered in a stepwise regression to identify significant variables. The president's approval rate is identified as the most significant variable, based on which eight other variables are identified and considered in the model development. Preprocessing methods are applied to prepare the data for the learning algorithms. The proposed procedure increases the accuracy of the forecast by 50%. The learning algorithms (ANN and SVR) proved superior to linear regression based on each method's calculated performance measures. The SVR model is identified as the most accurate model, as it successfully predicted the outcome of the last three elections (2004, 2008, and 2012).
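
The first step of a stepwise screen, finding the single most significant predictor, can be sketched as a univariate R² comparison: regress the outcome on each candidate and keep the best. The variable names and data below are hypothetical, not the study's data set:

```python
def r_squared(xs, ys):
    """R^2 of a simple linear regression of ys on xs (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

def best_predictor(candidates, ys):
    """Return the candidate variable name with the highest R^2
    against the outcome ys. candidates: {name: [values, ...]}."""
    return max(candidates, key=lambda name: r_squared(candidates[name], ys))
```

A full stepwise procedure would repeat this on the residuals, adding variables until no candidate improves the fit significantly.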

  17. Customer-experienced rapid prototyping

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Zhang, Fu; Li, Anbo

    2008-12-01

    To describe GIS requirements accurately and comprehend them quickly, this article integrates the ideas of QFD (Quality Function Deployment) and UML (Unified Modeling Language), analyzes the deficiencies of the prototype development model, and proposes the Customer-Experienced Rapid Prototyping (CE-RP) approach, describing its process and framework in detail from the perspective of the characteristics of modern GIS. The CE-RP is mainly composed of Customer Tool-Sets (CTS), Developer Tool-Sets (DTS) and a Barrier-Free Semantic Interpreter (BF-SI), and is performed by two roles: customer and developer. The main purpose of the CE-RP is to produce unified, authorized requirements data models shared between customer and software developer.

  18. [A new method of processing quantitative PCR data].

    PubMed

    Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun

    2003-05-01

    Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After extensive kinetic studies, PE found a linear relation between the initial template number and the cycle at which the accumulating fluorescent product becomes detectable, and on this basis developed the quantitative PCR technique used in the PE7700 and PE5700. However, the error of this technique is too great for the needs of biotechnology development and clinical research, so a better quantitative PCR technique is needed. The mathematical model submitted here draws on results from related sciences and is based on the PCR principle and a careful analysis of the molecular relationships among the main members of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and accurately reflects the accumulation of PCR product molecules. Accurate quantitative PCR analysis can be performed using this functional relation: the accumulated PCR product quantity can be obtained from the initial template number. When this model is used for quantitative PCR analysis, the result error depends only on the accuracy of the fluorescence intensity measurement, i.e., on the instrument used. For example, when the fluorescence intensity is accurate to 6 digits and the template size is between 100 and 1,000,000, the accuracy of the quantitative result will exceed 99%. Under the same conditions and on the same instrument, different analysis methods give distinctly different errors; processing the data with this quantitative analysis system yields results about 80 times more accurate than the CT method.
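
The linear relation the abstract refers to follows from the textbook amplification model N_c = N₀(1+E)^c, where E is the amplification efficiency: the threshold cycle Ct is then linear in log N₀. A minimal sketch of that simple form (the model in the abstract is more detailed than this):

```python
import math

def ct_from_template(n0, threshold, efficiency=1.0):
    """Cycle at which product N_c = n0*(1+E)^c reaches the detection
    threshold, under ideal exponential amplification (E = 1 doubles
    the product each cycle)."""
    return math.log(threshold / n0) / math.log(1.0 + efficiency)

def template_from_ct(ct, threshold, efficiency=1.0):
    """Invert the relation: recover the initial template number
    from the observed threshold cycle Ct."""
    return threshold / (1.0 + efficiency) ** ct
```

The standard Ct method quantifies templates by interpolating on this log-linear relation; the abstract's model instead tracks the full accumulation curve, which is where its claimed accuracy gain comes from.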

  19. Ballistics and anatomical modelling - A review.

    PubMed

    Humphrey, Caitlin; Kumaratilake, Jaliya

    2016-11-01

    Ballistics is the study of a projectile's motion and can be broken down into four stages: internal, intermediate, external and terminal ballistics. The study of the effects a projectile has on living tissue is referred to as wound ballistics and falls within terminal ballistics. To understand the effects a projectile has on living tissues, the mechanisms of wounding need to be understood. These include the permanent and temporary cavities, energy, yawing, tumbling and fragmenting. Much ballistics research has been conducted using cadavers, animal models and simulants such as ballistic ordnance gelatine. Further research is being conducted into developing anatomical, 3D, experimental and computational models. However, these models need to accurately represent the human body and its heterogeneous nature, which requires understanding the biomechanical properties of the different tissues and organs. Research into accurately representing human tissues with simulants is needed and is slowly being conducted. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Measuring Value-at-Risk and Expected Shortfall of crude oil portfolio using extreme value theory and vine copula

    NASA Astrophysics Data System (ADS)

    Yu, Wenhua; Yang, Kun; Wei, Yu; Lei, Likun

    2018-01-01

    Volatilities of crude oil prices have important impacts on the steady and sustainable development of the world's real economy, so it is of great academic and practical significance to model and measure the volatility and risk of crude oil markets accurately. This paper measures the Value-at-Risk (VaR) and Expected Shortfall (ES) of a portfolio consisting of four crude oil assets using GARCH-type models, extreme value theory (EVT) and vine copulas. The backtesting results show that the combination of GARCH-type-EVT models and vine copula methods produces accurate risk measures for the oil portfolio. The mixed R-vine copula is more flexible than, and superior to, the other vine copulas. Different GARCH-type models, which can depict the long memory and/or leverage effect of oil price volatilities, nevertheless offer similar marginal distributions of the oil returns.
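
The two risk measures themselves are easy to state. As a baseline against which GARCH-EVT-copula methods are usually compared, historical simulation takes VaR as the empirical quantile of losses and ES as the mean loss beyond VaR. This is a simpler estimator than the paper's method, shown only to fix the definitions:

```python
def var_es(returns, alpha=0.95):
    """Historical-simulation VaR and ES at confidence level alpha.
    Losses are negated returns; VaR is the empirical alpha-quantile
    of losses, ES the mean loss at or beyond VaR. A baseline, not
    the paper's GARCH-EVT-vine-copula estimator."""
    losses = sorted(-r for r in returns)
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    tail = losses[idx:]
    return losses[idx], sum(tail) / len(tail)
```

By construction ES ≥ VaR, which is why ES is preferred as a measure of tail severity.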

  1. A review of virtual cutting methods and technology in deformable objects.

    PubMed

    Wang, Monan; Ma, Yuzheng

    2018-06-05

    Virtual cutting of deformable objects has been a research topic for more than a decade and has been used in many areas, especially in surgery simulation. We refer to the relevant literature and briefly describe the related research. The virtual cutting method is introduced, and we discuss the benefits and limitations of these methods and explore possible research directions. Virtual cutting is a category of object deformation. It needs to represent the deformation of models in real time as accurately, robustly and efficiently as possible. To accurately represent models, the method must be able to: (1) model objects with different material properties; (2) handle collision detection and collision response; and (3) update the geometry and topology of the deformable model that is caused by cutting. Virtual cutting is widely used in surgery simulation, and research of the cutting method is important to the development of surgery simulation. Copyright © 2018 John Wiley & Sons, Ltd.

  2. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    PubMed Central

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem in order to find feasible preliminary solutions and construct the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663
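
The simulated annealing step of such a 2-step scheme can be sketched in a few lines. The objective, step size and cooling schedule below are illustrative placeholders, not the dispatching model's:

```python
import math
import random

def anneal(f, x0, step=0.5, t0=1.0, cooling=0.998, iters=4000, rng=random):
    """Minimize f by simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), and cool T
    geometrically so the search settles into a (near-)optimum."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

In the dispatching context, the candidate move would perturb a feasible generation schedule drawn from the Pareto set rather than a scalar.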

  4. An investigation of the effects of measurement noise in the use of instantaneous angular speed for machine diagnosis

    NASA Astrophysics Data System (ADS)

    Gu, Fengshou; Yesilyurt, Isa; Li, Yuhua; Harris, Georgina; Ball, Andrew

    2006-08-01

    In order to discriminate small changes for early fault diagnosis of rotating machines, condition monitoring demands that the measurement of instantaneous angular speed (IAS) of the machines be as accurate as possible. This paper develops the theoretical basis and practical implementation of IAS data acquisition and IAS estimation when noise influence is included. IAS data is modelled as a frequency modulated signal of which the signal-to-noise ratio can be improved by using a high-resolution encoder. From this signal model and analysis, optimal configurations for IAS data collection are addressed for high accuracy IAS measurement. Simultaneously, a method based on analytic signal concept and fast Fourier transform is also developed for efficient and accurate estimation of IAS. Finally, a fault diagnosis is carried out on an electric induction motor driving system using IAS measurement. The diagnosis results show that using a high-resolution encoder and a long data stream can achieve noise reduction by more than 10 dB in the frequency range of interest, validating the model and algorithm developed. Moreover, the results demonstrate that IAS measurement outperforms conventional vibration in diagnosis of incipient faults of motor rotor bar defects and shaft misalignment.
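
The role of encoder resolution is easiest to see in the elementary elapsed-time IAS estimator: each encoder pulse spans a known angle, so speed is that angle over the measured pulse interval. This is a simpler baseline than the analytic-signal/FFT estimator developed in the paper:

```python
import math

def ias_from_pulses(timestamps, lines):
    """Instantaneous angular speed (rad/s) between successive pulses of
    an encoder with `lines` pulses per revolution: each interval spans
    2*pi/lines radians, so IAS = step / dt. A higher-resolution encoder
    (larger `lines`) localizes speed changes more finely in time."""
    step = 2.0 * math.pi / lines
    return [step / (t1 - t0) for t0, t1 in zip(timestamps, timestamps[1:])]
```

Timestamp quantization noise in dt maps directly into IAS noise, which is why the paper models IAS acquisition as a frequency-modulated signal and improves the signal-to-noise ratio with a high-resolution encoder.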

  5. Characterizing Topology of Probabilistic Biological Networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-09-06

    Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power-law and log-normal models best describe degree distributions of probabilistic networks. The inverse correlation of the degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.
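
The polynomial-time claim is concrete for a single node: if its incident edges exist independently with given probabilities, its degree follows a Poisson-binomial distribution, computable by a simple dynamic program (a sketch of the idea, not the paper's exact algorithm):

```python
def degree_distribution(edge_probs):
    """P(degree = k) for a node whose incident edges exist independently
    with the given probabilities. Poisson-binomial distribution via
    dynamic programming: O(n^2) instead of summing over 2^n topologies."""
    dist = [1.0]                        # with 0 edges, degree is 0 surely
    for p in edge_probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)     # this edge absent
            new[k + 1] += q * p         # this edge present
        dist = new
    return dist
```

Extending this to joint degree distributions of adjacent node pairs, as the paper does, conditions on the shared edge but keeps the same polynomial flavor.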

  6. Development of a Remote Accessibility Assessment System through three-dimensional reconstruction technology.

    PubMed

    Kim, Jong Bae; Brienza, David M

    2006-01-01

    A Remote Accessibility Assessment System (RAAS) that uses three-dimensional (3-D) reconstruction technology is being developed; it enables clinicians to assess the wheelchair accessibility of users' built environments from a remote location. The RAAS uses commercial software to construct 3-D virtualized environments from photographs. We developed custom screening algorithms and instruments for analyzing accessibility. Characteristics of the camera and 3-D reconstruction software chosen for the system significantly affect its overall reliability. In this study, we performed an accuracy assessment to verify that commercial hardware and software can construct accurate 3-D models by analyzing the accuracy of dimensional measurements in a virtual environment and a comparison of dimensional measurements from 3-D models created with four cameras/settings. Based on these two analyses, we were able to specify a consumer-grade digital camera and PhotoModeler (EOS Systems, Inc, Vancouver, Canada) software for this system. Finally, we performed a feasibility analysis of the system in an actual environment to evaluate its ability to assess the accessibility of a wheelchair user's typical built environment. The field test resulted in an accurate accessibility assessment and thus validated our system.

  7. Revisiting low-fidelity two-fluid models for gas-solids transport

    NASA Astrophysics Data System (ADS)

    Adeleke, Najeem; Adewumi, Michael; Ityokumbul, Thaddeus

    2016-08-01

    Two-phase gas-solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors and food processing units. Systems automation and real time process optimization stand to benefit a great deal from availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas-solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for modulus of elasticity. A new set of Roe-Pike averages are presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux-limiter. The model is evaluated by the prediction of experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately even under flow regimes involving counter-current flow.

  8. QSPR models for half-wave reduction potential of steroids: a comparative study between feature selection and feature extraction from subsets of or entire set of descriptors.

    PubMed

    Hemmateenejad, Bahram; Yazdani, Mahdieh

    2009-02-16

    Steroids are widely distributed in nature and are found in abundance in plants, animals, and fungi. A data set consisting of a diverse set of steroids has been used to develop quantitative structure-electrochemistry relationship (QSER) models for their half-wave reduction potential. Modeling was established by means of multiple linear regression (MLR) and principal component regression (PCR) analyses. In MLR analysis, the QSPR models were constructed either by first grouping descriptors and then stepwise selecting variables from each group (MLR1), or by stepwise selecting predictor variables from the pool of all calculated descriptors (MLR2). A similar procedure was used in PCR analysis, so that the principal components (or features) were extracted from different groups of descriptors (PCR1) or from the entire set of descriptors (PCR2). The resulting models were evaluated using cross-validation, chance correlation, application to the prediction of the reduction potential of test samples, and assessment of the applicability domain. Both MLR approaches gave accurate results; however, the QSPR model found by MLR1 was statistically more significant. The PCR1 approach produced a model as accurate as the MLR approaches, whereas less accurate results were obtained with PCR2. Overall, the correlation coefficients of cross-validation and prediction of the QSPR models resulting from the MLR1, MLR2 and PCR1 approaches were higher than 90%, showing the high ability of the models to predict the reduction potential of the studied steroids.

  9. A transparent model of the human scala tympani cavity.

    PubMed

    Rebscher, S J; Talbot, N; Bruszewski, W; Heilmann, M; Brasell, J; Merzenich, M M

    1996-01-01

    A dimensionally accurate clear model of the human scala tympani has been produced to evaluate the insertion and position of clinically applied intracochlear electrodes for electrical stimulation. Replicates of the human scala tympani were made from a low-melting-point metal alloy (LMA) and from polymethylmethacrylate (PMMA) resin. The LMA metal casts were embedded in blocks of epoxy and in clear silicone rubber. After removal of the metal alloy, a cavity was produced that accurately models the human scala tympani. Investment-casting molds were made from the PMMA scala tympani casts to enable production of multiple LMA casts, from which identical models were fabricated. Total dimensional distortion of the LMA casting process was less than 1% in length and 2% in diameter. The models have been successfully integrated into the design process for the iterative development of advanced intracochlear electrode arrays at UCSF. These fabrication techniques are applicable to a wide range of biomedical design problems that require modelling of visually obscured cavities.

  10. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  11. Modelling pH evolution and lactic acid production in the growth medium of a lactic acid bacterium: application to set a biological TTI.

    PubMed

    Ellouze, M; Pichaud, M; Bonaiti, C; Coroller, L; Couvert, O; Thuault, D; Vaillant, R

    2008-11-30

    Time temperature integrators or indicators (TTIs) are effective tools making the continuous monitoring of the time temperature history of chilled products possible throughout the cold chain. Their correct setting is of critical importance to ensure food quality. The objective of this study was to develop a model to facilitate accurate settings of the CRYOLOG biological TTI, TRACEO. Experimental designs were used to investigate and model the effects of the temperature, the TTI inoculum size, pH, and water activity on its response time. The modelling process went through several steps addressing growth, acidification and inhibition phenomena in dynamic conditions. The model showed satisfactory results and validations in industrial conditions gave clear evidence that such a model is a valuable tool, not only to predict accurate response times of TRACEO, but also to propose precise settings to manufacture the appropriate TTI to trace a particular food according to a given time temperature scenario.

  12. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie M.; Mashnik, Stepan G.

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM), as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
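Dostrovsky-era inverse cross sections are, in essence, a geometric area times an energy-dependent correction factor. The sketch below shows only that classic shape; the radius constant, barrier formula, and correction coefficients are illustrative assumptions, not the CEM03.03 parameterization:

```python
import math

R0 = 1.5  # fm, nuclear radius parameter (illustrative)

def inverse_xs_mb(a_target, z_target, eps_mev, z_proj=0):
    """Schematic Dostrovsky-style inverse cross section in millibarns:
    geometric area pi*R^2 scaled by (1 + beta/eps) for neutrons, or by a
    Coulomb-barrier cutoff (1 - V/eps) for charged ejectiles."""
    r = R0 * a_target ** (1.0 / 3.0)      # nuclear radius, fm
    sigma_geom = math.pi * r * r * 10.0   # 1 fm^2 = 10 mb
    if z_proj == 0:
        return sigma_geom * (1.0 + 2.0 / eps_mev)      # neutrons rise at low energy
    v = 1.44 * z_proj * z_target / r      # Coulomb barrier, MeV (e^2 = 1.44 MeV fm)
    return max(0.0, sigma_geom * (1.0 - v / eps_mev))  # zero below the barrier
```

The charged-particle branch vanishing below the barrier, and the neutron branch growing at low energy, are the qualitative features any replacement model must also reproduce.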

  13. Numerical modeling of turbulent swirling flow in a multi-inlet vortex nanoprecipitation reactor using dynamic DDES

    NASA Astrophysics Data System (ADS)

    Hill, James C.; Liu, Zhenping; Fox, Rodney O.; Passalacqua, Alberto; Olsen, Michael G.

    2015-11-01

    The multi-inlet vortex reactor (MIVR) has been developed to provide a platform for rapid mixing in the application of flash nanoprecipitation (FNP) for manufacturing functional nanoparticles. Unfortunately, commonly used RANS methods are unable to accurately model this complex swirling flow. Large eddy simulations have also been problematic, as the fine grids required to model the flow accurately are computationally expensive. These dilemmas led to the strategy of applying a Delayed Detached Eddy Simulation (DDES) method to the vortex reactor. In the current work, the turbulent swirling flow inside a scaled-up MIVR was investigated using a dynamic DDES model, in which the eddy viscosity has a form similar to the Smagorinsky sub-grid viscosity in LES and therefore allows a dynamic procedure to determine its coefficient. The complex recirculating back flow near the reactor center was successfully captured by this model, and the simulation results agree with experimental data for mean velocity and Reynolds stresses.
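The dynamic procedure mentioned above calibrates the coefficient in a Smagorinsky-type eddy viscosity. A minimal sketch of the underlying viscosity formula, with a fixed coefficient standing in for the dynamically computed one (the function name and sample constant are assumptions):

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Eddy viscosity nu_t = (C_s * Delta)^2 * |S|, where S is the
    symmetric part of the 3x3 velocity-gradient tensor and
    |S| = sqrt(2 S_ij S_ij). A dynamic model would replace the fixed
    c_s with a coefficient computed from test-filtered fields."""
    s = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))   # |S|
    return (c_s * delta) ** 2 * s_mag
```

In a DDES model the filter width `delta` is further blended with the wall distance so the formulation reduces to RANS near walls and LES away from them.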

  14. Patient-specific radiation dose and cancer risk estimation in CT: Part I. Development and validation of a Monte Carlo program

    PubMed Central

    Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Toncheva, Greta; Yoshizumi, Terry T.; Frush, Donald P.

    2011-01-01

    Purpose: Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides dose and risk estimates specific to each patient and each CT examination. As the first step toward patient-specific dose and risk estimation, this article aimed to develop a method for accurately assessing radiation dose from CT examinations. Methods: A Monte Carlo program was developed to model a CT system (LightSpeed VCT, GE Healthcare). The geometry of the system, the energy spectra of the x-ray source, the three-dimensional geometry of the bowtie filters, and the trajectories of source motions during axial and helical scans were explicitly modeled. To validate the accuracy of the program, a cylindrical phantom was built to enable dose measurements at seven different radial distances from its central axis. Simulated radial dose distributions in the cylindrical phantom were validated against ion chamber measurements for single axial scans at all combinations of tube potential and bowtie filter settings. The accuracy of the program was further validated using two anthropomorphic phantoms (a pediatric one-year-old phantom and an adult female phantom). Computer models of the two phantoms were created based on their CT data and were voxelized for input into the Monte Carlo program. Simulated dose at various organ locations was compared against measurements made with thermoluminescent dosimetry chips for both single axial and helical scans. Results: For the cylindrical phantom, simulations differed from measurements by −4.8% to 2.2%. For the two anthropomorphic phantoms, the discrepancies between simulations and measurements ranged between (−8.1%, 8.1%) and (−17.2%, 13.0%) for the single axial scans and the helical scans, respectively. Conclusions: The authors developed an accurate Monte Carlo program for assessing radiation dose from CT examinations. When combined with computer models of actual patients, the program can provide accurate dose estimates for specific patients. PMID:21361208
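At the core of any such Monte Carlo dose engine is sampling photon interaction sites from the exponential attenuation law. The toy version below is purely illustrative (homogeneous slab, energy scored at the first interaction only, no scattering transport, no spectrum or bowtie model) and bears no relation to the validated program described above:

```python
import math
import random

def depth_dose_fractions(n_photons, mu_per_cm, slab_cm, n_bins=10, seed=1):
    """Toy Monte Carlo: each photon's free path is drawn from the
    exponential distribution p(x) = mu * exp(-mu * x); the photon is
    scored in the depth bin of its first interaction."""
    rng = random.Random(seed)
    bins = [0.0] * n_bins
    width = slab_cm / n_bins
    for _ in range(n_photons):
        # inverse-transform sampling of the free path, cm
        depth = -math.log(1.0 - rng.random()) / mu_per_cm
        if depth < slab_cm:
            bins[int(depth / width)] += 1.0
    return [b / n_photons for b in bins]
```

A realistic CT dose code layers source geometry, energy spectra, voxelized anatomy, and full scatter transport on top of exactly this sampling step.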

  15. Cerebral Aneurysm Clipping Surgery Simulation Using Patient-Specific 3D Printing and Silicone Casting.

    PubMed

    Ryan, Justin R; Almefty, Kaith K; Nakaji, Peter; Frakes, David H

    2016-04-01

    Neurosurgery simulator development is growing as practitioners recognize the need for improved instructional and rehearsal platforms to improve procedural skills and patient care. In addition, changes in practice patterns have decreased the volume of specific cases, such as aneurysm clippings, which reduces the opportunity for operating room experience. The authors developed a hands-on, dimensionally accurate model for aneurysm clipping using patient-derived anatomic data and three-dimensional (3D) printing. Design of the model focused on reproducibility as well as adaptability to new patient geometry. A modular, reproducible, and patient-derived medical simulacrum was developed for medical learners to practice aneurysm clipping procedures. Various forms of 3D printing were used to develop a geometrically accurate cranium and vascular tree featuring 9 patient-derived aneurysms. 3D printing in conjunction with elastomeric casting was leveraged to achieve a patient-derived brain model with tactile properties not yet available from commercial 3D printing technology. An educational pilot study was performed to gauge simulation efficacy. Through this novel manufacturing process, a patient-derived simulacrum was developed for neurovascular surgical simulation. A follow-up qualitative study suggests the potential to enhance current educational programs, and assessments support the efficacy of the simulacrum. The proposed aneurysm clipping simulator has the potential to improve learning experiences in the surgical environment. 3D printing and elastomeric casting can produce patient-derived models for a dynamic learning environment that adds value to surgical training and preparation. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors

    NASA Technical Reports Server (NTRS)

    VanOverbeke, Thomas J.

    1998-01-01

    The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions; because turbulence affects reaction rates, this interaction must be evaluated more accurately. In this work a more physically correct way of handling the effect of turbulence on combustion is further developed and tested. Because turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics, and by using more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE), which solves for velocity, pressure, turbulent kinetic energy, and dissipation; the pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays, and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame, for which the baseline and stochastic predictions bounded the experimental data. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation. Grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method took longer and required more memory, but it has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.
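The "relax to mean" mixing model compared above is commonly known as IEM (interaction by exchange with the mean): each notional Monte Carlo particle's scalar relaxes toward the ensemble mean at a rate set by the turbulence frequency. A minimal sketch (function name and the exact-integration form are my assumptions, not the thesis's implementation):

```python
import math

def iem_mixing_step(phis, dt, omega, c_phi=2.0):
    """One IEM ('relax to mean') step for Monte Carlo pdf particles:
    d(phi_i)/dt = -0.5 * C_phi * omega * (phi_i - <phi>), integrated
    exactly over dt. The ensemble mean is preserved; variance decays."""
    mean = sum(phis) / len(phis)
    decay = math.exp(-0.5 * c_phi * omega * dt)
    return [mean + (p - mean) * decay for p in phis]
```

Because each particle only moves toward the mean, IEM leaves the mean temperature unchanged, consistent with the observation above that the choice of mixing model had little effect on average temperature predictions.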

  17. Protein structure modeling for CASP10 by multiple layers of global optimization.

    PubMed

    Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung

    2014-02-01

    In the template-based modeling (TBM) category of the CASP10 experiment, we introduced a new protocol called the protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm called conformational space annealing (CSA) is applied to the three layers of the TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates) combined with physical energy terms such as the dynamic fragment assembly (DFA) energy, the DFIRE statistical potential, and a hydrogen bonding term. These physical energy terms are expected to guide the structure modeling, especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on the random forest machine learning algorithm to screen templates, multiple alignments, and final models. For the TBM targets of CASP10, we find that, owing to the combination of three stages of CSA global optimization and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than those of the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.
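The appeal of Lorentzian-type distance restraints over harmonic ones is that each well has finite depth, so a single bad template cannot dominate the total energy. A schematic of that functional form only; the widths, weights, and sign convention below are illustrative assumptions, not the PMS definition:

```python
def lorentzian_restraint_energy(d, restraints):
    """Sum of finite-depth Lorentzian wells, one per template-derived
    target distance d0: E = -sum_k w_k / (1 + ((d - d0_k) / width_k)^2).
    Far from d0 each term decays toward 0 instead of growing without
    bound as a harmonic restraint would."""
    energy = 0.0
    for d0, width, weight in restraints:
        energy -= weight / (1.0 + ((d - d0) / width) ** 2)
    return energy
```

With restraints pooled from multiple templates, consistent templates reinforce a deep shared minimum while an outlier contributes only a shallow, bounded bump.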

  18. Creating a Structurally Realistic Finite Element Geometric Model of a Cardiomyocyte to Study the Role of Cellular Architecture in Cardiomyocyte Systems Biology.

    PubMed

    Rajagopal, Vijay; Bass, Gregory; Ghosh, Shouryadipta; Hunt, Hilary; Walker, Cameron; Hanssen, Eric; Crampin, Edmund; Soeller, Christian

    2018-04-18

    With the advent of three-dimensional (3D) imaging technologies such as electron tomography, serial-block-face scanning electron microscopy, and confocal microscopy, the scientific community has unprecedented access to large datasets at sub-micrometer resolution that characterize the architectural remodeling that accompanies changes in cardiomyocyte function in health and disease. However, these datasets have been under-utilized for investigating the role of cellular architecture remodeling in cardiomyocyte function. The purpose of this protocol is to outline how to create an accurate finite element model of a cardiomyocyte using high-resolution electron microscopy and confocal microscopy images. A detailed and accurate model of cellular architecture has significant potential to provide new insights into cardiomyocyte biology, more than experiments alone can garner. The power of this method lies in its ability to computationally fuse information from two disparate imaging modalities of cardiomyocyte ultrastructure to develop one unified and detailed model of the cardiomyocyte. This protocol outlines steps to integrate electron tomography and confocal microscopy images of adult male Wistar rat (a standard albino laboratory strain) cardiomyocytes to develop a half-sarcomere finite element model of the cardiomyocyte. The procedure generates a 3D finite element model that contains an accurate, high-resolution depiction (on the order of ~35 nm) of the distribution of mitochondria, myofibrils, and ryanodine receptor clusters that release the calcium necessary for cardiomyocyte contraction from the sarcoplasmic reticular network (SR) into the myofibril and cytosolic compartment. The model generated here as an illustration does not incorporate details of the transverse-tubule architecture or the sarcoplasmic reticular network and is therefore a minimal model of the cardiomyocyte. Nevertheless, the model can already be applied in simulation-based investigations into the role of cell structure in calcium signaling and mitochondrial bioenergetics, which is illustrated and discussed using two case studies that are presented following the detailed protocol.
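The simplest route from a segmented image stack to a finite element geometry is one 8-node hexahedral element per labeled voxel; real pipelines then smooth, refine, and assign per-compartment labels. A minimal sketch of that voxel-to-mesh step (the function name is an assumption; an actual cardiomyocyte model would carry separate labels for mitochondria, myofibrils, etc.):

```python
import numpy as np

def voxels_to_hex_mesh(mask):
    """Convert a binary 3D segmentation mask into FE input: unique node
    coordinates (in voxel units) plus 8-node hexahedral connectivity.
    Adjacent voxels share the nodes on their common face."""
    nodes, elems = {}, []

    def node_id(p):
        if p not in nodes:
            nodes[p] = len(nodes)
        return nodes[p]

    for i, j, k in zip(*np.nonzero(mask)):
        corners = [(i + a, j + b, k + c)
                   for c in (0, 1) for b in (0, 1) for a in (0, 1)]
        elems.append([node_id(p) for p in corners])
    coords = np.array(sorted(nodes, key=nodes.get), dtype=float)
    return coords, np.array(elems, dtype=int)
```

Scaling `coords` by the voxel spacing of each imaging modality is what puts the fused electron tomography and confocal data into a common physical coordinate frame.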

  19. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cetiner, Mustafa Sacit; Flanagan, George F.

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.
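The probabilistic side of such a coupling ultimately evaluates event tree/fault tree logic. A toy bottom-up fault tree evaluator under the usual independence assumption (the tuple encoding and function name are mine, not the Reliability Workbench API):

```python
def ft_probability(gate):
    """Evaluate a nested fault tree recursively: a float leaf is a
    basic-event probability; ('AND', ...) multiplies child
    probabilities; ('OR', ...) combines them as 1 - prod(1 - p).
    Assumes independent basic events."""
    if isinstance(gate, float):
        return gate
    op, *children = gate
    probs = [ft_probability(c) for c in children]
    result = 1.0
    if op == 'AND':
        for p in probs:
            result *= p
        return result
    for p in probs:  # 'OR' gate
        result *= (1.0 - p)
    return 1.0 - result
```

In a coupled platform, injecting a fault amounts to setting a basic-event probability to 1.0 and re-evaluating the top event while the multi-physics model simulates the same failed component.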

  20. Developing a stochastic traffic volume prediction model for public-private partnership projects

    NASA Astrophysics Data System (ADS)

    Phong, Nguyen Thanh; Likhitruangsilp, Veerasak; Onishi, Masamitsu

    2017-11-01

    Transportation projects require an enormous amount of capital investment owing to their tremendous size, complexity, and risk. Because of the limitation of public finances, the private sector is invited to participate in transportation project development. The private sector can entirely or partially invest in transportation projects under the Public-Private Partnership (PPP) scheme, which has been an attractive option for several developing countries, including Vietnam. Many factors affect the success of PPP projects, and accurate prediction of traffic volume is considered one of the key success factors for PPP transportation projects. However, only a few studies have investigated how to predict traffic volume over a long period of time. Moreover, conventional traffic volume forecasting methods are usually based on deterministic models, which predict a single value of traffic volume but do not consider risk and uncertainty. This knowledge gap makes it difficult for concessionaires to estimate PPP transportation project revenues accurately. The objective of this paper is to develop a probabilistic traffic volume prediction model. First, traffic volumes are modeled as a Geometric Brownian Motion (GBM) process; a Monte Carlo technique is then applied to simulate different scenarios. The results show that this stochastic approach can systematically analyze variations in traffic volume and yield more reliable estimates for PPP projects.
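The GBM-plus-Monte-Carlo approach described above can be sketched directly. The drift, volatility, and percentile summary below are illustrative assumptions, not the paper's calibrated values:

```python
import math
import random

def simulate_gbm_traffic(v0, mu, sigma, years, n_paths, seed=42):
    """Monte Carlo over Geometric Brownian Motion with annual steps:
    V_{t+1} = V_t * exp((mu - sigma^2 / 2) + sigma * Z), Z ~ N(0, 1).
    Returns the 5th/95th percentiles and mean of the final-year volume."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        v = v0
        for _ in range(years):
            v *= math.exp(mu - 0.5 * sigma * sigma
                          + sigma * rng.gauss(0.0, 1.0))
        finals.append(v)
    finals.sort()
    return {"p05": finals[int(0.05 * n_paths)],
            "mean": sum(finals) / n_paths,
            "p95": finals[int(0.95 * n_paths)]}
```

The spread between the 5th and 95th percentiles is exactly the revenue-risk band that a single-value deterministic forecast cannot provide to a concessionaire.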
