Puleo, J.A.; Mouraenko, O.; Hanes, D.M.
2004-01-01
Six one-dimensional-vertical wave bottom boundary layer models are analyzed, based on different methods for estimating the turbulent eddy viscosity: laminar, linear, parabolic, k (one-equation turbulence closure), k-ε (two-equation turbulence closure), and k-ω (two-equation turbulence closure). Resultant velocity profiles, bed shear stresses, and turbulent kinetic energy are compared to laboratory data of oscillatory flow over smooth and rough beds. Bed shear stress estimates for the smooth bed case were most closely predicted by the k-ω model. Normalized errors between model predictions and measurements of velocity profiles over the entire computational domain, collected at 15° intervals for one-half of a wave cycle, show that overall the linear model was most accurate; the least accurate were the laminar and k-ε models. Normalized errors between model predictions and measurements of turbulent kinetic energy profiles showed that the k-ω model was most accurate. Based on these findings, when the smallest overall velocity profile prediction error is required, the processing requirements and error analysis suggest that the linear eddy viscosity model is adequate. However, if accurate estimates of bed shear stress and TKE are required, then, of the models tested, the k-ω model should be used.
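As a simple point of reference for the algebraic closures compared above, the sketch below evaluates laminar, linear, and parabolic eddy-viscosity profiles; the von Kármán constant is standard, but the friction velocity and boundary-layer thickness are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of the three algebraic eddy-viscosity closures (illustrative
# parameter values; the k, k-eps, and k-omega closures require full
# transport-equation solvers and are not shown).
import numpy as np

KAPPA = 0.41        # von Karman constant
NU = 1.0e-6         # molecular kinematic viscosity of water, m^2/s

def eddy_viscosity(z, u_star, delta, model="linear"):
    """Turbulent eddy viscosity nu_t(z) for the algebraic models."""
    if model == "laminar":
        return np.zeros_like(z)                        # molecular viscosity only
    if model == "linear":
        return KAPPA * u_star * z                      # grows linearly with height
    if model == "parabolic":
        return KAPPA * u_star * z * (1.0 - z / delta)  # vanishes at z = delta
    raise ValueError(f"unknown model: {model}")

z = np.linspace(1e-4, 0.05, 200)   # heights above the bed, m
nu_total = NU + eddy_viscosity(z, u_star=0.02, delta=0.05, model="parabolic")
```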
New realisation of Preisach model using adaptive polynomial approximation
NASA Astrophysics Data System (ADS)
Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young
2012-09-01
Modelling systems with hysteresis has received considerable attention recently due to increasingly demanding accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model for describing hysteresis and can be represented by an infinite but countable set of first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using least-squares approximation or an adaptive identification algorithm, opening the possibility of accurately tracking the hysteresis model parameters.
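A minimal sketch of the core idea follows, replacing one tabulated FORC with a fitted polynomial; the branch data is synthetic, and the cubic degree is an illustrative choice rather than the paper's adaptive scheme.

```python
# Least-squares polynomial emulation of one first-order reversal curve
# (synthetic data; a real application would fit measured FORC samples).
import numpy as np

h = np.linspace(-1.0, 1.0, 50)                       # input field samples
m = np.tanh(2.0 * h) + 0.05 * np.random.randn(50)    # synthetic FORC branch

coeffs = np.polyfit(h, m, deg=3)   # polynomial coefficients via least squares
m_hat = np.polyval(coeffs, h)      # reconstructed branch -- no look-up table
```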
NASA Astrophysics Data System (ADS)
Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi
2017-11-01
Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, there can be benefits to using a Lagrangian framework, while for other problems an Eulerian framework might have advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when the assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
Huang, Lei
2015-01-01
To solve the problem in which conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
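The sketch below illustrates the underlying idea of treating model parameters as Kalman filter states, shown for a plain (non-robust) filter on a synthetic AR(2) process; the robust variant described in the paper additionally estimates the unknown observation-noise statistics.

```python
# Estimating AR coefficients by treating them as the (constant) state of a
# Kalman filter; all noise levels here are assumed, illustrative values.
import numpy as np

rng = np.random.default_rng(0)
true_a = np.array([1.2, -0.5])                 # true AR(2) coefficients
y = np.zeros(500)
for k in range(2, 500):
    y[k] = true_a @ y[k-2:k][::-1] + 0.1 * rng.standard_normal()

theta = np.zeros(2)        # parameter estimate: the filter "state"
P = np.eye(2) * 100.0      # parameter covariance
R = 0.01                   # assumed observation-noise variance
for k in range(2, 500):
    H = y[k-2:k][::-1]                       # regressor of past outputs
    S = H @ P @ H + R                        # innovation variance
    K = P @ H / S                            # Kalman gain
    theta = theta + K * (y[k] - H @ theta)   # measurement update
    P = P - np.outer(K, H @ P)
```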
Methods to estimate irrigated reference crop evapotranspiration - a review.
Kumar, R; Jat, M K; Shankar, V
2012-01-01
Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires accurate measurement of the crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed models, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools for estimating the evapotranspiration and water requirement of crops, which is essential information required to design or choose the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirements for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
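As one concrete example of the empirical end of the model spectrum reviewed here, the sketch below implements the Hargreaves-Samani equation; the input values are illustrative.

```python
# Hargreaves-Samani reference evapotranspiration (mm/day). Ra must be
# expressed in equivalent evaporation units (mm/day).
def et0_hargreaves(t_mean, t_max, t_min, ra):
    """Reference ET (mm/day) from daily air temperatures (deg C) and
    extraterrestrial radiation ra (mm/day)."""
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

print(et0_hargreaves(t_mean=25.0, t_max=32.0, t_min=18.0, ra=15.0))
```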
Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys
2012-01-01
constants are required input to computer codes (LS-DYNA, DYNA3D or SPH) to accurately simulate fragment impact on structural components made of high... different temperatures.
Getting a Picture that Is Both Accurate and Stable: Situation Models and Epistemic Validation
ERIC Educational Resources Information Center
Schroeder, Sascha; Richter, Tobias; Hoever, Inga
2008-01-01
Text comprehension entails the construction of a situation model that prepares individuals for situated action. In order to meet this function, situation model representations are required to be both accurate and stable. We propose a framework according to which comprehenders rely on epistemic validation to prevent inaccurate information from…
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
Physiological motion modeling for organ-mounted robots.
Wood, Nathan A; Schwartzman, David; Zenati, Marco A; Riviere, Cameron N
2017-12-01
Organ-mounted robots passively compensate for heartbeat and respiratory motion. In model-guided procedures, this motion can be a significant source of information that can be used to aid in localization or to add dynamic information to static preoperative maps. Models for estimating periodic motion are proposed for both position and orientation. These models are then tested on animal data and optimal orders are identified. Finally, methods for online identification are demonstrated. Models using exponential coordinates and Euler-angle parameterizations are as accurate as models using quaternion representations, yet require a quarter fewer parameters. Models which incorporate more than four cardiac or three respiration harmonics are no more accurate. Finally, online methods estimate model parameters as accurately as offline methods within three respiration cycles. These methods provide a complete framework for accurately modelling the periodic deformation of points anywhere on the surface of the heart in a closed chest. Copyright © 2017 John Wiley & Sons, Ltd.
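A sketch of the kind of truncated Fourier-series fit implied by the harmonic counts reported above follows; the base frequency is assumed known (e.g., from the heartbeat rate), and the fit is ordinary linear least squares on synthetic data.

```python
# Fit x(t) ~ a0 + sum_k [a_k cos(2 pi k f0 t) + b_k sin(2 pi k f0 t)]
# by linear least squares (four harmonics, as the paper finds sufficient).
import numpy as np

def harmonic_fit(t, x, f0, n_harm=4):
    """Return the design matrix and least-squares Fourier coefficients."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(2*np.pi*k*f0*t), np.sin(2*np.pi*k*f0*t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A, coeffs

t = np.linspace(0.0, 10.0, 2000)                              # seconds
x = 1.0 + 0.5*np.sin(2*np.pi*1.2*t) + 0.01*np.random.randn(t.size)
A, coeffs = harmonic_fit(t, x, f0=1.2)    # assumed 1.2 Hz cardiac rate
x_hat = A @ coeffs                        # reconstructed periodic motion
```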
2015-03-26
Statement: It is very difficult to obtain and use spectral BRDFs due to the aforementioned reasons, while the need to accurately model the spectral and... Lambertian and MERL nickel-shaped BRDF models (Butler, 2014), suggesting that accurate BRDFs are required to account for the significance of... BRDF-based radiative transfer models. Research Objectives/Hypotheses: The need to represent the spectral reflected radiance of a material using
42 CFR § 512.605 - Waiver of certain telehealth requirements.
Code of Federal Regulations, 2010 CFR
2017-10-01
... AND HUMAN SERVICES (CONTINUED) HEALTH CARE INFRASTRUCTURE AND MODEL PROGRAMS EPISODE PAYMENT MODEL... distant site payment for telehealth home visit HCPCS codes unique to this model to more accurately reflect... accordance with § 512.210. (c) Waiver of selected payment provisions. (1) CMS waives the payment requirements...
Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data
NASA Technical Reports Server (NTRS)
Brasunas, J.; Mamoutkine, A.; Gorius, N.
2016-01-01
Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.
Establishment of a high accuracy geoid correction model and geodata edge match
NASA Astrophysics Data System (ADS)
Xi, Ruifeng
This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method, and a procedure for a large GPS network like the state-level HARN project. The research also developed an efficient and accurate methodology for joining soil coverages in GIS ARC/INFO. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations having a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from a sub-decimeter 10-20 cm to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
System Identification and Verification of Rotorcraft UAVs
NASA Astrophysics Data System (ADS)
Carlton, Zachary M.
The task of a controls engineer is to design and implement control logic. To complete this task, it helps tremendously to have an accurate model of the system to be controlled. Obtaining a very accurate system model is not a trivial task, as considerable time and money are usually associated with the development of such a model. A typical physics-based approach can require hundreds of hours of flight time. In an iterative process the model is tuned so that it accurately reproduces the physical system's response. This process becomes even more complicated for unstable and highly non-linear systems such as rotorcraft. An alternate approach to solving this problem is to extract an accurate model by analyzing the frequency response of the system. This process involves recording the system's responses over a frequency range of input excitations. From this data, an accurate system model can then be deduced. Furthermore, it has been shown that with the software package CIFER® (Comprehensive Identification from FrEquency Responses), this process can both greatly reduce the cost of modeling a dynamic system and produce very accurate results. The topic of this thesis is to apply CIFER® to a quadcopter to extract a system model for the hover flight condition. The quadcopter itself is built from off-the-shelf components, with a Pixhack flight controller board running open-source ArduPilot controller logic. In this thesis, both the closed- and open-loop systems are identified. The model is then compared to dissimilar flight data and verified in the time domain. Additionally, the ESC (Electronic Speed Controller) motor/rotor subsystem, which comprises all of the vehicle's actuators, is also identified. This process required the development of a test bench environment, which included a GUI (Graphical User Interface), data pre- and post-processing, and the augmentation of the flight controller source code. This augmentation allowed for proper data logging rates of all needed parameters.
An Energy-Based Hysteresis Model for Magnetostrictive Transducers
NASA Technical Reports Server (NTRS)
Calkins, F. T.; Smith, R. C.; Flatau, A. B.
1997-01-01
This paper addresses the modeling of hysteresis in magnetostrictive transducers. This is considered in the context of control applications which require an accurate characterization of the relation between input currents and strains output by the transducer. This relation typically exhibits significant nonlinearities and hysteresis due to inherent properties of magnetostrictive materials. The characterization considered here is based upon the Jiles-Atherton mean field model for ferromagnetic hysteresis in combination with a quadratic moment rotation model for magnetostriction. As demonstrated through comparison with experimental data, the magnetization model very adequately quantifies both major and minor loops under various operating conditions. The combined model can then be used to accurately characterize output strains at moderate drive levels. The advantages to this model lie in the small number (six) of required parameters and the flexibility it exhibits in a variety of operating conditions.
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
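For context, a minimal adaptive integrate-and-fire neuron of the general kind fitted in such studies is sketched below; the equations, units, and parameter values are illustrative stand-ins, not the specific models or the Brian-based fitting machinery used by the authors.

```python
# Euler integration of a simple adaptive integrate-and-fire neuron
# (illustrative, dimensionless parameters).
import numpy as np

def simulate_aif(I, dt=1e-4, tau=0.02, tau_w=0.2, a=0.1, v_th=1.0):
    """Return spike times produced by the input current trace I."""
    v, w, spikes = 0.0, 0.0, []
    for k, i_k in enumerate(I):
        v += dt * (-v - w + i_k) / tau   # leaky membrane with adaptation current
        w += dt * (-w) / tau_w           # adaptation decays between spikes
        if v >= v_th:                    # threshold crossing: spike and reset
            spikes.append(k * dt)
            v = 0.0
            w += a                       # spike-triggered adaptation step
    return spikes

spikes = simulate_aif(1.5 + 0.2 * np.random.randn(20000))
```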
Subthreshold SPICE Model Optimization
NASA Astrophysics Data System (ADS)
Lum, Gregory; Au, Henry; Neff, Joseph; Bozeman, Eric; Kamin, Nick; Shimabukuro, Randy
2011-04-01
The first step in integrated circuit design is the simulation of said design in software to verify proper functionality and design requirements. Properties of the process are provided by fabrication foundries in the form of SPICE models. These SPICE models contain the electrical data and physical properties of the basic circuit elements. A limitation of these models is that the data collected by the foundry only accurately models the saturation region. This is fine for most users, but when operating devices in the subthreshold region the models are inadequate for accurate simulation results. This is why optimizing the current SPICE models to characterize the subthreshold region is so important. In order to accurately simulate this region of operation, MOSFETs of varying widths and lengths are fabricated and the electrical test data is collected. From the data collected, the parameters of the model files are optimized through parameter extraction rather than curve fitting. With the completed optimized models, the circuit designer is able to simulate circuit designs for the subthreshold region accurately.
NASA Technical Reports Server (NTRS)
Chow, Chuen-Yen; Ryan, James S.
1987-01-01
While the zonal grid system of the Transonic Navier-Stokes (TNS) code provides excellent modeling of complex geometries, improved shock capturing and a higher Mach number range will be required if flows about hypersonic aircraft are to be modeled accurately. A computational fluid dynamics (CFD) code, the Compressible Navier-Stokes (CNS), is under development to combine the required high Mach number capability with the existing TNS geometry capability. One of several candidate flow solvers for inclusion in the CNS is that of F3D. This upwind flow solver promises improved shock capturing, and more accurate hypersonic solutions overall, compared to the solver currently used in TNS.
Space mapping method for the design of passive shields
NASA Astrophysics Data System (ADS)
Sergeant, Peter; Dupré, Luc; Melkebeek, Jan
2006-04-01
The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort, as it contains finite element models. The second is less accurate, but it has a low computational effort, as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield quickly and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.
2006-01-01
The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements in ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both the environmental models and the nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermo Luminescent Detector (TLD) area monitors, demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate 6 degree of freedom (DOF) description of ISS trajectory and orientation.
Sexing California gulls using morphometrics and discriminant function analysis
Herring, Garth; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Takekawa, John Y.
2010-01-01
A discriminant function analysis (DFA) model was developed with DNA sex verification so that external morphology could be used to sex 203 adult California Gulls (Larus californicus) in San Francisco Bay (SFB). The best model was 97% accurate and included head-to-bill length, culmen depth at the gonys, and wing length. Using an iterative process, the model was simplified to a single measurement (head-to-bill length) that still assigned sex correctly 94% of the time. A previous California Gull sex determination model developed for a population in Wyoming was then assessed by fitting SFB California Gull measurement data to the Wyoming model; this new model failed to converge on the same measurements as those originally used by the Wyoming model. Results from the SFB discriminant function model were compared to the Wyoming model results (by using SFB data with the Wyoming model); the SFB model was 7% more accurate for SFB California gulls. The simplified DFA model (head-to-bill length only) provided highly accurate results (94%) and minimized the measurements and time required to accurately sex California Gulls.
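A discriminant function analysis of this general kind is straightforward to reproduce with standard tools; the sketch below uses scikit-learn with synthetic stand-ins for the DNA-verified morphometrics (head-to-bill length, culmen depth, wing length), not the San Francisco Bay data.

```python
# Linear discriminant analysis on synthetic morphometric data
# (placeholder means/spreads, one row per bird).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
males = rng.normal([96.0, 17.5, 415.0], [2.0, 0.6, 8.0], size=(50, 3))
females = rng.normal([89.0, 15.8, 398.0], [2.0, 0.6, 8.0], size=(50, 3))
X = np.vstack([males, females])
y = np.repeat([1, 0], 50)               # 1 = male, DNA-verified labels

dfa = LinearDiscriminantAnalysis().fit(X, y)
print(dfa.score(X, y))                  # resubstitution accuracy
```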
Accuracy of Binary Black Hole Waveform Models for Advanced LIGO
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team
2016-03-01
Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) how well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration? These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
A methodology for reduced order modeling and calibration of the upper atmosphere
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; Linares, Richard
2017-10-01
Atmospheric drag is the largest source of uncertainty in accurately predicting the orbits of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
Luo, Mei; Wang, Hao; Lyu, Zhi
2017-12-01
Species distribution models (SDMs) are widely used by researchers and conservationists. Prediction results from different models vary significantly, which makes model selection difficult for users. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data for the giant panda, and used three indicators, i.e., area under the ROC curve (AUC), true skill statistic (TSS), and Cohen's Kappa, to evaluate the accuracy of the two models' predictions. The results showed that both models could produce accurate predictions with adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required longer running time, and had less data-processing capability. To choose the right model, users should refer to the error requirements of their objectives. MaxEnt should be considered if the error requirement is clear and both models can achieve it; otherwise, we recommend the use of Biomod2 as much as possible.
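The three accuracy indicators named above are standard and easy to compute from binary presence/absence predictions; a sketch with synthetic labels and scores (not the giant panda data) follows.

```python
# AUC, Cohen's Kappa, and TSS for binary presence/absence predictions.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score, confusion_matrix

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])            # observed presence/absence
scores = np.array([0.9, 0.7, 0.4, 0.2, 0.6, 0.5, 0.8, 0.1])  # model suitability scores
y_pred = (scores >= 0.5).astype(int)                    # thresholded prediction

auc = roc_auc_score(y_true, scores)
kappa = cohen_kappa_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tss = tp / (tp + fn) + tn / (tn + fp) - 1.0             # sensitivity + specificity - 1
```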
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
A comparative study on different methods of automatic mesh generation of human femurs.
Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A
1998-01-01
The aim of this study was to comparatively evaluate five methods for automated mesh generation (AMG) when used to mesh a human femur. The AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful for assessing the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was studied. Required characteristics including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height were met by the new atmospheric density model.
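A density model of the simple analytic kind described, an exponential height profile modulated by a diurnal factor, is sketched below; all constants are illustrative placeholders, not the coefficients actually fitted to the Jacchia model.

```python
# Toy upper-atmosphere density model: exponential height profile with a
# diurnal modulation peaking in mid-afternoon (all constants assumed).
import numpy as np

def density(h_km, local_solar_hour):
    rho0, h0, H = 3.6e-10, 100.0, 60.0   # base density (kg/m^3), ref height and scale height (km)
    diurnal = 1.0 + 0.3 * np.cos(2*np.pi*(local_solar_hour - 14.0)/24.0)
    return rho0 * np.exp(-(h_km - h0)/H) * diurnal

print(density(h_km=400.0, local_solar_hour=14.0))
```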
METEOROLOGICAL AND TRANSPORT MODELING
Advanced air quality simulation models, such as CMAQ, as well as other transport and dispersion models, require accurate and detailed meteorology fields. These meteorology fields include primary 3-dimensional dynamical and thermodynamical variables (e.g., winds, temperature, mo...
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel
2015-04-01
With the advanced detector era just around the corner, there is a strong need for fast and accurate models of gravitational waveforms from compact binary coalescence. Fast surrogate models can be built out of an accurate but slow waveform model with minimal to no loss in accuracy, but they may require a large number of evaluations of the underlying model. This may be prohibitively expensive if the underlying model is extremely slow, for example if we wish to build a surrogate for numerical relativity. We examine alternate choices for building surrogate models which allow for a more sparse set of input waveforms. Research supported in part by NSERC.
On the Development of Parameterized Linear Analytical Longitudinal Airship Models
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.
2008-01-01
In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-01-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested. PMID:9345087
MECHANISTIC DOSIMETRY MODELS OF NANOMATERIAL DEPOSITION IN THE RESPIRATORY TRACT
Accurate health risk assessments of inhalation exposure to nanomaterials will require dosimetry models that account for interspecies differences in dose delivered to the respiratory tract. Mechanistic models offer the advantage to interspecies extrapolation that physicochemica...
Fitting Neuron Models to Spike Trains
Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
Edwards, Joel; Othman, Maazuza; Burn, Stewart; Crossin, Enda
2016-10-01
The collection of source separated kerbside municipal food waste (SSFW) is being incentivised in Australia; however, such a collection is likely to increase the fuel and time a collection truck fleet requires. Therefore, waste managers need to determine whether the incentives outweigh the cost. With literature scarcely describing the magnitude of the increase, and with local parameters playing a crucial role in accurately modelling kerbside collection, this paper develops a new general mathematical model that predicts the energy and time requirements of a collection regime whilst incorporating the unique variables of different jurisdictions. The model, Municipal solid waste collect (MSW-Collect), is validated and shown to be more accurate at predicting fuel consumption and trucks required than other common collection models. When predicting changes incurred for five different SSFW collection scenarios, results show that SSFW scenarios require an increase in fuel ranging from 1.38% to 57.59%. There is also a need for additional trucks across most SSFW scenarios tested. All SSFW scenarios are ranked and analysed with regard to fuel consumption; sensitivity analysis is conducted to test key assumptions. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk
2015-01-01
Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, in the range of several millimeters, is a challenging task. This is mainly because the radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. Therefore, an accurate path-loss model is required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the measured loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modelling in-body RF propagation, since real measurements are largely infeasible in capsule endoscopy subjects.
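In-body channel models of this kind are often written in a log-distance form; a sketch follows, with the path-loss exponent and reference loss as placeholder values rather than the coefficients fitted in this paper.

```python
# Log-distance path-loss form often used for in-body channel models
# (placeholder exponent n and reference loss pl0_db, not fitted values).
import numpy as np

def path_loss_db(d_mm, pl0_db=35.0, n=4.5, d0_mm=10.0):
    """Path loss in dB at distance d_mm from a d0_mm reference distance."""
    return pl0_db + 10.0 * n * np.log10(d_mm / d0_mm)

print(path_loss_db(d_mm=80.0))   # loss at an 80 mm capsule-to-receiver distance
```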
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction; the quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
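For reference, the TEW scatter estimate mentioned above interpolates scatter counts from two narrow windows flanking the photopeak; a sketch with illustrative counts and window widths follows.

```python
# Triple-energy-window (TEW) scatter estimate for the photopeak window.
def tew_scatter(c_low, c_up, w_low, w_up, w_peak):
    """Estimated scatter counts inside the photopeak window, from counts in
    the lower/upper flanking windows and the window widths (keV)."""
    return (c_low / w_low + c_up / w_up) * w_peak / 2.0

# Illustrative numbers: 10000 photopeak counts, flanking windows of 20 keV.
primary = 10000 - tew_scatter(c_low=1200, c_up=900, w_low=20, w_up=20, w_peak=60)
```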
LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W
2008-01-01
Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculation using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy. Reanalysis applications of past conditions in the inner magnetosphere are used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data suddenly becomes feasible.
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
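The cubic Hermite interpolant underlying this formulation matches values and derivatives at both endpoints; a generic sketch follows, with placeholder endpoint data rather than the surface-potential/inversion-charge relation itself.

```python
# Third-order Hermite interpolation on t in [0, 1] (generic basis functions;
# endpoint values and slopes below are placeholders).
def hermite3(t, p0, p1, m0, m1):
    """Cubic Hermite interpolant matching endpoint values (p0, p1)
    and endpoint derivatives (m0, m1)."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

y_mid = hermite3(0.5, p0=0.0, p1=1.0, m0=0.2, m1=1.5)   # placeholder data
```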
Designing Control System Application Software for Change
NASA Technical Reports Server (NTRS)
Boulanger, Richard
2001-01-01
The Unified Modeling Language (UML) was used to design the Environmental Systems Test Stand (ESTS) control system software. The UML was chosen for its ability to facilitate a clear dialog between software designer and customer, from which requirements are discovered and documented in a manner that transposes directly to program objects. Applying the UML to control system software design has resulted in a baseline set of documents from which change, and the effort of that change, can be accurately measured. As the Environmental Systems Test Stand evolves, accurate estimates of the time and effort required to change the control system software will be made. Accurate quantification of the cost of software change can be made before implementation, improving schedule and budget accuracy.
Simulating immersed particle collisions: the Devil's in the details
NASA Astrophysics Data System (ADS)
Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart
2015-11-01
Simulating densely-packed particle-laden flows with any degree of confidence requires accurate modeling of particle-particle collisions. To this end, we investigate a few collision models from the fluids and granular flow communities using sphere-wall collisions, which have been studied by a number of experimental groups. These collisions involve enough complexities--gravity, particle-wall lubrication forces, particle-wall contact stresses, particle-wake interactions--to challenge any collision model. Evaluating the successes and shortcomings of the collision models, we seek improvements in order to obtain more consistent results. We will highlight several implementation details that are crucial for obtaining accurate results.
Developing a case-mix model for PPS.
Goldberg, H B; Delargy, D
2000-01-01
Agencies are pinning hopes for success under PPS on an accurate case-mix adjustor. The Health Care Financing Administration (HCFA) tasked Abt Associates Inc. to develop a system to accurately predict the volume and type of home health services each patient requires, based on his or her characteristics (not the service actually received). HCFA wanted this system to be feasible, clinically logical, and valid and accurate. Authors Goldberg and Delargy explain how Abt approached this daunting task.
On modeling pressure diffusion in non-homogeneous shear flows
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Rogers, M. M.; Durbin, P.; Lele, S. K.
1996-01-01
New models are proposed for the 'slow' and 'rapid' parts of the pressure diffusive transport, based on the examination of DNS databases for plane mixing layers and wakes. The model for the 'slow' part is non-local, but requires the distribution of the triple-velocity correlation as a local source. The latter can be computed accurately for the normal component from standard gradient diffusion models, but such models are inadequate for the cross component. More work is required to remedy this situation.
Two dimensional model for coherent synchrotron radiation
NASA Astrophysics Data System (ADS)
Huang, Chengkun; Kwan, Thomas J. T.; Carlsten, Bruce E.
2013-01-01
Understanding coherent synchrotron radiation (CSR) effects in a bunch compressor requires an accurate model accounting for the realistic beam shape and parameters. We extend the well-known 1D CSR analytic model into two dimensions and develop a simple numerical model based on the Liénard-Wiechert formula for the CSR field of a coasting beam. This CSR numerical model includes the 2D spatial dependence of the field in the bending plane and is accurate for arbitrary beam energy. It also removes the singularity in the space charge field calculation present in a 1D model. Good agreement is obtained with 1D CSR analytic result for free electron laser (FEL) related beam parameters but it can also give a more accurate result for low-energy/large spot size beams and off-axis/transient fields. This 2D CSR model can be used for understanding the limitation of various 1D models and for benchmarking fully electromagnetic multidimensional particle-in-cell simulations for self-consistent CSR modeling.
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
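A minimal hyper-dual number shows why this approach yields exact first and second derivatives with no step-size error; only addition and multiplication are implemented here, and the example function is an assumption for illustration.

```python
# Minimal hyper-dual number: carries f, two first-derivative parts (f1, f2),
# and a cross term (f12). Seeding f1 = f2 = 1 for the same variable makes
# f12 the exact second derivative.
class HyperDual:
    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __add__(self, o):
        return HyperDual(self.f + o.f, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)

    def __mul__(self, o):
        return HyperDual(self.f * o.f,
                         self.f1 * o.f + self.f * o.f1,
                         self.f2 * o.f + self.f * o.f2,
                         self.f12 * o.f + self.f1 * o.f2
                         + self.f2 * o.f1 + self.f * o.f12)

x = HyperDual(3.0, 1.0, 1.0, 0.0)   # seed both derivative parts with x = 3
y = x * x * x                       # f(x) = x^3
# y.f = 27, y.f1 = 27 (f' = 3x^2), y.f12 = 18 (f'' = 6x) -- exact values
```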
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
2015-10-30
accurately follow the development of the Black Hawk helicopters, a single main rotor model in NDARC that accurately represented the UH-60A is required. NDARC... Weight changes were based on results from Nixon's paper, which focused on modeling the structure of a composite rotor blade and using optimization to... conclude that improved composite design to further reduce weight needs to be achieved. An additionally interesting effect is how the rotor technology
Automated watershed subdivision for simulations using multi-objective optimization
USDA-ARS?s Scientific Manuscript database
The development of watershed management plans to evaluate placement of conservation practices typically involves application of watershed models. Incorporating spatially variable watershed characteristics into a model often requires subdividing the watershed into small areas to accurately account f...
1988-03-23
observations more often. Using this updated satellite orbital element set , a more accurate space surveillance product is generated by ensuring the time span...position were more accurate, observations could be required less frequently by the spacetrack network, the satellite orbital element set would not need to...of the orbit , one that includes the best model of atmospheric drag, will give the best, or most accurate, element set for a satellite. By maintaining
Suppression of Thermal Emission from Exhaust Components Using an Integrated Approach
2002-08-01
design model must, as a minimum, include an accurate estimate of space required for the exhaust, backpressure to the engine, system weight, gas species... hot flow testing. The virtual design model provides an estimate of space required for the exhaust, backpressure to the engine, system weight, gas... either be the engine for the exhaust system or is capable of providing more than the required mass flow rate and enough gas temperature margins so that
Comparison of Varied Precipitation and Soil Data Types for Use in Watershed Modeling.
The accuracy of water quality and quantity models depends on calibration to ensure reliable simulations of streamflow, which in turn requires accurate climatic forcing data. Precipitation is widely acknowledged to be the largest source of uncertainty in watershed modeling, and so...
Sensorless Modeling of Varying Pulse Width Modulator Resolutions in Three-Phase Induction Motors
Marko, Matthew David; Shevach, Glenn
2017-01-01
A sensorless algorithm was developed to predict rotor speeds in an electric three-phase induction motor. This sensorless model requires a measurement of the stator currents and voltages, and the rotor speed is predicted accurately without any mechanical measurement of the rotor speed. A model of an electric vehicle undergoing acceleration was built, and the sensorless prediction of the simulation rotor speed was determined to be robust even in the presence of fluctuating motor parameters and significant sensor errors. Studies were conducted for varying pulse width modulator resolutions, and the sensorless model was accurate for all resolutions of sinusoidal voltage functions. PMID:28076418
Sensorless Modeling of Varying Pulse Width Modulator Resolutions in Three-Phase Induction Motors.
Marko, Matthew David; Shevach, Glenn
2017-01-01
A sensorless algorithm was developed to predict rotor speeds in an electric three-phase induction motor. This sensorless model requires a measurement of the stator currents and voltages, and the rotor speed is predicted accurately without any mechanical measurement of the rotor speed. A model of an electric vehicle undergoing acceleration was built, and the sensorless prediction of the simulation rotor speed was determined to be robust even in the presence of fluctuating motor parameters and significant sensor errors. Studies were conducted for varying pulse width modulator resolutions, and the sensorless model was accurate for all resolutions of sinusoidal voltage functions.
NASA Astrophysics Data System (ADS)
Pradhan, Aniruddhe; Akhavan, Rayhaneh
2017-11-01
The effect of collision model, subgrid-scale model, and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall-Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no model, DSM, and WALE were 1.7%, 2.6%, and 3.1% with SRT, and 8.3%, 7.5%, and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid embedding and wall models will be discussed.
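A back-of-envelope check makes the quoted resolution requirement concrete: with Δ+ = Δ·uτ/ν and Reτ = uτ·δ/ν, the near-wall spacing as a fraction of the channel half-height δ follows directly. The sketch below uses only numbers quoted in the abstract.

```python
# Physical near-wall spacing implied by the quoted wall-unit targets.
Re_tau = 442                 # friction Reynolds number from the abstract
for delta_plus in (2, 4):    # DNS-like and LES targets
    frac = delta_plus / Re_tau       # Delta/delta in half-height units
    print(f"Delta+ = {delta_plus}: Delta/delta = {frac:.4f} "
          f"(~{1/frac:.0f} uniform cells per half-height)")
```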
NASA Technical Reports Server (NTRS)
Sances, Dillon J.; Gangadharan, Sathya N.; Sudermann, James E.; Marsell, Brandon
2010-01-01
Liquid sloshing within spacecraft propellant tanks causes rapid energy dissipation at resonant modes, which can result in attitude destabilization of the vehicle. Identifying resonant slosh modes currently requires experimental testing and mechanical pendulum analogs to characterize the slosh dynamics. Computational Fluid Dynamics (CFD) techniques have recently been validated as an effective tool for simulating fuel slosh within free-surface propellant tanks. Propellant tanks often incorporate an internal flexible diaphragm to separate ullage and propellant which increases modeling complexity. A coupled fluid-structure CFD model is required to capture the damping effects of a flexible diaphragm on the propellant. ANSYS multidisciplinary engineering software employs a coupled solver for analyzing two-way Fluid Structure Interaction (FSI) cases such as the diaphragm propellant tank system. Slosh models generated by ANSYS software are validated by experimental lateral slosh test results. Accurate data correlation would produce an innovative technique for modeling fuel slosh within diaphragm tanks and provide an accurate and efficient tool for identifying resonant modes and the slosh dynamic response.
A Generic Nonlinear Aerodynamic Model for Aircraft
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2014-01-01
A generic model of the aerodynamic coefficients was developed using wind tunnel databases for eight different aircraft and multivariate orthogonal functions. For each database and each coefficient, models were determined using polynomials expanded about the state and control variables, and an orthogonalization procedure. A predicted squared-error criterion was used to automatically select the model terms. Modeling terms picked in at least half of the analyses, which totaled 45 terms, were retained to form the generic nonlinear aerodynamic (GNA) model. Least squares was then used to estimate the model parameters and associated uncertainty that best fit the GNA model to each database. Nonlinear flight simulations were used to demonstrate that the GNA model produces accurate trim solutions, local behavior (modal frequencies and damping ratios), and global dynamic behavior (91% accurate state histories and 80% accurate aerodynamic coefficient histories) under large-amplitude excitation. This compact aerodynamics model can be used to decrease on-board memory storage requirements, quickly change conceptual aircraft models, provide smooth analytical functions for control and optimization applications, and facilitate real-time parametric system identification.
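The general recipe (a pool of polynomial terms in the states and controls, followed by least-squares parameter estimation) can be sketched in a few lines. The variables, true coefficients, and candidate terms below are invented for illustration and are not the 45-term GNA model.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform(-0.2, 0.2, 500)      # angle of attack, rad
delta_e = rng.uniform(-0.1, 0.1, 500)    # elevator deflection, rad
# Synthetic lift-coefficient data with noise (true model invented)
CL = 0.1 + 5.5 * alpha + 0.8 * delta_e + 3.0 * alpha**2 \
     + 0.01 * rng.standard_normal(alpha.size)

# Candidate polynomial terms expanded about the states and controls
X = np.column_stack([np.ones_like(alpha), alpha, delta_e,
                     alpha**2, alpha * delta_e, delta_e**2])
coef, *_ = np.linalg.lstsq(X, CL, rcond=None)   # least-squares estimates
print(np.round(coef, 3))   # recovers ~[0.1, 5.5, 0.8, 3.0, 0.0, 0.0]
```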
USDA-ARS?s Scientific Manuscript database
To accurately develop a mathematical model for an In-Wheel Motor Unmanned Ground Vehicle (IWM UGV) on soft terrain, parameterization of terrain properties is essential to stochastically model tire-terrain interaction for each wheel independently. Operating in off-road conditions requires paying clos...
NASA Technical Reports Server (NTRS)
Luchini, Chris B.
1997-01-01
Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.
Low-dimensional, morphologically accurate models of subthreshold membrane potential
Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.
2009-01-01
The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations. This is because, in order to understand how a cell distinguishes between input patterns, we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate and fire model. PMID:19172386
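For readers unfamiliar with the time-domain reduction mentioned, a minimal square-root balanced-truncation sketch is given below; it assumes a stable, controllable, and observable linear system and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Order-r square-root balanced truncation of a stable LTI system."""
    # Gramians: A P + P A^T + B B^T = 0,  A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    Zc = cholesky(P, lower=True)
    Zo = cholesky(Q, lower=True)
    U, s, Vt = svd(Zo.T @ Zc)            # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Zc @ Vt[:r].T @ S                # right projection
    W = Zo @ U[:, :r] @ S                # left projection, W.T @ T = I_r
    return W.T @ A @ T, W.T @ B, C @ T

# Example: reduce a random stable 50-state system to 6 states
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) - 60 * np.eye(50)   # comfortably stable
B = rng.standard_normal((50, 2))
C = rng.standard_normal((1, 50))
Ar, Br, Cr = balanced_truncation(A, B, C, r=6)
```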
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
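The core idea, expressing a query state as a convex combination of past states with explicit approximation errors solved by linear programming, can be sketched as follows. This is a generic L1 formulation under assumed variable names, not the authors' exact program.

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_step(X_past, X_next, x_query):
    """Weights w >= 0 with sum(w) = 1 minimising the L1 error of
    X_past.T @ w ~= x_query, found by linear programming; the same
    weights then propagate to a prediction of the next state."""
    n, d = X_past.shape
    # Decision variables: [w (n), e_plus (d), e_minus (d)]
    c = np.concatenate([np.zeros(n), np.ones(2 * d)])
    A_eq = np.vstack([
        np.hstack([X_past.T, -np.eye(d), np.eye(d)]),             # residual split
        np.concatenate([np.ones(n), np.zeros(2 * d)])[None, :],   # convexity
    ])
    b_eq = np.concatenate([x_query, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 2 * d))
    w = res.x[:n]
    return X_next.T @ w          # one-step barycentric prediction

# Toy usage: predict the successor of a noisy point on a 2-D orbit
t = np.linspace(0, 6, 61)
traj = np.column_stack([np.cos(t), np.sin(t)])
print(barycentric_step(traj[:-1], traj[1:], traj[30] + 0.01))
```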
NASA Astrophysics Data System (ADS)
Merla, Yu; Wu, Billy; Yufit, Vladimir; Martinez-Botas, Ricardo F.; Offer, Gregory J.
2018-04-01
Accurate diagnosis of lithium ion battery state-of-health (SOH) is of significant value for many applications, to improve performance, extend life and increase safety. However, in-situ or in-operando diagnosis of SOH often requires robust models. Many models are available; however, these often require expensive-to-measure ex-situ parameters and/or contain unmeasurable parameters that were fitted or assumed. In this work, we have developed a new empirically parameterised physics-informed equivalent circuit model. Its modular construction and low-cost parametrisation requirements allow end users to parameterise cells quickly and easily. The model is accurate to 19.6 mV for dynamic loads without any global fitting/optimisation, only that of the individual elements. The consequences of various degradation mechanisms are simulated, and the impact of a degraded cell on pack performance is explored, validated by comparison with experiment. Results show that an aged cell in a parallel pack does not have a noticeable effect on the available capacity of other cells in the pack. The model shows that cells perform better when electrodes are more porous towards the separator and have a uniform particle size distribution, validated by comparison with published data. The model is provided with this publication for readers to use.
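As a point of reference for what an equivalent-circuit model computes, here is a minimal first-order Thevenin sketch (OCV plus series resistance and one RC pair). The parameter values and linear OCV curve are placeholders, not the published parameterisation.

```python
import numpy as np

def simulate_ecm(i_load, dt, q_cap, r0, r1, c1, ocv):
    """Terminal voltage of a first-order Thevenin ECM under a current
    profile (discharge positive): v = OCV(soc) - i*R0 - v_rc."""
    soc, v_rc, out = 1.0, 0.0, []
    for i in i_load:
        soc -= i * dt / q_cap                      # coulomb counting
        v_rc += dt * (i / c1 - v_rc / (r1 * c1))   # RC-branch dynamics
        out.append(ocv(soc) - i * r0 - v_rc)
    return np.array(out)

# 1C discharge of a 3 Ah cell with a crude linear OCV curve
v = simulate_ecm(i_load=np.full(3600, 3.0), dt=1.0,
                 q_cap=3.0 * 3600, r0=0.02, r1=0.015, c1=2000.0,
                 ocv=lambda s: 3.0 + 1.2 * np.clip(s, 0.0, 1.0))
print(v[0], v[-1])   # voltage sag at start vs. end of discharge
```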
Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent
2016-10-01
In order to support the design of a process as complex as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) developing an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) properly modeling the thermo-mechanical complexity of the FSW process and (3) calibrating the F.E. model from accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool/material interface. The FSW experiments require a complex tool with a scrolled shoulder, which is instrumented to provide sensitive thermal data close to the joint. Calibration of unknown material thermal coefficients, constitutive equation parameters and the friction model from measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.
New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiss, T.; Chaney, L.; Meyer, J.
Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software MATLAB/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.
Variable Generation Power Forecasting as a Big Data Problem
Haupt, Sue Ellen; Kosovic, Branko
2016-10-10
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
Delamination Modeling of Composites for Improved Crash Analysis
NASA Technical Reports Server (NTRS)
Fleming, David C.
1999-01-01
Finite element crash modeling of composite structures is limited by the inability of current commercial crash codes to accurately model delamination growth. Efforts are made to implement and assess delamination modeling techniques using a current finite element crash code, MSC/DYTRAN. Three methods are evaluated, including a straightforward method based on monitoring forces in elements or constraints representing an interface; a cohesive fracture model proposed in the literature; and the virtual crack closure technique commonly used in fracture mechanics. Results are compared with dynamic double cantilever beam test data from the literature. Examples show that it is possible to accurately model delamination propagation in this case. However, the computational demands required for accurate solution are great and reliable property data may not be available to support general crash modeling efforts. Additional examples are modeled including an impact-loaded beam, damage initiation in laminated crushing specimens, and a scaled aircraft subfloor structures in which composite sandwich structures are used as energy-absorbing elements. These examples illustrate some of the difficulties in modeling delamination as part of a finite element crash analysis.
Accurate assessments of nutrient levels in coastal waters are required to determine the nutrient effects of increasing population pressure on coastal ecosystems. To accomplish this goal, in-field data with sufficient temporal resolution are required to define nutrient sources and...
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1995-12-31
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving to be inadequate. The result has either been costly loss of production due to AFC stalling or component failure, or larger than necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
Growing C4 perennial grass for bioenergy using a new Agro-BGC ecosystem model
NASA Astrophysics Data System (ADS)
di Vittorio, A. V.; Anderson, R. S.; Miller, N. L.; Running, S. W.
2009-12-01
Accurate, spatially gridded estimates of bioenergy crop yields require 1) biophysically accurate crop growth models and 2) careful parameterization of unavailable inputs to these models. To meet the first requirement we have added the capacity to simulate C4 perennial grass as a bioenergy crop to the Biome-BGC ecosystem model. This new model, hereafter referred to as Agro-BGC, includes enzyme driven C4 photosynthesis, individual live and dead leaf, stem, and root carbon/nitrogen pools, separate senescence and litter fall processes, fruit growth, optional annual seeding, flood irrigation, a growing degree day phenology with a killing frost option, and a disturbance handler that effectively simulates fertilization, harvest, fire, and incremental irrigation. There are four Agro-BGC vegetation parameters that are unavailable for Panicum virgatum (switchgrass), and to meet the second requirement we have optimized the model across multiple calibration sites to obtain representative values for these parameters. We have verified simulated switchgrass yields against observations at three non-calibration sites in IL. Agro-BGC simulates switchgrass growth and yield at harvest very well at a single site. Our results suggest that a multi-site optimization scheme would be adequate for producing regional-scale estimates of bioenergy crop yields on high spatial resolution grids.
Procedure for the systematic orientation of digitised cranial models. Design and validation.
Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L
2015-12-01
Comparison of bony pieces requires that they are oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner and subsequently, the position of certain characteristic landmarks was determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieve results at least as accurate if not more accurate using the assisted method than those obtained using manual orientation, while inexperienced evaluators achieved more accurate orientation using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automated spatial orientations of virtual cranial models relative to standardised anatomical planes, independent of the operator and operator experience.
Overview of NASA GRC Electrified Aircraft Propulsion Systems Analysis Methods
NASA Technical Reports Server (NTRS)
Schnulo, Sydney
2017-01-01
The accurate modeling and analysis of electrified aircraft propulsion concepts require intricate coupling of subsystem components. The major challenge in electrified aircraft propulsion concept modeling lies in understanding how the subsystems "talk" to each other and the dependencies they have on one another.
RESEARCH ACTIVITIES AT U.S. GOVERNMENT AGENCIES IN SUBSURFACE REACTIVE TRANSPORT MODELING
The fate of contaminants in the environment is controlled by both chemical reactions and transport phenomena in the subsurface. Our ability to understand the significance of these processes over time requires an accurate conceptual model that incorporates the various mechanisms ...
AIR QUALITY MODELING AT COARSE-TO-FINE SCALES IN URBAN AREAS
Urban air toxics control strategies are moving towards a community based modeling approach, with an emphasis on assessing those areas that experience high air toxic concentration levels, the so-called "hot spots". This approach will require information that accurately maps and...
Shooting Free Throws, Probability, and the Golden Ratio
ERIC Educational Resources Information Center
Goodman, Terry
2010-01-01
Part of the power of algebra is that it provides students with tools that they can use to model a variety of problems and applications. Such modeling requires them to understand patterns and choose from a variety of representations--numeric, graphical, symbolic--to construct a model that accurately reflects the relationships found in the original…
Fuel consumption models for pine flatwoods fuel types in the southeastern United States
Clinton S. Wright
2013-01-01
Modeling fire effects, including terrestrial and atmospheric carbon fluxes and pollutant emissions during wildland fires, requires accurate predictions of fuel consumption. Empirical models were developed for predicting fuel consumption from fuel and environmental measurements on a series of operational prescribed fires in pine flatwoods ecosystems in the southeastern...
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
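For orientation, the standard nodal admittance construction that the generalized formulation extends can be sketched briefly; the generalized matrix itself avoids the common voltage reference assumed here. The branch impedances and load current below are hypothetical.

```python
import numpy as np

# Build the nodal admittance matrix of a 3-node radial feeder
branches = [(0, 1, 1 / (0.01 + 0.05j)),   # (from, to, branch admittance)
            (1, 2, 1 / (0.02 + 0.08j))]
n = 3
Y = np.zeros((n, n), dtype=complex)
for f, t, y in branches:
    Y[f, f] += y; Y[t, t] += y
    Y[f, t] -= y; Y[t, f] -= y

# Hold node 0 at the source voltage, inject a load current at node 2,
# and solve the reduced system Y_uu V_u = I_u - Y_uk V_k
V_src = 1.0 + 0j
I = np.zeros(n, dtype=complex); I[2] = -0.5 + 0.1j   # hypothetical load
V_u = np.linalg.solve(Y[1:, 1:], I[1:] - Y[1:, 0] * V_src)
print("node voltages:", V_u)
```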
NASA Astrophysics Data System (ADS)
Yamana, Teresa K.; Eltahir, Elfatih A. B.
2011-02-01
This paper describes the use of satellite-based estimates of rainfall to force the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS), a hydrology-based mechanistic model of malaria transmission. We first examined the temporal resolution of rainfall input required by HYDREMATS. Simulations conducted over Banizoumbou village in Niger showed that for reasonably accurate simulation of mosquito populations, the model requires rainfall data with at least 1 h resolution. We then investigated whether HYDREMATS could be effectively forced by satellite-based estimates of rainfall instead of ground-based observations. The Climate Prediction Center morphing technique (CMORPH) precipitation estimates distributed by the National Oceanic and Atmospheric Administration are available at a 30 min temporal resolution and 8 km spatial resolution. We compared mosquito populations simulated by HYDREMATS when the model is forced by adjusted CMORPH estimates and by ground observations. The results demonstrate that adjusted rainfall estimates from satellites can be used with a mechanistic model to accurately simulate the dynamics of mosquito populations.
Eddy, Sean R.
2008-01-01
Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
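Under the conjecture, computing significance reduces to a Gumbel tail with λ fixed at log 2, so only the location parameter needs calibration. A minimal sketch, with μ and the comparison count as assumed inputs:

```python
import math

def gumbel_pvalue(score, mu, lam=math.log(2)):
    """P(S >= score) for Gumbel-distributed bit scores with the
    conjectured fixed lambda = log 2; mu must still be estimated."""
    return 1.0 - math.exp(-math.exp(-lam * (score - mu)))

def evalue(score, mu, n_comparisons):
    # Expected number of hits this strong in a full database search
    return n_comparisons * gumbel_pvalue(score, mu)

print(evalue(score=25.0, mu=-3.0, n_comparisons=1_000_000))  # ~3.7e-3
```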
A fourth order accurate finite difference scheme for the computation of elastic waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.
1986-01-01
A finite difference scheme for elastic waves is introduced. The model is based on the first-order system of equations for the velocities and stresses. The differencing is fourth-order accurate on the spatial derivatives and second-order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces, and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth-order scheme requires two-thirds to one-half the resolution of a typical second-order scheme to give comparable accuracy.
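The building block of such a scheme, a fourth-order-accurate central difference for the first spatial derivative, can be checked in a few lines (illustrative only, with crude edge handling):

```python
import numpy as np

def d1_fourth_order(f, h):
    """Fourth-order central difference for the first derivative,
    with a crude constant fill at the two edge points."""
    fp = np.empty_like(f)
    fp[2:-2] = (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)
    fp[:2], fp[-2:] = fp[2], fp[-3]
    return fp

x = np.linspace(0, 2 * np.pi, 101)
err = np.abs(d1_fourth_order(np.sin(x), x[1] - x[0]) - np.cos(x))[2:-2]
print(f"max interior error: {err.max():.1e}")   # O(h^4) in the interior
```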
Using structure to explore the sequence alignment space of remote homologs.
Kuziemko, Andrew; Honig, Barry; Petrey, Donald
2011-10-01
Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.
Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
COMPARISON OF MEASURED AND MODELED SURFACE FLUXES OF HEAT, MOISTURE, AND CHEMICAL DRY DEPOSITION
Realistic air quality modeling requires accurate simulation of both meteorological and chemical processes within the planetary boundary layer (PBL). In vegetated areas, the primary pathway for surface fluxes of moisture as well as many gaseous chemicals is through vegetative transp...
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.
2008-01-01
Space radiation transport codes require accurate models for hadron production in intermediate energy nucleus-nucleus collisions. Codes require cross sections to be written in terms of lab frame variables and it is important to be able to verify models against experimental data in the lab frame. Several models are compared to lab frame data. It is found that models based on algebraic parameterizations are unable to describe intermediate energy differential cross section data. However, simple thermal model parameterizations, when appropriately transformed from the center of momentum to the lab frame, are able to account for the data.
Evaluating a slope-stability model for shallow rain-induced landslides using gage and satellite data
Yatheendradas, S.; Kirschbaum, D.; Baum, Rex L.; Godt, Jonathan W.
2014-01-01
Improving the prediction skill of landslide early warning systems requires accurate estimation of the conditions that trigger slope failures. This study tested a slope-stability model for shallow rainfall-induced landslides by utilizing rainfall information from gauge and satellite records. We used the TRIGRS model (Transient Rainfall Infiltration and Grid-based Regional Slope-stability analysis) for simulating the evolution of the factor of safety due to rainfall infiltration. Using a spatial subset of a well-characterized digital landscape from an earlier study, we considered shallow failure on a slope adjoining an urban transportation roadway near the Seattle area in Washington, USA. We ran the TRIGRS model using high-quality rain gauge and satellite-based rainfall data from the Tropical Rainfall Measuring Mission (TRMM). Preliminary results with parameterized soil depth values suggest that the steeper slopes in this spatial domain have factor of safety values that lie extremely close to the failure limit within an extremely narrow range of values, producing multiple false alarms. When the soil depths were constrained using a back-analysis procedure to ensure that slopes were stable under initial conditions, the model accurately predicted the timing and location of the landslide observation without false alarms over time for the gauge rain data. The TRMM satellite rainfall data did not adequately capture the rainfall peak magnitudes and accumulation over the study period, and as a result failed to predict the landslide event. These preliminary results indicate that more accurate and higher-resolution rain data (e.g., from the upcoming Global Precipitation Measurement (GPM) mission) are required to provide accurate and reliable landslide predictions in ungauged basins.
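TRIGRS evaluates an infinite-slope factor of safety of the form FS = tanφ/tanβ + [c − ψ·γw·tanφ]/(γs·z·sinβ·cosβ), in which a rising pressure head ψ during infiltration drives FS toward the failure limit of 1. A minimal sketch with hypothetical soil values:

```python
import math

def factor_of_safety(z, psi, slope_deg, c=8e3, phi_deg=33.0,
                     gamma_s=2.0e4, gamma_w=9.81e3):
    """Infinite-slope FS at depth z (m) with pressure head psi (m);
    cohesion, friction angle, and unit weights are hypothetical."""
    beta, phi = math.radians(slope_deg), math.radians(phi_deg)
    friction = math.tan(phi) / math.tan(beta)
    cohesion = (c - psi * gamma_w * math.tan(phi)) / \
               (gamma_s * z * math.sin(beta) * math.cos(beta))
    return friction + cohesion

# Rising pressure head during infiltration drives FS toward 1 (failure)
for psi in (0.0, 0.5, 1.0):
    print(psi, round(factor_of_safety(z=2.0, psi=psi, slope_deg=40.0), 2))
```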
A study of modelling simplifications in ground vibration predictions for railway traffic at grade
NASA Astrophysics Data System (ADS)
Germonpré, M.; Degrande, G.; Lombaert, G.
2017-10-01
Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.
Evaluating the accuracy of wear formulae for acetabular cup liners.
Wu, James Shih-Shyn; Hsu, Shu-Ling; Chen, Jian-Horng
2010-02-01
This study proposes two methods for exploring the wear volume of a worn liner. The first method is a numerical method, in which SolidWorks software is used to create models of the worn out regions of liners at various wear directions and depths. The second method is an experimental one, in which a machining center is used to mill polyoxymethylene to manufacture worn and unworn liner models, then the volumes of the models are measured. The results show that the SolidWorks software is a good tool for presenting the wear pattern and volume of a worn liner. The formula provided by Ilchmann is the most suitable for computing liner volume loss, but is not accurate enough. This study suggests that a more accurate wear formula is required. This is crucial for accurate evaluation of the performance of hip components implanted in patients, as well as for designing new hip components.
Joseph K. O. Amoah; Devendra M. Amatya; Soronnadi Nnaji
2012-01-01
Hydrologic models often require correct estimates of surface macro-depressional storage to accurately simulate rainfall–runoff processes. Traditionally, depression storage is determined through model calibration or lumped with soil storage components or on an ad hoc basis. This paper investigates a holistic approach for estimating surface depressional storage capacity...
Comparison and analysis of theoretical models for diffusion-controlled dissolution.
Wang, Yanxing; Abrahamsson, Bertil; Lindfors, Lennart; Brasseur, James G
2012-05-07
Dissolution models require, at their core, an accurate diffusion model. The accuracy of the model for diffusion-dominated dissolution is particularly important with the trend toward micro- and nanoscale drug particles. Often such models are based on the concept of a "diffusion layer." Here a framework is developed for diffusion-dominated dissolution models, and we discuss the inadequacy of classical models that are based on an unphysical constant diffusion layer thickness assumption, or do not correctly modify dissolution rate due to "confinement effects": (1) the increase in bulk concentration from confinement of the dissolution process, (2) the modification of the flux model (the Sherwood number) by confinement. We derive the exact mathematical solution for a spherical particle in a confined fluid with impermeable boundaries. Using this solution, we analyze the accuracy of a time-dependent "infinite domain model" (IDM) and "quasi steady-state model" (QSM), both formally derived for infinite domains but which can be applied in approximate fashion to confined dissolution with proper adjustment of a concentration parameter. We show that dissolution rate is sensitive to the degree of confinement or, equivalently, to the total concentration C_tot. The most practical model, the QSM, is shown to be very accurate for most applications and, consequently, can be used with confidence in design-level dissolution models so long as confinement is accurately treated. The QSM predicts the ratio of diffusion layer thickness to particle radius (the Sherwood number) as a constant plus a correction that depends on the degree of confinement. The QSM also predicts that the time required for complete saturation or dissolution in diffusion-controlled dissolution experiments is singular (i.e., infinite) when total concentration equals the solubility. Using the QSM, we show that measured differences in dissolution rate in a diffusion-controlled dissolution experiment are a result of differences in the degree of confinement on the increase in bulk concentration independent of container geometry and polydisperse vs single particle dissolution. We conclude that the constant diffusion-layer thickness assumption is incorrect in principle and should be replaced by the QSM with accurate treatment of confinement in models of diffusion-controlled dissolution.
USDA-ARS?s Scientific Manuscript database
Accurate estimation of energy expenditure (EE) in children and adolescents is required for a better understanding of physiological, behavioral, and environmental factors affecting energy balance. Cross-sectional time series (CSTS) models, which account for correlation structure of repeated observati...
Exact Tests for the Rasch Model via Sequential Importance Sampling
ERIC Educational Resources Information Center
Chen, Yuguo; Small, Dylan
2005-01-01
Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…
Development of cropland management dataset to support U.S. SWAT assessments
USDA-ARS?s Scientific Manuscript database
The Soil and Water Assessment Tool (SWAT) is a widely used hydrologic/water quality simulation model in the U.S. Process-based models like SWAT require a great deal of data to accurately represent the natural world, including topography, landuse, soils, weather, and management. With the exception ...
Modeling and scaleup of steamflood in a heterogeneous reservoir
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehghani, K.; Basham, W.M.; Durlofsky, L.J.
1995-11-01
A series of simulation runs was conducted for different geostatistically derived cross-sectional models to study the degree of heterogeneity required for proper modeling of steamfloods in a thick, heavy-oil reservoir with thin diatomite barriers. Different methods for coarsening the most detailed models were applied, and performance predictions for the coarsened and detailed models were compared. Use of a general scaleup method provided the most accurate coarse-grid models.
Prediction of power requirements for a longwall armored face conveyor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broadfoot, A.R.; Betz, R.E.
1997-01-01
Longwall armored face conveyors (AFCs) have traditionally been designed using a combination of heuristics and simple models. However, as longwalls increase in length, these design procedures are proving to be inadequate. The result has either been a costly loss of production due to AFC stalling or component failure, or larger than necessary capital investment due to overdesign. In order to allow accurate estimation of the power requirements for an AFC, this paper develops a comprehensive model of all the friction forces associated with the AFC. Power requirement predictions obtained from these models are then compared with measurements from two mine faces.
Modeling Dynamic Regulatory Processes in Stroke.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Jason E.; Jarman, Kenneth D.; Taylor, Ronald C.
2012-10-11
The ability to examine in silico the behavior of biological systems can greatly accelerate the pace of discovery in disease pathologies, such as stroke, where in vivo experimentation is lengthy and costly. In this paper we describe an approach to in silico examination of blood genomic responses to neuroprotective agents and subsequent stroke through the development of dynamic models of the regulatory processes observed in the experimental gene expression data. First, we identified functional gene clusters from these data. Next, we derived ordinary differential equations (ODEs) relating regulators and functional clusters from the data. These ODEs were used to develop dynamic models that simulate the expression of regulated functional clusters using system dynamics as the modeling paradigm. The dynamic model has the considerable advantage of only requiring an initial starting state, and does not require measurement of regulatory influences at each time point in order to make accurate predictions. The manipulation of input model parameters, such as changing the magnitude of gene expression, made it possible to assess the behavior of the networks through time under varying conditions. We report that an optimized dynamic model can provide accurate predictions of overall system behavior under several different preconditioning paradigms.
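A minimal sketch of the kind of ODE dynamic model described, one regulator driving a functional cluster with saturating activation and first-order decay, is shown below; the rate constants are invented for illustration, whereas the paper derives its equations from the expression data.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cluster_dynamics(t, y, k_act=1.2, k_deg=0.4, reg=1.0):
    """One regulator at level `reg` driving a functional cluster g:
    saturating activation minus first-order decay."""
    g = y[0]
    return [k_act * reg / (1.0 + reg) - k_deg * g]

sol = solve_ivp(cluster_dynamics, t_span=(0.0, 24.0), y0=[0.1],
                t_eval=np.linspace(0.0, 24.0, 49))
print(sol.y[0, -1])   # approaches the steady state k_act/(2*k_deg) = 1.5
```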
[An individual facial shield for a sportsman with an orofacial injury].
de Baat, C; Peters, R; van Iperen-Keiman, C M; de Vleeschouwer, M
2005-05-01
Facial shields are used when practising contact sports, high-speed sports, sports using hard balls, sticks or bats, sports using protective shields or covers, and sports using hard boardings around the sports ground. Examples of facial shields are the commercially available helmets standardised per branch of sport. Fabrication of individual protective shields is primarily restricted to mouth guards. In individual cases a more extensive facial shield is demanded, for instance in case of a surgically stabilised facial bone fracture. In order to fabricate an extensive individual facial shield, an accurate model of the anterior part of the head is required. An accurate model can be provided by making an impression of the face, which is poured in dental stone. Another method is producing a stereolithographic model using computed tomography or magnetic resonance imaging. On the accurate model the facial shield can be designed and fabricated from a strictly safe material, such as polyvinylchloride or polycarbonate.
Verification of a 2 kWe Closed-Brayton-Cycle Power Conversion System Mechanical Dynamics Model
NASA Technical Reports Server (NTRS)
Ludwiczak, Damian R.; Le, Dzu K.; McNelis, Anne M.; Yu, Albert C.; Samorezov, Sergey; Hervol, Dave S.
2005-01-01
Vibration test data from an operating 2 kWe closed-Brayton-cycle (CBC) power conversion system (PCS) located at the NASA Glenn Research Center was used for a comparison with a dynamic disturbance model of the same unit. This effort was performed to show that a dynamic disturbance model of a CBC PCS can be developed that can accurately predict the torque and vibration disturbance fields of such class of rotating machinery. The ability to accurately predict these disturbance fields is required before such hardware can be confidently integrated onto a spacecraft mission. Accurate predictions of CBC disturbance fields will be used for spacecraft control/structure interaction analyses and for understanding the vibration disturbances affecting the scientific instrumentation onboard. This paper discusses how test cell data measurements for the 2 kWe CBC PCS were obtained, the development of a dynamic disturbance model used to predict the transient torque and steady state vibration fields of the same unit, and a comparison of the two sets of data.
Methods to achieve accurate projection of regional and global raster databases
Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan
2002-01-01
Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand et al., 1995; Yang et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulted approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
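In the spirit of TEAD's hybrid score, the sketch below combines a distance term (exploration) with a first-order Taylor residual estimated from the nearest training sample (exploitation); the weighting and normalisation are simplifying assumptions, not the published formulation.

```python
import numpy as np

def hybrid_score(candidates, X_train, grads, w=0.5):
    """Score each candidate by (a) distance to the nearest training
    sample (exploration) and (b) the magnitude of the first-order
    Taylor change predicted from that sample (exploitation).
    grads: surrogate gradients evaluated at the training samples."""
    d = np.linalg.norm(candidates[:, None, :] - X_train[None, :, :], axis=2)
    nearest, dist = d.argmin(axis=1), d.min(axis=1)
    delta = candidates - X_train[nearest]
    taylor = np.abs(np.einsum('ij,ij->i', delta, grads[nearest]))
    norm = lambda v: v / (v.max() + 1e-12)
    return w * norm(dist) + (1.0 - w) * norm(taylor)

# In an adaptive loop: run the model at the best-scoring candidate,
# refit the surrogate, and repeat until a stopping criterion is met.
# x_new = candidates[hybrid_score(candidates, X_train, grads).argmax()]
```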
NPAC-Nozzle Performance Analysis Code
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1997-01-01
A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, state, and other relations, which permits fast and accurate calculation of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.
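The control-volume result at the heart of any gross-thrust estimate is the textbook momentum-plus-pressure-area relation, sketched below with illustrative numbers (this is the standard form, not NPAC's full coupled solution):

```python
def gross_thrust(mdot, v_exit, p_exit, p_amb, a_exit):
    """Momentum flux plus pressure-area term across the exit plane."""
    return mdot * v_exit + (p_exit - p_amb) * a_exit

# Slightly underexpanded nozzle: 50 kg/s at 600 m/s, 110 kPa exit static
print(gross_thrust(50.0, 600.0, 110e3, 101.3e3, 0.3), "N")  # ~32.6 kN
```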
NASA Technical Reports Server (NTRS)
Stroosnijder, L.; Lascano, R. J.; Newton, R. W.; Vanbavel, C. H. M.
1984-01-01
A general method is proposed that uses a time series of L-band emissivities as input to a hydrological model for continuously monitoring net rainfall and evaporation, as well as the water content over the entire soil profile. The model requires a sufficiently accurate and general relation between soil emissivity and surface moisture content. A model was developed which requires the soil hydraulic properties as an additional input but does not need any weather data. The method is shown to be numerically consistent.
Modelling the physics in iterative reconstruction for transmission computed tomography
Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.
2013-01-01
There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of X-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary X-ray system geometries and allows detailed models of photon transport and detection physics to be included, to accurately correct for a wide variety of image degrading effects. This paper reviews discretisation issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261
Advanced EUV mask and imaging modeling
NASA Astrophysics Data System (ADS)
Evanschitzky, Peter; Erdmann, Andreas
2017-10-01
The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
Thermodynamics and combustion modeling
NASA Technical Reports Server (NTRS)
Zeleznik, Frank J.
1986-01-01
Modeling fluid phase phenomena blends the conservation equations of continuum mechanics with the property equations of thermodynamics. The thermodynamic contribution becomes especially important when the phenomena involve chemical reactions as they do in combustion systems. The successful study of combustion processes requires (1) the availability of accurate thermodynamic properties for both the reactants and the products of reaction and (2) the computational capabilities to use the properties. A discussion is given of some aspects of the problem of estimating accurate thermodynamic properties both for reactants and products of reaction. Also, some examples of the use of thermodynamic properties for modeling chemically reacting systems are presented. These examples include one-dimensional flow systems and the internal combustion engine.
Nicholas A. Povak; Paul F. Hessburg; Todd C. McDonnell; Keith M. Reynolds; Timothy J. Sullivan; R. Brion Salter; Bernard J. Crosby
2014-01-01
Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous...
Williams, Jennifer A.; Schmitter-Edgecombe, Maureen; Cook, Diane J.
2016-01-01
Introduction Reducing the amount of testing required to accurately detect cognitive impairment is clinically relevant. The aim of this research was to determine the fewest number of clinical measures required to accurately classify participants as healthy older adult, mild cognitive impairment (MCI) or dementia using a suite of classification techniques. Methods Two variable selection machine learning models (i.e., naive Bayes, decision tree), a logistic regression, and two participant datasets (i.e., clinical diagnosis; clinical dementia rating, CDR) were explored. Participants classified using clinical diagnosis criteria included 52 individuals with dementia, 97 with MCI, and 161 cognitively healthy older adults. Participants classified using CDR included 154 individuals with CDR = 0, 93 individuals with CDR = 0.5, and 25 individuals with CDR = 1.0+. Twenty-seven demographic, psychological, and neuropsychological variables were available for variable selection. Results No significant difference was observed between naive Bayes, decision tree, and logistic regression models for classification of both clinical diagnosis and CDR datasets. Participant classification (70.0–99.1%), geometric mean (60.9–98.1%), sensitivity (44.2–100%), and specificity (52.7–100%) were generally satisfactory. Unsurprisingly, the MCI/CDR = 0.5 participant group was the most challenging to classify. Through variable selection, only 2–9 variables were required for classification and varied between datasets in a clinically meaningful way. Conclusions The current study results reveal that machine learning techniques can accurately classify cognitive impairment and reduce the number of measures required for diagnosis. PMID:26332171
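A minimal sketch of the variable-selection workflow (wrapper-style feature selection around a decision tree) is shown below, using synthetic two-class data; the study's 27 measures, three diagnostic groups, and exact procedure are not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic two-class stand-in: 27 candidate measures, few informative
rng = np.random.default_rng(0)
X = rng.standard_normal((310, 27))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.standard_normal(310) > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
selector = SequentialFeatureSelector(clf, n_features_to_select=5, cv=5)
selector.fit(X, y)
acc = cross_val_score(clf, selector.transform(X), y, cv=5).mean()
print(np.flatnonzero(selector.get_support()), round(acc, 2))
```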
NASA Astrophysics Data System (ADS)
Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John
2001-01-01
For photodynamic therapy (PDT) of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3-Approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3-Approximation accurately predicts optical parameters in intralipid/methylene blue based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into fluence predicted by both the P3-Approximation and Grosjean Theory, correlate well with experimental data. The P3-Approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3-Approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.
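The full P3-Approximation radiance expressions are lengthy; as a hedged illustration of the quantities involved, the sketch below evaluates the simpler diffusion (P1) approximation for the fluence rate of an isotropic point source in an infinite homogeneous medium. The optical coefficients are assumed, order-of-magnitude soft-tissue values, not the paper's measured parameters.

```python
# Minimal sketch of the diffusion (P1) approximation for fluence rate from
# an isotropic point source in an infinite homogeneous medium. The paper
# uses the more accurate P3-Approximation; this lower-order model is shown
# only to illustrate the quantities involved. Optical properties below are
# assumed, illustrative values, not measured ones.
import numpy as np

mu_a = 0.03   # absorption coefficient [1/mm] (assumed)
mu_sp = 1.0   # reduced scattering coefficient [1/mm] (assumed)
P = 1.0       # source power [W]

D = 1.0 / (3.0 * (mu_a + mu_sp))   # diffusion coefficient [mm]
mu_eff = np.sqrt(mu_a / D)         # effective attenuation [1/mm]

r = np.linspace(1.0, 20.0, 5)      # radial distance from source [mm]
phi = P * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)  # fluence rate [W/mm^2]
for ri, pi in zip(r, phi):
    print(f"r = {ri:5.2f} mm  ->  phi = {pi:.3e} W/mm^2")
```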
Brand, Samuel P C; Rock, Kat S; Keeling, Matt J
2016-04-01
Epidemiological modelling has a vital role to play in policy planning and prediction for the control of vectors, and hence the subsequent control of vector-borne diseases. To decide between competing policies requires models that can generate accurate predictions, which in turn requires accurate knowledge of vector natural histories. Here we highlight the importance of the distribution of times between life-history events, using short-lived midge species as an example. In particular we focus on the distribution of the extrinsic incubation period (EIP) which determines the time between infection and becoming infectious, and the distribution of the length of the gonotrophic cycle which determines the time between successful bites. We show how different assumptions for these periods can radically change the basic reproductive ratio (R0) of an infection and additionally the impact of vector control on the infection. These findings highlight the need for detailed entomological data, based on laboratory experiments and field data, to correctly construct the next-generation of policy-informing models.
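The abstract's central point, that the distribution of the EIP and not just its mean drives R0, can be made concrete with a standard survival calculation: R0 scales with the probability that an infected vector outlives its EIP. The sketch below compares a fixed EIP with an exponentially distributed one of the same mean; the mortality rate and mean EIP are assumed, illustrative values.

```python
# Sketch of why the EIP *distribution* (not just its mean) matters for R0.
# R0 for a vector-borne infection scales with the probability that an
# infected vector survives its extrinsic incubation period (EIP).
# Parameter values are assumed, illustrative ones for a short-lived midge.
import numpy as np

mu = 0.2     # vector mortality rate [1/day] (assumed)
tau = 10.0   # mean EIP [days] (assumed)

# Fixed (delta-distributed) EIP: must survive exactly tau days.
p_fixed = np.exp(-mu * tau)

# Exponentially distributed EIP with the same mean (rate sigma = 1/tau):
# P(become infectious before dying) = sigma / (sigma + mu).
sigma = 1.0 / tau
p_exp = sigma / (sigma + mu)

print(f"P(survive EIP), fixed EIP:       {p_fixed:.3f}")
print(f"P(survive EIP), exponential EIP: {p_exp:.3f}")
print(f"ratio of implied R0 values:      {p_exp / p_fixed:.1f}x")
```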
A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.
O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A
2015-02-01
Research in understanding biofilm formation is dependent on accurate and representative measurements of the steric forces related to the polymer brush on bacterial surfaces. A MATLAB program to analyze force curves from an AFM efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the amount of time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time. Copyright © 2014 Elsevier B.V. All rights reserved.
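The paper's program fits a modified AdG model in MATLAB; as a hedged stand-in, the sketch below fits a commonly used single-exponential simplification of the AdG steric force, F(h) = 50 kB T R L0 Γ^(3/2) exp(-2πh/L0), to a synthetic force curve with SciPy. The probe radius and the "measured" data are assumptions, not the paper's values.

```python
# Hedged sketch: fit a simplified AdG steric-force expression to a synthetic
# AFM force curve. The published program uses a modified AdG form; this
# single-exponential form is a common simplification, used here only to
# illustrate the fitting step. Units are scaled (nm, nN) to keep the fit
# numerically well conditioned.
import numpy as np
from scipy.optimize import curve_fit

kB, T = 1.380649e-23, 298.0   # Boltzmann constant [J/K], temperature [K]
R = 20e-9                     # probe radius [m] (assumed)

def adg_force_nN(h_nm, L0_nm, Gamma16):
    # F(h) = 50 kB T R L0 Gamma^1.5 exp(-2 pi h / L0), returned in nN;
    # L0 in nm, Gamma in units of 1e16 m^-2.
    L0 = L0_nm * 1e-9
    Gamma = Gamma16 * 1e16
    F = 50.0 * kB * T * R * L0 * Gamma**1.5 * np.exp(-2*np.pi*h_nm/L0_nm)
    return F * 1e9

# Synthetic "measured" curve: true L0 = 100 nm, Gamma = 1e17 m^-2, 5% noise.
rng = np.random.default_rng(1)
h_nm = np.linspace(10, 300, 120)
F = adg_force_nN(h_nm, 100.0, 10.0) * (1 + 0.05*rng.standard_normal(h_nm.size))

popt, _ = curve_fit(adg_force_nN, h_nm, F, p0=(60.0, 3.0))
print(f"fitted L0 = {popt[0]:.1f} nm, Gamma = {popt[1]*1e16:.2e} m^-2")
```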
Transforming Multidisciplinary Customer Requirements to Product Design Specifications
NASA Astrophysics Data System (ADS)
Ma, Xiao-Jie; Ding, Guo-Fu; Qin, Sheng-Feng; Li, Rong; Yan, Kai-Yin; Xiao, Shou-Ne; Yang, Guang-Wu
2017-09-01
With the increasing complexity of mechatronic products, it is necessary to involve multidisciplinary design teams; the traditional customer requirements modeling for a single-discipline team therefore becomes difficult to apply in a multidisciplinary team and project, since team members with various disciplinary backgrounds may have different interpretations of the customers' requirements. A new synthesized multidisciplinary customer requirements modeling method is provided for obtaining and describing a common understanding of customer requirements (CRs) and, more importantly, for transferring them into detailed and accurate product design specifications (PDS) to interact with different team members effectively. A case study of designing a high-speed train verifies the rationality and feasibility of the proposed multidisciplinary requirement modeling method for complex mechatronic product development. The proposed research offers guidance for realizing customer-driven personalized customization of complex mechatronic products.
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while it is bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UP, UQ, and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to develop more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating from the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
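A minimal sketch of the limit-state idea on the paper's simplest test article, a lumped spring-mass system: propagate assumed dispersions in stiffness and mass to the natural frequency and evaluate a hypothetical +/-5% frequency requirement by Monte Carlo. The dispersions and the requirement are illustrative assumptions, not SLS values.

```python
# Minimal limit-state sketch on a lumped spring-mass model: propagate
# assumed uncertainties in stiffness k and mass m to the natural frequency
# f = sqrt(k/m)/(2*pi), then evaluate g = margin - |deviation| against a
# hypothetical requirement that f stay within +/-5% of nominal.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

k = rng.normal(1.0e6, 5.0e4, N)   # stiffness [N/m], 5% std (assumed)
m = rng.normal(10.0, 0.2, N)      # mass [kg], 2% std (assumed)

f = np.sqrt(k / m) / (2.0 * np.pi)
f_nom = np.sqrt(1.0e6 / 10.0) / (2.0 * np.pi)

g = 0.05 * f_nom - np.abs(f - f_nom)  # limit state: g < 0 => requirement violated
p_fail = np.mean(g < 0.0)
print(f"nominal frequency: {f_nom:.2f} Hz")
print(f"P(frequency outside +/-5% band) ~ {p_fail:.4f}")
```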
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. Particularly, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
The effect of different imitation models on the accuracy and speed of imitation of movement
Nishizawa, Hitomi; Kimura, Teiji; Goh, Ah-Cheng
2015-01-01
[Purpose] The purpose of this study was to compare the accuracy, speed and subjective ease of imitation of movement using three different imitation models. [Subjects] Thirty-four right-handed healthy males participated in this study. [Methods] The imitation task chosen for this study was an asymmetric combined motion of the upper and lower limbs. Three kinds of imitation models were displayed on a screen as follows: a) third person perspective mirror imitation (3PM), b) third person perspective anatomical imitation (3PA), and c) first person perspective ipsilateral imitation (1PI). Subjects were instructed to imitate the movement shown on a screen as quickly and as accurately as possible. They executed four sets of the movement with each set consisting of one trial of each of the three imitation models. [Results] 3PM was the most accurate, and 1PI was the fastest in speed and subjective ease of imitation, compared with the other two imitation models. [Conclusion] These results suggest that 1PI and 3PM, which do not require mental rotation of the movement task as required by 3PA, should be considered more suitable imitation models for teaching healthy subjects how to move. PMID:26696710
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Accuracy in risk assessment, which is desirable in order to ensure protection of the public health while avoiding over-regulation of economically-important substances, requires quantitatively accurate, in vivo descriptions of dose-response and time-course behaviors. This level of...
Rocket Fuel R and D at AFRL: Recent Activities and Future Direction
2017-04-12
[Briefing slides; only fragments are recoverable. Cleared for public release, clearance number 17163. Topics: rocket engine cycles and environments (SpaceX Merlin 1D, 190 klbf; Russian RD-180, 860 klbf; gas-generator cycle; ox-rich staged combustion); affordability and reusability; modeling and simulation as a key to development, requiring accurate models ("CFD simulations... shorten the test-fail-fix loop" - SpaceX).]
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
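For readers unfamiliar with the machinery, the sketch below runs stepping-stone sampling on a conjugate normal toy model, where the power posteriors can be sampled exactly and the true (log) marginal likelihood is available in closed form. It estimates a single marginal likelihood rather than a Bayes factor directly, and the stone placement and sample sizes are assumed settings, not the paper's.

```python
# Toy illustration of stepping-stone sampling for a (log) marginal
# likelihood, on a conjugate model where the exact answer is available.
# Model: x_i ~ N(theta, s2), prior theta ~ N(0, t2). Power posteriors are
# normal here, so we sample them directly instead of running MCMC.
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
s2, t2, n = 1.0, 4.0, 20
x = rng.normal(0.7, np.sqrt(s2), n)

def loglik(theta):
    return norm.logpdf(x[:, None], theta, np.sqrt(s2)).sum(axis=0)

# Stones beta_0 = 0 (prior) ... beta_K = 1 (posterior), packed near the prior.
betas = np.linspace(0.0, 1.0, 33) ** 3
S = 5000
logZ = 0.0
for bk, bk1 in zip(betas[:-1], betas[1:]):
    lam = bk * n / s2 + 1.0 / t2          # power-posterior precision
    mu = (bk * x.sum() / s2) / lam        # power-posterior mean
    theta = rng.normal(mu, 1.0 / np.sqrt(lam), S)
    w = (bk1 - bk) * loglik(theta)
    logZ += np.logaddexp.reduce(w) - np.log(S)  # log mean of L^(db), stably

cov = s2 * np.eye(n) + t2 * np.ones((n, n))     # exact marginal of the data
print(f"stepping-stone log Z: {logZ:.3f}")
print(f"exact log Z:          {multivariate_normal.logpdf(x, np.zeros(n), cov):.3f}")
```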
Haueisen, J; Ramon, C; Eiselt, M; Brauer, H; Nowak, H
1997-08-01
Modeling in magnetoencephalography (MEG) and electroencephalography (EEG) requires knowledge of the in vivo tissue resistivities of the head. The aim of this paper is to examine the influence of tissue resistivity changes on the neuromagnetic field and the electric scalp potential. A high-resolution finite element method (FEM) model (452,162 elements, 2-mm resolution) of the human head with 13 different tissue types is employed for this purpose. Our main finding was that the magnetic fields are sensitive to changes in the tissue resistivity in the vicinity of the source. In comparison, the electric surface potentials are sensitive to changes in the tissue resistivity in the vicinity of the source and in the vicinity of the position of the electrodes. The magnitude (strength) of magnetic fields and electric surface potentials is strongly influenced by tissue resistivity changes, while the topography is not as strongly influenced. Therefore, an accurate modeling of magnetic field and electric potential strength requires accurate knowledge of tissue resistivities, while for source localization procedures this knowledge might not be a necessity.
Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2016-01-01
Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Δx+ = 45, Δy+ = 2, and Δz+ = 17. Various subgrid-scale (SGS) models have been used and, except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to represent them correctly.
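Translating the quoted wall-unit targets into physical grid spacings is a one-line calculation, Δ+ = Δ·u_τ/ν. The friction velocity and viscosity below are assumed, illustrative values, not VSPT conditions.

```python
# Quick check of LES grid spacing in wall units, Delta+ = Delta * u_tau / nu.
# Flow values below are assumed, illustrative numbers, not the VSPT case.
nu = 1.5e-5   # kinematic viscosity [m^2/s] (assumed, air)
u_tau = 1.2   # friction velocity [m/s] (assumed)

targets = {"streamwise dx+": 45.0, "wall-normal dy+": 2.0, "spanwise dz+": 17.0}
for name, dplus in targets.items():
    delta = dplus * nu / u_tau   # physical spacing meeting the target
    print(f"{name:16s} = {dplus:5.1f}  ->  {delta*1e6:7.1f} micrometers")
```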
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
Early prediction of student goals and affect in narrative-centered learning environments
NASA Astrophysics Data System (ADS)
Lee, Sunyoung
Recent years have seen a growing recognition of the role of goal and affect recognition in intelligent tutoring systems. Goal recognition is the task of inferring users' goals from a sequence of observations of their actions. Because of the uncertainty inherent in every facet of human computer interaction, goal recognition is challenging, particularly in contexts in which users can perform many actions in any order, as is the case with intelligent tutoring systems. Affect recognition is the task of identifying the emotional state of a user from a variety of physical cues, which are produced in response to affective changes in the individual. Accurately recognizing student goals and affect states could contribute to more effective and motivating interactions in intelligent tutoring systems. By exploiting knowledge of student goals and affect states, intelligent tutoring systems can dynamically modify their behavior to better support individual students. To create effective interactions in intelligent tutoring systems, goal and affect recognition models should satisfy two key requirements. First, because incorrectly predicted goals and affect states could significantly diminish the effectiveness of interactive systems, goal and affect recognition models should provide accurate predictions of user goals and affect states. When observations of users' activities become available, recognizers should make accurate "early" predictions. Second, goal and affect recognition models should be highly efficient so they can operate in real time. To address these key issues, we present an inductive approach to recognizing student goals and affect states in intelligent tutoring systems by learning goal and affect recognition models. Our work focuses on goal and affect recognition in an important new class of intelligent tutoring systems, narrative-centered learning environments. We report the results of empirical studies of induced recognition models from observations of students' interactions in narrative-centered learning environments. Experimental results suggest that induced models can make accurate early predictions of student goals and affect states, and they are sufficiently efficient to meet the real-time performance requirements of interactive learning environments.
NASA Astrophysics Data System (ADS)
Albatayneh, Aiman; Alterman, Dariusz; Page, Adrian; Moghtaderi, Behdad
2017-05-01
The design of low-energy buildings requires accurate thermal simulation software to assess the heating and cooling loads. Such designs should sustain thermal comfort for occupants and promote lower energy usage over the lifetime of the building. One of the house energy rating tools used in Australia is AccuRate, a star-rating tool for assessing and comparing the thermal performance of buildings, in which the heating and cooling loads are calculated based on fixed operational temperatures between 20 °C and 25 °C to sustain thermal comfort for the occupants. However, these fixed settings for time and temperature considerably increase the heating and cooling loads. The adaptive thermal model, by contrast, admits a broader range of weather conditions, interacts with the occupants, and promotes low-energy solutions to maintain thermal comfort. This can be achieved by natural ventilation (opening windows/doors), suitable clothing, shading, and low-energy heating/cooling solutions for the occupied spaces (rooms). These activities save a significant amount of operating energy, which can be taken into account when predicting the energy consumption of a building. Most building thermal assessment tools depend on energy-based approaches to predict the thermal performance of a building, e.g., AccuRate in Australia. This approach encourages the use of energy to maintain thermal comfort. This paper describes the advantages of a temperature-based approach to assessing a building's thermal performance (using an adaptive thermal comfort model) over an energy-based approach (the AccuRate software used in Australia). The temperature-based approach was validated and compared with the energy-based approach using four full-scale housing test modules located in Newcastle, Australia (Cavity Brick (CB), Insulated Cavity Brick (InsCB), Insulated Brick Veneer (InsBV) and Insulated Reverse Brick Veneer (InsRBV)), subjected to a range of seasonal conditions in a moderate climate. The time required for heating and/or cooling was estimated using both the adaptive thermal comfort approach and AccuRate predictions. Significant savings (of about 50%) in energy consumption, by minimising the time required for heating and cooling, were achieved using the adaptive thermal comfort model.
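A minimal sketch of the contrast drawn above, assuming the ASHRAE-55 adaptive relation T_comf = 0.31·T_out + 17.8 with a ±3.5 K band (80% acceptability) against the fixed 20-25 °C band; the outdoor and free-running indoor temperature series are synthetic, not the Newcastle module data.

```python
# Sketch contrasting a fixed 20-25 degC comfort band with an adaptive band.
# Adaptive comfort temperature per the ASHRAE-55 adaptive model:
#   T_comf = 0.31 * T_out + 17.8   (80% acceptability: +/- 3.5 K)
# Temperature series below are synthetic stand-ins, not measured data.
import numpy as np

rng = np.random.default_rng(3)
t_out = 18.0 + 6.0 * np.sin(np.linspace(0, 2*np.pi, 365)) + rng.normal(0, 2, 365)
t_indoor = 0.45 * t_out + 12.0   # crude free-running indoor temperature (assumed)

fixed_ok = (t_indoor >= 20.0) & (t_indoor <= 25.0)
t_comf = 0.31 * t_out + 17.8
adaptive_ok = np.abs(t_indoor - t_comf) <= 3.5

print(f"days needing heating/cooling, fixed band:    {np.sum(~fixed_ok)}")
print(f"days needing heating/cooling, adaptive band: {np.sum(~adaptive_ok)}")
```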
A novel model for estimating organic chemical bioconcentration in agricultural plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hung, H.; Mackay, D.; Di Guardo, A.
1995-12-31
There is increasing recognition that much human and wildlife exposure to organic contaminants can be traced through the food chain to bioconcentration in vegetation. For risk assessment, there is a need for an accurate model to predict organic chemical concentrations in plants. Existing models range from relatively simple correlations of concentrations using octanol-water or octanol-air partition coefficients, to complex models involving extensive physiological data. To satisfy the need for a relatively accurate model of intermediate complexity, a novel approach has been devised to predict organic chemical concentrations in agricultural plants as a function of soil and air concentrations, without the need for extensive plant physiological data. The plant is treated as three compartments, namely, leaves, roots and stems (including fruit and seeds). Data readily available from the literature, including chemical properties, volume, density and composition of each compartment; metabolic and growth rate of plant; and readily obtainable environmental conditions at the site are required as input. Results calculated from the model are compared with observed and experimentally-determined concentrations. It is suggested that the model, which includes a physiological database for agricultural plants, gives acceptably accurate predictions of chemical partitioning between plants, air and soil.
Three new models for evaluation of standard involute spur gear mesh stiffness
NASA Astrophysics Data System (ADS)
Liang, Xihui; Zhang, Hongsheng; Zuo, Ming J.; Qin, Yong
2018-02-01
Time-varying mesh stiffness is one of the main internal excitation sources of gear dynamics. Accurate evaluation of gear mesh stiffness is crucial for gear dynamic analysis. This study is devoted to developing new models for spur gear mesh stiffness evaluation. Three models are proposed. The proposed model 1 can give very accurate mesh stiffness results but the gear bore surface must be assumed to be rigid. Enlightened by the proposed model 1, our research discovers that the angular deflection pattern of the gear bore surface of a pair of meshing gears under a constant torque basically follows a cosine curve. Based on this finding, two other models are proposed. The proposed model 2 evaluates gear mesh stiffness by using angular deflections at different circumferential angles of an end surface circle of the gear bore. The proposed model 3 requires using only the angular deflection at an arbitrary circumferential angle of an end surface circle of the gear bore, but this model can only be used for a gear with the same tooth profile among all teeth. The proposed models are accurate in gear mesh stiffness evaluation and easy to use. Finite element analysis is used to validate the accuracy of the proposed models.
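The cosine-deflection finding that underpins models 2 and 3 can be illustrated with a simple fit: sample the angular deflection of the bore circle at several circumferential angles and regress it onto a + b·cos(φ − φ0). The deflection samples below are synthetic stand-ins for finite element output.

```python
# Sketch of the finding quoted above: fit a cosine to angular deflections
# of the gear bore surface at different circumferential angles. Deflection
# samples are synthetic, standing in for finite element results.
import numpy as np
from scipy.optimize import curve_fit

def cosine(phi, a, b, phi0):
    return a + b * np.cos(phi - phi0)

phi = np.linspace(0, 2*np.pi, 36, endpoint=False)
rng = np.random.default_rng(5)
defl = 2e-5 + 5e-6*np.cos(phi - 0.8) + 2e-7*rng.standard_normal(phi.size)  # [rad]

popt, _ = curve_fit(cosine, phi, defl, p0=(1e-5, 1e-6, 0.0))
print(f"mean = {popt[0]:.2e} rad, amplitude = {popt[1]:.2e} rad, phase = {popt[2]:.2f} rad")
```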
Automated MRI segmentation for individualized modeling of current flow in the human head.
Huang, Yu; Dmochowski, Jacek P; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C
2013-12-01
High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
Validation of Solar Sail Simulations for the NASA Solar Sail Demonstration Project
NASA Technical Reports Server (NTRS)
Braafladt, Alexander C.; Artusio-Glimpse, Alexandra B.; Heaton, Andrew F.
2014-01-01
NASA's Solar Sail Demonstration project partner L'Garde is currently assembling a flight-like sail assembly for a series of ground demonstration tests beginning in 2015. For future missions of this sail that might validate solar sail technology, it is necessary to have an accurate sail thrust model. One of the primary requirements of a proposed potential technology validation mission will be to demonstrate solar sail thrust over a set time period, which for this project is nominally 30 days. This requirement would be met by comparing a L'Garde-developed trajectory simulation to the as-flown trajectory. The current sail simulation baseline for L'Garde is a Systems Tool Kit (STK) plug-in that includes a custom-designed model of the L'Garde sail. The STK simulation has been verified for a flat plate model by comparing it to the NASA-developed Solar Sail Spaceflight Simulation Software (S5). S5 matched STK with a high degree of accuracy and the results of the validation indicate that the L'Garde STK model is accurate enough to meet the potential future mission requirements. Additionally, since the L'Garde sail deviates considerably from a flat plate, a force model for a non-flat sail provided by L'Garde sail was also tested and compared to a flat plate model in S5. This result will be used in the future as a basis of comparison to the non-flat sail model being developed for STK.
Accurate protein structure modeling using sparse NMR data and homologous structure information.
Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David
2012-06-19
While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining ¹HN, ¹³C, and ¹⁵N backbone and ¹³Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.
Fechner's law: where does the log transform come from?
Laming, Donald
2010-01-01
This paper looks at Fechner's law in the light of 150 years of subsequent study. In combination with the normal, equal variance, signal-detection model, Fechner's law provides a numerically accurate account of discriminations between two separate stimuli, essentially because the logarithmic transform delivers a model for Weber's law. But it cannot be taken to be a measure of internal sensation because an equally accurate account is provided by a χ² model in which stimuli are scaled by their physical magnitude. The logarithmic transform of Fechner's law arises because, for the number of degrees of freedom typically required in the χ² model, the logarithm of a χ² variable is, to a good approximation, normal. This argument is set within a general theory of sensory discrimination.
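The key approximation, that the logarithm of a χ² variable is close to normal for the relevant degrees of freedom, is easy to check numerically; the sketch below compares sampled log-χ² values against a moment-matched normal.

```python
# Numerical check that log of a chi-square variable is close to normal for
# moderate degrees of freedom, the step underpinning the argument above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for k in (2, 10, 50):
    z = np.log(rng.chisquare(k, 200_000))
    # compare against a normal with the same mean and standard deviation
    ks = stats.kstest(z, "norm", args=(z.mean(), z.std()))
    print(f"df={k:3d}: skew={stats.skew(z):+.3f}, KS distance={ks.statistic:.4f}")
```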
Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min
2016-12-20
Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpieces, the TIF has a nonlinear removal behavior, which will cause a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on the finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Elmiligui, A.; Aftosmis, M.; Morgenstern, J.; Durston, D.; Thomas, S.
2012-01-01
An innovative pressure rail concept for wind tunnel sonic boom testing of modern aircraft configurations with very low overpressures was designed with an adjoint-based solution-adapted Cartesian grid method. The computational method requires accurate free-air calculations of a test article as well as solutions modeling the influence of rail and tunnel walls. Specialized grids for accurate Euler and Navier-Stokes sonic boom computations were used on several test articles including complete aircraft models with flow-through nacelles. The computed pressure signatures are compared with recent results from the NASA 9- x 7-foot Supersonic Wind Tunnel using the advanced rail design.
Bayesian approach to analyzing holograms of colloidal particles.
Dimiduk, Thomas G; Manoharan, Vinothan N
2016-10-17
We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
Global variability in leaf respiration in relation to climate and leaf traits
NASA Astrophysics Data System (ADS)
Atkin, Owen K.
2015-04-01
Leaf respiration plays a vital role in regulating ecosystem functioning and the Earth's climate. Because of this, it is imperative that Earth-system, climate and ecosystem-level models be able to accurately predict variations in rates of leaf respiration. In the field of photosynthesis research, the FvCB (Farquhar-von Caemmerer-Berry) model has enabled modellers to accurately predict variations in photosynthesis through time and space. By contrast, we lack an equivalent biochemical model to predict variations in leaf respiration. Consequently, we need to rely on phenomenological approaches to model variations in respiration across the Earth's surface. Such approaches require that we develop a thorough understanding of how rates of respiration vary among species and whether global environmental gradients play a role in determining variations in leaf respiration. Dealing with these issues requires that data sets be assembled on rates of leaf respiration in biomes across the Earth's surface. In this talk, I will use a newly-assembled global database on leaf respiration and associated traits (including photosynthesis) to highlight variation in leaf respiration (and the balance between respiration and photosynthesis) across global gradients in growth temperature and aridity.
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining effort required (e.g. number of sites versus occasions).
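The single-species occupancy model at the core of this approach has a compact likelihood: a site with at least one detection contributes ψ·p^y(1−p)^(J−y), while an all-zero history contributes ψ(1−p)^J + (1−ψ). The sketch below codes that likelihood and shows why low-detectability benthic species need more occasions; the values of ψ, p, and the detection histories are assumed, not the study's estimates.

```python
# Minimal single-species occupancy likelihood: occupancy psi, detection p,
# J survey occasions per site. With low p, an all-zero history is ambiguous
# between "absent" and "present but missed", which is why rare benthic
# species need more occasions (or a second method). Values are assumed.
import numpy as np

def occupancy_loglik(psi, p, detections, J):
    """detections: per-site detection counts out of J occasions."""
    d = np.asarray(detections)
    seen = d > 0
    ll_seen = np.log(psi) + d[seen]*np.log(p) + (J - d[seen])*np.log(1 - p)
    ll_zero = np.log(psi * (1 - p)**J + (1 - psi))  # absent, or present but missed
    return ll_seen.sum() + np.count_nonzero(~seen) * ll_zero

print(occupancy_loglik(psi=0.6, p=0.2, detections=[0, 1, 0, 2, 0], J=4))

# Chance a present species is detected at least once in J occasions:
for p in (0.6, 0.2):
    for J in (3, 8):
        print(f"p={p}, J={J}: P(detected at least once | present) = {1-(1-p)**J:.3f}")
```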
Assessments of aggregate exposure to pesticides and other surface contamination in residential environments are often driven by assumptions about dermal contacts. Accurately predicting cumulative doses from realistic skin contact scenarios requires characterization of exposure sc...
NASA Astrophysics Data System (ADS)
Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan
2018-06-01
This paper explores a highly accurate identification modeling approach for ship maneuvering motion using full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The advantages of the proposed method are as follows: first, it avoids the unmodeled dynamics and multicollinearity inherent in conventional parametric models; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, MIGI is not sensitive to the initial parameter values and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.
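A hedged sketch of the LWL core is below: Gaussian-kernel weights under a diagonal distance metric feed a weighted local linear fit at each query point. The MIGI metric-tuning step itself is not reproduced; the metric is fixed and the data are synthetic stand-ins for ship-trial records.

```python
# Sketch of locally weighted learning (LWL) prediction with a Gaussian
# kernel and a (fixed) diagonal distance metric; the paper tunes the metric
# with MIGI, which is not reproduced here. Data are synthetic.
import numpy as np

def lwl_predict(Xq, X, y, metric, bandwidth=0.5):
    """Locally weighted linear regression at query points Xq."""
    A = np.hstack([X, np.ones((len(X), 1))])        # affine local model
    preds = []
    for xq in Xq:
        d2 = (((X - xq) * metric) ** 2).sum(axis=1)  # metric-weighted sq. distance
        w = np.exp(-d2 / (2.0 * bandwidth**2))       # Gaussian kernel weights
        sw = np.sqrt(w)                              # weighted least squares
        beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        preds.append(np.append(xq, 1.0) @ beta)
    return np.array(preds)

rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

Xq = np.array([[0.5, -1.0], [1.5, 0.2]])
print(lwl_predict(Xq, X, y, metric=np.array([1.0, 1.0])))
print(np.sin(Xq[:, 0]) + 0.5 * Xq[:, 1] ** 2)   # ground truth for comparison
```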
Accurate Modeling Method for Cu Interconnect
NASA Astrophysics Data System (ADS)
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
New Equation of State Models for Hydrodynamic Applications
NASA Astrophysics Data System (ADS)
Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.
1997-07-01
Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.
Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model
NASA Astrophysics Data System (ADS)
Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.
2007-05-01
Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem and predict the strip crown, a new customized semi-analytical modeling technique that couples the Finite Element Method (FEM) with classical solid mechanics was developed to model the deflection of the rolls and strip while under load. The technique employed offers several important advantages over traditional methods to calculate strip crown, including continuity of elastic foundations, non-iterative solution when using predetermined foundation moduli, continuous third-order displacement fields, simple stress-field determination, and a comparatively faster solution time.
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
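The TEAD loop described above alternates surrogate fitting with a hybrid-score search over candidates. The sketch below implements the general idea, an exploration term (distance to existing samples) plus an exploitation term (disagreement between the surrogate and a local first-order Taylor expansion), with an RBF surrogate; the weighting, candidate pool, and stopping rule are simplified assumptions rather than the published scheme.

```python
# Hedged sketch of Taylor expansion-based adaptive design: pick the next
# training sample by a hybrid score combining (a) distance from existing
# samples and (b) surrogate-vs-Taylor disagreement. Details are simplified
# assumptions, not the exact TEAD algorithm.
import numpy as np
from scipy.interpolate import RBFInterpolator

f = lambda x: np.sin(3*x[:, 0]) * np.exp(-x[:, 1]**2)   # expensive-model stand-in

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (8, 2))                # initial design
y = f(X)
cand = rng.uniform(-1, 1, (500, 2))           # candidate pool

for it in range(20):
    surr = RBFInterpolator(X, y)
    D = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
    d = D.min(axis=1)                         # (a) exploration term
    j = D.argmin(axis=1)                      # nearest existing sample
    eps = 1e-4                                # finite-difference surrogate gradient
    grad = np.stack([(surr(X + eps*np.eye(2)[k]) - surr(X)) / eps
                     for k in range(2)], axis=1)
    taylor = y[j] + np.einsum('ij,ij->i', grad[j], cand - X[j])
    res = np.abs(surr(cand) - taylor)         # (b) exploitation term
    score = d / d.max() + res / (res.max() + 1e-12)
    k = np.argmax(score)
    X = np.vstack([X, cand[k]])
    y = np.append(y, f(cand[k:k+1]))

test = rng.uniform(-1, 1, (1000, 2))
err = RBFInterpolator(X, y)(test) - f(test)
print("RMSE after adaptive design:", np.sqrt(np.mean(err**2)))
```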
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations. First, a copy of a flint artefact is reproduced. The copy is then sliced. A picture is taken of each slice. Eventually, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step to obtain an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g. shapes, curvature, sharp edges, etc.), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time, and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enable accurate analysis.
NASA Astrophysics Data System (ADS)
Sellers, Michael; Lisal, Martin; Brennan, John
2015-06-01
Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one such property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of the difficulty or inaccuracy of simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to "over-driving," and dual-phase simulations require many long-time runs to seek out phase coexistence, frequently yielding an inexact location of the transition point. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX, and compare its value to experiment and to its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations that total 30-50 ns. Approved for public release. Distribution is unlimited.
NASA Astrophysics Data System (ADS)
Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel
2016-11-01
This study develops an analytical model for predicting the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D equations relating to isentropic or normal-shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, so both the actual and predicted locations can be calculated. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytical model is less accurate than the pressure-threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement, making it potentially useful for unstart control applications.
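To make the quasi-1D step concrete, the normal-shock static pressure ratio p2/p1 = 1 + 2γ/(γ+1)(M1² − 1) can be inverted for the pre-shock Mach number. The sketch below shows only that standard relation, not the full analytical model (the mapping from Mach number to axial shock location through the effective converging-duct geometry is omitted):

```python
import math

def mach_from_pressure_ratio(p_ratio, gamma=1.4):
    """Pre-shock Mach number M1 from the normal-shock static pressure
    ratio p2/p1, using the standard quasi-1D relation
    p2/p1 = 1 + 2*gamma/(gamma + 1) * (M1**2 - 1)."""
    return math.sqrt((p_ratio - 1.0) * (gamma + 1.0) / (2.0 * gamma) + 1.0)

# e.g. a measured pressure doubling across the leading shock:
print(mach_from_pressure_ratio(2.0))  # ~1.36
```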
ERIC Educational Resources Information Center
Almond, Russell G.
2007-01-01
Over the course of instruction, instructors generally collect a great deal of information about each student. Integrating that information intelligently requires models for how a student's proficiency changes over time. Armed with such models, instructors can "filter" the data--more accurately estimate the student's current proficiency…
Executive Summary Environmentally responsible development of oil and gas assets requires well-developed emissions inventories and measurement techniques to verify emissions and the effectiveness of control strategies. To accurately model the oil and gas sector impacts on air qual...
A modified force-restore approach to modeling snow-surface heat fluxes
Charles H. Luce; David G. Tarboton
2001-01-01
Accurate modeling of the energy balance of a snowpack requires good estimates of the snow surface temperature. The snow surface temperature allows a balance between atmospheric heat fluxes and the conductive flux into the snowpack. While the dependency of atmospheric fluxes on surface temperature is reasonably well understood and parameterized, conduction of heat from...
Development of a station based climate database for SWAT and APEX assessments in the U.S.
USDA-ARS?s Scientific Manuscript database
Water quality simulation models such as the Soil and Water Assessment Tool (SWAT) and Agricultural Policy EXtender (APEX) are widely used in the U.S. These models require large amounts of spatial and tabular data to simulate the natural world. Accurate and seamless daily climatic data are critical...
An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries
Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David
2010-01-01
Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demands. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford large savings in both the time for mesh generation and the time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect the accuracy of the stress fields predicted with the two-stage approach. PMID:19756798
TRIPPy: Python-based Trailed Source Photometry
NASA Astrophysics Data System (ADS)
Fraser, Wesley C.; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michael E.; Pike, Rosemary E.; Kavelaars, JJ; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey
2016-05-01
TRIPPy (TRailed Image Photometry in Python) uses a pill-shaped aperture, a rectangle described by three parameters (trail length, angle, and radius) to improve photometry of moving sources over that done with circular apertures. It can generate accurate model and trailed point-spread functions from stationary background sources in sidereally tracked images. Appropriate aperture correction provides accurate, unbiased flux measurement. TRIPPy requires numpy, scipy, matplotlib, Astropy (ascl:1304.002), and stsci.numdisplay; emcee (ascl:1303.002) and SExtractor (ascl:1010.064) are optional.
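As a geometric illustration of the pill-shaped aperture described above — a rectangle set by the trail length and angle, capped by semicircles of the given radius — the following sketch builds such a boolean pixel mask with NumPy. The function name and signature are invented for illustration; this is not TRIPPy's actual API:

```python
import numpy as np

def pill_mask(shape, center, length, angle_deg, radius):
    """Boolean mask of a pill aperture: all pixels within `radius` of a
    segment of the given `length`, centred at `center` (x, y) and rotated
    by `angle_deg` -- i.e. a rectangle with semicircular end caps."""
    y, x = np.indices(shape, dtype=float)
    x -= center[0]
    y -= center[1]
    t = np.deg2rad(angle_deg)
    u = x * np.cos(t) + y * np.sin(t)    # along-trail coordinate
    v = -x * np.sin(t) + y * np.cos(t)   # across-trail coordinate
    du = np.clip(np.abs(u) - length / 2.0, 0.0, None)
    return du**2 + v**2 <= radius**2     # distance-to-segment test

# summing an image over the mask gives the (uncorrected) aperture flux:
# flux = image[pill_mask(image.shape, (xc, yc), 12.0, 30.0, 4.0)].sum()
```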
Gao, Yaozong; Zhan, Yiqiang
2015-01-01
Image-guided radiotherapy (IGRT) requires fast and accurate 3-D localization of the prostate in treatment-guidance images, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized in two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not make any parametric model assumption, hence allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT. PMID:24495983
NASA Astrophysics Data System (ADS)
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high-volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high-resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of the system, as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
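One plausible form of such a hybrid up-sampling — a sketch of the general idea, not necessarily the authors' production algorithm — is to evaluate the global fingerprint model on a dense grid and add back interpolated residuals from the measured sites, so that local overlay errors are not smoothed away:

```python
import numpy as np
from scipy.interpolate import griddata

def hybrid_fingerprint(meas_xy, meas_overlay, model_fn, dense_xy):
    """Dense overlay fingerprint = up-sampled global model + local residuals.

    meas_xy, meas_overlay: sparse measurement sites and overlay errors.
    model_fn: fitted global fingerprint model, callable on (m, 2) points.
    dense_xy: dense grid points at which the fingerprint is wanted.
    """
    global_part = model_fn(dense_xy)              # computationally up-sampled model
    residuals = meas_overlay - model_fn(meas_xy)  # what the global model misses
    local_part = griddata(meas_xy, residuals, dense_xy,
                          method="linear", fill_value=0.0)  # decay to 0 far away
    return global_part + local_part
```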
Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldhaber, Steve; Holland, Marika
The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.
Ground robotic measurement of aeolian processes
USDA-ARS?s Scientific Manuscript database
Models of aeolian processes rely on accurate measurements of the rates of sediment transport by wind, and careful evaluation of the environmental controls of these processes. Existing field approaches typically require intensive, event-based experiments involving dense arrays of instruments. These d...
Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping
NASA Astrophysics Data System (ADS)
Rehak, M.; Skaloud, J.
2015-08-01
In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.
Modeling of profilometry with laser focus sensors
NASA Astrophysics Data System (ADS)
Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner
2011-05-01
Metrology is of paramount importance in submicron patterning. In particular, line width and overlay have to be measured very accurately. Appropriate metrology techniques include scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate, and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit, which determines the edge-location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge-location accuracy.
Human eyeball model reconstruction and quantitative analysis.
Xing, Qi; Wei, Qi
2014-01-01
Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-Spline surface fitting, and subdivision surface fitting, none of which required manual interaction. From the high-resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.
Probing-models for interdigitated electrode systems with ferroelectric thin films
NASA Astrophysics Data System (ADS)
Nguyen, Cuong H.; Nigon, Robin; Raeder, Trygve M.; Hanke, Ulrik; Halvorsen, Einar; Muralt, Paul
2018-05-01
In this paper, a new method to characterize ferroelectric thin films with interdigitated electrodes is presented. To obtain accurate properties, all parasitic contributions should be subtracted from the measurement results and accurate models for the ferroelectric film are required. Hence, we introduce a phenomenological model for the parasitic capacitance. Moreover, two common analytical models based on conformal transformations are compared and used to calculate the capacitance and the electric field. With a thin film approximation, new simplified electric field and capacitance formulas are derived. By using these formulas, more consistent CV, PV and stress-field loops for samples with different geometries are obtained. In addition, an inhomogeneous distribution of the permittivity due to the non-uniform electric field is modelled by finite element simulation in an iterative way. We observed that this inhomogeneous distribution can be treated as a homogeneous one with an effective value of the permittivity.
A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever
Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J.; Scott, Dana P.; Feldmann, Heinz; Ebihara, Hideki
2016-01-01
Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF. PMID:27976688
IBS for non-gaussian distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedotov, A.; Sidorin, A.O.; Smirnov, A.V.
In many situations the distribution can deviate significantly from Gaussian, which requires accurate treatment of intrabeam scattering (IBS). Our original interest in this problem was motivated by the need to have an accurate description of beam evolution due to IBS while the distribution is strongly affected by the external electron cooling force. A variety of models with various degrees of approximation were developed and implemented in BETACOOL in the past to address this topic. A more complete treatment based on the friction coefficient and full 3-D diffusion tensor was introduced in BETACOOL at the end of 2007 under the name 'local IBS model'. Such a model allowed us to calculate IBS for an arbitrary beam distribution. The numerical benchmarking of this local IBS algorithm and its comparison with other models was reported before. In this paper, after briefly describing the model and its limitations, we present its comparison with available experimental data.
Ernstoff, Alexi S; Fantke, Peter; Huang, Lei; Jolliet, Olivier
2017-11-01
Specialty software and simplified models are often used to estimate migration of potentially toxic chemicals from packaging into food. Current models, however, are not suitable for emerging applications in decision-support tools, e.g. in Life Cycle Assessment and risk-based screening and prioritization, which require rapid computation of accurate estimates for diverse scenarios. To fulfil this need, we develop an accurate and rapid (high-throughput) model that estimates the fraction of organic chemicals migrating from polymeric packaging materials into foods. Several hundred step-wise simulations optimised the model coefficients to cover a range of user-defined scenarios (e.g. temperature). The developed model, operationalised in a spreadsheet for future dissemination, nearly instantaneously estimates chemical migration, and has improved performance over commonly used model simplifications. When using measured diffusion coefficients, the model accurately predicted (R² = 0.9, standard error Se = 0.5) hundreds of empirical data points for various scenarios. Diffusion coefficient modelling, which determines the speed of chemical transfer from package to food, was a major contributor to uncertainty and dramatically decreased model performance (R² = 0.4, Se = 1). In all, this study provides a rapid migration modelling approach to estimate exposure to chemicals in food packaging for emerging screening and prioritization approaches.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is 'What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?' Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
Large-scale 3D geoelectromagnetic modeling using parallel adaptive high-order finite element method
Grayver, Alexander V.; Kolev, Tzanio V.
2015-11-01
Here, we have investigated the use of the adaptive high-order finite-element method (FEM) for geoelectromagnetic modeling. Because high-order FEM is challenging from the numerical and computational points of view, most published finite-element studies in geoelectromagnetics use the lowest order formulation. Solution of the resulting large system of linear equations poses the main practical challenge. We have developed a fully parallel and distributed robust and scalable linear solver based on the optimal block-diagonal and auxiliary space preconditioners. The solver was found to be efficient for high finite element orders, unstructured and nonconforming locally refined meshes, a wide range of frequencies, large conductivity contrasts, and number of degrees of freedom (DoFs). Furthermore, the presented linear solver is in essence algebraic; i.e., it acts on the matrix-vector level and thus requires no information about the discretization, boundary conditions, or physical source used, making it readily efficient for a wide range of electromagnetic modeling problems. To get accurate solutions at reduced computational cost, we have also implemented goal-oriented adaptive mesh refinement. The numerical tests indicated that if highly accurate modeling results were required, the high-order FEM in combination with the goal-oriented local mesh refinement required less computational time and DoFs than the lowest order adaptive FEM.
Sluiter, Amie; Sluiter, Justin; Wolfrum, Ed; ...
2016-05-20
Accurate and precise chemical characterization of biomass feedstocks and process intermediates is a requirement for successful technical and economic evaluation of biofuel conversion technologies. The uncertainty in primary measurements of the fraction insoluble solids (FIS) content of dilute-acid-pretreated corn stover slurry is the major contributor to uncertainty in yield calculations for enzymatic hydrolysis of cellulose to glucose. This uncertainty is propagated through process models and impacts modeled fuel costs. The challenge in measuring FIS is obtaining an accurate measurement of insoluble matter in the pretreated materials, while appropriately accounting for all biomass-derived components. Three methods were tested to improve this measurement. One used physical separation of liquid and solid phases, and two utilized direct determination of dry matter content in two fractions. We offer a comparison of drying methods. Lastly, our results show that utilizing a microwave dryer to directly determine dry matter content is the optimal method for determining FIS, based on the low time requirements and the method optimization done using model slurries.
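For context on the two-fraction idea, a common mass-balance route to FIS from two direct dry-matter measurements (whole slurry and separated liquor) is sketched below. This illustrates the principle and is not necessarily the exact procedure optimized in the paper; it assumes the soluble components are uniformly distributed in the liquor, so total dry matter = FIS + (1 − FIS)·x_L:

```python
def fraction_insoluble_solids(dm_slurry, dm_liquor):
    """FIS from the dry-matter fraction of the whole slurry (x_T) and of
    the liquor (x_L): solving x_T = FIS + (1 - FIS) * x_L gives
    FIS = (x_T - x_L) / (1 - x_L). Mass-balance sketch, illustrative only."""
    return (dm_slurry - dm_liquor) / (1.0 - dm_liquor)

# e.g. 25 wt% dry matter in the whole slurry and 8 wt% in the liquor:
print(fraction_insoluble_solids(0.25, 0.08))  # ~0.185
```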
Validation of SCIAMACHY and TOMS UV Radiances Using Ground and Space Observations
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Bhartia, P. K.; Bojkov, B. R.; Kowalewski, M.; Labow, G.; Ahmad, Z.
2004-01-01
Verification of a stratospheric ozone recovery remains a high priority for environmental research and policy definition. Models predict an ozone recovery at a much lower rate than the measured depletion rate observed to date. Therefore, improved precision of the satellite and ground ozone observing systems is required over the long term to verify the recovery. We show that validation of satellite radiances from space and from the ground can be a very effective means of correcting long-term drifts of backscatter-type satellite measurements and can be used to cross-calibrate all BUV instruments in orbit (TOMS, SBUV/2, GOME, SCIAMACHY, OMI, GOME-2, OMPS). This method bypasses the retrieval algorithms for both the satellite and ground-based measurements that are normally used to validate and correct the satellite data. Radiance comparisons employ forward models and are inherently more accurate than inverse (retrieval) algorithms. This approach, however, requires well-calibrated instruments and an accurate radiative transfer model that accounts for aerosols. TOMS and SCIAMACHY calibrations are checked to demonstrate this method and to demonstrate its applicability for long-term trends.
Atmospheric densities derived from CHAMP/STAR accelerometer observations
NASA Astrophysics Data System (ADS)
Bruinsma, S.; Tamagnan, D.; Biancale, R.
2004-03-01
The satellite CHAMP carries the accelerometer STAR in its payload, and thanks to the GPS and SLR tracking systems, accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations, for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%. The binning as a function of local time revealed that the diurnal and semi-diurnal components are too strong in the DTM models, while all three models represent the latitudinal gradient inaccurately. Using DTM-2000 as a priori, certain model coefficients were re-estimated using the STAR-derived densities, yielding the DTM-STAR test model. The mean and rms of the global density ratios of this preliminary model are 1.00 and 15%, respectively, while the tidal and latitudinal modelling errors become small. This test model is only representative of high solar activity conditions, and the seasonal effect is probably not estimated accurately due to correlation with the solar activity effect. At least one more year of data is required to separate the seasonal effect from the solar activity effect, and data taken under low solar activity conditions must also be assimilated to construct a model representative under all circumstances.
Methods in Computational Cosmology
NASA Astrophysics Data System (ADS)
Vakili, Mohammadjavad
The state of the inhomogeneous universe and its geometry throughout cosmic history can be studied by measuring the clustering of galaxies and the gravitational lensing of distant faint galaxies. Lensing and clustering measurements from large datasets provided by modern galaxy surveys will forever shape our understanding of how the universe expands and how structures grow. Interpretation of these rich datasets requires careful characterization of uncertainties at different stages of data analysis: estimation of the signal, estimation of the signal uncertainties, model predictions, and connecting the model to the signal through probabilistic means. In this thesis, we attempt to address some aspects of these challenges. The first step in cosmological weak lensing analyses is accurate estimation of the distortion of the light profiles of galaxies by large scale structure. These small distortions, known as the cosmic shear signal, are dominated by extra distortions due to telescope optics and atmosphere (in the case of ground-based imaging). This effect is captured by a kernel known as the Point Spread Function (PSF) that needs to be fully estimated and corrected for. We address two challenges ahead of accurate PSF modeling for weak lensing studies. The first challenge is finding the centers of point sources that are used for empirical estimation of the PSF. We show that the approximate methods for centroiding stars in wide surveys are able to optimally saturate the information content that is retrievable from astronomical images in the presence of noise. The first step in weak lensing studies is estimating the shear signal by accurately measuring the shapes of galaxies. Galaxy shape measurement involves modeling the light profile of galaxies convolved with the light profile of the PSF. Detectors of many space-based telescopes such as the Hubble Space Telescope (HST) sample the PSF with low resolution. Reliable weak lensing analysis of galaxies observed by the HST camera requires knowledge of the PSF at a resolution higher than the pixel resolution of HST. This PSF is called the super-resolution PSF. In particular, we present a forward model of the point sources imaged through filters of the HST WFC3 IR channel. We show that this forward model can accurately estimate the super-resolution PSF. We also introduce a noise model that permits us to robustly analyze the HST WFC3 IR observations of crowded fields. We then try to address one of the theoretical uncertainties in the modeling of galaxy clustering on small scales. Study of small scale clustering requires assuming a halo model. Clustering of halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies with halo occupation distribution (HOD) assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, assembly bias could cause the modeling of galaxy clustering to face systematic effects if the expected number of galaxies in halos is correlated with other halo properties. Using high resolution N-body simulations and the clustering measurements of the Sloan Digital Sky Survey (SDSS) DR7 main galaxy sample, we show that the modeling of galaxy clustering can slightly improve if we allow the HOD model to depend on halo properties beyond mass.
One of the key ingredients in precise parameter inference using galaxy clustering is accurate estimation of the error covariance matrix of the clustering measurements. This requires the generation of many independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast and accurate method based on low-resolution N-body simulations and an empirical bias model for generating mock catalogs. We use fast particle-mesh gravity solvers for the generation of the dark matter density field, and we use Markov Chain Monte Carlo (MCMC) to estimate the bias model that connects dark matter to galaxies. We show that this approach enables the fast generation of mock catalogs that recover clustering at percent-level accuracy down to quasi-nonlinear scales. Cosmological datasets are interpreted by specifying likelihood functions that are often assumed to be multivariate Gaussian. Likelihood-free approaches such as Approximate Bayesian Computation (ABC) can bypass this assumption by introducing a generative forward model of the data and a distance metric for quantifying the closeness of the data and the model. We present the first application of ABC in large scale structure for constraining the connections between galaxies and dark matter halos. We present an implementation of ABC equipped with Population Monte Carlo and a generative forward model of the data that incorporates sample variance and systematic uncertainties.
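To make the ABC idea concrete, here is a minimal rejection-sampling sketch of the general scheme; the thesis uses the more efficient Population Monte Carlo variant, and every argument below is a user-supplied placeholder:

```python
import numpy as np

def abc_rejection(observed, prior_sample, simulate, distance, eps, n_draws=100_000):
    """Approximate Bayesian Computation by rejection (illustrative sketch).

    Draw parameters from the prior, forward-simulate mock data, and keep
    draws whose simulated data fall within `eps` of the observation under
    the chosen distance metric.
    """
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)  # samples from the approximate posterior
```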
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James; Kuruganti, Teja
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
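For reference, the conventional scheme that such pathloss calculations build on is the standard leapfrog update of the scalar wave equation; the sketch below shows one 2-D time step (periodic boundaries via np.roll, for brevity) and does not reproduce the paper's coarse-grid initial-condition treatment:

```python
import numpy as np

def fdtd_step(u_prev, u_curr, c, dt, dx):
    """One leapfrog update of the 2-D scalar wave equation using the
    5-point Laplacian; stable for c*dt/dx <= 1/sqrt(2)."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr)
    return 2.0 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
```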
Application of CFD to aerothermal heating problems
NASA Technical Reports Server (NTRS)
Macaraeg, M. G.
1986-01-01
Numerical solutions of the compressible Navier-Stokes equations by an alternating direction implicit scheme, applied to two experimental investigations are presented. The first is cooling by injection of a gas jet through the nose of an ogive-cone, and the second is the aerothermal environment in the gap formed by the wing and elevon section of a test model of the space shuttle. The simulations demonstrate that accurate pressure calculations are easily obtained on a coarse grid, while convergence is obtained after the residual reduces by four orders of magnitude. Accurate heating rates, however, require a fine grid solution, with convergence requiring at least a reduction of six orders of magnitude in the residual. The effect of artificial dissipation on numerical results is also assessed.
Broskey, Nicholas T; Klempel, Monica C; Gilmore, L Anne; Sutton, Elizabeth F; Altazan, Abby D; Burton, Jeffrey H; Ravussin, Eric; Redman, Leanne M
2017-06-01
Context: Weight loss is prescribed to offset the deleterious consequences of polycystic ovary syndrome (PCOS), but a successful intervention requires an accurate assessment of energy requirements. Objective: Describe energy requirements in women with PCOS and evaluate common prediction equations compared with doubly labeled water (DLW). Design: Cross-sectional study. Setting: Academic research center. Participants: Twenty-eight weight-stable women with PCOS completed a 14-day DLW study along with measures of body composition and resting metabolic rate and assessment of physical activity by accelerometry. Main outcome measure: Total daily energy expenditure (TDEE) determined by DLW. Results: TDEE was 2661 ± 373 kcal/d. TDEE estimated from four commonly used equations was within 4% to 6% of the TDEE measured by DLW. Hyperinsulinemia (fasting insulin and homeostatic model assessment of insulin resistance) was associated with TDEE estimates from all prediction equations (both r = 0.45; P = 0.02) but was not a significant covariate in a model that predicts TDEE. Similarly, hyperandrogenemia (total testosterone, free androgen index, and dehydroepiandrosterone sulfate) was not associated with TDEE. In weight-stable women with PCOS, the following equation derived from DLW can be used to determine energy requirements: TDEE (kcal/d) = 438 - [1.6 * Fat Mass (kg)] + [35.1 * Fat-Free Mass (kg)] + [16.2 * Age (y)]; R² = 0.41; P = 0.005. Conclusions: Established equations using weight, height, and age performed well for predicting energy requirements in weight-stable women with PCOS, but more precise estimates require an accurate assessment of physical activity. Our equation derived from DLW data, which incorporates habitual physical activity, can also be used in women with PCOS; however, additional studies are needed for model validation.
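The reported DLW-derived equation translates directly into code; a small sketch using the published coefficients:

```python
def tdee_pcos(fat_mass_kg, fat_free_mass_kg, age_yr):
    """TDEE (kcal/d) for weight-stable women with PCOS, from the
    DLW-derived equation above (R^2 = 0.41, P = 0.005)."""
    return 438 - 1.6 * fat_mass_kg + 35.1 * fat_free_mass_kg + 16.2 * age_yr

# e.g. 30 kg fat mass, 50 kg fat-free mass, age 28 y:
print(tdee_pcos(30, 50, 28))  # ~2599 kcal/d
```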
Testing waterborne chemical carcinogens in fish models requires accurate, reliable, and reproducible exposures. Because carcinogenesis is a chronic toxicological process and is often associated with prolonged latency periods, systems must accommodate lengthy in-life test periods ...
Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang
2018-01-01
A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reducing risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means, and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, the K-means and subtractive clustering methods were employed to determine the hyperparameters required by the RBF neural network model. Comparison of the predicted results with those of traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies.
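To illustrate one ingredient of the hybrid model — an RBF network whose centers come from K-means — here is a simplified sketch; the subtractive-clustering stage that selects the number of centers and kernel width is omitted, and the hyperparameters shown are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centers=10, sigma=1.0):
    """Fit a Gaussian RBF network: K-means picks the centers, then the
    output weights are solved by linear least squares."""
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2.0 * sigma**2))     # design matrix of RBF activations
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centers, weights

def predict_rbf(X, centers, weights, sigma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2)) @ weights
```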
Model Performance of Water-Current Meters
Fulford, J.M.
2002-01-01
The measurement of discharge in natural streams requires hydrographers to use accurate water-current meters that have consistent performance among meters of the same model. This paper presents the results of an investigation into the performance of four models of current meters: Price type-AA, Price pygmy, Marsh McBirney 2000, and Swoffer 2100. Tests for consistency and accuracy for six meters of each model are summarized. Variation of meter performance within a model is used as an indicator of consistency, and percent velocity error, computed from a measured reference velocity, is used as an indicator of meter accuracy. Velocities measured by each meter are also compared to the manufacturer's published or advertised accuracy limits. For the meters tested, the Price models were found to be more accurate and consistent over the range of test velocities compared to the other models. The Marsh McBirney model usually measured within its accuracy specification. The Swoffer meters did not meet the stringent Swoffer accuracy limits for all the velocities tested.
Propellant Chemistry for CFD Applications
NASA Technical Reports Server (NTRS)
Farmer, R. C.; Anderson, P. G.; Cheng, Gary C.
1996-01-01
Current concepts for reusable launch vehicle design have created renewed interest in the use of RP-1 fuels for high pressure and tri-propellant propulsion systems. Such designs require the use of an analytical technology that accurately accounts for the effects of real fluid properties, combustion of large hydrocarbon fuel modules, and the possibility of soot formation. These effects are inadequately treated in current computational fluid dynamic (CFD) codes used for propulsion system analyses. The objective of this investigation is to provide an accurate analytical description of hydrocarbon combustion thermodynamics and kinetics that is sufficiently computationally efficient to be a practical design tool when used with CFD codes such as the FDNS code. A rigorous description of real fluid properties for RP-1 and its combustion products will be derived from the literature and from experiments conducted in this investigation. Upon the establishment of such a description, the fluid description will be simplified by using the minimum of empiricism necessary to maintain accurate combustion analyses and including such empirical models into an appropriate CFD code. An additional benefit of this approach is that the real fluid properties analysis simplifies the introduction of the effects of droplet sprays into the combustion model. Typical species compositions of RP-1 have been identified, surrogate fuels have been established for analyses, and combustion and sooting reaction kinetics models have been developed. Methods for predicting the necessary real fluid properties have been developed and essential experiments have been designed. Verification studies are in progress, and preliminary results from these studies will be presented. The approach has been determined to be feasible, and upon its completion the required methodology for accurate performance and heat transfer CFD analyses for high pressure, tri-propellant propulsion systems will be available.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-17
... performance requirements. Finite element modeling is a mature science and appropriately accurate for modeling... interpretation letter to Jason Backs (CPS Trailers, May 28, 1998). \\3\\ Finite element analysis can be used as a... FMVSS No. 224 that the guard-like structure can serve as a rear impact guard.\\2\\ Sidump'r used a finite...
Aaron B. Berdanier; Chelcy F. Miniat; James S. Clark
2016-01-01
Accurately scaling sap flux observations to tree or stand levels requires accounting for variation in sap flux between wood types and by depth into the tree. However, existing models for radial variation in axial sap flux are rarely used because they are difficult to implement, there is uncertainty about their predictive ability and calibration measurements...
ERIC Educational Resources Information Center
Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.
2011-01-01
Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…
12 CFR 717.23 - Contents of opt-out notice; consolidated and equivalent notices.
Code of Federal Regulations, 2010 CFR
2010-01-01
... right to opt out. (ii) The opt-out notice must explain how an opt-out direction by a joint consumer will... rights to opt out in a single response. (iii) It is impermissible to require all joint consumers to opt... accurately discloses the consumer's opt-out rights. (4) Model notices. Model notices are provided in appendix...
12 CFR 571.23 - Contents of opt-out notice; consolidated and equivalent notices.
Code of Federal Regulations, 2010 CFR
2010-01-01
... right to opt out. (ii) The opt-out notice must explain how an opt-out direction by a joint consumer will... rights to opt out in a single response. (iii) It is impermissible to require all joint consumers to opt... accurately discloses the consumer's opt-out rights. (4) Model notices. Model notices are provided in appendix...
Simulation of Electric Propulsion Thrusters (Preprint)
2011-02-07
activity concerns the plumes produced by electric thrusters. Detailed information on the plumes is required for safe integration of the thruster...ground-based laboratory facilities. Device modelling also plays an important role in plume simulations by providing accurate boundary conditions at...methods used to model the flow of gas and plasma through electric propulsion devices. Discussion of the numerical analysis of other aspects of
Wind Farm LES Simulations Using an Overset Methodology
NASA Astrophysics Data System (ADS)
Ananthan, Shreyas; Yellapantula, Shashank
2017-11-01
Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10⁻⁶) to adequately resolve the boundary layer on the blade surface. The overset mesh methodology offers an attractive option to address this disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of varying resolutions, as necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset meshes for localized mesh refinement is used to analyze wind farm wakes and performance, and is compared with local mesh refinement using non-conformal (hanging-node) unstructured meshes. Turbine structures will be modeled using both actuator-line approaches and fully resolved structures to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.
An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*
Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.
2014-01-01
Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144
Characterization of structural connections using free and forced response test data
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Huckelbridge, Arthur A.
1989-01-01
The accurate prediction of system dynamic response often has been limited by deficiencies in existing capabilities to adequately characterize connections. Connections between structural components are often mechanically complex and difficult to model accurately with analytical methods. Improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed, then verified using a system having both a linear and a nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can be better understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree-of-freedom systems and does not require that the test data be measured directly at the connection locations.
NASA Astrophysics Data System (ADS)
Henstridge, Martin C.; Wang, Yijun; Limon-Petersen, Juan G.; Laborda, Eduardo; Compton, Richard G.
2011-11-01
We present a comparative experimental evaluation of the Butler-Volmer and Marcus-Hush models using cyclic voltammetry at a microelectrode. Numerical simulations are used to fit experimental voltammetry of the one-electron reductions of europium(III) and 2-methyl-2-nitropropane, in water and acetonitrile, respectively, at a mercury microhemisphere electrode. For Eu(III), very accurate fits to experiment were obtained over a wide range of scan rates using Butler-Volmer kinetics, whereas the Marcus-Hush model was less accurate. The reduction of 2-methyl-2-nitropropane was well simulated by both models; however, Marcus-Hush required a reorganisation energy lower than expected.
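For reference, the standard Butler-Volmer rate expressions fitted in such simulations are sketched below; k0, alpha, and the temperature are illustrative placeholders, and this generic code is not the authors' simulation:

```python
import numpy as np

F, R = 96485.332, 8.314462  # Faraday constant (C/mol), gas constant (J/(mol K))

def butler_volmer_k(eta, k0, alpha=0.5, T=298.15):
    """Butler-Volmer heterogeneous rate constants at overpotential eta (V)
    for O + e- <-> R: k_red = k0*exp(-alpha*f*eta) and
    k_ox = k0*exp((1 - alpha)*f*eta), with f = F/(R*T)."""
    f = F / (R * T)
    return k0 * np.exp(-alpha * f * eta), k0 * np.exp((1.0 - alpha) * f * eta)
```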
Modeling Piezoelectric Stack Actuators for Control of Micromanipulation
NASA Technical Reports Server (NTRS)
Goldfarb, Michael; Celanovic, Nikola
1997-01-01
A nonlinear lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and, in particular, for microrobotic applications requiring accurate position and/or force control. In formulating this model, the authors propose a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data. Validation is followed by a discussion of model implications for purposes of actuator control.
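As an illustration of the kind of rate-independent hysteresis the generalized Maxwell resistive capacitor represents, here is a minimal mechanical-analog (Maxwell-slip) sketch; the element count and parameters are invented for illustration, and this is not the authors' electrical formulation:

```python
import math

def maxwell_slip_force(x, z, k, f):
    """Total force from a stack of elasto-slip elements at displacement x.

    Each element i has stiffness k[i] and breakaway force f[i]; z[i] is its
    internal slider state, updated in place. The stack's force-displacement
    response is rate-independent hysteresis.
    """
    total = 0.0
    for i in range(len(k)):
        e = x - z[i]
        if k[i] * abs(e) > f[i]:                      # element slips:
            z[i] = x - math.copysign(f[i] / k[i], e)  # force saturates at +/- f[i]
            e = x - z[i]
        total += k[i] * e
    return total

# sweeping x through a cycle traces a hysteresis loop:
z, k, f = [0.0] * 3, [1.0, 2.0, 4.0], [0.5, 0.8, 1.2]
loop = [maxwell_slip_force(x, z, k, f)
        for x in (0.2, 0.6, 1.0, 0.6, 0.2, -0.2, -0.6, -1.0)]
```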
Near Identifiability of Dynamical Systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
Accurate Modelling of Surface Currents and Internal Tides in a Semi-enclosed Coastal Sea
NASA Astrophysics Data System (ADS)
Allen, S. E.; Soontiens, N. K.; Dunn, M. B. H.; Liu, J.; Olson, E.; Halverson, M. J.; Pawlowicz, R.
2016-02-01
The Strait of Georgia is a deep (400 m), strongly stratified, semi-enclosed coastal sea on the west coast of North America. We have configured a baroclinic model of the Strait of Georgia and surrounding coastal waters using the NEMO ocean community model. We run daily nowcasts and forecasts and publish our sea-surface results (including storm surge warnings) to the web (salishsea.eos.ubc.ca/storm-surge). Tides in the Strait of Georgia are mixed and large. The baroclinic model and previous barotropic models accurately represent tidal sea-level variations and depth-mean currents. The baroclinic model accurately reproduces the diurnal but not the semi-diurnal baroclinic tidal currents. In the Southern Strait of Georgia, strong internal tidal currents at the semi-diurnal frequency are observed. Strong semi-diurnal tides are also produced in the model, but they are almost 180 degrees out of phase with the observations. In the model, the barotropic and baroclinic tides reinforce each other at the surface, whereas the observations show that at the surface the baroclinic tides oppose the barotropic. As such, the surface currents are very poorly modelled. Here we will present evidence of the internal tidal field from observations. We will discuss the generation regions of the tides, the modifications to the model required to correct the phase, the resulting baroclinic tides and the improvements in the surface currents.
Stress Mapping in Glass-to-Metal Seals using Indentation Crack Lengths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strong, Kevin; Buchheit, Thomas E.; Diebold, Thomas Wayne
Predicting the residual stress which develops during fabrication of a glass-to-metal compression seal requires material models that can accurately predict the effects of processing on the sealing glass. Validation of the predictions requires measurements on representative test geometries that accurately capture the interaction between the seal materials during the processing cycle required to form the seal, which consists of a temperature excursion through the glass transition temperature of the sealing glass. To this end, a concentric seal test geometry, referred to as a short cylinder seal, consisting of a stainless steel shell enveloping a commercial sealing glass disk, has been designed, fabricated, and characterized as a model validation test geometry. To obtain data to test and validate finite element (FE) stress model predictions of this geometry, spatially resolved residual stress was calculated from the measured lengths of the cracks emanating from radially positioned Vickers indents in the glass disk portion of the seal. The indentation crack length method is described, and the spatially resolved residual stresses determined experimentally are compared to FE stress predictions made using a nonlinear viscoelastic material model adapted to inorganic sealing glasses and an updated rate-dependent material model for 304L stainless steel. The measurement method is the first to achieve a degree of success for measuring spatially resolved residual stress in a glass-bearing geometry, and a favorable comparison between measurements and simulation was observed.
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It is shown that both approaches yield comparable results.
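A schematic of the forward-inverse ANN coupling might look like the following, assuming training pairs (concentration-time curves, zonal diffusion coefficients) generated by the computational model are at hand; the network sizes, noise level, and variable names are assumptions for illustration.

```python
# Hedged sketch: inverse ANN (curve -> D) trained on noise-perturbed
# simulations, plus a forward ANN (D -> curve) used to check whether the
# injected stochastic variation was adequate. Settings are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_inverse_forward(curves, D, noise_sd=0.01, seed=0):
    # curves: (n_samples, n_times); D: (n_samples, n_zones)
    rng = np.random.default_rng(seed)
    noisy = curves + rng.normal(0.0, noise_sd, curves.shape)
    inverse = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000).fit(noisy, D)
    forward = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000).fit(D, curves)
    return inverse, forward

# Consistency check: forward.predict(inverse.predict(curve)) should
# reproduce the measured curve; large residuals suggest retraining with a
# different noise_sd.
```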
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicles (RLVs) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
A curved piezo-structure model: implications on active structural acoustic control.
Henry, J K; Clark, R L
1999-09-01
Current research in Active Structural Acoustic Control (ASAC) relies heavily upon accurately capturing the application physics associated with the structure being controlled. The application of ASAC to aircraft interior noise requires a greater understanding of the dynamics of the curved panels which compose the skin of an aircraft fuselage. This paper presents a model of a simply supported curved panel with attached piezoelectric transducers. The model is validated by comparison to previous work. Further, experimental results for a simply supported curved panel test structure are presented in support of the model. The curvature is shown to substantially affect the dynamics of the panel, the integration of transducers, and the bandwidth required for structural acoustic control.
NASA Technical Reports Server (NTRS)
Muheim, Danniella; Menzel, Michael; Mosier, Gary; Irish, Sandra; Maghami, Peiman; Mehalick, Kimberly; Parrish, Keith
2010-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2014. System-level verification of critical performance requirements will rely on integrated observatory models that predict the wavefront error accurately enough to verify that the allocated top-level wavefront error of 150 nm root-mean-squared (rms) through to the wavefront sensor focal plane is met. The assembled models themselves are complex and require the insight of technical experts to assess their ability to meet their objectives. This paper describes the systems engineering and modeling approach used on the JWST through the detailed design phase.
Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle
NASA Astrophysics Data System (ADS)
Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.
2018-05-01
Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ = 8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Development of a detector model for generation of synthetic radiographs of cargo containers
NASA Astrophysics Data System (ADS)
White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.
2008-05-01
Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first-principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.
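The essence of such an empirical detector model is a mapping from computed uncollided flux to realistic counts: a gain, a point-spread function, and a noise model, each calibrated against radiographs of well-characterized objects. A minimal sketch, with placeholder parameter values:

```python
# Hedged sketch: empirical detector-response model applied to an
# uncollided-flux image. Gain, PSF width, pedestal, and the Poisson noise
# assumption are placeholders to be calibrated against field data.
import numpy as np
from scipy.ndimage import gaussian_filter

def detector_response(flux, gain=1.0e4, psf_sigma=1.5, pedestal=10.0, seed=0):
    """Map uncollided photon flux (per pixel) to simulated detector counts."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(flux, psf_sigma)     # finite spatial resolution
    mean_counts = gain * blurred + pedestal        # sensitivity and offset
    return rng.poisson(mean_counts).astype(float)  # counting statistics
```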
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications, requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This model has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed, requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI), along with suspected causes such as reflections within the TWT, becomes a major consideration. To experimentally investigate the effects of the physical TWT on ISI would be prohibitively expensive, as it would require manufacturing numerous amplifiers, in addition to acquiring the required digital hardware. As an alternative, the time-domain TWT interaction model developed here provides the capability to establish a computational test bench where ISI or bit error rate can be simulated as a function of TWT operating parameters and component geometries. Intermodulation products, harmonic generation, and backward waves can also be monitored with the model for similar correlations. The advancements in computational capabilities and corresponding potential improvements in TWT performance may prove to be the enabling technologies for realizing unprecedented data rates for near real time transmission of the increasingly larger volumes of data demanded by planned commercial and Government satellite communications applications. This work is in support of the Cross Enterprise Technology Development Program in Headquarters' Advanced Technology & Mission Studies Division and the Air Force Office of Scientific Research Small Business Technology Transfer programs.
Plant distributions along salinity and tidal gradients in Oregon tidal marshes
Accurately modeling climate change effects on tidal marshes in the Pacific Northwest requires understanding how plant assemblages and species are presently distributed along gradients of salinity and tidal inundation. We outline on-going field efforts by the EPA and USGS to dete...
Use of Rare Earth Elements in investigations of aeolian processes
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Hamraz, Hamid; Contreras, Marco A; Zhang, Jun
2017-07-28
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model errors represents over a 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
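A response surface equation of the kind described is simply a low-order polynomial regression over the candidate predictors. A hedged sketch, with hypothetical predictor names and no claim to the study's actual variable set:

```python
# Hedged sketch: quadratic response-surface fit of landing speed.
# Predictor names are hypothetical; data must be supplied by the user.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def fit_rse(X, y):
    # X: columns such as [gross_weight, headwind, gust, flap_setting]
    # y: observed final-approach (landing) speed
    model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    return model.fit(X, y)
```

The neural-network alternative mentioned in the abstract could be sketched by swapping in sklearn.neural_network.MLPRegressor with the same X, y interface.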
High Accuracy Transistor Compact Model Calibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
An accurate fatigue damage model for welded joints subjected to variable amplitude loading
NASA Astrophysics Data System (ADS)
Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.
2017-12-01
Researchers in the past have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner’s rule. However, requirements for material parameters or S-N curve modifications restrict their practical application. Moreover, applications of most of these models under variable amplitude loading conditions have not been reported. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified with experimentally derived damage evolution curves for C45 and 16Mn steels and gives better agreement compared to previous models. The model-predicted fatigue lives are also in better correlation with experimental results compared to previous models, as shown in earlier published work by the authors. The proposed model is applied to welded joints subjected to variable amplitude loadings in this paper. The model gives around 8% shorter fatigue lives compared to the Eurocode-given Miner’s rule. This shows the importance of applying accurate fatigue damage models for welded joints.
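As a baseline for comparison, Miner's rule accumulates damage linearly and ignores load-sequence effects, which is the main shortcoming such improved models address. A one-line worked sketch:

```python
# Miner's linear damage rule: failure is predicted when D reaches 1.
# The example numbers are illustrative.
import numpy as np

def miner_damage(n_cycles, N_failure):
    """D = sum(n_i / N_i) over load blocks i."""
    return float(np.sum(np.asarray(n_cycles) / np.asarray(N_failure)))

# Example: miner_damage([1e4, 5e4], [2e5, 1e6]) -> 0.10 (10% of life consumed)
```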
Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations
NASA Technical Reports Server (NTRS)
Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick
2017-01-01
Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into what types and levels of manual piloted control excitation would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This includes assessing the practicality for the pilot to provide this excitation when cued, and to further understand if pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by 5 commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs only as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as just performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.
The Fringe-Imaging Skin Friction Technique PC Application User's Manual
NASA Technical Reports Server (NTRS)
Zilliac, Gregory G.
1999-01-01
A personal computer application (CXWIN4G) has been written which greatly simplifies the task of extracting skin friction measurements from interferograms of oil flows on the surface of wind tunnel models. Images are first calibrated, using a novel approach to one-camera photogrammetry, to obtain accurate spatial information on surfaces with curvature. As part of the image calibration process, an auxiliary file containing the wind tunnel model geometry is used in conjunction with a two-dimensional direct linear transformation to relate the image plane to the physical (model) coordinates. The application then applies a nonlinear regression model to accurately determine the fringe spacing from interferometric intensity records as required by the Fringe Imaging Skin Friction (FISF) technique. The skin friction is found through application of a simple expression that makes use of lubrication theory to relate fringe spacing to skin friction.
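The lubrication-theory step of the FISF technique admits a compact closed form under the idealization of constant wall shear and normal-incidence interferometry: the oil film thins as h = μx/(τt), and adjacent fringes differ in thickness by λ/(2·n_oil), giving τ = 2·n_oil·μ·Δx/(λ·t). The sketch below encodes only that idealized relation; the actual application uses a nonlinear regression for fringe spacing and photogrammetric calibration, and all numbers here are illustrative.

```python
# Hedged sketch: constant-shear oil-film interferometry estimate of wall
# shear stress. Oil viscosity, refractive index, and wavelength are
# illustrative values, not CXWIN4G defaults.
def fisf_skin_friction(fringe_spacing_m, run_time_s,
                       mu_oil=0.019, n_oil=1.4, wavelength_m=589e-9):
    """Wall shear stress [Pa] from fringe spacing [m] after run_time_s [s]."""
    return 2.0 * n_oil * mu_oil * fringe_spacing_m / (wavelength_m * run_time_s)

# Example: ~1 mm fringe spacing after 60 s gives on the order of 1 Pa.
```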
Test techniques for model development of repetitive service energy storage capacitors
NASA Astrophysics Data System (ADS)
Thompson, M. C.; Mauldin, G. H.
1984-03-01
The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature, creating new instrumentation performance goals. The thermal response to power loading, and the importance of average and spot heating in the bulk regions, require technical advancements in real-time temperature measurement. Reduction and interpretation of thermal data are crucial to the development of an accurate, intelligent thermal transport model. The thermal model is of prime interest in high-repetition-rate, high-average-power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications for both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics, their nonlinearities, and terminal effects are considered. Interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.
Accurately tracking single-cell movement trajectories in microfluidic cell sorting devices.
Jeong, Jenny; Frohberg, Nicholas J; Zhou, Enlu; Sulchek, Todd; Qiu, Peng
2018-01-01
Microfluidics are routinely used to study cellular properties, including the efficient quantification of single-cell biomechanics and label-free cell sorting based on the biomechanical properties, such as elasticity, viscosity, stiffness, and adhesion. Both quantification and sorting applications require optimal design of the microfluidic devices and mathematical modeling of the interactions between cells, fluid, and the channel of the device. As a first step toward building such a mathematical model, we collected video recordings of cells moving through a ridged microfluidic channel designed to compress and redirect cells according to cell biomechanics. We developed an efficient algorithm that automatically and accurately tracked the cell trajectories in the recordings. We tested the algorithm on recordings of cells with different stiffness, and showed the correlation between cell stiffness and the tracked trajectories. Moreover, the tracking algorithm successfully picked up subtle differences of cell motion when passing through consecutive ridges. The algorithm for accurately tracking cell trajectories paves the way for future efforts of modeling the flow, forces, and dynamics of cell properties in microfluidics applications.
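The core of any such tracker is frame-to-frame association of detections. A greatly simplified stand-in for the algorithm described, using greedy nearest-neighbor linking with a distance gate (all thresholds hypothetical):

```python
# Hedged sketch: greedy nearest-neighbor linking of per-frame cell
# centroids into trajectories. max_jump (pixels) is an illustrative gate.
import numpy as np

def link_trajectories(frames, max_jump=25.0):
    """frames: list of (n_i, 2) arrays of detected centroids per frame."""
    tracks = [[tuple(p)] for p in frames[0]]
    for pts in frames[1:]:
        unclaimed = [tuple(p) for p in pts]
        for tr in tracks:
            if not unclaimed:
                break
            last = np.asarray(tr[-1])
            dists = [np.linalg.norm(np.asarray(p) - last) for p in unclaimed]
            j = int(np.argmin(dists))
            if dists[j] <= max_jump:             # gate out implausible jumps
                tr.append(unclaimed.pop(j))
        tracks.extend([[p] for p in unclaimed])  # leftovers start new tracks
    return tracks
```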
Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting
2016-01-01
This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs) for performing rapid and accurate simulations. Inverter-based TDSTSs offer the benefits of low cost and simple structure for temperature-to-digital conversion and have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluations. However, such tools require extremely long simulation time and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) by which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in SIMULINK environment was developed to substantially accelerate the simulation and simplify the evaluation procedures. Experiments demonstrated that the simulation results of the proposed simulator have favorable agreement with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507
Community Sediment Transport Model
2007-01-01
Woods Hole, MA; Timothy Keen, Naval Research Laboratory, Code...intended to be used as both a research tool and for practical applications. An accurate and useful model will require coupling sediment-transport with...and time steps range from seconds to minutes. We include higher-resolution sediment-transport calculation modules for research problems but, for
Philip Radtke; David Walker; Jereme Frank; Aaron Weiskittel; Clara DeYoung; David MacFarlane; Grant Domke; Christopher Woodall; John Coulston; James Westfall
2017-01-01
Accurate estimation of forest biomass and carbon stocks at regional to national scales is a key requirement in determining terrestrial carbon sources and sinks on United States (US) forest lands. To that end, comprehensive assessment and testing of alternative volume and biomass models were conducted for individual tree models employed in the component ratio method (...
Atmospheric Models for Over-Ocean Propagation Loss
2015-05-15
Bruce McGuffin, MIT Lincoln Laboratory. Introduction: Air-to-surface radio links differ from...from radiosonde profiles collected along the Atlantic coast of the United States, in order to accurately estimate high-reliability SHF/EHF air-to...predict required link performance to achieve high reliability at different locations and times of year. Data Acquisition: Radiosonde balloons are
Automated MRI Segmentation for Individualized Modeling of Current Flow in the Human Head
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-01-01
Objective High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography (HD-EEG) require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images (MRI) requires labor-intensive manual segmentation, even when leveraging available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach A fully automated segmentation technique based on Statistical Parametric Mapping 8 (SPM8), including an improved tissue probability map (TPM) and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on 4 healthy subjects and 7 stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view (FOV) extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials. PMID:24099977
Automated MRI segmentation for individualized modeling of current flow in the human head
NASA Astrophysics Data System (ADS)
Huang, Yu; Dmochowski, Jacek P.; Su, Yuzhuo; Datta, Abhishek; Rorden, Christopher; Parra, Lucas C.
2013-12-01
Objective. High-definition transcranial direct current stimulation (HD-tDCS) and high-density electroencephalography require accurate models of current flow for precise targeting and current source reconstruction. At a minimum, such modeling must capture the idiosyncratic anatomy of the brain, cerebrospinal fluid (CSF) and skull for each individual subject. Currently, the process to build such high-resolution individualized models from structural magnetic resonance images requires labor-intensive manual segmentation, even when utilizing available automated segmentation tools. Also, accurate placement of many high-density electrodes on an individual scalp is a tedious procedure. The goal was to develop fully automated techniques to reduce the manual effort in such a modeling process. Approach. A fully automated segmentation technique based on Statistical Parametric Mapping 8, including an improved tissue probability map and an automated correction routine for segmentation errors, was developed, along with an automated electrode placement tool for high-density arrays. The performance of these automated routines was evaluated against results from manual segmentation on four healthy subjects and seven stroke patients. The criteria include segmentation accuracy, the difference of current flow distributions in resulting HD-tDCS models and the optimized current flow intensities on cortical targets. Main results. The segmentation tool can segment out not just the brain but also provide accurate results for CSF, skull and other soft tissues with a field of view extending to the neck. Compared to manual results, automated segmentation deviates by only 7% and 18% for normal and stroke subjects, respectively. The predicted electric fields in the brain deviate by 12% and 29% respectively, which is well within the variability observed for various modeling choices. Finally, optimized current flow intensities on cortical targets do not differ significantly. Significance. Fully automated individualized modeling may now be feasible for large-sample EEG research studies and tDCS clinical trials.
A generalized methodology to characterize composite materials for pyrolysis models
NASA Astrophysics Data System (ADS)
McKinnon, Mark B.
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
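One of the thermogravimetric steps in that methodology, extracting Arrhenius reaction kinetics, can be sketched compactly. The first-order, single-reaction form below is a deliberate simplification of the multi-reaction schemes such models actually use, and the data arrays and initial guesses are illustrative.

```python
# Hedged sketch: fit Arrhenius parameters (A, E) of a first-order
# decomposition reaction, d(alpha)/dt = A * exp(-E/(R*T)) * (1 - alpha),
# to TGA-derived rates. All inputs are illustrative.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol K)

def rate_model(X, logA, E):
    T, alpha = X  # temperatures [K], conversions [-]
    return np.exp(logA) * np.exp(-E / (R * T)) * (1.0 - alpha)

# (logA, E), _ = curve_fit(rate_model, (T_data, alpha_data), rate_data,
#                          p0=(20.0, 150e3))
```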
Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.
Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen
2015-10-01
Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
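The coupled nonlinear ODEs of the bivalent analyte mechanism (A + B ⇌ AB, then AB + B ⇌ AB2) are straightforward to integrate numerically, which is how a signature of this kind can be explored. A sketch under the usual pseudo-first-order assumption of constant injected analyte concentration; rate constants and the response weighting are illustrative.

```python
# Hedged sketch: bivalent analyte binding kinetics during injection.
# y = (B, AB, AB2): free surface ligand and the two bound species.
from scipy.integrate import solve_ivp

def bivalent_rhs(t, y, A, ka1, kd1, ka2, kd2):
    B, AB, AB2 = y
    r1 = ka1 * A * B - kd1 * AB    # first attachment step
    r2 = ka2 * AB * B - kd2 * AB2  # second, crosslinking step
    return [-r1 - r2, r1 - r2, r2]

# sol = solve_ivp(bivalent_rhs, (0.0, 300.0), [1.0, 0.0, 0.0],
#                 args=(50e-9, 1e5, 1e-3, 1e4, 1e-4), max_step=1.0)
# The modeled SPR response is often taken proportional to AB + AB2; the
# exact weighting is an assumption that depends on instrument convention.
```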
NASA Technical Reports Server (NTRS)
Langhoff, Stephen; Bauschlicher, Charles; Jaffe, Richard
1992-01-01
One of the primary goals of NASA's high-speed research program is to determine the feasibility of designing an environmentally safe commercial supersonic transport airplane. The largest environmental concern is focused on the amount of ozone-destroying nitrogen oxides (NO(x)) that would be injected into the lower stratosphere during the cruise portion of the flight. The limitations placed on NO(x) emission require more than an order of magnitude reduction over current engine designs. To develop strategies to meet this goal requires first gaining a fundamental understanding of the combustion chemistry. To accurately model the combustor requires a computational fluid dynamics approach that includes both turbulence and chemistry. Since many of the important chemical processes in this regime involve highly reactive radicals, an experimental determination of the required thermodynamic data and rate constants is often very difficult. Unlike experimental approaches, theoretical methods are as applicable to highly reactive species as to stable ones. Also, our approximation of treating the dynamics classically becomes more accurate with increasing temperature. In this article we review recent progress in generating thermodynamic properties and rate constants that are required to understand NO(x) formation in the combustion process. We also describe our one-dimensional modeling efforts to validate an NH3 combustion reaction mechanism. We have been working in collaboration with researchers at LeRC to ensure that our theoretical work is focused on the most important thermodynamic quantities and rate constants required in the chemical data base.
Improved analytic extreme-mass-ratio inspiral model for scoping out eLISA data analysis
NASA Astrophysics Data System (ADS)
Chua, Alvin J. K.; Gair, Jonathan R.
2015-12-01
The space-based gravitational-wave detector eLISA has been selected as the ESA L3 mission, and the mission design will be finalized by the end of this decade. To prepare for mission formulation over the next few years, several outstanding and urgent questions in data analysis will be addressed using mock data challenges, informed by instrument measurements from the LISA Pathfinder satellite launching at the end of 2015. These data challenges will require accurate and computationally affordable waveform models for anticipated sources such as the extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes. Previous data challenges have made use of the well-known analytic EMRI waveforms of Barack and Cutler, which are extremely quick to generate but dephase relative to more accurate waveforms within hours, due to their mismatched radial, polar and azimuthal frequencies. In this paper, we describe an augmented Barack-Cutler model that uses a frequency map to the correct Kerr frequencies, along with updated evolution equations and a simple fit to a more accurate model. The augmented waveforms stay in phase for months and may be generated with virtually no additional computational cost.
10 CFR 431.36 - Compliance Certification.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., 2003, a manufacturer or private labeler shall not distribute in commerce any basic model of an electric... requirements prescribed in this subpart; (iii) All information reported in the Compliance Certification is true, accurate, and complete; and (iv) The manufacturer or private labeler is aware of the penalties associated...
Iterative combination of national phenotype, genotype, pedigree, and foreign information
USDA-ARS?s Scientific Manuscript database
Single step methods can combine all sources of information into accurate rankings for animals with and without genotypes. Equations that require inverting the genomic relationship matrix G work well with limited numbers of animals, but equivalent models without inversion are needed as numbers increa...
A transparent model of the human scala tympani cavity.
Rebscher, S J; Talbot, N; Bruszewski, W; Heilmann, M; Brasell, J; Merzenich, M M
1996-01-01
A dimensionally accurate clear model of the human scala tympani has been produced to evaluate the insertion and position of clinically applied intracochlear electrodes for electrical stimulation. Replicates of the human scala tympani were made from low melting point metal alloy (LMA) and from polymethylmethacrylate (PMMA) resin. The LMA metal casts were embedded in blocks of epoxy and in clear silicone rubber. After removal of the metal alloy, a cavity was produced that accurately models the human scala tympani. Investment casting molds were made from the PMMA scala tympani casts to enable production of multiple LMA casts from which identical models were fabricated. Total dimensional distortion of the LMA casting process was less than 1% in length and 2% in diameter. The models have been successfully integrated into the design process for the iterative development of advanced intracochlear electrode arrays at UCSF. These fabrication techniques are applicable to a wide range of biomedical design problems that require modelling of visually obscured cavities.
Modeling evaporation from spent nuclear fuel storage pools: A diffusion approach
NASA Astrophysics Data System (ADS)
Hugo, Bruce Robert
Accurate prediction of evaporative losses from light water reactor nuclear power plant (NPP) spent fuel storage pools (SFPs) is important for activities ranging from sizing of water makeup systems during NPP design to predicting the time available to supply emergency makeup water following severe accidents. Existing correlations for predicting evaporation from water surfaces are optimized only for conditions typical of swimming pools. The new approach of modeling evaporation as a diffusion process has yielded an evaporation rate model that provides a better fit to published high-temperature evaporation data and to measurements from two SFPs than other published evaporation correlations. Insights from treating evaporation as a diffusion process include correcting for the effects of air flow and solutes on the evaporation rate. Accurate modeling of the effects of air flow on evaporation rate is required to explain the observed temperature data from the Fukushima Daiichi Unit 4 SFP during the 2011 loss-of-cooling event; the diffusion model of evaporation provides a significantly better fit to these data than existing evaporation models.
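The diffusion viewpoint amounts to Fick's law applied across a near-surface vapor film. A minimal sketch under that film idealization; the film thickness, diffusivity, and saturation-pressure correlation are illustrative assumptions, not the dissertation's fitted model.

```python
# Hedged sketch: film-model evaporative flux from a heated pool.
# All parameter values are illustrative.
import math

R_V = 461.5  # gas constant of water vapor, J/(kg K)

def p_sat(T_C):
    """Saturation vapor pressure [Pa] (Magnus-type correlation)."""
    return 610.94 * math.exp(17.625 * T_C / (T_C + 243.04))

def evap_flux(T_water_C, T_air_C, rh, D=2.6e-5, film=5e-3):
    """Mass flux [kg/(m^2 s)] by vapor diffusion across a film of thickness `film`."""
    rho_surface = p_sat(T_water_C) / (R_V * (T_water_C + 273.15))
    rho_ambient = rh * p_sat(T_air_C) / (R_V * (T_air_C + 273.15))
    return D * (rho_surface - rho_ambient) / film
```

Air flow enters such a model by thinning the effective film (raising the mass-transfer coefficient D/film), which is one way to read the Fukushima Daiichi Unit 4 result noted above.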
NASA Astrophysics Data System (ADS)
Hill, James C.; Liu, Zhenping; Fox, Rodney O.; Passalacqua, Alberto; Olsen, Michael G.
2015-11-01
The multi-inlet vortex reactor (MIVR) has been developed to provide a platform for rapid mixing in the application of flash nanoprecipitation (FNP) for manufacturing functional nanoparticles. Unfortunately, commonly used RANS methods are unable to accurately model this complex swirling flow. Large eddy simulations have also been problematic, as expensive fine grids to accurately model the flow are required. These dilemmas led to the strategy of applying a Delayed Detached Eddy Simulation (DDES) method to the vortex reactor. In the current work, the turbulent swirling flow inside a scaled-up MIVR has been investigated by using a dynamic DDES model. In the DDES model, the eddy viscosity has a form similar to the Smagorinsky sub-grid viscosity in LES and allows the implementation of a dynamic procedure to determine its coefficient. The complex recirculating back flow near the reactor center has been successfully captured by using this dynamic DDES model. Moreover, the simulation results are found to agree with experimental data for mean velocity and Reynolds stresses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D’Adamo, Giuseppe, E-mail: giuseppe.dadamo@sissa.it; Pelissetto, Andrea, E-mail: andrea.pelissetto@roma1.infn.it; Pierleoni, Carlo, E-mail: carlo.pierleoni@aquila.infn.it
2014-12-28
A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂_g/R_c, where R̂_g is the zero-density polymer radius of gyration and R_c is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
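The iterative Boltzmann inversion step named above has a standard update rule: the tabulated pair potential is corrected by the log-ratio of the current to the target pair correlation function. A sketch of a single update (grid, damping factor, and regularization are illustrative):

```python
# Hedged sketch: one IBI update of a tabulated blob-colloid pair
# potential, in units of k_B T. alpha damps the update for stability.
import numpy as np

def ibi_update(U, g_current, g_target, alpha=1.0, eps=1e-12):
    """U, g_current, g_target: arrays on a common r-grid."""
    return U + alpha * np.log((g_current + eps) / (g_target + eps))
```

Iterating simulation and update until g_current matches the zero-density full-monomer target is the general scheme behind potentials of this kind.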
Simulations of eddy kinetic energy transport in barotropic turbulence
NASA Astrophysics Data System (ADS)
Grooms, Ian
2017-11-01
Eddy energy transport in rotating two-dimensional turbulence is investigated using numerical simulation. Stochastic forcing is used to generate an inhomogeneous field of turbulence and the time-mean energy profile is diagnosed. An advective-diffusive model for the transport is fit to the simulation data by requiring the model to accurately predict the observed time-mean energy distribution. Isotropic harmonic diffusion of energy is found to be an accurate model in the case of uniform, solid-body background rotation (the f plane), with a diffusivity that scales reasonably well with a mixing-length law κ ∝ Vℓ, where V and ℓ are characteristic eddy velocity and length scales. Passive tracer dynamics are added and it is found that the energy diffusivity is 75% of the tracer diffusivity. The addition of a differential background rotation with constant vorticity gradient β leads to significant changes to the energy transport. The eddies generate and interact with a mean flow that advects the eddy energy. Mean advection plus anisotropic diffusion (with reduced diffusivity in the direction of the background vorticity gradient) is moderately accurate for flows with scale separation between the eddies and mean flow, but anisotropic diffusion becomes a much less accurate model of the transport when scale separation breaks down. Finally, it is observed that the time-mean eddy energy does not look like the actual eddy energy distribution at any instant of time. In the future, stochastic models of the eddy energy transport may prove more useful than models of the mean transport for predicting realistic eddy energy distributions.
New results in gravity dependent two-phase flow regime mapping
NASA Astrophysics Data System (ADS)
Kurwitz, Cable; Best, Frederick
2002-01-01
Accurate prediction of thermal-hydraulic parameters, such as the spatial gas/liquid orientation or flow regime, is required for implementation of two-phase systems. Although many flow regime transition models exist, accurate determination of both annular and slug regime boundaries is not well defined, especially at lower flow rates. Furthermore, models typically indicate the transition as sharp, where data may indicate a transition space. Texas A&M has flown in excess of 35 flights aboard the NASA KC-135 aircraft with a unique two-phase package. These flights have produced a significant database of gravity-dependent two-phase data, including visual observations for flow regime identification. Two-phase flow tests conducted during recent zero-g flights have added to the flow regime database and are shown in this paper with comparisons to selected transition models.
Measurement of Muon Neutrino Quasielastic Scattering on Carbon
NASA Astrophysics Data System (ADS)
Aguilar-Arevalo, A. A.; Bazarko, A. O.; Brice, S. J.; Brown, B. C.; Bugel, L.; Cao, J.; Coney, L.; Conrad, J. M.; Cox, D. C.; Curioni, A.; Djurcic, Z.; Finley, D. A.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Green, C.; Green, J. A.; Hart, T. L.; Hawker, E.; Imlay, R.; Johnson, R. A.; Kasper, P.; Katori, T.; Kobilarcik, T.; Kourbanis, I.; Koutsoliotas, S.; Laird, E. M.; Link, J. M.; Liu, Y.; Liu, Y.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Martin, P. S.; McGregor, G.; Metcalf, W.; Meyers, P. D.; Mills, F.; Mills, G. B.; Monroe, J.; Moore, C. D.; Nelson, R. H.; Nienaber, P.; Ouedraogo, S.; Patterson, R. B.; Perevalov, D.; Polly, C. C.; Prebys, E.; Raaf, J. L.; Ray, H.; Roe, B. P.; Russell, A. D.; Sandberg, V.; Schirato, R.; Schmitz, D.; Shaevitz, M. H.; Shoemaker, F. C.; Smith, D.; Sorel, M.; Spentzouris, P.; Stancu, I.; Stefanski, R. J.; Sung, M.; Tanaka, H. A.; Tayloe, R.; Tzanov, M.; van de Water, R.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Yang, H. J.; Zeller, G. P.; Zimmerman, E. D.
2008-01-01
The observation of neutrino oscillations is clear evidence for physics beyond the standard model. To make precise measurements of this phenomenon, neutrino oscillation experiments, including MiniBooNE, require an accurate description of neutrino charged current quasielastic (CCQE) cross sections to predict signal samples. Using a high-statistics sample of νμ CCQE events, MiniBooNE finds that a simple Fermi gas model, with appropriate adjustments, accurately characterizes the CCQE events observed in a carbon-based detector. The extracted parameters include an effective axial mass, MAeff = 1.23 ± 0.20 GeV, that describes the four-momentum dependence of the axial-vector form factor of the nucleon, and a Pauli-suppression parameter, κ = 1.019 ± 0.011. Such a modified Fermi gas model may also be used by future accelerator-based experiments measuring neutrino oscillations on nuclear targets.
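For context, the effective axial mass quoted above conventionally enters through a dipole parameterization of the axial-vector form factor; a short sketch, assuming the standard dipole form and the usual axial coupling g_A ≈ -1.267 (the abstract itself does not spell out the functional form):

```python
def axial_form_factor(Q2, MA=1.23, gA=-1.267):
    """Dipole parameterization commonly used for the nucleon axial-vector
    form factor in CCQE models:
        F_A(Q^2) = g_A / (1 + Q^2 / M_A^2)^2,
    with Q2 in GeV^2 and MA in GeV. MA defaults to the MiniBooNE
    effective value quoted above; gA is the standard axial coupling."""
    return gA / (1.0 + Q2 / MA**2) ** 2

print(axial_form_factor(0.5))   # F_A at Q^2 = 0.5 GeV^2
```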
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA's new phase change model. This memo briefly describes the model, implementation, and validation.
Some problems of the calculation of three-dimensional boundary layer flows on general configurations
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Mosinskis, G. J.; Rehn, J. A.
1973-01-01
An accurate solution of the three-dimensional boundary layer equations over general configurations such as those encountered in aircraft and space shuttle design requires a very efficient, fast, and accurate numerical method with suitable turbulence models for the Reynolds stresses. The efficiency, speed, and accuracy of a three-dimensional numerical method together with the turbulence models for the Reynolds stresses are examined. The numerical method is the implicit two-point finite difference approach (Box Method) developed by Keller and applied to the boundary layer equations by Keller and Cebeci. In addition, some of the problems that may arise in the solution of these equations for three-dimensional boundary layer flows over general configurations are studied.
Energy balance of stellar coronae. III - Effect of stellar mass and radius
NASA Technical Reports Server (NTRS)
Hammer, R.
1984-01-01
A homologous transformation is derived which permits the application of the numerical coronal models of Hammer from a star with solar mass and radius to other stars. This scaling requires a few approximations concerning the lower boundary conditions and the temperature dependence of the conductivity and emissivity. These approximations are discussed and found to be surprisingly mild. Therefore, the scaling of the coronal models to other stars is rather accurate; it is found to be particularly accurate for main-sequence stars. The transformation is used to derive an equation that gives the maximum temperature of open coronal regions as a function of stellar mass and radius, the coronal heating flux, and the characteristic damping length over which the corona is heated.
Prediction equation for estimating total daily energy requirements of special operations personnel.
Barringer, N D; Pasiakos, S M; McClung, H L; Crombie, A P; Margolis, L M
2018-01-01
Special Operations Forces (SOF) engage in a variety of military tasks, many of which produce high energy expenditures, leading to undesired energy deficits and loss of body mass. Therefore, the ability to accurately estimate daily energy requirements would be useful for accurate logistical planning. The objective was to generate a predictive equation estimating the energy requirements of SOF. A retrospective analysis was conducted of data collected from SOF personnel engaged in 12 different SOF training scenarios. Energy expenditure and total body water were determined using the doubly-labeled water technique. Physical activity level was determined as daily energy expenditure divided by resting metabolic rate. Physical activity level was broken into quartiles (0 = mission prep, 1 = common warrior tasks, 2 = battle drills, 3 = specialized intense activity) to generate a physical activity factor (PAF). Regression analysis was used to construct two predictive equations (Model A: body mass and PAF; Model B: fat-free mass and PAF) estimating daily energy expenditures. Average measured energy expenditure during SOF training was 4468 (range: 3700 to 6300) kcal·d⁻¹. Regression analysis revealed that physical activity level (r = 0.91; P < 0.05) and body mass (r = 0.28; P < 0.05; Model A) or fat-free mass (FFM; r = 0.32; P < 0.05; Model B) were the factors that most highly predicted energy expenditures. Predictive equations coupling PAF with body mass (Model A) and FFM (Model B) were correlated (r = 0.74 and r = 0.76, respectively) and did not differ (mean ± SEM: Model A, 4463 ± 65 kcal·d⁻¹; Model B, 4462 ± 61 kcal·d⁻¹) from DLW-measured energy expenditures. By quantifying and grouping SOF training exercises into activity factors, SOF energy requirements can be predicted with reasonable accuracy, and these equations can be used by dietetic/logistical personnel to plan appropriate feeding regimens to meet SOF nutritional requirements across their mission profile.
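A sketch of how such a two-predictor equation could be fit by ordinary least squares; the observations and the resulting coefficients below are invented for illustration, since the abstract does not report the fitted values:

```python
import numpy as np

# Hypothetical observations: (body mass [kg], PAF quartile 0-3) -> kcal/d.
# These data and the resulting b0, b1, b2 are purely illustrative.
X = np.array([[80, 0], [85, 1], [90, 2], [95, 3], [75, 2], [88, 1]], float)
y = np.array([3700.0, 4100.0, 4800.0, 6000.0, 4500.0, 4200.0])

A = np.column_stack([np.ones(len(X)), X])        # intercept, mass, PAF
(b0, b1, b2), *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_tdee(mass_kg, paf):
    """Model A-style estimate: TDEE = b0 + b1*mass + b2*PAF (kcal/d)."""
    return b0 + b1 * mass_kg + b2 * paf

print(round(predict_tdee(85.0, 2)))
```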
Molecular Sieve Bench Testing and Computer Modeling
NASA Technical Reports Server (NTRS)
Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.
1995-01-01
The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, and fluid interface properties. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to changes in the parameters that influence the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain the test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with a linear driving force for the 5X sorbent and pore diffusion for silica gel is then applied to the test data. A more complex, two-dimensional non-Darcian model has also been developed for simulation of the test data; this model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
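For illustration, a minimal sketch of the linear-driving-force uptake model mentioned above, integrated with forward Euler; the rate constant and equilibrium loading are assumed values, not parameters estimated from the test data:

```python
import numpy as np

def ldf_uptake(k_ldf, q_star, q0=0.0, t_end=3600.0, n=1000):
    """Sorbent loading under the linear-driving-force model,
        dq/dt = k_LDF * (q* - q),
    integrated with forward Euler. q* is the equilibrium loading from
    the isotherm; all values are assumed, not fitted to the test data."""
    t = np.linspace(0.0, t_end, n)
    dt = t[1] - t[0]
    q = np.empty(n)
    q[0] = q0
    for i in range(1, n):
        q[i] = q[i - 1] + dt * k_ldf * (q_star - q[i - 1])
    return t, q

t, q = ldf_uptake(k_ldf=2e-3, q_star=1.5)   # 1/s and mol/kg (assumed units)
print(q[-1])                                # approaches q* as t grows
```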
Self-consistent core-pedestal transport simulations with neural network accelerated models
Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...
2017-07-12
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
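As a rough illustration of the surrogate idea (not the authors' TGLF/EPED1 networks), a small scikit-learn regression of a stand-in analytic "physics model"; real training data would come from batch runs of the theory-based codes:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy surrogate-training loop: replace the stand-in analytic function with
# batch runs of a theory-based code to build the real training set.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5000, 4))          # plasma-parameter inputs
y = np.tanh(2.0 * X[:, 0]) + X[:, 1] * X[:, 2]**2   # stand-in flux output

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
nn.fit(X[:4000], y[:4000])                          # train once, evaluate fast
print("held-out R^2:", nn.score(X[4000:], y[4000:]))
```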
Towards Assessing the Human Trajectory Planning Horizon.
Carton, Daniel; Nitsch, Verena; Meinzer, Dominik; Wollherr, Dirk
2016-01-01
Mobile robots are envisioned to cooperate closely with humans and to integrate seamlessly into a shared environment. For locomotion, these environments resemble traversable areas that are shared between multiple agents such as humans and robots. The seamless integration of mobile robots into these environments requires accurate predictions of human locomotion. This work considers optimal control and model predictive control approaches for accurate trajectory prediction and proposes to integrate aspects of human behavior to improve their performance. Recently developed models are not able to accurately reproduce trajectories that result from sudden avoidance maneuvers. In particular, human locomotion behavior when handling disturbances from other agents poses a problem. The goal of this work is to investigate whether humans alter their trajectory planning horizon in order to resolve abruptly emerging collision situations. By modeling humans as model predictive controllers, the influence of the planning horizon is investigated in simulations. Based on these results, an experiment is designed to identify whether humans initiate a change in their locomotion planning behavior while moving in a complex environment. The results support the hypothesis that humans employ a shorter planning horizon to avoid collisions that are triggered by unexpected disturbances. Observations presented in this work are expected to further improve the generalizability and accuracy of prediction methods based on dynamic models.
Development of mapped stress-field boundary conditions based on a Hill-type muscle model.
Cardiff, P; Karač, A; FitzPatrick, D; Flavin, R; Ivanković, A
2014-09-01
Forces generated in the muscles and tendons actuate the movement of the skeleton. Accurate estimation and application of these musculotendon forces in a continuum model is not a trivial matter. Frequently, musculotendon attachments are approximated as point forces; however, accurate estimation of local mechanics requires a more realistic application of musculotendon forces. This paper describes the development of mapped Hill-type muscle models as boundary conditions for a finite volume model of the hip joint, where the calculated muscle fibres map continuously between attachment sites. The applied muscle forces are calculated using active Hill-type models, where input electromyography signals are determined from gait analysis. Realistic muscle attachment sites are determined directly from tomography images. The mapped muscle boundary conditions, implemented in a finite volume structural OpenFOAM (ESI-OpenCFD, Bracknell, UK) solver, are employed to simulate the mid-stance phase of gait using a patient-specific natural hip joint, and a comparison is performed with the standard point load muscle approach. It is concluded that physiological joint loading is not accurately represented by simplistic muscle point loading conditions; however, when contact pressures are of sole interest, simplifying assumptions with regard to muscular forces may be valid. Copyright © 2014 John Wiley & Sons, Ltd.
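For orientation, a generic Hill-type force law of the form F = f_max (a f_l f_v + f_p); the curve shapes and constants below are textbook-style assumptions, not the paper's fitted muscle parameters:

```python
import math

def hill_muscle_force(a, l_norm, v_norm, f_max):
    """Generic Hill-type musculotendon force, F = f_max*(a*f_l*f_v + f_p).
    a: activation in [0, 1] (e.g., derived from processed EMG),
    l_norm: fiber length normalized by optimal length,
    v_norm: shortening velocity normalized by max shortening velocity.
    Curve shapes below are common textbook forms, not the paper's fits."""
    f_l = math.exp(-((l_norm - 1.0) / 0.45) ** 2)        # active force-length
    if v_norm >= 0.0:                                     # shortening branch
        f_v = max((1.0 - v_norm) / (1.0 + 3.0 * v_norm), 0.0)
    else:                                                 # lengthening branch
        f_v = 1.0 + 0.5 * min(-v_norm, 1.0)
    f_p = 0.1 * (math.exp(8.0 * (l_norm - 1.0)) - 1.0) if l_norm > 1.0 else 0.0
    return f_max * (a * f_l * f_v + f_p)

print(hill_muscle_force(a=0.6, l_norm=1.05, v_norm=0.2, f_max=1500.0))  # N
```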
ESTUARINE-OCEAN EXCHANGE IN A NORTH PACIFIC ESTUARY: COMPARISON OF STEADY STATE AND DYNAMIC MODELS
Nutrient levels in coastal waters must be accurately assessed to determine the nutrient effects of increasing populations on coastal ecosystems. To accomplish this goal, in-field data with sufficient temporal resolution are required to define nutrient sources and sinks, and to ul...
Accurately quantifying human exposures and doses of various populations to environmental pollutants is critical for the Agency to assess and manage human health risks. For example, the Food Quality Protection Act of 1996 (FQPA) requires EPA to consider aggregate human exposure ...
Microwave Soil Moisture Retrieval Under Trees Using a Modified Tau-Omega Model
USDA-ARS?s Scientific Manuscript database
IPAD is to provide timely and accurate estimates of global crop conditions for use in up-to-date commodity intelligence reports. A crucial requirement of these global crop yield forecasts is the regional characterization of surface and sub-surface soil moisture. However, due to the spatial heterogen...
The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here, we present the design, testing, and analysis of data collected through the first instrument capable of measuring ...
The development of effective measures to stabilize atmospheric CO2 concentration and mitigate negative impacts of climate change requires accurate quantification of the spatial variation and magnitude of the terrestrial carbon (C) flux. However, the spatial pattern and strengt...
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
USDA-ARS?s Scientific Manuscript database
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
40 CFR 61.273 - Alternative means of emission limitation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.273 Alternative means of emission limitation... benzene emissions at least equivalent to the reduction in emissions achieved by any requirement in § 61... full-size or scale-model storage vessels that accurately collect and measure all benzene emissions from...
40 CFR 61.273 - Alternative means of emission limitation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.273 Alternative means of emission limitation... benzene emissions at least equivalent to the reduction in emissions achieved by any requirement in § 61... full-size or scale-model storage vessels that accurately collect and measure all benzene emissions from...
40 CFR 61.273 - Alternative means of emission limitation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.273 Alternative means of emission limitation... benzene emissions at least equivalent to the reduction in emissions achieved by any requirement in § 61... full-size or scale-model storage vessels that accurately collect and measure all benzene emissions from...
40 CFR 61.273 - Alternative means of emission limitation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.273 Alternative means of emission limitation... benzene emissions at least equivalent to the reduction in emissions achieved by any requirement in § 61... full-size or scale-model storage vessels that accurately collect and measure all benzene emissions from...
40 CFR 61.273 - Alternative means of emission limitation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for Benzene Emissions From Benzene Storage Vessels § 61.273 Alternative means of emission limitation... benzene emissions at least equivalent to the reduction in emissions achieved by any requirement in § 61... full-size or scale-model storage vessels that accurately collect and measure all benzene emissions from...
Estimating forest canopy fuel parameters using LIDAR data.
Hans-Erik Andersen; Robert J. McGaughey; Stephen E. Reutebuch
2005-01-01
Fire researchers and resource managers are dependent upon accurate, spatially-explicit forest structure information to support the application of forest fire behavior models. In particular, reliable estimates of several critical forest canopy structure metrics, including canopy bulk density, canopy height, canopy fuel weight, and canopy base height, are required to...
Behavioral modeling of VCSELs for high-speed optical interconnects
NASA Astrophysics Data System (ADS)
Szczerba, Krzysztof; Kocot, Chris
2018-02-01
Transition from on-off keying to 4-level pulse amplitude modulation (PAM) in VCSEL-based optical interconnects allows for an increase of data rates, at the cost of a 4.8 dB sensitivity penalty. The resulting strained link budget creates a need for accurate VCSEL models for driver integrated circuit (IC) design and system-level simulations. Rate-equation-based equivalent circuit models are convenient for IC design, but system-level analysis requires computationally efficient closed-form behavioral models based on Volterra series and neural networks. In this paper we present and compare these models.
A Software Tool for Integrated Optical Design Analysis
NASA Technical Reports Server (NTRS)
Moore, Jim; Troy, Ed; DePlachett, Charles; Montgomery, Edward (Technical Monitor)
2001-01-01
Design of large precision optical systems requires multi-disciplinary analysis, modeling, and design. Thermal, structural, and optical characteristics of the hardware must be accurately understood in order to design a system capable of accomplishing the performance requirements. The interactions between the disciplines become stronger as systems are designed lighter weight for space applications. This coupling dictates a concurrent engineering design approach. In the past, integrated modeling tools have been developed that attempt to integrate all of the complex analysis within the framework of a single model. This often results in modeling simplifications, and it requires engineering specialists to learn new applications. The software described in this presentation addresses the concurrent engineering task using a different approach. The software tool, Integrated Optical Design Analysis (IODA), uses data fusion technology to enable a cross-discipline team of engineering experts to concurrently design an optical system using their standard validated engineering design tools.
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy
1987-01-01
The effect of element size on the solution accuracies of finite-element heat transfer and thermal stress analyses of the space shuttle orbiter was investigated. Several structural performance and resizing (SPAR) thermal models and NASA structural analysis (NASTRAN) structural models were set up for the orbiter wing midspan bay 3. The thermal model was found to be the one that determines the limit of finite-element fineness because of the limitation of the computational core space required for the radiation view factor calculations. The thermal stresses were found to be extremely sensitive to slight variations in the structural temperature distribution. The minimum degree of element fineness required for the thermal model to yield reasonably accurate solutions was established. The radiation view factor computation time was found to be insignificant compared with the total computer time required for the SPAR transient heat transfer analysis.
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often hold a small sample size, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit-level empirical best linear unbiased predictors (EBLUPs) based on plot or grid unit-level models have been studied more thoroughly than area-level EBLUPs, where the modelling occurs at the management unit scale. Area-level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit-level and area-level EBLUPs, using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit-level models were consistently more accurate than area-level EBLUPs, and area-level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (RMSEs) of estimates obtained using area-level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit-level estimates, respectively. Similarly, direct field estimates had RMSEs that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than the RMSEs of area-level EBLUPs. Therefore, area-level models can lead to substantial gains in accuracy compared to direct estimates, and unit-level models lead to very important gains in accuracy compared to area-level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
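The area-level shrinkage at the heart of such estimators can be sketched compactly; the following assumes a Fay-Herriot-type model with known variances, and the numbers are illustrative, not values from the inventory:

```python
def area_level_eblup(direct, synthetic, sigma2_u, var_direct):
    """Area-level (Fay-Herriot-type) EBLUP: shrink the direct field
    estimate toward the model-based synthetic estimate,
        gamma = sigma2_u / (sigma2_u + var_direct),
        EBLUP = gamma * direct + (1 - gamma) * synthetic.
    sigma2_u: between-area model variance; var_direct: sampling
    variance of the direct estimate."""
    gamma = sigma2_u / (sigma2_u + var_direct)
    return gamma * direct + (1.0 - gamma) * synthetic

# Small management unit: a noisy direct estimate gets shrunk strongly
# toward the LiDAR-based synthetic value (all numbers are made up).
print(area_level_eblup(direct=310.0, synthetic=280.0,
                       sigma2_u=400.0, var_direct=1600.0))
```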
Smalheiser, Neil R; McDonagh, Marian S; Yu, Clement; Adams, Clive E; Davis, John M; Yu, Philip S
2015-01-01
Objective: For many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT. Materials and Methods: The LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article. Results: The model achieved an area under the receiver operating characteristic curve of 0.973 and mean squared error of 0.013 on the held out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. A second model not requiring MeSH terms was also created, and performs almost as well. Discussion: Both models accurately rank and predict article RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in Medline may not be identified. Conclusion: Retagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http://arrowsmith.psych.uic.edu/cgi-bin/arrowsmith_uic/RCT_Tagger.cgi. PMID:25656516
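A toy sketch of the general recipe (text features plus a calibrated linear SVM yielding an RCT confidence score); the corpus, labels, and calibration choice below are stand-ins, not the authors' LibSVM setup or feature sets:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

docs = ["randomized controlled trial of drug X versus placebo",
        "retrospective cohort study of disease Y",
        "double-blind randomised trial in 200 patients",
        "case report of a rare surgical complication",
        "participants were randomly assigned to two arms",
        "narrative review of treatment options"]
labels = [1, 0, 1, 0, 1, 0]          # 1 = RCT (toy labels)

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(docs)
clf = CalibratedClassifierCV(LinearSVC(), cv=2).fit(X, labels)  # Platt-style

test = vec.transform(["a randomized trial comparing two interventions"])
print("RCT confidence:", clf.predict_proba(test)[0, 1])
```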
Project M: Scale Model of Lunar Landing Site of Apollo 17: Focus on Lighting Conditions and Analysis
NASA Technical Reports Server (NTRS)
Vanik, Christopher S.; Crain, Timothy P.
2010-01-01
This document captures the research and development of a scale model representation of the Apollo 17 landing site on the moon as part of the NASA INSPIRE program. Several key elements in this model were surface slope characteristics, crater sizes and locations, prominent rocks, and lighting conditions. This model supports development of Autonomous Landing and Hazard Avoidance Technology (ALHAT) and Project M for the GN&C Autonomous Flight Systems Branch. It will help project engineers visualize the landing site, and is housed in the building 16 Navigation Systems Technology Lab. The lead mentor was Dr. Timothy P. Crain. The purpose of this project was to develop an accurate scale representation of the Apollo 17 landing site on the moon. This was done on an 8'2.5"X10'1.375" reduced friction granite table, which can be restored to its previous condition if needed. The first step in this project was to research the best way to model and recreate the Apollo 17 landing site for the mockup. The project required a thorough plan, budget, and schedule, which were presented to the EG6 Branch for build approval. The final phase was to build the model. The project also required thorough research on the Apollo 17 landing site and the topography of the moon. This research was done on the internet and in person with Dean Eppler, a space scientist from JSC KX. This data was used to analyze and calculate the scale of the mockup and the ratio of the sizes of the craters, ridges, etc. The final goal was to effectively communicate project status and demonstrate the multiple advantages of using our model. The conclusion of this project was that the mockup was completed as accurately as possible, and that it successfully enables the Project M specialists to visualize and plan their goal on an accurate three-dimensional surface representation.
Customer-experienced rapid prototyping
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Zhang, Fu; Li, Anbo
2008-12-01
To describe GIS requirements accurately and comprehend them quickly, this article integrates the ideas of QFD (Quality Function Deployment) and UML (Unified Modeling Language), analyzes the deficiencies of the prototype development model, and proposes Customer-Experienced Rapid Prototyping (CE-RP), describing its process and framework in detail from the perspective of the characteristics of Modern-GIS. The CE-RP is mainly composed of Customer Tool-Sets (CTS), Developer Tool-Sets (DTS) and a Barrier-Free Semantic Interpreter (BF-SI), and is performed by two roles: customer and developer. The main purpose of the CE-RP is to produce unified and authorized requirements data models shared between customer and software developer.
Resolved motion rate and resolved acceleration servo-control of wheeled mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, P.F.; Neuman, C.P.; Carnegie-Mellon Univ., Pittsburgh, PA
1989-01-01
Accurate motion control of wheeled mobile robots (WMRs) is required for their application to autonomous, semi-autonomous, and teleoperated tasks. The similarities between WMRs and stationary manipulators suggest that current, successful, model-based manipulator control algorithms may be applied to WMRs. Special characteristics of WMRs, including higher pairs, closed chains, friction, and unactuated and unsensed joints, require innovative modeling methodologies. The WMR modeling challenge has been recently overcome, thus enabling the application of manipulator control algorithms to WMRs. This realization lays the foundation for significant technology transfer from manipulator control to WMR control. We apply two Cartesian-space manipulator control algorithms, resolved motion rate (kinematics-based) and resolved acceleration (dynamics-based) control, to WMR servo-control. We evaluate simulation studies of two exemplary WMRs: Uranus (a three degree-of-freedom WMR constructed at Carnegie Mellon University) and Bicsun-Bicas (a two degree-of-freedom WMR being constructed at Sandia National Laboratories) under the control of these algorithms. Although resolved motion rate servo-control is adequate for the control of Uranus, resolved acceleration servo-control is required for the control of the mechanically simpler Bicsun-Bicas because it exhibits more dynamic coupling and nonlinearities. Successful accurate motion control of these WMRs in simulation is driving current experimental research studies. 18 refs., 7 figs., 5 tabs.
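A minimal sketch of the kinematics-based resolved motion rate idea, mapping a Cartesian error to actuated rates through the Jacobian pseudoinverse; the Jacobian entries and gains below are invented for illustration:

```python
import numpy as np

def resolved_rate_step(J, x_err, kp=1.0):
    """One kinematics-based servo step: actuated rates from a Cartesian
    pose error via the Jacobian pseudoinverse, q_dot = J^+ (kp * x_err)."""
    return np.linalg.pinv(J) @ (kp * x_err)

# Hypothetical 3-DOF WMR Jacobian mapping wheel rates to (vx, vy, omega).
J = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.2, 0.2, 1.0]])
x_err = np.array([0.10, -0.05, 0.02])    # desired-minus-actual body pose
print(resolved_rate_step(J, x_err))      # wheel-rate commands
```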
Some anomalies observed in wind-tunnel tests of a blunt body at transonic and supersonic speeds
NASA Technical Reports Server (NTRS)
Brooks, J. D.
1976-01-01
An investigation of anomalies observed in wind tunnel force tests of a blunt body configuration was conducted at Mach numbers from 0.20 to 1.35 in the Langley 8-foot transonic pressure tunnel and at Mach numbers of 1.50, 1.80, and 2.16 in the Langley Unitary Plan wind tunnel. At a Mach number of 1.35, large variations occurred in axial force coefficient at a given angle of attack. At transonic and low supersonic speeds, the total drag measured in the wind tunnel was much lower than that measured during earlier ballistic range tests. Accurate measurements of total drag for blunt bodies will require the use of models smaller than those tested thus far; however, it appears that accurate forebody drag results can be obtained by using relatively large models. Shock standoff distance is presented from experimental data over the Mach number range from 1.05 to 4.34. Theory accurately predicts the shock standoff distance at Mach numbers up to 1.75.
Cars Thermometry in a Supersonic Combustor for CFD Code Validation
NASA Technical Reports Server (NTRS)
Cutler, A. D.; Danehy, P. M.; Springer, R. R.; DeLoach, R.; Capriotti, D. P.
2002-01-01
An experiment has been conducted to acquire data for the validation of computational fluid dynamics (CFD) codes used in the design of supersonic combustors. The primary measurement technique is coherent anti-Stokes Raman spectroscopy (CARS), although surface pressures and temperatures have also been acquired. Modern design-of-experiment techniques have been used to maximize the quality of the data set (for the given level of effort) and minimize systematic errors. The combustor consists of a diverging duct with a single downstream-angled wall injector. The nominal entrance Mach number is 2, and the enthalpy nominally corresponds to Mach 7 flight. Temperature maps are obtained at several planes in the flow for two cases: in one case the combustor is piloted by injecting fuel upstream of the main injector; in the second it is not. Boundary conditions and uncertainties are adequately characterized. Accurate CFD calculation of the flow will ultimately require accurate modeling of the chemical kinetics and turbulence-chemistry interactions, as well as accurate modeling of the turbulent mixing.
Large eddy simulation modeling of particle-laden flows in complex terrain
NASA Astrophysics Data System (ADS)
Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.
2017-12-01
The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
Dury, Alain Y; Ke, Yuyong; Labrie, Fernand
2016-09-01
A series of steroids present in the brain have been named "neurosteroids" owing to their possible role in central nervous system impairments such as anxiety disorders, depression, premenstrual dysphoric disorder (PMDD), addiction, or even neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. Study of their potential role requires a sensitive and accurate assay of their concentration in the monkey brain, the closest model to the human. We have thus developed a robust, precise and accurate liquid chromatography-tandem mass spectrometry method for the assay of pregnenolone, pregnanolone, epipregnanolone, allopregnanolone, epiallopregnanolone, and androsterone in the cynomolgus monkey brain. The extraction method includes a thorough sample cleanup using protein precipitation and phospholipid removal, followed by hexane liquid-liquid extraction and a Girard T ketone-specific derivatization. This method opens the possibility of investigating the potential implication of these six steroids in the most suitable animal model for neurosteroid-related research. Copyright © 2016 Elsevier Inc. All rights reserved.
Direct Scaling of Leaf-Resolving Biophysical Models from Leaves to Canopies
NASA Astrophysics Data System (ADS)
Bailey, B.; Mahaffee, W.; Hernandez Ochoa, M.
2017-12-01
Recent advances in the development of biophysical models and high-performance computing have enabled rapid increases in the level of detail that can be represented by simulations of plant systems. However, increasingly detailed models typically require increasingly detailed inputs, which can be a challenge to accurately specify. In this work, we explore the use of terrestrial LiDAR scanning data to accurately specify geometric inputs for high-resolution biophysical models that enables direct up-scaling of leaf-level biophysical processes. Terrestrial LiDAR scans generate "clouds" of millions of points that map out the geometric structure of the area of interest. However, points alone are often not particularly useful in generating geometric model inputs, as additional data processing techniques are required to provide necessary information regarding vegetation structure. A new method was developed that directly reconstructs as many leaves as possible that are in view of the LiDAR instrument, and uses a statistical backfilling technique to ensure that the overall leaf area and orientation distribution matches that of the actual vegetation being measured. This detailed structural data is used to provide inputs for leaf-resolving models of radiation, microclimate, evapotranspiration, and photosynthesis. Model complexity is afforded by utilizing graphics processing units (GPUs), which allows for simulations that resolve scales ranging from leaves to canopies. The model system was used to explore how heterogeneity in canopy architecture at various scales affects scaling of biophysical processes from leaves to canopies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Christian Birk; Robinson, Matt; Yasaei, Yasser
Optimal integration of thermal energy storage within commercial building applications requires accurate load predictions. Several methods exist that provide an estimate of a building's future needs. Methods include component-based models and data-driven algorithms. This work implemented a previously untested algorithm for this application, called a Laterally Primed Adaptive Resonance Theory (LAPART) artificial neural network (ANN). The LAPART algorithm provided accurate results over a two-month period where minimal historical data and a small number of input types were available. These results are significant, because common practice has often overlooked the implementation of an ANN. ANNs have often been perceived to be too complex and to require large amounts of data to provide accurate results. The LAPART neural network was implemented in an on-line learning manner. On-line learning refers to the continuous updating of training data as time occurs. For this experiment, training began with a single day and grew to two months of data. This approach provides a platform for immediate implementation that requires minimal time and effort. The results from the LAPART algorithm were compared with statistical regression and a component-based model. The comparison was based on the predictions' linear relationship with the measured data, mean squared error, mean bias error, and cost savings achieved by the respective prediction techniques. The results show that the LAPART algorithm provided a reliable and cost-effective means to predict the building load for the next day.
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
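The iteration can be sketched for a scalar design variable; the toy "fine" and "coarse" models below stand in for the FDFD and transmission-line models, and the update shown is the aggressive-space-mapping form (a common variant, not necessarily the exact scheme of the paper):

```python
def aggressive_space_mapping(fine, coarse_inverse, x0, target, tol=1e-6, max_iter=20):
    """Minimal aggressive-space-mapping loop for a scalar design variable.
    fine(x): expensive model response (stands in for an FDFD solve).
    coarse_inverse(y): coarse-model design producing response y (cheap,
    e.g., from transmission-line theory). Each iteration runs the fine
    model once, extracts the coarse parameter that reproduces its
    response, and shifts the design by the mismatch."""
    x = x0
    for _ in range(max_iter):
        y = fine(x)
        if abs(y - target) < tol:
            break
        x_extracted = coarse_inverse(y)       # parameter extraction
        x_coarse_opt = coarse_inverse(target) # coarse-optimal design
        x = x + (x_coarse_opt - x_extracted)  # space-mapping update
    return x

# Toy models: the fine model is a shifted/scaled version of the coarse one.
fine = lambda x: 2.0 * (x - 0.3)
coarse_inverse = lambda y: y / 2.1 + 0.25     # inverse of an approximate model
print(aggressive_space_mapping(fine, coarse_inverse, x0=1.0, target=0.8))
```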
Component model reduction via the projection and assembly method
NASA Technical Reports Server (NTRS)
Bernard, Douglas E.
1989-01-01
The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system comprised of several components. A low-order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced-order component models to meet system-level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full-order system model, performing model reduction at the system level using system-level requirements, and then projecting the desired modes onto the components for component-level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
Matching mice to malignancy: molecular subgroups and models of medulloblastoma
Lau, Jasmine; Schmidt, Christin; Markant, Shirley L.; Taylor, Michael D.; Wechsler-Reya, Robert J.
2012-01-01
Introduction Medulloblastoma, the largest group of embryonal brain tumors, has historically been classified into five variants based on histopathology. More recently, epigenetic and transcriptional analyses of primary tumors have sub-classified medulloblastoma into four to six subgroups, most of which are incongruous with histopathological classification. Discussion Improved stratification is required for prognosis and development of targeted treatment strategies, to maximize cure and minimize adverse effects. Several mouse models of medulloblastoma have contributed both to an improved understanding of progression and to developmental therapeutics. In this review, we summarize the classification of human medulloblastoma subtypes based on histopathology and molecular features. We describe existing genetically engineered mouse models, compare these to human disease, and discuss the utility of mouse models for developmental therapeutics. Just as accurate knowledge of the correct molecular subtype of medulloblastoma is critical to the development of targeted therapy in patients, we propose that accurate modeling of each subtype of medulloblastoma in mice will be necessary for preclinical evaluation and optimization of those targeted therapies. PMID:22315164
Field measurement of moisture-buffering model inputs for residential buildings
Woods, Jason; Winkler, Jon
2016-02-05
Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang
2018-01-01
A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies. PMID:29466394
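A compact sketch of the clustering-plus-RBF idea (K-means centers, Gaussian bases, least-squares output weights); the data, width, and cluster count are assumptions for illustration, not the paper's tuned hyperparameters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative data: three water-quality features -> dissolved oxygen signal.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]         # stand-in DO content

k = 10                                            # assumed cluster count
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = 0.3                                       # assumed common RBF width

def rbf_design(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma**2))         # Gaussian basis activations

w, *_ = np.linalg.lstsq(rbf_design(X), y, rcond=None)   # output weights
rmse = np.sqrt(np.mean((rbf_design(X) @ w - y) ** 2))
print("train RMSE:", rmse)
```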
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Abhishek
One of the essential requirements of biomolecular modeling is an accurate description of water as a solvent. The challenge is to make this description computationally facile - reasonably fast, simple, robust, and easy to incorporate into existing software packages - yet accurate. The most rigorous procedure to model the effect of aqueous solvent is to explicitly model every water molecule in the system. For many practical applications, this approach is computationally too intense, as the number of required water atoms is on average at least one order of magnitude larger than the number of atoms of the molecule of interest. Implicit solvent models, in which solvent molecules are replaced by a continuous dielectric, have become a popular alternative to explicit solvent methods. However, implicit solvation models often lack microscopic details that are crucial for accuracy. One effect currently missing from popular implicit models is the so-called charge hydration asymmetry (CHA) - the asymmetric response of water to the sign of the solute charge - which manifests as a characteristic, strong dependence of solvation free energies on the sign of the solute charge. Here, we incorporate this missing effect into the continuum solvation framework, first via the conceptually simplest Born equation and then in the generalized Born model. We identify the key electric multipole moments of model water molecules critical for the various degrees of the CHA effect observed in studies based on molecular dynamics simulations using different rigid water models. We then use this insight to incorporate the effect first into the Born model and then into the generalized Born model. The proposed framework significantly improves the accuracy of hydration free energy estimates, tested on a comprehensive set of varied molecular solutes: monovalent and divalent ions, small drug-like molecules, charged and uncharged amino acid dipeptides, and small proteins. Finally, we develop a methodology to resolve the unacceptably large uncertainty that stems from a variety of fundamental and technical difficulties in experimentally quantifying CHA from charged solutes. Using the proposed corrections in the continuum framework, we untangle the charge-asymmetric response of water from its symmetric response, and circumvent the difficulties by extracting an accurate estimate of the propensity of water to cause CHA from accurate experimental hydration free energies of neutral polar molecules. We show that the asymmetry in water's response is strong, about 50% of the symmetric response.
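For concreteness, a sketch of a Born-level CHA correction of the kind described above; the sign-dependent radius shift delta is a placeholder, not the fitted CHA parameter from this work:

```python
import numpy as np

def born_energy(q, R, eps=78.4):
    """Born hydration free energy (kcal/mol) of charge q (in units of e)
    in a spherical cavity of radius R (Angstrom):
        dG = -166 * (1 - 1/eps) * q^2 / R,
    where 166 kcal*A/(mol*e^2) collects the physical constants."""
    return -166.0 * (1.0 - 1.0 / eps) * q**2 / R

def born_energy_cha(q, R, delta=0.5, eps=78.4):
    """Charge-asymmetric variant: shift the effective radius with the
    sign of the charge, R_eff = R + delta*sign(q), so that anions of the
    same size hydrate more favorably. delta is an assumed placeholder,
    not the fitted CHA parameter from the work above."""
    return born_energy(q, R + delta * np.sign(q), eps)

print(born_energy_cha(+1, 2.0), born_energy_cha(-1, 2.0))
```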
Revisiting low-fidelity two-fluid models for gas-solids transport
NASA Astrophysics Data System (ADS)
Adeleke, Najeem; Adewumi, Michael; Ityokumbul, Thaddeus
2016-08-01
Two-phase gas-solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors, and food processing units. Systems automation and real-time process optimization stand to benefit a great deal from the availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas-solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of the solids normal stress through the empirical formula for the modulus of elasticity. A new set of Roe-Pike averages is presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux limiter. The model is evaluated by the prediction of experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately, even under flow regimes involving counter-current flow.
NASA Technical Reports Server (NTRS)
Lee, J.
1994-01-01
A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition based numerical technique has been coupled with three low-Reynolds-number kappa-epsilon models for analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds-number kappa-epsilon models are easier to implement than the models with wall functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow. Therefore, the limitations imposed by the near-wall characteristics have been carefully resolved. The convergence characteristics of a particular model using a given numerical technique are also an important, but most often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required since the solutions obtained with the baseline turbulence models are in only reasonable agreement with the experimental data for the viscous interaction problems.
Energy-based dosimetry of low-energy, photon-emitting brachytherapy sources
NASA Astrophysics Data System (ADS)
Malin, Martha J.
Model-based dose calculation algorithms (MBDCAs) for low-energy, photon-emitting brachytherapy sources have advanced to the point where the algorithms may be used in clinical practice. Before these algorithms can be used, a methodology must be established to verify the accuracy of the source models used by the algorithms. Additionally, the source strength metric for these algorithms must be established. This work explored the feasibility of verifying the source models used by MBDCAs by measuring the differential photon fluence emitted from the encapsulation of the source. The measured fluence could be compared to that modeled by the algorithm to validate the source model. This work examined how the differential photon fluence varied with position and angle of emission from the source, and the resolution that these measurements would require for dose computations to be accurate to within 1.5%. Both the spatial and angular resolution requirements were determined. The techniques used to determine the resolution required for measurements of the differential photon fluence were applied to determine why dose-rate constants determined using a spectroscopic technique disagreed with those computed using Monte Carlo techniques. The discrepancy between the two techniques had been previously published, but the cause of the discrepancy was not known. This work determined the impact that some of the assumptions used by the spectroscopic technique had on the accuracy of the calculation. The assumption of isotropic emission was found to cause the largest discrepancy in the spectroscopic dose-rate constant. Finally, this work improved the instrumentation used to measure the rate at which energy leaves the encapsulation of a brachytherapy source. This quantity is called emitted power (EP), and is presented as a possible source strength metric for MBDCAs. A calorimeter that measured EP was designed and built. The theoretical framework that the calorimeter relied upon to measure EP was established. Four clinically relevant 125I brachytherapy sources were measured with the instrument. The accuracy of the measured EP was compared to an air-kerma strength-derived EP to test the accuracy of the instrument. The instrument was accurate to within 10%, with three out of the four source measurements accurate to within 4%.
Progressive Classification Using Support Vector Machines
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Kocurek, Michael
2009-01-01
An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
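The two-pass scheme lends itself to a compact sketch. The snippet below uses scikit-learn classifiers as stand-ins for the fast and slow SVMs; the synthetic dataset, the kernels, the margin-based confidence index, and the reclassification budget are illustrative assumptions rather than details of the reported system:

```python
# A minimal sketch of progressive two-SVM classification; the data and the
# margin-based confidence index are assumptions, not the reported system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=0)
X_new, _ = make_classification(n_samples=100, n_features=10, random_state=1)

fast_svm = LinearSVC().fit(X_train, y_train)        # coarse approximation
slow_svm = SVC(kernel="rbf").fit(X_train, y_train)  # precise, more support vectors

# Pass 1: baseline approximate classification of the new data.
labels = fast_svm.predict(X_new)

# Confidence index: distance from the fast SVM's decision boundary; items
# near the boundary are the most likely to have been misclassified.
confidence = np.abs(fast_svm.decision_function(X_new))

# Pass 2: progressively reclassify, least-confident items first. The loop
# can be halted at any time, trading accuracy for computation.
budget = 40                                          # slow evaluations allowed
for i in np.argsort(confidence)[:budget]:
    labels[i] = slow_svm.predict(X_new[i:i+1])[0]
```

Halting the loop early yields exactly the progressive behavior described above: the baseline labels are available immediately, and accuracy improves as the budget grows.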
Experimental evaluation of a recursive model identification technique for type 1 diabetes.
Finan, Daniel A; Doyle, Francis J; Palerm, Cesar C; Bevier, Wendy C; Zisser, Howard C; Jovanovic, Lois; Seborg, Dale E
2009-09-01
A model-based controller for an artificial beta cell requires an accurate model of the glucose-insulin dynamics in type 1 diabetes subjects. To ensure the robustness of the controller for changing conditions (e.g., changes in insulin sensitivity due to illnesses, changes in exercise habits, or changes in stress levels), the model should be able to adapt to the new conditions by means of a recursive parameter estimation technique. Such an adaptive strategy will ensure that the most accurate model is used for the current conditions, and thus the most accurate model predictions are used in model-based control calculations. In a retrospective analysis, empirical dynamic autoregressive exogenous input (ARX) models were identified from glucose-insulin data for nine type 1 diabetes subjects in ambulatory conditions. Data sets consisted of continuous (5-minute) glucose concentration measurements obtained from a continuous glucose monitor, basal insulin infusion rates and times and amounts of insulin boluses obtained from the subjects' insulin pumps, and subject-reported estimates of the times and carbohydrate content of meals. Two identification techniques were investigated: nonrecursive (batch) methods and recursive methods. Batch models were identified from a set of training data, whereas recursively identified models were updated at each sampling instant. Both types of models were used to make predictions of new test data. For the purpose of comparison, model predictions were compared to zero-order hold (ZOH) predictions, which were made by simply holding the current glucose value constant for p steps into the future, where p is the prediction horizon. Thus, the ZOH predictions are model free and provide a base case for the prediction metrics used to quantify the accuracy of the model predictions. In theory, recursive identification techniques are needed only when there are changing conditions in the subject that require model adaptation. Thus, the identification and validation techniques were performed with both "normal" data and data collected during conditions of reduced insulin sensitivity. The latter were achieved by having the subjects self-administer a medication, prednisone, for 3 consecutive days. The recursive models were allowed to adapt to this condition of reduced insulin sensitivity, while the batch models were only identified from normal data. Data from nine type 1 diabetes subjects in ambulatory conditions were analyzed; six of these subjects also participated in the prednisone portion of the study. For normal test data, the batch ARX models produced 30-, 45-, and 60-minute-ahead predictions that had average root mean square error (RMSE) values of 26, 34, and 40 mg/dl, respectively. For test data characterized by reduced insulin sensitivity, the batch ARX models produced 30-, 60-, and 90-minute-ahead predictions with average RMSE values of 27, 46, and 59 mg/dl, respectively; the recursive ARX models demonstrated similar performance with corresponding values of 27, 45, and 61 mg/dl, respectively. The identified ARX models (batch and recursive) produced more accurate predictions than the model-free ZOH predictions, but only marginally. For test data characterized by reduced insulin sensitivity, RMSE values for the predictions of the batch ARX models were 9, 5, and 5% more accurate than the ZOH predictions for prediction horizons of 30, 60, and 90 minutes, respectively.
In terms of RMSE values, the 30-, 60-, and 90-minute predictions of the recursive models were more accurate than the ZOH predictions, by 10, 5, and 2%, respectively. In this experimental study, the recursively identified ARX models resulted in predictions of test data that were similar, but not superior, to the batch models. Even for the test data characteristic of reduced insulin sensitivity, the batch and recursive models demonstrated similar prediction accuracy. The predictions of the identified ARX models were only marginally more accurate than the model-free ZOH predictions. Given the simplicity of the ARX models and the computational ease with which they are identified, however, even modest improvements may justify the use of these models in a model-based controller for an artificial beta cell. 2009 Diabetes Technology Society.
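To make the batch ARX versus ZOH comparison concrete, here is a toy sketch on simulated data; the signals, model orders, and horizon are invented stand-ins for the glucose-insulin data, not the study's models:

```python
# A toy illustration of batch ARX identification and p-step-ahead prediction
# versus the zero-order hold (ZOH) baseline; all signals are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = rng.normal(size=n)                    # stand-in exogenous input (insulin/meals)
y = np.zeros(n)                           # stand-in glucose signal
for k in range(2, n):
    y[k] = 1.6 * y[k-1] - 0.64 * y[k-2] + 0.5 * u[k-1] + rng.normal(scale=0.1)

# Batch least-squares fit of y[k] ~ a1*y[k-1] + a2*y[k-2] + b1*u[k-1].
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]

p = 6                                     # horizon, e.g. 30 min at 5-min samples
def arx_predict(k):
    yp = [y[k-1], y[k]]                   # measurements up to time k
    for j in range(1, p + 1):             # roll the one-step model forward
        yp.append(theta @ np.array([yp[-1], yp[-2], u[k+j-1]]))
    return yp[-1]                         # prediction of y[k+p]

ks = np.arange(2, n - p)
arx = np.array([arx_predict(k) for k in ks])
zoh = y[ks]                               # ZOH: hold the current value for p steps
truth = y[ks + p]
rmse = lambda e: np.sqrt(np.mean(e ** 2))
print(f"ARX RMSE {rmse(arx - truth):.3f} vs ZOH RMSE {rmse(zoh - truth):.3f}")
```

A recursive variant would update theta at every sample (e.g., recursive least squares with a forgetting factor) rather than fitting once to training data, which is what allows adaptation to changing insulin sensitivity.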
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies across a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% of the true 96-h TWA and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Nevertheless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained from two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to the reduced frequency of extreme values, especially given uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
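The contrast between the two sampling modes can be shown with a toy calculation; the concentrations, half-life, and sample times below are invented, and in practice the size and direction of the discrete-sampling error depend entirely on how sample times align with the pulse:

```python
# Toy comparison of discrete vs. integrative estimates of a 96-h TWA for a
# single first-order decay pulse; all parameter values are invented.
import numpy as np

C0, half_life_d = 10.0, 0.5
k = np.log(2) / half_life_d                  # first-order decay rate (1/d)
t = np.linspace(0.0, 4.0, 10_000)            # 96 h = 4 d
C = C0 * np.exp(-k * t)

true_twa = np.trapz(C, t) / 4.0              # true 96-h time-weighted average

# Discrete point sampling: average a few instantaneous grab samples.
for n_samples in (2, 7):
    ts = np.linspace(0.0, 4.0, n_samples)
    est = np.mean(C0 * np.exp(-k * ts))
    print(f"{n_samples} discrete samples: {est / true_twa:.0%} of true TWA")

# Integrative sampling: each sample is itself a time-average over its
# interval, so two samples recover the TWA exactly for this ideal sampler.
edges = np.array([0.0, 2.0, 4.0])
ints = [np.trapz(C[(t >= a) & (t <= b)], t[(t >= a) & (t <= b)]) / (b - a)
        for a, b in zip(edges[:-1], edges[1:])]
print(f"2 integrative samples: {np.mean(ints) / true_twa:.0%} of true TWA")
```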
Use of the DISST Model to Estimate the HOMA and Matsuda Indexes Using Only a Basal Insulin Assay
Docherty, Paul D.; Chase, J. Geoffrey
2014-01-01
Background: It is hypothesized that early detection of reduced insulin sensitivity (SI) could prompt intervention that may reduce the considerable financial strain type 2 diabetes mellitus (T2DM) places on global health care. Reduction of the cost of already inexpensive SI metrics such as the Matsuda and HOMA indexes would enable more widespread, economically feasible use of these metrics for screening. The goal of this research was to determine a means of reducing the number of insulin samples, and therefore the cost, required to provide an accurate Matsuda Index value. Method: The Dynamic Insulin Sensitivity and Secretion Test (DISST) model was used with the glucose and basal insulin measurements from an Oral Glucose Tolerance Test (OGTT) to predict patient insulin responses. The insulin response to the OGTT was determined via population-based regression analysis that incorporated the 60-minute glucose and basal insulin values. Results: The proposed method derived accurate and precise Matsuda Indices as compared to the fully sampled Matsuda (R = .95) using only the basal insulin assay data and 4 glucose measurements. Using a model employing the basal insulin also allows for determination of the 1-day HOMA value. Conclusion: The DISST model was successfully modified to allow for the accurate prediction of an individual’s insulin response to the OGTT. In turn, this enabled highly accurate and precise estimation of a Matsuda Index using only the glucose and basal insulin assays. As insulin assays account for the majority of the cost of the Matsuda Index, this model offers a significant reduction in assay cost. PMID:24876431
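For context, the two indexes involved are commonly written as follows (glucose in mg/dl, insulin in μU/ml; these are the standard textbook forms, not expressions taken from this paper):

```latex
\mathrm{HOMA\text{-}IR} = \frac{G_0 \, I_0}{405},
\qquad
\mathrm{Matsuda} = \frac{10{,}000}{\sqrt{G_0 \, I_0 \, \bar{G} \, \bar{I}}}
```

where G_0 and I_0 are the fasting (basal) glucose and insulin values and Ḡ, Ī are the mean values over the OGTT. Since every insulin term beyond I_0 can be supplied by the model-predicted insulin response, only the single basal insulin assay remains, which is the source of the cost reduction.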
Su, Yewang; Liu, Zhuangjian; Xu, Lizhi
2016-04-20
Recently developed concepts for 3D, organ-mounted electronics for cardiac applications require a universal and easy-to-use mechanical model to calculate the average pressure associated with operation of the device, which is crucial for evaluation of design efficacy and optimization. This work proposes a simple, accurate, easy-to-use, and universal model to quantify the average pressure for organs of arbitrary shape. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Analytical prediction of digital signal crosstalk of FCC
NASA Technical Reports Server (NTRS)
Belleisle, A. P.
1972-01-01
The results are presented of a study effort whose aim was the development of an accurate means of analyzing and predicting signal crosstalk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, a minimum set of parameter measurements required for the application of the model is presented. Comparisons between crosstalk predicted by this model and actual measured crosstalk are shown for a six-conductor ribbon cable.
Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.
2007-01-01
The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268
3-D rigid body tracking using vision and depth sensors.
Gedik, O Serdar; Alatan, A Aydın
2013-10-01
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, whereas trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
NASA Technical Reports Server (NTRS)
Min, J. B.; Reddy, T. S. R.; Bakhle, M. A.; Coroneos, R. M.; Stefko, G. L.; Provenza, A. J.; Duffy, K. P.
2018-01-01
Accurate prediction of the blade vibration stress is required to determine the overall durability of a fan blade design under Boundary Layer Ingestion (BLI) distorted flow environments. The traditional single-blade modeling technique is incapable of accurately representing the entire rotor blade system subject to complex dynamic loading behaviors and vibrations in distorted flow conditions. A particular objective of our work was to develop a high-fidelity full-rotor aeromechanics analysis capability for a system subjected to a distorted inlet flow by applying a cyclic symmetry finite element modeling methodology. This reduction modeling method allows computationally very efficient analysis using a small periodic section of the full rotor blade system. Experimental testing in the 8-foot by 6-foot Supersonic Wind Tunnel facility at NASA Glenn Research Center was also carried out for the system designated as the Boundary Layer Ingesting Inlet/Distortion-Tolerant Fan (BLI2DTF) technology development. The results obtained from the present numerical modeling technique were evaluated against those of the wind tunnel experimental test, toward establishing a computationally efficient aeromechanics analysis modeling tool facilitating analyses of full rotor blade systems subjected to distorted inlet flow conditions. Fairly good correlations were achieved; hence, the computational modeling techniques were fully demonstrated. The analysis results showed that the safety margin requirement set in the BLI2DTF fan blade design provided a sufficient margin with respect to the operating speed range.
Wilson, Kenneth L; Doswell, Jayfus T; Fashola, Olatokunbo S; Debeatham, Wayne; Darko, Nii; Walker, Travelyan M; Danner, Omar K; Matthews, Leslie R; Weaver, William L
2013-09-01
The purpose of this study was to extrapolate potential roles of augmented reality goggles as a clinical support tool assisting in the reduction of preventable causes of death on the battlefield. Our pilot study was designed to improve medic performance in accurately placing a large-bore catheter to release tension pneumothorax (prehospital setting) while using augmented reality goggles. Thirty-four preclinical medical students recruited from Morehouse School of Medicine performed needle decompressions on human cadaver models after hearing a brief training lecture on tension pneumothorax management. Clinical vignettes identifying cadavers as having life-threatening tension pneumothoraces as a consequence of improvised explosive device attacks were used. The study group (n = 13) performed needle decompression using augmented reality goggles, whereas the control group (n = 21) relied solely on memory from the lecture. The two groups were compared according to their ability to accurately complete the steps required to decompress a tension pneumothorax. The medical students using augmented reality goggle support were able to treat the tension pneumothorax on the human cadaver models more accurately than the students relying on their memory (p < 0.008). Although the augmented reality group required more time to complete the needle decompression intervention (p = 0.0684), this difference did not reach statistical significance. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
NASA Technical Reports Server (NTRS)
Klassen, Steve; Bugbee, Bruce
2005-01-01
Accurate shortwave radiation data is critical to evapotranspiration (ET) models used for developing irrigation schedules to optimize crop production while saving water, minimizing fertilizer, herbicide, and pesticide applications, reducing soil erosion, and protecting surface and ground water quality. Low cost silicon cell pyranometers have proven to be sufficiently accurate and robust for widespread use in agricultural applications under unobstructed daylight conditions. More expensive thermopile pyranometers are required for use as calibration standards and measurements under light with unique spectral properties (electric lights, under vegetation, in greenhouses and growth chambers). Routine cleaning, leveling, and annual calibration checks will help to ensure the integrity of long-term data.
Modified energy cascade model adapted for a multicrop Lunar greenhouse prototype
NASA Astrophysics Data System (ADS)
Boscheri, G.; Kacira, M.; Patterson, L.; Giacomelli, G.; Sadler, P.; Furfaro, R.; Lobascio, C.; Lamantea, M.; Grizzaffi, L.
2012-10-01
Models are required to accurately predict mass and energy balances in a bioregenerative life support system. A modified energy cascade model was used to predict outputs of a multi-crop (tomatoes, potatoes, lettuce and strawberries) Lunar greenhouse prototype. The model performance was evaluated against measured data obtained from several system closure experiments. The model predictions corresponded well to those obtained from experimental measurements for the overall system closure test period (five months), especially for biomass produced (0.7% underestimated), water consumption (0.3% overestimated) and condensate production (0.5% overestimated). However, the model was less accurate when the results were compared with data obtained from a shorter experimental time period, with 31%, 48% and 51% error for biomass uptake, water consumption, and condensate production, respectively, which were obtained under more complex crop production patterns (e.g. tall tomato plants covering part of the lettuce production zones). These results, together with a model sensitivity analysis highlighted the necessity of periodic characterization of the environmental parameters (e.g. light levels, air leakage) in the Lunar greenhouse.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Jason; Winkler, Jon
Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
Nutaro, James; Kuruganti, Teja
2017-02-24
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
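For the free-space reference value mentioned above, the standard relation (a textbook formula, not specific to this paper) for a signal of wavelength lambda at distance d is

```latex
\mathrm{PL}_{\mathrm{fs}}(d) = 20 \log_{10}\!\left(\frac{4\pi d}{\lambda}\right) \ \mathrm{dB}
```

so a coarse-grid finite difference time domain or transmission line matrix run can be sanity-checked by comparing its decay with distance against this curve.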
Active machine learning-driven experimentation to determine compound effects on protein patterns.
Naik, Armaghan W; Kangas, Joshua D; Sullivan, Devin P; Murphy, Robert F
2016-02-03
High throughput screening determines the effects of many conditions on a given biological target. Currently, estimating the effects of those conditions on other targets requires either strong modeling assumptions (e.g., similarities among targets) or separate screens. Ideally, data-driven experimentation could be used to learn accurate models for many conditions and targets without doing all possible experiments. We have previously described an active machine learning algorithm that can iteratively choose small sets of experiments to learn models of multiple effects. We now show that, with no prior knowledge and with liquid handling robotics and automated microscopy under its control, this learner accurately learned the effects of 48 chemical compounds on the subcellular localization of 48 proteins while performing only 29% of all possible experiments. The results represent the first practical demonstration of the utility of active learning-driven biological experimentation in which the set of possible phenotypes is unknown in advance.
The Lord of the Rings - Deep Learning Craters on the Moon and Other Bodies
NASA Astrophysics Data System (ADS)
Silburt, Ari; Ali-Dib, Mohamad; Zhu, Chenchong; Jackson, Alan; Valencia, Diana; Kissin, Yevgeni; Tamayo, Daniel; Menou, Kristen
2018-01-01
Crater detection has traditionally been done via manual inspection of images, leading to statistically significant disagreements between scientists for the Moon's crater distribution. In addition, there are millions of uncategorized craters on the Moon and other Solar System bodies that will never be classified by humans due to the time required to manually detect craters. I will show that a deep learning model trained on the near-side of the Moon can successfully reproduce the crater distribution on the far-side, as well as detect thousands of small, new craters that were previously uncharacterized. In addition, this Moon-trained model can be transferred to accurately classify craters on Mercury. It is therefore likely that this model can be extended to classify craters on all Solar System bodies with Digital Elevation Maps. This will facilitate, for the first time ever, a systematic, accurate, and reproducible study of the crater records throughout the Solar System.
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions, and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.
Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I
2014-06-01
Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.
Aircraft Flight Modeling During the Optimization of Gas Turbine Engine Working Process
NASA Astrophysics Data System (ADS)
Tkachenko, A. Yu; Kuz'michev, V. S.; Krupenich, I. N.
2018-01-01
The article describes a method for simulating the flight of the aircraft along a predetermined path, establishing a functional connection between the parameters of the working process of gas turbine engine and the efficiency criteria of the aircraft. This connection is necessary for solving the optimization tasks of the conceptual design stage of the engine according to the systems approach. Engine thrust level, in turn, influences the operation of aircraft, thus making accurate simulation of the aircraft behavior during flight necessary for obtaining the correct solution. The described mathematical model of aircraft flight provides the functional connection between the airframe characteristics, working process of gas turbine engines (propulsion system), ambient and flight conditions and flight profile features. This model provides accurate results of flight simulation and the resulting aircraft efficiency criteria, required for optimization of working process and control function of a gas turbine engine.
Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A
2015-01-01
Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
Epidemic predictions in an imperfect world: modelling disease spread with partial data
Dawson, Peter M.; Werkman, Marleen; Brooks-Pollock, Ellen; Tildesley, Michael J.
2015-01-01
‘Big-data’ epidemic models are being increasingly used to influence government policy to help with control and eradication of infectious diseases. In the case of livestock, detailed movement records have been used to parametrize realistic transmission models. While livestock movement data are readily available in the UK and other countries in the EU, in many countries around the world, such detailed data are not available. By using a comprehensive database of the UK cattle trade network, we implement various sampling strategies to determine the quantity of network data required to give accurate epidemiological predictions. It is found that by targeting nodes with the highest number of movements, accurate predictions on the size and spatial spread of epidemics can be made. This work has implications for countries such as the USA, where access to data is limited, and developing countries that may lack the resources to collect a full dataset on livestock movements. PMID:25948687
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistic formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application, not least due to the requirements of solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very quickly. This introduces a modeling error, which is quantified probabilistically so that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
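The workflow can be sketched schematically: sample with a cheap surrogate in the forward step and fold the quantified modeling error into the likelihood covariance. In the snippet below, the "physics", the surrogate error, and the noise levels are invented stand-ins for the full-waveform GPR simulator and the trained network:

```python
# Schematic Metropolis sampler using a fast surrogate forward model with a
# probabilistically quantified modeling-error term; everything is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def forward_true(m):                     # expensive forward model (stand-in)
    return np.array([m[0] + m[1] ** 2, m[0] * m[1]])

def forward_surrogate(m):                # fast approximation, e.g. a trained net
    return forward_true(m) + 0.01 * np.sin(10.0 * m)

d_obs = forward_true(np.array([1.0, 0.5])) + rng.normal(scale=0.05, size=2)
sigma2 = 0.05 ** 2 + 0.01 ** 2           # data noise plus quantified model error

def log_like(m):
    r = d_obs - forward_surrogate(m)
    return -0.5 * np.sum(r ** 2) / sigma2

m, chain = np.zeros(2), []
for it in range(20_000):
    m_prop = m + 0.05 * rng.normal(size=2)            # random-walk proposal
    if np.log(rng.uniform()) < log_like(m_prop) - log_like(m):
        m = m_prop                                    # Metropolis accept
    chain.append(m.copy())
print("posterior mean:", np.mean(chain[5_000:], axis=0))
```

The design point is that every likelihood evaluation calls only the cheap surrogate; the inflated covariance is what keeps the posterior honest about the surrogate's imperfection.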
NASA Technical Reports Server (NTRS)
Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.
2015-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.
Evaluating the sensitivity of agricultural model performance to different climate inputs
Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.
2017-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985
Modeling the Space Radiation Environment
NASA Technical Reports Server (NTRS)
Xapsos, Michael A.
2006-01-01
There has been a renaissance of interest in space radiation environment modeling. This has been fueled by the growing need to replace the longtime standard AP-8 and AE-8 trapped particle models, the interplanetary exploration initiative, modern satellite instrumentation that has led to unprecedented measurement accuracy, and the pervasive use of Commercial Off-The-Shelf (COTS) microelectronics that require more accurate predictive capabilities. The objective of this viewgraph presentation was to provide a basic understanding of the components of the space radiation environment and their variations, review traditional radiation effects application models, and present recent developments.
Dynamics Simulation Model for Space Tethers
NASA Technical Reports Server (NTRS)
Levin, E. M.; Pearson, J.; Oldson, J. C.
2006-01-01
This document describes the development of an accurate model for the dynamics of the Momentum Exchange Electrodynamic Reboost (MXER) system. The MXER is a rotating tether about 100-km long in elliptical Earth orbit designed to catch payloads in low Earth orbit and throw them to geosynchronous orbit or to Earth escape. To ensure successful rendezvous between the MXER tip catcher and a payload, a high-fidelity model of the system dynamics is required. The model developed here quantifies the major environmental perturbations, and can predict the MXER tip position to within meters over one orbit.
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and low-complexity surrogate systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas, including physical and biological processes, from climate modeling to systems biology.
Real-time, haptics-enabled simulator for probing ex vivo liver tissue.
Lister, Kevin; Gao, Zhan; Desai, Jaydev P
2009-01-01
The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore, a real-time, haptics-enabled simulator for probing of soft tissue has been developed that utilizes preprocessed finite element data (derived from an accurate constitutive model of the soft tissue, obtained from carefully collected experimental data) to accurately replicate the probing task in real time.
Solar EUV irradiance for space weather applications
NASA Astrophysics Data System (ADS)
Viereck, R. A.
2015-12-01
Solar EUV irradiance is an important driver of space weather models. Large changes in EUV and x-ray irradiances create large variability in the ionosphere and thermosphere. Proxies, such as the F10.7 cm radio flux, have provided reasonable estimates of the EUV flux, but as space weather models become more accurate and the demands of the customers become more stringent, proxies are no longer adequate. Furthermore, proxies are often provided only on a daily basis, while shorter time scales are becoming important. There is also a growing need for multi-day forecasts of solar EUV irradiance to drive space weather forecast models. In this presentation we will describe the needs and requirements for solar EUV irradiance information from the space weather modeler's perspective. We will then translate these requirements into solar observational requirements such as spectral resolution and irradiance accuracy. We will also describe the activities at NOAA to provide long-term solar EUV irradiance observations and derived products that are needed for real-time space weather modeling.
Closed Loop System Identification with Genetic Algorithms
NASA Technical Reports Server (NTRS)
Whorton, Mark S.
2004-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identification is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance.
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, a maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Statistical Indexes for Monitoring Item Behavior under Computer Adaptive Testing Environment.
ERIC Educational Resources Information Center
Zhu, Renbang; Yu, Feng; Liu, Su
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, Paulo Cesar
2016-09-01
Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and detailed understanding of building characteristics. Accurate modeling of the building thermal response and material properties for thermally massive walls or advanced materials like phase change materials (PCMs) are critically important.
Production rates for crews using hand tools on firelines
Lisa Haven; T. Parkin Hunter; Theodore G. Storey
1982-01-01
Reported rates at which hand crews construct firelines can vary widely because of differences in fuels, fire and measurement conditions, and fuel resistance-to-control classification schemes. Real-time fire dispatching and fire simulation planning models, however, require accurate estimates of hand crew productivity. Errors in estimating rate of fireline production...
Inferring landscape effects on gene flow: A new model selection framework
A. J. Shirk; D. O. Wallin; S. A. Cushman; C. G. Rice; K. I. Warheit
2010-01-01
Populations in fragmented landscapes experience reduced gene flow, lose genetic diversity over time and ultimately face greater extinction risk. Improving connectivity in fragmented landscapes is now a major focus of conservation biology. Designing effective wildlife corridors for this purpose, however, requires an accurate understanding of how landscapes shape gene...
Suicide Surveillance in the U.S. Military?Reporting and Classification Biases in Rate Calculations
ERIC Educational Resources Information Center
Carr, Joel R.; Hoge, Charles W.; Gardner, John; Potter, Robert
2004-01-01
The military has a well-defined population with suicide prevention programs that have been recognized as possible models for civilian suicide prevention efforts. Monitoring prevention programs requires accurate reporting. In civilian settings, several studies have confirmed problems in the reporting and classification of suicides. This analysis…
Measuring and Modeling Change in Examinee Effort on Low-Stakes Tests across Testing Occasions
ERIC Educational Resources Information Center
Sessoms, John; Finney, Sara J.
2015-01-01
Because schools worldwide use low-stakes tests to make important decisions, value-added indices computed from test scores must accurately reflect student learning, which requires equal test-taking effort across testing occasions. Evaluating change in effort assumes effort is measured equivalently across occasions. We evaluated the longitudinal…
USDA-ARS?s Scientific Manuscript database
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of watershed-scale nutrient load estimates...
Toward New Data and Information Management Solutions for Data-Intensive Ecological Research
ERIC Educational Resources Information Center
Laney, Christine Marie
2013-01-01
Ecosystem health is deteriorating in many parts of the world due to direct and indirect anthropogenic pressures. Generating accurate, useful, and impactful models of past, current, and future states of ecosystem structure and function is a complex endeavor that often requires vast amounts of data from multiple sources and knowledge from…
Adjusting slash pine growth and yield for silvicultural treatments
Stephen R. Logan; Barry D. Shiver
2006-01-01
With intensive silvicultural treatments such as fertilization and competition control now commonplace in today's slash pine (Pinus elliottii Engelm.) plantations, a method to adjust current growth and yield models is required to accurately account for yield increases due to these practices. Some commonly used ad-hoc methods, such as raising site...
Precise Determination of the Zero-Gravity Surface Figure of a Mirror without Gravity-Sag Modeling
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.; Lam, Jonathan C.; Feria, V. Alfonso; Chang, Zensheu
2007-01-01
The zero-gravity surface figure of optics used in spaceborne astronomical instruments must be known to high accuracy, but earthbound metrology is typically corrupted by gravity sag. Generally, inference of the zero-gravity surface figure from a measurement made under normal gravity requires finite-element analysis (FEA), and for accurate results the mount forces must be well characterized. We describe how to infer the zero-gravity surface figure very precisely using the alternative classical technique of averaging pairs of measurements made with the direction of gravity reversed. We show that mount forces as well as gravity must be reversed between the two measurements and discuss how the St. Venant principle determines when a reversed mount force may be considered to be applied at the same place in the two orientations. Our approach requires no finite-element modeling and no detailed knowledge of mount forces other than the fact that they reverse and are applied at the same point in each orientation. If mount schemes are suitably chosen, zero-gravity optical surfaces may be inferred much more simply and more accurately than with FEA.
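The core of the averaging technique can be stated in one line: if each measurement is the sum of the zero-gravity figure and a sag term that reverses sign when gravity (and the mount forces) are reversed, then

```latex
F_{0g} = \tfrac{1}{2}\left(F_{+g} + F_{-g}\right)
```

since the sag contributions, plus or minus delta, cancel in the average. This is the classical identity behind the method, written here with illustrative symbol names rather than the paper's notation.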
Estimates of Power Plant NOx Emissions and Lifetimes from OMI NO2 Satellite Retrievals
NASA Technical Reports Server (NTRS)
de Foy, Benjamin; Lu, Zifeng; Streets, David G.; Lamsal, Lok N.; Duncan, Bryan N.
2015-01-01
Isolated power plants with well characterized emissions serve as an ideal test case of methods to estimate emissions using satellite data. In this study we evaluate the Exponentially-Modified Gaussian (EMG) method and the box model method based on mass balance for estimating known NOx emissions from satellite retrievals made by the Ozone Monitoring Instrument (OMI). We consider 29 power plants in the USA which have large NOx plumes that do not overlap with other sources and which have emissions data from the Continuous Emission Monitoring System (CEMS). This enables us to identify constraints required by the methods, such as which wind data to use and how to calculate background values. We found that the lifetimes estimated by the methods are too short to be representative of the chemical lifetime. Instead, we introduce a separate lifetime parameter to account for the discrepancy between estimates using real data and those that theory would predict. In terms of emissions, the EMG method required averages from multiple years to give accurate results, whereas the box model method gave accurate results for individual ozone seasons.
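As background, a common form of the EMG method in the plume-fitting literature (the details of this study's implementation may differ) fits the along-wind line density L(x) of the NO2 column to an exponentially modified Gaussian:

```latex
L(x) = \frac{a}{2x_0}\,
\exp\!\left(\frac{\mu}{x_0} + \frac{\sigma^2}{2x_0^2} - \frac{x}{x_0}\right)
\operatorname{erfc}\!\left(-\frac{x-\mu-\sigma^2/x_0}{\sqrt{2}\,\sigma}\right) + b
```

where the fitted e-folding distance x_0 corresponds to an effective lifetime tau = x_0/w for wind speed w, and the fitted total mass a yields emissions E = a/tau. The box model method instead balances the mass in a box around the source against inflow, outflow, and decay.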
A New Calibration Method for Commercial RGB-D Sensors.
Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu
2017-05-24
Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter‑level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.
A multiscale model for predicting the viscoelastic properties of asphalt concrete
NASA Astrophysics Data System (ADS)
Garcia Cucalon, Lorena; Rahmani, Eisa; Little, Dallas N.; Allen, David H.
2016-08-01
It is well known that the accurate prediction of the long-term performance of asphalt concrete pavement requires modeling to account for viscoelasticity within the mastic. However, accounting for viscoelasticity can be costly when the material properties are measured at the scale of asphalt concrete. This is because the material testing protocols must be performed recursively for each mixture considered for use in the final design.
Thermal Control of the Scientific Instrument Package in the Large Space Telescope
NASA Technical Reports Server (NTRS)
Hawks, K. H.
1972-01-01
The general thermal control system philosophy was to utilize passive control where feasible and to utilize active methods only where required for more accurate thermal control of the SIP components with narrow temperature tolerances. A thermal model of the SIP and a concept for cooling the SIP cameras are presented. The model and cooling concept have established a rationale for determining a Phase A baseline for SIP thermal control.
2006-09-01
Figure 17. Station line center of Magnus force vs. Mach number for a spin-stabilized projectile. ...forces and moments on the projectile. It is also relatively easy to change the wind tunnel model to allow detailed parametric effects to be... ...such as pitch and roll damping, as well as Magnus force and moment coefficients, are difficult to obtain in a wind tunnel and require a complex...
The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems
2011-11-01
accurately predicting the supersonic Magnus effect about spinning cones, ogive-cylinders, and boat-tailed afterbodies. This work led to the successful... ...successful computer model of the proposed product or system, one can then build prototypes on the computer and study the effects on the performance of... ...needed. The NRC report discusses the requirements for effective use of such computing power. One needs “models, algorithms, software, hardware...
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
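The propagate/weight/resample cycle at the heart of a bootstrap particle filter is compact enough to sketch. The scalar dynamics, noise levels, and particle count below are invented for illustration and are far simpler than orbit dynamics:

```python
# Minimal bootstrap particle filter for a scalar nonlinear system; all
# model details are invented stand-ins for the orbit propagation problem.
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                      # number of particles

def propagate(x):                             # nonlinear dynamics (stand-in)
    return x + 0.1 * np.sin(x)

x_true = 1.0
particles = rng.normal(1.0, 0.5, N)           # initial PDF approximation
for step in range(50):
    x_true = propagate(x_true) + rng.normal(scale=0.05)
    z = x_true + rng.normal(scale=0.1)        # noisy measurement

    particles = propagate(particles) + rng.normal(scale=0.05, size=N)
    w = np.exp(-0.5 * ((z - particles) / 0.1) ** 2)   # measurement likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]  # resample

# The particle cloud approximates the full, possibly non-Gaussian, state PDF;
# its moments and tails can be read off directly.
print(f"PF mean estimate {particles.mean():.3f} vs truth {x_true:.3f}")
```

Unlike a Kalman filter, nothing here assumes Gaussian errors: the cloud of particles carries whatever heavy-tailed shape the propagation produces.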
PDB_REDO: automated re-refinement of X-ray structure models in the PDB.
Joosten, Robbie P; Salzemann, Jean; Bloch, Vincent; Stockinger, Heinz; Berglund, Ann-Charlott; Blanchet, Christophe; Bongcam-Rudloff, Erik; Combet, Christophe; Da Costa, Ana L; Deleage, Gilbert; Diarena, Matteo; Fabbretti, Roberto; Fettahi, Géraldine; Flegel, Volker; Gisel, Andreas; Kasam, Vinod; Kervinen, Timo; Korpelainen, Eija; Mattila, Kimmo; Pagni, Marco; Reichstadt, Matthieu; Breton, Vincent; Tickle, Ian J; Vriend, Gert
2009-06-01
Structural biology, homology modelling and rational drug design require accurate three-dimensional macromolecular coordinates. However, the coordinates in the Protein Data Bank (PDB) have not all been obtained using the latest experimental and computational methods. In this study a method is presented for automated re-refinement of existing structure models in the PDB. A large-scale benchmark with 16 807 PDB entries showed that they can be improved in terms of fit to the deposited experimental X-ray data as well as in terms of geometric quality. The re-refinement protocol uses TLS models to describe concerted atom movement. The resulting structure models are made available through the PDB_REDO databank (http://www.cmbi.ru.nl/pdb_redo/). Grid computing techniques were used to overcome the computational requirements of this endeavour.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pino, Francisco; Roé, Nuria; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery coefficients in the reconstructed images. To avoid the appearance of ring-type artifacts, the number of iterations should be limited. In low magnification systems, the intrinsic detector PSF plays a major role in improvement of the image-quality parameters.
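The intrinsic detector response above was fitted with an asymmetric Gaussian distribution. A minimal sketch of such a fit, assuming a hypothetical measured line-spread profile, might look as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_gaussian(x, a, mu, sigma_l, sigma_r):
    # Gaussian with different widths left and right of the peak.
    sigma = np.where(x < mu, sigma_l, sigma_r)
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic stand-in for a measured line-spread profile of the detector.
x = np.linspace(-10, 10, 201)
truth = asym_gaussian(x, 1.0, 0.5, 1.2, 2.0)
y = truth + np.random.default_rng(1).normal(0.0, 0.01, x.size)

popt, _ = curve_fit(asym_gaussian, x, y, p0=[1.0, 0.0, 1.0, 1.0])
print("fitted (a, mu, sigma_l, sigma_r):", popt)
```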
UAV Trajectory Modeling Using Neural Networks
NASA Technical Reports Server (NTRS)
Xue, Min
2017-01-01
Large numbers of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, variable winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and large-scale sUAV operations in this low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing the infrastructure and developing the policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, in which an accurate trajectory model plays an extremely important role. As in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important given the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as intellectual property. This poses a challenge for sUAV trajectory modeling: how can a vehicle's trajectory be modeled when its control system is unknown? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses under numerous conditions. Once fully trained, given the current vehicle states, winds, and desired future trajectory, the neural network can predict the vehicle's states at the next time step. A complete 4-D trajectory is then generated step by step using the trained neural network. Experiments in this work show that the neural network can approximate the sUAV's model and predict the trajectory accurately.
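As a loose illustration of the approach, the sketch below trains a small multilayer perceptron to map (current state, winds, command) to the next state and then rolls a trajectory out step by step. The feature layout, dimensions, and synthetic data are assumptions for demonstration, not the paper's actual training setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Hypothetical training set: rows are (state, wind, commanded waypoint),
# targets are the state one time step later. Real data would come from
# flight logs or a vehicle simulator.
X = rng.normal(size=(5000, 9))                    # position(3) + velocity(3) + wind(2) + command(1)
Y = X[:, :6] + 0.1 * rng.normal(size=(5000, 6))   # synthetic next state

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, Y)

# Roll out a trajectory step by step with the trained network.
state = np.zeros(6)
for _ in range(10):
    features = np.concatenate([state, [0.5, -0.2, 1.0]])  # winds + command
    state = net.predict(features.reshape(1, -1))[0]
print("state after 10 steps:", state)
```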
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), simulating full-scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the calculation cells must be small and the calculation must be transient with a small time step. For full-scale systems, these requirements lead to very large meshes and very long calculation times, making simulation difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full-scale CFB furnace is presented and the model results are compared with experimental data.
NASA Astrophysics Data System (ADS)
Chu, Huaqiang; Liu, Fengshan; Consalvi, Jean-Louis
2014-08-01
The relationship between the spectral line based weighted-sum-of-gray-gases (SLW) model and the full-spectrum k-distribution (FSK) model in isothermal and homogeneous media is investigated in this paper. The SLW transfer equation can be derived from the FSK transfer equation expressed in the k-distribution function without approximation, confirming that the SLW model is equivalent to the FSK model in the k-distribution function form. The numerical implementation of the SLW relies on a somewhat arbitrary discretization of the absorption cross section, whereas the FSK model finds the spectrally integrated intensity by integrating over the smoothly varying cumulative k-distribution function using a Gaussian quadrature scheme. The latter is therefore in general more efficient, as fewer gray gases are required to achieve a prescribed accuracy. Sample numerical calculations were conducted to demonstrate the different efficiencies of the two methods. The FSK model is found to be more accurate than the SLW model for radiative transfer in H2O; however, the SLW model is more accurate in media containing CO2 as the only radiating gas, due to its explicit treatment of 'clear gas'.
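The FSK quadrature described above can be sketched in a few lines: the spectrally integrated quantity is an integral over the cumulative k-distribution variable g in [0, 1], evaluated at Gauss-Legendre nodes. The k(g) function below is a made-up smooth distribution used purely for illustration.

```python
import numpy as np

# Gauss-Legendre nodes/weights mapped from [-1, 1] to g in [0, 1].
nodes, wts = np.polynomial.legendre.leggauss(8)
g = 0.5 * (nodes + 1.0)
w = 0.5 * wts

def k_of_g(g):
    # Hypothetical smooth cumulative k-distribution, for illustration only.
    return 1e-2 * np.exp(6.0 * g)

L = 1.0  # path length

# Spectrally integrated emissivity: integral over g of (1 - exp(-k(g) L)).
emissivity = np.sum(w * (1.0 - np.exp(-k_of_g(g) * L)))
print("total emissivity:", emissivity)
```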
NASA Astrophysics Data System (ADS)
Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.
2017-06-01
This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close-packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower-scale physics. The fitting procedure employs concepts of machine learning, namely feature selection by regularized regression and cross-validation, to develop a robust, physically accurate crystal model. The work also presents a method for ensuring that the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflects the crystal symmetry and slip system geometry of the DD simulations.
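A minimal sketch of the regularized-regression-with-cross-validation step, with a synthetic stand-in for the DD feature matrix and hardening response, could look like this (the feature construction in the actual work is far more involved):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)

# Hypothetical design matrix: each row is one DD simulation, each column a
# candidate dislocation-interaction feature; y is the measured hardening rate.
X = rng.normal(size=(200, 30))
coef_true = np.zeros(30)
coef_true[[2, 7, 11]] = [1.5, -0.8, 0.6]   # only a few mechanisms matter
y = X @ coef_true + 0.05 * rng.normal(size=200)

# Regularized regression with cross-validation selects the active features.
model = LassoCV(cv=5).fit(X, y)
print("selected features:", np.flatnonzero(np.abs(model.coef_) > 1e-3))
```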
NASA Astrophysics Data System (ADS)
Ebtehaj, Isa; Bonakdari, Hossein; Khoshbin, Fatemeh
2016-10-01
To determine the minimum velocity required to prevent sedimentation, six different models were proposed to estimate the densimetric Froude number (Fr). The dimensionless parameters of the models were applied along with a combination of the group method of data handling (GMDH) and a multi-target genetic algorithm. To this end, an evolutionary design of the generalized GMDH was developed using a genetic algorithm with a specific coding scheme so as not to restrict connectivity configurations to abutting layers only. In addition, a new preserving mechanism in the multi-target genetic algorithm was utilized for the Pareto optimization of GMDH. The results indicated that the most accurate model was the one that used the volumetric concentration of sediment (CV), relative hydraulic radius (d/R), dimensionless particle number (Dgr), and overall sediment friction factor (λs) in estimating Fr. Furthermore, comparison between the proposed method and traditional equations indicated that GMDH is more accurate than existing equations.
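For reference, the densimetric Froude number being estimated is conventionally defined as

$$ \mathrm{Fr} = \frac{V}{\sqrt{g\,(s-1)\,d}} $$

where V is the flow velocity, g the gravitational acceleration, s the specific gravity of the sediment, and d the particle diameter; the precise nondimensionalization used in the paper may differ slightly.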
NASA Astrophysics Data System (ADS)
Liu, Dantong; Taylor, Jonathan W.; Young, Dominque E.; Flynn, Michael J.; Coe, Hugh; Allan, James D.
2015-01-01
Assessment of the impacts of brown carbon (BrC) requires accurate determination of its physical properties, but a model must be invoked to derive these from instrument data. Ambient measurements were made in London at a site influenced by traffic and solid fuel (principally wood) burning, apportioned by single particle soot photometer data, with optical properties measured using multiwavelength photoacoustic spectroscopy. Two models were applied: a commonly used Mie model treating the particles as single coated spheres and a Rayleigh-Debye-Gans approximation treating them as aggregates of smaller coated monomers. The derived solid-fuel BrC parameters at 405 nm were found to be highly sensitive to the model treatment, with a mass absorption cross section ranging from 0.47 to 1.81 m2/g and an imaginary refractive index from 0.013 to 0.062. This demonstrates that a detailed knowledge of particle morphology must be obtained and invoked to accurately parameterize BrC properties based on aerosol-phase measurements.
Wave propagation in equivalent continuums representing truss lattice materials
Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...
2015-07-29
Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch-dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small-deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long-wavelength characteristics of the response such as anisotropic elastic soundspeeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability through the elimination of vertices in the effective yield surface.
Augmented kludge waveforms for detecting extreme-mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Chua, Alvin J. K.; Moore, Christopher J.; Gair, Jonathan R.
2017-08-01
The extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes are an important class of source for the future space-based gravitational-wave detector LISA. Detecting signals from EMRIs will require waveform models that are both accurate and computationally efficient. In this paper, we present the latest implementation of an augmented analytic kludge (AAK) model, publicly available at https://github.com/alvincjk/EMRI_Kludge_Suite as part of an EMRI waveform software suite. This version of the AAK model has improved accuracy compared to its predecessors, with two-month waveform overlaps against a more accurate fiducial model exceeding 0.97 for a generic range of sources; it also generates waveforms 5-15 times faster than the fiducial model. The AAK model is well suited for scoping out data analysis issues in the upcoming round of mock LISA data challenges. A simple analytic argument shows that it might even be viable for detecting EMRIs with LISA through a semicoherent template bank method, while the use of the original analytic kludge in the same approach will result in around 90% fewer detections.
Dimension reduction method for SPH equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2011-08-26
A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g., average velocity, concentration, and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.
Cheng, Lei; Li, Yizeng; Grosh, Karl
2013-01-01
An approximate boundary condition is developed in this paper to model fluid shear viscosity at the boundaries of a coupled fluid-structure system. The effect of shear viscosity is approximated by a correction term to the inviscid boundary condition, written in terms of second-order in-plane derivatives of pressure. Both thin and thick viscous boundary layer approximations are formulated; the latter subsumes the former. These approximations are used to develop a variational formulation, upon which a viscous finite element method (FEM) model is based, requiring only minor modifications to the boundary integral contributions of an existing inviscid FEM model. Since this FEM formulation has only one degree of freedom for pressure, it holds a great computational advantage over the conventional viscous FEM formulation, which requires discretization of the full set of linearized Navier-Stokes equations. The results from the thick viscous boundary layer approximation are found to be in good agreement with the prediction from a Navier-Stokes model. When applicable, the thin viscous boundary layer approximation also gives accurate results with computational simplicity compared to the thick boundary layer formulation. Direct comparisons of simulation results using the boundary layer approximations and a full, linearized Navier-Stokes model are made and used to evaluate the accuracy of the approximate technique. Guidelines are given for the parameter ranges over which the thick and thin boundary approximations can be accurately applied to a fluid-structure interaction problem. PMID:23729844
Limitations of gravity models in predicting the spread of Eurasian watermilfoil.
Rothlisberger, John D; Lodge, David M
2011-02-01
The effects of non-native invasive species are costly and environmentally damaging, and resources to slow their spread and reduce their effects are scarce. Models that accurately predict where new invasions will occur could guide the efficient allocation of resources to slow colonization. We assessed the accuracy of a model that predicts the probability of colonization of lakes in Wisconsin by Eurasian watermilfoil (Myriophyllum spicatum). We based this predictive model on 9 years (1990-1999) of sequence data of milfoil colonization of lakes larger than 25 ha (n = 1803). We used milfoil colonization sequence data from 2000 to 2006 to test whether the model accurately predicted the number of lakes that actually were colonized from among the 200 lakes identified as being most likely to be colonized. We found that a lake's predicted probability of colonization was not correlated with whether a lake actually was colonized. Given the low predictability of colonization of specific lakes, we compared the efficacy of preventing milfoil from leaving occupied sites, which does not require predicting colonization probability, with protecting vacant sites from being colonized, which does require predicting colonization probability. Preventing organisms from leaving colonized sites reduced the likelihood of spread more than protecting vacant sites. Although we focused on the spread of a single species in a particular region, our results show the shortcomings of gravity models in predicting the spread of numerous non-native species to a variety of locations via a wide range of vectors. ©2010 Society for Conservation Biology.
Integrated Medical Model (IMM) 4.0 Enhanced Functionalities
NASA Technical Reports Server (NTRS)
Young, M.; Keenan, A. B.; Saile, L.; Boley, L. A.; Walton, M. E.; Shah, R. V.; Kerstman, E. L.; Myers, J. G.
2015-01-01
The Integrated Medical Model (IMM) is a probabilistic simulation model that uses input data on 100 medical conditions to simulate expected medical events, the resources required to treat them, and the resulting impact on the mission for specific crew and mission characteristics. The newest development version, IMM v4.0, adds capabilities that remove some of the conservative assumptions underlying the current operational version, IMM v3. While IMM v3 provides the framework to simulate whether a medical event occurred, IMM v4 also simulates when the event occurred during a mission timeline. This allows more accurate estimation of mission time lost and resource utilization. In addition to the mission timeline, IMM v4.0 features two enhancements that address IMM v3 assumptions regarding medical event treatment. Medical events in IMM v3 are assigned the untreated outcome if any resource required to treat the event is unavailable. IMM v4 allows partially treated outcomes that are proportional to the amount of required resources available, removing the dichotomous treatment assumption. An additional capability of IMM v4 is the use of an alternative medical resource when the primary resource assigned to the condition is depleted, more accurately reflecting the real-world system. The additional capabilities defining IMM v4.0 (the mission timeline, partial treatment, and the alternate drug) result in more realistic predicted mission outcomes. The primary model outcomes of IMM v4.0 for the ISS6 mission, including mission time lost, probability of evacuation, and probability of loss of crew life, are compared to those produced by the current operational version of IMM to showcase the enhanced prediction capabilities.
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
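A common way to realize such a surrogate is Gaussian process regression trained on a few expensive model runs. The sketch below uses a one-dimensional toy function in place of the finite element model; the kernel choice and sampling plan are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def expensive_fem(x):
    # Stand-in for a high-fidelity crack-growth simulation; in practice each
    # evaluation might take hours.
    return np.sin(3.0 * x) + 0.5 * x

# Train the surrogate on a handful of expensive runs...
X_train = rng.uniform(0.0, 2.0, 12).reshape(-1, 1)
y_train = expensive_fem(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)

# ...then query it cheaply inside the uncertainty-quantification loop.
mean, std = gp.predict(np.array([[1.3]]), return_std=True)
print(f"surrogate prediction: {mean[0]:.3f} +/- {std[0]:.3f}")
```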
Underwater Sound Propagation Modeling Methods for Predicting Marine Animal Exposure.
Hamm, Craig A; McCammon, Diana F; Taillefer, Martin L
2016-01-01
The offshore exploration and production (E&P) industry requires comprehensive and accurate ocean acoustic models for determining the exposure of marine life to the high levels of sound used in seismic surveys and other E&P activities. This paper reviews the types of acoustic models most useful for predicting the propagation of undersea noise sources and describes current exposure models. The severe problems caused by model sensitivity to the uncertainty in the environment are highlighted to support the conclusion that it is vital that risk assessments include transmission loss estimates with statistical measures of confidence.
Performance of commercial platforms for rapid genotyping of polymorphisms affecting warfarin dose.
King, Cristi R; Porche-Sorbet, Rhonda M; Gage, Brian F; Ridker, Paul M; Renaud, Yannick; Phillips, Michael S; Eby, Charles
2008-06-01
Initiation of warfarin therapy is associated with bleeding owing to its narrow therapeutic window and unpredictable therapeutic dose. Pharmacogenetic-based dosing algorithms can improve accuracy of initial warfarin dosing but require rapid genotyping for cytochrome P-450 2C9 (CYP2C9) *2 and *3 single nucleotide polymorphisms (SNPs) and a vitamin K epoxide reductase (VKORC1) SNP. We evaluated 4 commercial systems: INFINITI analyzer (AutoGenomics, Carlsbad, CA), Invader assay (Third Wave Technologies, Madison, WI), Tag-It Mutation Detection assay (Luminex Molecular Diagnostics, formerly Tm Bioscience, Toronto, Canada), and Pyrosequencing (Biotage, Uppsala, Sweden). We genotyped 112 DNA samples and resolved any discrepancies with bidirectional sequencing. The INFINITI analyzer was 100% accurate for all SNPs and required 8 hours. Invader and Tag-It were 100% accurate for CYP2C9 SNPs, 99% accurate for VKORC1 -1639/3673 SNP, and required 3 hours and 8 hours, respectively. Pyrosequencing was 99% accurate for CYP2C9 *2, 100% accurate for CYP2C9 *3, and 100% accurate for VKORC1 and required 4 hours. Current commercial platforms provide accurate and rapid genotypes for pharmacogenetic dosing during initiation of warfarin therapy.
Small Business Innovations (Helicopters)
NASA Technical Reports Server (NTRS)
1992-01-01
The amount of engine power required for a helicopter to hover is an important, but difficult, consideration in helicopter design. The EHPIC program model produces converged, freely distorted wake geometries that generate accurate analysis of wake-induced downwash, allowing good predictions of rotor thrust and power requirements. Continuum Dynamics, Inc., the Small Business Innovation Research (SBIR) company that developed EHPIC, also produces RotorCRAFT, a program for analysis of aerodynamic loading of helicopter blades in forward flight. Both helicopter codes have been licensed to commercial manufacturers.
Numerical solutions for heat flow in adhesive lap joints
NASA Technical Reports Server (NTRS)
Howell, P. A.; Winfree, William P.
1992-01-01
The present formulation for the modeling of heat transfer in thin, adhesively bonded lap joints precludes difficulties associated with large aspect ratio grids required by standard FEM formulations. This quasi-static formulation also reduces the problem dimensionality (by one), thereby minimizing computational requirements. The solutions obtained are found to be in good agreement with both analytical solutions and solutions from standard FEM programs. The approach is noted to yield a more accurate representation of heat-flux changes between layers due to a disbond.
NASA Technical Reports Server (NTRS)
Murray, John; Vernier, Jean-Paul; Fairlie, T. Duncan; Pavolonis, Michael; Krotkov, Nickolay A.; Lindsay, Francis; Haynes, John
2013-01-01
Although significant progress has been made in recent years, estimating volcanic ash concentration for the full extent of the airspace affected by volcanic ash remains a challenge. No single satellite, airborne, or ground observing system currently exists that can sufficiently inform dispersion models to provide the degree of accuracy required to use them with high confidence for routing aircraft in and near volcanic ash. Toward this end, the detection and characterization of volcanic ash in the atmosphere may be substantially improved by integrating a wider array of observing systems with advancements in trajectory and dispersion modeling. The qualitative aspect of this effort has advanced significantly in the past decade due to the increase in highly complementary observational and model data. Satellite observations, especially when coupled with trajectory and dispersion models, can provide a very accurate picture of the 3-dimensional location of ash clouds. Accurate estimates of the mass loading at various locations throughout the plume, though improving, remain elusive. This paper examines the capabilities of various satellite observation systems and postulates that model-based volcanic ash concentration maps and forecasts might be significantly improved if the various extant satellite capabilities are used together with independent, accurate mass loading data from other observing systems to calibrate (tune) ash concentration retrievals from the satellite systems.
Parsons, Neal; Levin, Deborah A; van Duin, Adri C T; Zhu, Tong
2014-12-21
The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N2(Σg+1)-N2(Σg+1) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.
NASA Astrophysics Data System (ADS)
Bateni, S. M.; Xu, T.
2015-12-01
Accurate estimation of water and heat fluxes is required for irrigation scheduling, weather prediction, and water resources planning and management. A weak-constraint variational data assimilation (WC-VDA) scheme is developed to estimate water and heat fluxes by assimilating sequences of land surface temperature (LST) observations. The commonly used strong-constraint VDA systems adversely affect the accuracy of water and heat flux estimates because they assume the model is perfect. The WC-VDA approach accounts for structural and model errors and generates more accurate results by adding a model error term to the surface energy balance equation. The two key unknown parameters of the WC-VDA system (CHN, the bulk heat transfer coefficient, and EF, the evaporative fraction) and the model error term are optimized by minimizing the cost function. The WC-VDA model was tested at two sites with contrasting hydrological and vegetative conditions: the Daman site (a wet site located in an oasis area and covered by seeded corn) and the Huazhaizi site (a dry site located in a desert area and covered by sparse grass), both in the middle reaches of the Heihe River basin, northwest China. Compared to the strong-constraint VDA system, the WC-VDA method generates more accurate estimates of water and energy fluxes over both the dry desert site and the wet oasis site.
The frequency response of dynamic friction: Enhanced rate-and-state models
NASA Astrophysics Data System (ADS)
Cabboi, A.; Putelat, T.; Woodhouse, J.
2016-07-01
The prediction and control of friction-induced vibration requires a sufficiently accurate constitutive law for dynamic friction at the sliding interface: for linearised stability analysis, this requirement takes the form of a frictional frequency response function. Systematic measurements of this frictional frequency response function are presented for small samples of nylon and polycarbonate sliding against a glass disc. Previous efforts to explain such measurements from a theoretical model have failed, but an enhanced rate-and-state model is presented which is shown to match the measurements remarkably well. The tested parameter space covers a range of normal forces (10-50 N), of sliding speeds (1-10 mm/s) and frequencies (100-2000 Hz). The key new ingredient in the model is the inclusion of contact stiffness to take into account elastic deformations near the interface. A systematic methodology is presented to discriminate among possible variants of the model, and then to identify the model parameter values.
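For orientation, a widely used member of this model class is the rate-and-state friction law with the Dieterich (aging) state evolution,

$$ \mu(v,\theta) = \mu_* + a\,\ln\frac{v}{v_*} + b\,\ln\frac{v_*\,\theta}{d_c}, \qquad \dot{\theta} = 1 - \frac{v\,\theta}{d_c}, $$

where v is the slip rate, θ an interfacial state variable, d_c a characteristic slip distance, and a and b empirical parameters. The enhanced model in the paper augments this class of law with a contact stiffness accounting for near-interface elastic deformation; its exact form may differ from the variant written here.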
NASA Astrophysics Data System (ADS)
Zerkle, Ronald D.; Prakash, Chander
1995-03-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models that include more physics than wall functions are required in the near-wall region. The two-layer modeling approach appears attractive because of its relative computational simplicity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
NASA Astrophysics Data System (ADS)
Zakaria, M. A.; Majeed, A. P. P. A.; Taha, Z.; Alim, M. M.; Baarath, K.
2018-03-01
The movement of a lower-limb exoskeleton requires a reasonably accurate control method to allow an effective gait therapy session to transpire. Trajectory tracking is a nontrivial passive rehabilitation technique for correcting the motion of a patient's impaired limb. This paper proposes an inverse predictive model that is coupled with the forward kinematics of the exoskeleton to estimate the behaviour of the system. A conventional PID control system is used to converge to the required joint angles based on the desired input from the inverse predictive model. The present study demonstrates that the inverse predictive model is capable of meeting the trajectory demand within an acceptable error tolerance. The findings further suggest the ability of the predictive model of the exoskeleton to issue correct joint angle commands to the system.
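A discrete PID loop of the kind referenced above is easy to sketch. The toy first-order joint model, gains, and time step below are arbitrary illustrations, not the exoskeleton's identified dynamics:

```python
class PID:
    """Minimal PID loop for tracking a commanded joint angle (illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 0.01
pid = PID(8.0, 2.0, 0.5, dt)
angle = 0.0
target = 0.6  # rad, e.g., supplied by the inverse predictive model
for _ in range(500):
    torque = pid.update(target, angle)
    angle += dt * (torque - angle)   # crude first-order joint model
print("final angle:", round(angle, 3))
```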
Rutter, Carolyn M; Knudsen, Amy B; Marsh, Tracey L; Doria-Rose, V Paul; Johnson, Eric; Pabiniak, Chester; Kuntz, Karen M; van Ballegooijen, Marjolein; Zauber, Ann G; Lansdorp-Vogelaar, Iris
2016-07-01
Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications. © The Author(s) 2016.
Thermal Testing of Ablators in the NASA Johnson Space Center Radiant Heat Test Facility
NASA Technical Reports Server (NTRS)
Del Papa, Steven; Milhoan, Jim; Remark, Brian; Suess, Leonard
2016-01-01
A spacecraft's thermal protection system (TPS) is required to survive the harsh environment experienced during reentry. Accurate thermal modeling of the TPS is required because uncertainties in the thermal response result in higher design margins and an increase in mass. The Radiant Heat Test Facility (RHTF) located at the NASA Johnson Space Center (JSC) replicates the reentry temperatures and pressures on system-level, full-scale TPS test models for the validation of thermal math models. Reusable TPS, i.e., tile or reinforced carbon-carbon (RCC), have been the primary materials tested in the past. However, current capsule designs for MPCV and commercial programs require the use of an ablator TPS. The RHTF has successfully completed a pathfinder program on Avcoat ablator material to demonstrate the feasibility of ablator testing. The test results and corresponding ablation analysis results are presented in this paper.
NASA Astrophysics Data System (ADS)
Shubitidze, Fridon; Barrowes, Benjamin E.; Shamatava, Irma; Sigman, John; O'Neill, Kevin A.
2018-05-01
Processing electromagnetic induction signals from subsurface targets, for purposes of discrimination, requires accurate physical models. To date, successful approaches for on-land cases have entailed advanced modeling of responses by the targets themselves, with quite adequate treatment of instruments as well. Responses from the environment were typically slight and/or were treated very simply. When objects are immersed in saline solutions, however, more sophisticated modeling of the diffusive EMI physics in the environment is required. One needs to account for the response of the environment itself as well as the environment's frequency and time-dependent effects on both primary and secondary fields, from sensors and targets, respectively. Here we explicate the requisite physics and identify its effects quantitatively via analytical, numerical, and experimental investigations. Results provide a path for addressing the quandaries posed by previous underwater measurements and indicate how the environmental physics may be included in more successful processing.
NASA Astrophysics Data System (ADS)
Liu, Long; Liu, Wei
2018-04-01
A forward modeling and inversion algorithm is adopted to determine the water injection plan for an oilfield water injection network. The main idea of the algorithm is as follows: first, the network is solved inversely to calculate the demand flow at the pumping station. A forward calculation is then carried out to judge whether every water injection well meets its injection allocation requirement. If all wells meet their requirements, the calculation stops; otherwise, the demand injection allocation flow rate is reduced by a fixed step size for the wells that do not meet requirements, and the next iteration begins. The algorithm does not need to be embedded in the network system solver, so it is easy to implement, and its iterative structure is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.
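A compact sketch of that iteration, with a deliberately crude stand-in for the hydraulic forward solver, might read:

```python
def forward_solve(targets):
    # Toy hydraulic model: total station capacity is shared pro rata.
    # A real implementation would solve the network hydraulics.
    capacity = 10.0
    total = sum(targets.values())
    scale = min(1.0, capacity / max(total, 1e-9))
    return {w: q * scale for w, q in targets.items()}

def plan_injection(targets, step=0.05, tol=1e-3, max_iter=1000):
    """Inverse/forward iteration: shrink unmet allocations until feasible."""
    targets = dict(targets)
    for _ in range(max_iter):
        delivered = forward_solve(targets)                  # forward check
        unmet = [w for w, q in targets.items() if delivered[w] < q - tol]
        if not unmet:
            return targets                                  # plan is feasible
        for w in unmet:
            targets[w] = max(0.0, targets[w] - step)        # back off and retry
    return targets

plan = plan_injection({"W1": 4.0, "W2": 5.0, "W3": 3.0})
print(plan)  # totals shrink toward the 10.0 station capacity
```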
Orlando, Alessandro; Levy, A Stewart; Carrick, Matthew M; Tanner, Allen; Mains, Charles W; Bar-Or, David
2017-11-01
To outline differences in neurosurgical intervention (NI) rates between intracranial hemorrhage (ICH) types in mild traumatic brain injuries and help identify which ICH types are most likely to benefit from creation of predictive models for NI. A multicenter retrospective study of adult patients spanning 3 years at 4 U.S. trauma centers was performed. Patients were included if they presented with mild traumatic brain injury (Glasgow Coma Scale score 13-15) with head CT scan positive for ICH. Patients were excluded for skull fractures, "unspecified hemorrhage," or coagulopathy. Primary outcome was NI. Stepwise multivariable logistic regression models were built to analyze the independent association between ICH variables and outcome measures. The study comprised 1876 patients. NI rate was 6.7%. There was a significant difference in rate of NI by ICH type. Subdural hematomas had the highest rate of NI (15.5%) and accounted for 78% of all NIs. Isolated subarachnoid hemorrhages had the lowest, nonzero, NI rate (0.19%). Logistic regression models identified ICH type as the most influential independent variable when examining NI. A model predicting NI for isolated subarachnoid hemorrhages would require 26,928 patients, but a model predicting NI for isolated subdural hematomas would require only 328 patients. This study highlighted disparate NI rates among ICH types in patients with mild traumatic brain injury and identified mild, isolated subdural hematomas as most appropriate for construction of predictive NI models. Increased health care efficiency will be driven by accurate understanding of risk, which can come only from accurate predictive models. Copyright © 2017 Elsevier Inc. All rights reserved.
Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.
Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S
1998-09-01
Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
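For contrast with the EBR method, standard SBR-style balanced truncation is available off the shelf, for example in the python-control package (which relies on slycot for this routine). The sketch below reduces a random stable system standing in for a full-order thermal model:

```python
import control as ct

# Random stable 10-state system standing in for a full-order thermal model.
full = ct.rss(10, 1, 1)

# Standard balanced truncation to a 3-state reduced-order model.
reduced = ct.balred(full, 3, method='truncate')

# Compare DC gains as a quick sanity check of the approximation.
print("full-order dc gain:   ", ct.dcgain(full))
print("reduced-order dc gain:", ct.dcgain(reduced))
```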
pyJac: Analytical Jacobian generator for chemical kinetics
NASA Astrophysics Data System (ADS)
Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen
2017-06-01
Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, generally reactive-flow simulations rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, finite differences numerically approximate these, but for larger chemical kinetic models this poses significant computational demands since the number of chemical source term evaluations scales with the square of species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (number of species ranging 13-360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using pyJac via matrix evaluation timing comparisons; the routines produced by pyJac outperformed first-order finite differences by 3-7.5 times and the existing analytical Jacobian software TChem by 1.1-2.2 times on a single-threaded basis. It is noted that TChem is not thread-safe, while pyJac is easily parallelized, and hence can greatly outperform TChem on multicore CPUs. The Jacobian matrix generator we describe here will be useful for reducing the cost of integrating chemical source terms with implicit algorithms in particular and algorithms that require an accurate Jacobian matrix in general. Furthermore, the open-source release of the program and Python-based implementation will enable wide adoption.
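pyJac's generated routines are mechanism-specific C or CUDA source, so the toy below does not reproduce them; it only illustrates why an analytical Jacobian is preferred over finite differences, using a hand-written two-step mechanism:

```python
import numpy as np

def source(y, k1=1.0, k2=0.5):
    # Toy two-species kinetics: A -> B -> C (not a pyJac-generated routine).
    return np.array([-k1 * y[0], k1 * y[0] - k2 * y[1]])

def jac_analytical(y, k1=1.0, k2=0.5):
    # Hand-derived Jacobian of the source term above: exact to rounding.
    return np.array([[-k1, 0.0], [k1, -k2]])

def jac_fd(y, eps=1e-7):
    # First-order finite-difference approximation for comparison.
    f0, n = source(y), len(y)
    J = np.empty((n, n))
    for j in range(n):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (source(yp) - f0) / eps
    return J

y = np.array([1.0, 0.2])
print("max abs difference:", np.abs(jac_analytical(y) - jac_fd(y)).max())
```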
Report of the proceedings of the Colloquium and Workshop on Multiscale Coupled Modeling
NASA Technical Reports Server (NTRS)
Koch, Steven E. (Editor)
1993-01-01
The Colloquium and Workshop on Multiscale Coupled Modeling was held for the purpose of addressing modeling issues of importance to planning for the Cooperative Multiscale Experiment (CME). The colloquium presentations attempted to assess the current ability of numerical models to accurately simulate the development and evolution of mesoscale cloud and precipitation systems and their cycling of water substance, energy, and trace species. The primary purpose of the workshop was to make specific recommendations for the improvement of mesoscale models prior to the CME, their coupling with cloud, cumulus ensemble, hydrology, air chemistry models, and the observational requirements to initialize and verify these models.
Comparison of Turbulence Models for Nozzle-Afterbody Flows with Propulsive Jets
NASA Technical Reports Server (NTRS)
Compton, William B., III
1996-01-01
A numerical investigation was conducted to assess the accuracy of two turbulence models in computing non-axisymmetric nozzle-afterbody flows with propulsive jets. Navier-Stokes solutions were obtained for a convergent-divergent, non-axisymmetric nozzle-afterbody and its associated jet exhaust plume at free-stream Mach numbers of 0.600 and 0.938 at an angle of attack of 0 deg. The Reynolds number based on model length was approximately 20 × 10^6. Turbulent dissipation was modeled by the algebraic Baldwin-Lomax turbulence model with the Degani-Schiff modification and by the standard Jones-Launder k-epsilon turbulence model. At flow conditions without strong shocks and with little or no separation, both turbulence models predicted the pressures on the surfaces of the nozzle very well. When strong shocks and massive separation existed, both turbulence models were unable to predict the flow accurately, and mixing of the jet exhaust plume and the external flow was underpredicted. The differences in drag coefficients for the two turbulence models illustrate that substantial development is still required before nozzle performance can be predicted accurately for all external flow conditions.
A fuzzy set preference model for market share analysis
NASA Technical Reports Server (NTRS)
Turksen, I. B.; Willson, Ian A.
1992-01-01
Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction).
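A minimal sketch of the fuzzy-set ingredient, using triangular membership functions for hypothetical linguistic price terms and a linear combination for one consumer's preference, could look like this:

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function for a linguistic term.
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# Hypothetical linguistic terms for the attribute "price" of a product.
price = 12.0
memberships = {"cheap": tri(price, 0.0, 10.0, 20.0),
               "moderate": tri(price, 10.0, 25.0, 40.0)}

# One consumer's overall preference as a weighted linear combination of
# memberships, mirroring the structure of individual-level conjoint models.
weights = {"cheap": 0.7, "moderate": 0.3}
preference = sum(weights[t] * m for t, m in memberships.items())
print("preference score:", round(preference, 3))
```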
Hughesman, Curtis; Fakhfakh, Kareem; Bidshahri, Roza; Lund, H Louise; Haynes, Charles
2015-02-17
Advances in real-time polymerase chain reaction (PCR), as well as the emergence of digital PCR (dPCR) and useful modified nucleotide chemistries, including locked nucleic acids (LNAs), have created the potential to improve and expand clinical applications of PCR through their ability to better quantify and differentiate amplification products, but fully realizing this potential will require robust methods for designing dual-labeled hydrolysis probes and predicting their hybridization thermodynamics as a function of their sequence, chemistry, and template complementarity. We present here a nearest-neighbor thermodynamic model that accurately predicts the melting thermodynamics of a short oligonucleotide duplexed either to its perfect complement or to a template containing mismatched base pairs. The model may be applied to pure-DNA duplexes or to duplexes for which one strand contains any number and pattern of LNA substitutions. Perturbations to duplex stability arising from mismatched DNA:DNA or LNA:DNA base pairs are treated at the Gibbs energy level to maintain statistical significance in the regressed model parameters. This approach, when combined with the model's accounting of the temperature dependencies of the melting enthalpy and entropy, permits accurate prediction of T(m) values for pure-DNA homoduplexes or LNA-substituted heteroduplexes containing one or two independent mismatched base pairs. Terms accounting for changes in solution conditions and terminal addition of fluorescent dyes and quenchers are then introduced so that the model may be used to accurately predict and thereby tailor the T(m) of a pure-DNA or LNA-substituted hydrolysis probe when duplexed either to its perfect-match template or to a template harboring a noncomplementary base. The model, which builds on classic nearest-neighbor thermodynamics, should therefore be of use to clinicians and biologists who require probes that distinguish and quantify two closely related alleles in either a quantitative PCR or dPCR assay. This potential is demonstrated by using the model to design allele-specific probes that completely discriminate and quantify clinically relevant mutant alleles (BRAF V600E and KIT D816V) in a dPCR assay.
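The flavor of such a nearest-neighbor calculation is easy to sketch. The fragment below computes a two-state melting temperature for a pure-DNA perfect-match duplex from a subset of published stack parameters; it omits initiation terms, salt corrections, and the LNA, mismatch, and dye/quencher terms that the paper's model adds:

```python
import math

# Illustrative subset of DNA nearest-neighbor parameters (kcal/mol, cal/mol/K),
# after SantaLucia (1998); a production tool would carry the full table plus
# the corrections described in the paper.
NN = {"AA": (-7.9, -22.2), "AT": (-7.2, -20.4), "TA": (-7.2, -21.3),
      "CA": (-8.5, -22.7), "GT": (-8.4, -22.4), "CT": (-7.8, -21.0),
      "GA": (-8.2, -22.2), "CG": (-10.6, -27.2), "GC": (-9.8, -24.4),
      "GG": (-8.0, -19.9)}

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def tm_perfect_match(seq, ct=2.5e-7):
    """Two-state melting temperature of seq vs. its perfect complement (Celsius)."""
    dh = ds = 0.0
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair not in NN:  # fall back to the reverse-complement stack
            pair = pair.translate(COMPLEMENT)[::-1]
        h, s = NN[pair]
        dh += h
        ds += s
    r = 1.987  # gas constant, cal/mol/K
    return 1000.0 * dh / (ds + r * math.log(ct / 4.0)) - 273.15

print(round(tm_perfect_match("AGCGTACCGT"), 1))
```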
Tide Corrections for Coastal Altimetry: Status and Prospects
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Egbert, Gary D.
2008-01-01
Knowledge of global oceanic tides has markedly advanced over the last two decades, in no small part because of the near-global measurements provided by satellite altimeters, and especially the long and precise Topex/Poseidon time series e.g. [2]. Satellite altimetry in turn places very severe demands on the accuracy of tidal models. The reason is clear: tides are by far the largest contributor to the variance of sea-surface elevation, so any study of non-tidal ocean signals requires removal of this dominant tidal component. Efforts toward improving models for altimetric tide corrections have understandably focused on deep-water, open-ocean regions. These efforts have produced models thought to be generally accurate to about 2 cm rms. Corresponding tide predictions in shelf and near-coastal regions, however, are far less accurate. This paper discusses the status of our current abilities to provide near-global tidal predictions in shelf and near-coastal waters, highlights some of the difficulties that must be overcome, and attempts to divine a path toward some degree of progress. There are, of course, many groups worldwide who model tides over fairly localized shallow-water regions, and such work is extremely valuable for any altimeter study limited to those regions, but this paper considers the more global models necessary for the general user. There have indeed been efforts to patch local and global models together, but such work is difficult to maintain over many updates and can often encounter problems of proprietary or political nature. Such a path, however, might yet prove the most fruitful, and there are now new plans afoot to try again. As is well known, tides in shallow waters tend to be large, possibly nonlinear, and high wavenumber. The short spatial scales mean that current mapping capabilities with (multiple) nadir-oriented altimeters often yield inadequate coverage. This necessitates added reliance on numerical hydrodynamic models and data assimilation, which in turn necessitates very accurate bathymetry with high spatial resolution. Nonlinearity means that many additional compound tides and overtides must be accounted for in our predictions, which increases the degree of modeling effort and increases the amounts of data required to disentangle closely aliased tides.
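For readers unfamiliar with how a tide prediction is ultimately evaluated, the sketch below sums a few harmonic constituents. The amplitudes and phases are invented placeholders; a real altimetric correction involves many more constituents, nodal modulations, and, in shallow water, the compound tides and overtides mentioned above.

```python
# Toy harmonic tide prediction: elevation as a sum of constituent cosines.
# Constituent amplitudes/phases below are placeholders, not a fitted model.
import numpy as np

# (name, speed deg/hr, amplitude m, Greenwich phase deg) -- illustrative
CONSTITUENTS = [
    ("M2", 28.9841042, 0.50, 120.0),
    ("S2", 30.0000000, 0.15,  95.0),
    ("K1", 15.0410686, 0.10,  40.0),
    ("O1", 13.9430356, 0.08,  25.0),
]

def tide_height(t_hours):
    """Tidal elevation (m) at times t_hours since the reference epoch."""
    t = np.asarray(t_hours, dtype=float)
    h = np.zeros_like(t)
    for _, speed, amp, phase in CONSTITUENTS:
        h += amp * np.cos(np.deg2rad(speed * t - phase))
    return h

t = np.linspace(0.0, 48.0, 97)  # two days at half-hour steps
print(tide_height(t)[:5])
```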
Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors
NASA Technical Reports Server (NTRS)
VanOverbeke, Thomas J.
1998-01-01
The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work a more physically correct way of handling the interaction of turbulence with combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics, and by using more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The SIMPLE method is used to solve for velocity, pressure, turbulent kinetic energy and dissipation. The pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method in agreement with the data, except for the premixed flame. The baseline and stochastic predictions bounded the experimental data for the premixed flame. The use of a continuous mixing model or relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation. Grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but has a theoretical basis to extend to many reaction steps, which cannot be said of current turbulent combustion models.
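A minimal sketch of the "relax to mean" mixing idea mentioned above, in which each notional Monte Carlo particle's scalar value relaxes toward the ensemble mean; the mixing constant, frequency, and initial condition are invented for illustration and are not taken from the thesis.

```python
# Monte Carlo sketch of a relax-to-mean (IEM-type) mixing model: each
# notional particle's scalar relaxes toward the ensemble mean at a rate
# set by a turbulent mixing frequency. Constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 10_000
phi = rng.choice([0.0, 1.0], size=n_particles)  # initially segregated scalar
c_phi, omega, dt = 2.0, 50.0, 1.0e-4            # mixing const, freq (1/s), step

for _ in range(200):
    mean_phi = phi.mean()
    phi += -0.5 * c_phi * omega * (phi - mean_phi) * dt

print(f"mean={phi.mean():.3f}, variance={phi.var():.4f}")  # variance decays
```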
NASA Astrophysics Data System (ADS)
Sattarpanah Karganroudi, Sasan
The competitive industrial market demands that manufacturing companies provide the market with higher-quality products. The quality control department in industrial sectors verifies geometrical requirements of products with consistent tolerances. These requirements are presented in Geometric Dimensioning and Tolerancing (GD&T) standards. However, conventional measuring and dimensioning methods for manufactured parts are time-consuming and costly. Nowadays, manual and tactile measuring methods have been replaced by Computer-Aided Inspection (CAI) methods. The CAI methods apply improvements in computational calculations and 3-D data acquisition devices (scanners) to compare the scan mesh of manufactured parts with the Computer-Aided Design (CAD) model. Metrology standards, such as ASME-Y14.5 and ISO-GPS, require implementing the inspection in free-state, wherein the part is subject only to its own weight. Non-rigid parts are exempted from the free-state inspection rule because of their significant geometrical deviation in a free-state with respect to the tolerances. Despite the developments in CAI methods, inspection of non-rigid parts still remains a serious challenge. Conventional inspection methods apply complex fixtures for non-rigid parts to retrieve the functional shape of these parts on physical fixtures; however, the fabrication and setup of these fixtures are sophisticated and expensive. The cost of fixtures is effectively doubled since the client and manufacturing sectors each require independent inspection fixtures. To eliminate the need for costly and time-consuming inspection fixtures, fixtureless inspection methods of non-rigid parts based on CAI methods have been developed. These methods aim at distinguishing flexible deformations of parts in a free-state from defects. Fixtureless inspection methods are required to be automatic, reliable, reasonably accurate and repeatable for non-rigid parts with complex shapes. The scan model, which is acquired as point clouds, represents the shape of a part in a free-state. Afterward, the inspection of defects is performed by comparing the scan and CAD models, but these models are presented in different coordinate systems. Indeed, the scan model is presented in the measurement coordinate system whereas the CAD model is introduced in the design coordinate system. To accomplish the inspection and facilitate an accurate comparison between the models, the registration process is required to align the scan and CAD models in a common coordinate system. The registration includes a virtual compensation for the flexible deformation of the parts in a free-state. Then, the inspection is implemented as a geometrical comparison between the CAD and scan models. This thesis focuses on developing automatic and accurate fixtureless CAI methods for non-rigid parts along with assessing the robustness of the methods. To this end, an automatic fixtureless CAI method for non-rigid parts based on filtering registration points is developed to identify and quantify defects more accurately on the surface of scan models. The flexible deformation of parts in a free-state in our developed automatic fixtureless CAI method is compensated by applying FE non-rigid Registration (FENR) to deform the CAD model towards the scan mesh. The displacement boundary conditions (BCs) for FENR are determined based on the corresponding sample points, which are generated by the Generalized Numerical Inspection Fixture (GNIF) method on the CAD and scan models.
These corresponding sample points are evenly distributed on the surface of the models. The comparison between this deformed CAD model and the scan mesh intends to evaluate and quantify the defects on the scan model. However, some sample points can be located close to or on defect areas, which results in an inaccurate estimation of defects. These sample points are automatically filtered out in our CAI method based on curvature and von Mises stress criteria. Once these are filtered out, the remaining sample points are used in a new FENR, which allows an accurate evaluation of defects with respect to the tolerances. The performance and robustness of all CAI methods are generally required to be assessed with respect to the actual measurements. This thesis also introduces a new validation metric for Verification and Validation (V&V) of CAI methods based on ASME recommendations. The developed V&V approach uses a nonparametric statistical hypothesis test, namely the Kolmogorov-Smirnov (K-S) test. In addition to validating defect size, the K-S test allows a deeper evaluation based on the distance distribution of defects. The robustness of the CAI method with respect to uncertainties such as scanning noise is quantitatively assessed using the developed validation metric. Due to the compliance of non-rigid parts, a geometrically deviated part can still be assembled in the assembly-state. This thesis also presents a fixtureless CAI method for geometrically deviated (presenting defects) non-rigid parts to evaluate the feasibility of mounting these parts in the functional assembly-state. Our developed Virtual Mounting Assembly-State Inspection (VMASI) method performs a non-rigid registration to virtually mount the scan mesh in assembly-state. To this end, the point cloud of the scan model, representing the part in a free-state, is deformed to meet the assembly constraints such as fixation positions (e.g., mounting holes). In some cases, the functional shape of a deviated part can be retrieved by applying assembly loads, which are limited to permissible loads, on the surface of the part. The required assembly loads are estimated through our developed Restraining Pressures Optimization (RPO), which aims at displacing the deviated scan model to achieve the tolerance for mounting holes. Therefore, the deviated scan model can be assembled if the mounting holes on the predicted functional shape of the scan model attain the tolerance range. Different industrial parts are used to evaluate the performance of our developed methods in this thesis. The automatic inspection for identifying different types of small (local) and big (global) defects on the parts results in an accurate evaluation of defects. The robustness of this inspection method is also validated with respect to different levels of scanning noise, which shows promising results. Meanwhile, the VMASI method is performed on various parts with different types of defects, which shows that in some cases the functional shape of deviated parts can be retrieved by mounting them on a virtual fixture in assembly-state under restraining loads.
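The Kolmogorov-Smirnov validation metric described above can be illustrated with a short sketch; the two defect-distance samples below are synthetic stand-ins, and scipy's two-sample K-S test stands in for the full V&V procedure.

```python
# Hypothetical use of the two-sample Kolmogorov-Smirnov test as a CAI
# validation metric: compare the distribution of reference defect
# deviations against the distances the inspection method recovers.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference_defects = rng.normal(loc=0.80, scale=0.15, size=300)  # mm, "truth"
inspected_defects = rng.normal(loc=0.82, scale=0.17, size=300)  # mm, CAI output

stat, p_value = ks_2samp(reference_defects, inspected_defects)
print(f"K-S statistic={stat:.3f}, p={p_value:.3f}")
if p_value > 0.05:
    print("no evidence the two defect-distance distributions differ")
```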
Large scale nonlinear programming for the optimization of spacecraft trajectories
NASA Astrophysics Data System (ADS)
Arrieta-Camacho, Juan Jose
Despite the availability of high fidelity mathematical models, the computation of accurate optimal spacecraft trajectories has never been an easy task. While simplified models of spacecraft motion can provide useful estimates of energy requirements, sizing, and cost, the actual launch window and maneuver scheduling must rely on more accurate representations. We propose an alternative for the computation of optimal transfers that uses an accurate representation of the spacecraft dynamics. Like other methodologies for trajectory optimization, this alternative is able to consider all major disturbances. In contrast, it can explicitly handle equality and inequality constraints throughout the trajectory; it requires neither the derivation of costate equations nor the identification of the constrained arcs. The alternative consists of two steps: (1) discretizing the dynamic model using high-order collocation at Radau points, which displays numerical advantages, and (2) solving the resulting Nonlinear Programming (NLP) problem using an interior point method, which does not suffer from the performance bottleneck associated with identifying the active set, as required by sequential quadratic programming methods; in this way the methodology exploits the availability of sound numerical methods and next-generation NLP solvers. In practice the methodology is versatile; it can be applied to a variety of aerospace problems such as homing, guidance, and aircraft collision avoidance; the methodology is particularly well suited for low-thrust spacecraft trajectory optimization. Examples are presented which consider the optimization of a low-thrust orbit transfer subject to the main disturbances due to Earth's gravity field together with Lunar and Solar attraction. Another example considers the optimization of a multiple asteroid rendezvous problem. In both cases, the ability of our proposed methodology to consider non-standard objective functions and constraints is illustrated. Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such a research pursuit. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large scale, nonlinear dynamic models.
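The discretize-then-optimize pattern can be illustrated with a much-simplified sketch: trapezoidal (rather than Radau) collocation on a double integrator, solved with an SQP method (rather than an interior point solver). Everything below is a stand-in for the full spacecraft dynamics treated above.

```python
# Simplified direct transcription: minimize control effort for a
# rest-to-rest double-integrator transfer, with trapezoidal dynamics
# defects as equality constraints. Illustrative only.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0
h = T / N

def unpack(z):
    return z[: N + 1], z[N + 1 : 2 * (N + 1)], z[2 * (N + 1) :]

def objective(z):                       # trapezoidal integral of u^2
    _, _, u = unpack(z)
    return h * np.sum(u[:-1] ** 2 + u[1:] ** 2) / 2.0

def defects(z):                         # trapezoidal collocation constraints
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * (v[:-1] + v[1:]) / 2.0
    dv = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2.0
    return np.concatenate([dx, dv])

def boundary(z):                        # rest-to-rest transfer, x: 0 -> 1
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

z0 = np.zeros(3 * (N + 1))
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
print(res.success, f"cost={res.fun:.3f}")  # analytic minimum is 12 for these BCs
```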
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wosnik, Martin; Bachant, Pete; Neary, Vincent Sinclair
CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical axis and horizontal axis wind turbines and has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines has yet to be tested. The present study addresses this problem by comparing the predicted performance curves derived from CACTUS simulations of the U.S. Department of Energy’s 1:6 scale reference model crossflow turbine to those derived from experimental measurements in a tow tank using the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns about its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause of the poor model prediction. A comparison of several different NACA 0021 foil data sources, derived using both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are, therefore, advised to limit its application to higher tip speed ratios (lower AoA), and to carefully verify the reliability and accuracy of their foil data. The lack of accurate empirical data on the aerodynamic characteristics of the foil is the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.
An Accurate and Computationally Efficient Model for Membrane-Type Circular-Symmetric Micro-Hotplates
Khan, Usman; Falconi, Christian
2014-01-01
Ideally, the design of high-performance micro-hotplates would require a large number of simulations because of the existence of many important design parameters as well as the possibly crucial effects of both spread and drift. However, the computational cost of FEM simulations, which are the only available tool for accurately predicting the temperature in micro-hotplates, is very high. As a result, micro-hotplate designers generally have no effective simulation-tools for the optimization. In order to circumvent these issues, here, we propose a model for practical circular-symmetric micro-hotplates which takes advantage of modified Bessel functions, a computationally efficient matrix approach for considering the relevant boundary conditions, Taylor linearization for modeling the Joule heating and radiation losses, and an external-region-segmentation strategy in order to accurately take into account radiation losses in the entire micro-hotplate. The proposed model is almost as accurate as FEM simulations and two to three orders of magnitude more computationally efficient (e.g., 45 s versus more than 8 h). The residual errors, which are mainly associated with the undesired heating in the electrical contacts, are small (e.g., a few degrees Celsius for an 800 °C operating temperature) and, for important analyses, almost constant. Therefore, we also introduce a computationally-easy single-FEM-compensation strategy in order to reduce the residual errors to about 1 °C. As illustrative examples of the power of our approach, we report the systematic investigation of a spread in the membrane thermal conductivity and of combined variations of both ambient and bulk temperatures. Our model enables a much faster characterization of micro-hotplates and, thus, a much more effective optimization prior to fabrication. PMID:24763214
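A hedged sketch of the modified-Bessel building block: with losses linearized, the radial temperature in the membrane satisfies a modified Helmholtz equation whose solution combines I0 and K0 terms. The geometry, healing length, and boundary temperatures below are invented, and the paper's matrix approach, Taylor-linearized radiation, and segmentation strategy are not reproduced.

```python
# Membrane temperature between the heater rim (r_in, fixed T_hot) and the
# cold frame (r_out, fixed T_amb). With linearized losses the excess
# temperature obeys a modified Helmholtz equation, solved here as
# A*I0(r/L) + B*K0(r/L). All dimensions and the healing length are invented.
import numpy as np
from scipy.special import i0, k0

r_in, r_out = 0.15e-3, 1.0e-3      # m, heater edge and membrane rim
L = 0.35e-3                        # m, assumed thermal healing length
T_hot, T_amb = 800.0, 25.0         # deg C boundary temperatures

# Enforce the two boundary conditions: a 2x2 linear system for A and B.
M = np.array([[i0(r_in / L), k0(r_in / L)],
              [i0(r_out / L), k0(r_out / L)]])
A, B = np.linalg.solve(M, np.array([T_hot - T_amb, 0.0]))

r = np.linspace(r_in, r_out, 6)
T = A * i0(r / L) + B * k0(r / L) + T_amb
print(np.round(T, 1))  # monotone decay from 800 toward 25 deg C
```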
High order magnetic optics for high dynamic range proton radiography at a kinetic energy of 800 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjue, S. K. L., E-mail: sjue@lanl.gov; Mariam, F. G.; Merrill, F. E.
2016-01-15
Flash radiography with 800 MeV kinetic energy protons at Los Alamos National Laboratory is an important experimental tool for investigations of dynamic material behavior driven by high explosives or pulsed power. The extraction of quantitative information about density fields in a dynamic experiment from proton generated images requires a high fidelity model of the proton imaging process. It is shown that accurate calculations of the transmission through the magnetic lens system require terms beyond second order for protons far from the tune energy. The approach used integrates the correlated multiple Coulomb scattering distribution simultaneously over the collimator and the image plane. Comparison with a series of static calibration images demonstrates the model’s accurate reproduction of both the transmission and blur over a wide range of tune energies in an inverse identity lens that consists of four quadrupole electromagnets.
Lattice Boltzmann model for simulation of magnetohydrodynamics
NASA Technical Reports Server (NTRS)
Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William
1991-01-01
A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.
Satellite Communication Hardware Emulation System (SCHES)
NASA Technical Reports Server (NTRS)
Kaplan, Ted
1993-01-01
Satellite Communication Hardware Emulator System (SCHES) is a powerful simulator that emulates the hardware used in TDRSS links. SCHES is a true bit-by-bit simulator that models communications hardware accurately enough to be used as a verification mechanism for actual hardware tests on user spacecraft. As a credit to its modular design, SCHES is easily configurable to model any user satellite communication link, though some development may be required to tailor existing software to user-specific hardware.
Investigation of Periodic Pitching through the Static Stall Angle of Attack.
1987-03-01
been completed to characterize and predict the dynamic stall process. In 1968 Ham (Ref 11) completed a study to explain the torsional oscillation of...peak values of lift and moment could be predicted accurately, but the model did not predict when the peaks would occur. Another problem with the...model was that it required input from experimental results to tell when leading edge vortex separation occurred. The prediction of when vortex shedding
High Order Schemes in Bats-R-US for Faster and More Accurate Predictions
NASA Astrophysics Data System (ADS)
Chen, Y.; Toth, G.; Gombosi, T. I.
2014-12-01
BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second order accurate TVD schemes combined with block based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented fifth order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high order schemes have been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy as well as robust oscillation-free behavior near discontinuities. We apply the new high order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second order scheme with far fewer computational resources. This is especially important for space weather prediction that requires faster than real time code execution.
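To give a flavor of such schemes, the sketch below implements a standard fifth-order WENO (Jiang-Shu) reconstruction of an interface value from five cell averages. BATS-R-US uses CWENO5 and MP5 variants; this plain WENO5 is shown only as a representative of the family.

```python
# Standard WENO5 (Jiang-Shu) left-biased reconstruction of f_{i+1/2}
# from five cell averages: smoothness indicators select among three
# third-order stencils via nonlinear weights.
import numpy as np

def weno5(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    """Left-biased fifth-order WENO value at the i+1/2 interface."""
    # smoothness indicators of the three candidate stencils
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    # third-order candidate reconstructions
    q0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    q1 = (-fm1 + 5*f0 + 2*fp1) / 6
    q2 = (2*f0 + 5*fp1 - fp2) / 6
    # nonlinear weights biased toward the smoothest stencils
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w @ np.array([q0, q1, q2])

x = np.linspace(0.0, 1.0, 5)
print(weno5(*np.sin(2 * np.pi * x)))  # reconstructed interface value
```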
A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion
NASA Astrophysics Data System (ADS)
Shavalikul, Akamol
In the current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple reference frames approach for a complete turbine stage with two different interface models: a steady circumferential average approach called a mixing plane model, and a time accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds Averaged Navier-Stokes finite volume solver (RANS) with a standard k−ε turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement in the relative frame of reference; the boundary conditions for the computations were obtained from inlet flow measurements performed in the AFTRF. A complete turbine stage, including an NGV and a rotor row, was simulated using the RANS solver with the SST k−ω turbulence model, with two different computational models for the interface between the rotating component and the stationary component. The first interface model, the circumferentially averaged mixing plane model, was solved for a fixed position of the rotor blades relative to the NGV in the stationary frame of reference. The information transferred between the NGV and rotor domains is obtained by averaging across the entire interface. The quasi-steady state flow characteristics of the AFTRF can be obtained from this interface model. After the model was validated with the existing experimental data, this model was not only used to investigate the flow characteristics in the turbine stage but also the effects of using pressure side rotor tip extensions. The tip leakage flow fields simulated from this model and from the linear cascade model show similar trends. More detailed understanding of unsteady characteristics of a turbine flow field can be obtained using the second type of interface model, the time accurate sliding mesh model. The potential flow interactions, wake characteristics, their effects on secondary flow formation, and the wake mixing process in a rotor passage were examined using this model.
Furthermore, turbine stage efficiency and effects of tip clearance height on the turbine stage efficiency were also investigated. A comparison between the results from the circumferential average model and the time accurate flow model is presented. It was found that the circumferential average model cannot accurately simulate flow interaction characteristics on the interface plane between the NGV trailing edge and the rotor leading edge. However, the circumferential average model does give accurate flow characteristics in the NGV domain and the rotor domain with less computational time and computer memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)
Evaluation of four methods for estimating leaf area of isolated trees
P.J. Peper; E.G. McPherson
2003-01-01
The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...
ERIC Educational Resources Information Center
Gohara, Sabry; Shapiro, Joseph I.; Jacob, Adam N.; Khuder, Sadik A.; Gandy, Robyn A.; Metting, Patricia J.; Gold, Jeffrey; Kleshinski, James
2011-01-01
The purpose of this study was to evaluate whether models based on pre-admission testing, including performance on the Medical College Admission Test (MCAT), performance on required courses in the medical school curriculum, or a combination of both could accurately predict performance of medical students on the United States Medical Licensing Examination (USMLE).
Todd A. Schroeder; Robbie Hember; Nicholas C. Coops; Shunlin Liang
2009-01-01
The magnitude and distribution of incoming shortwave solar radiation (SW) has significant influence on the productive capacity of forest vegetation. Models that estimate forest productivity require accurate and spatially explicit radiation surfaces that resolve both long- and short-term temporal climatic patterns and that account for topographic variability of the land...
An imputed forest composition map for New England screened by species range boundaries
Matthew J. Duveneck; Jonathan R. Thompson; B. Tyler Wilson
2015-01-01
Initializing forest landscape models (FLMs) to simulate changes in tree species composition requires accurate fine-scale forest attribute information mapped continuously over large areas. Nearest-neighbor imputation maps, maps developed from multivariate imputation of field plots, have high potential for use as the initial condition within FLMs, but the tendency for...
Intelligent Control of Flexible-Joint Robotic Manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Gallegos, G.
1997-01-01
This paper considers the trajectory tracking problem for uncertain rigid-link, flexible-joint manipulators, and presents a new intelligent controller as a solution to this problem. The proposed control strategy is simple and computationally efficient, requires little information concerning either the manipulator or actuator/transmission models, and ensures uniform boundedness of all signals and arbitrarily accurate task-space trajectory tracking.
Xinhau Zhour; James R. Brandle; Michele M. Schoeneberger; Tala Awada
2007-01-01
Multiple-stemmed tree species are often used in agricultural settings, playing a significant role in natural resource conservation and carbon sequestration. Biomass estimation, whether for modeling growth under different climate scenarios, accounting for carbon sequestered, or inclusion in natural resource inventories, requires equations that can accurately describe...
USDA-ARS?s Scientific Manuscript database
Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...
Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications
NASA Technical Reports Server (NTRS)
Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.
2016-01-01
Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which leads these simulations to require a substantial computational burden. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that moves energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes.
Zhao, Huawei; Crozier, Stuart; Liu, Feng
2002-12-01
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model. Copyright 2002 Wiley-Liss, Inc.
Hovering hummingbird wing aerodynamics during the annual cycle. I. Complete wing
Achache, Yonathan; Sapir, Nir; Elimelech, Yossef
2017-01-01
The diverse hummingbird family (Trochilidae) has unique adaptations for nectarivory, among which is the ability to sustain hover-feeding. As hummingbirds mainly feed while hovering, it is crucial to maintain this ability throughout the annual cycle—especially during flight-feather moult, in which wing area is reduced. To quantify the aerodynamic characteristics and flow mechanisms of a hummingbird wing throughout the annual cycle, time-accurate aerodynamic loads and flow field measurements were correlated over a dynamically scaled wing model of Anna’s hummingbird (Calypte anna). We present measurements recorded over a model of a complete wing to evaluate the baseline aerodynamic characteristics and flow mechanisms. We found that the vorticity concentration that had developed from the wing’s leading-edge differs from the attached vorticity structure that was typically found over insects’ wings; firstly, it is more elongated along the wing chord, and secondly, it encounters high levels of fluctuations rather than a steady vortex. Lift characteristics resemble those of insects; however, a 20% increase in the lift-to-torque ratio was obtained for the hummingbird wing model. Time-accurate aerodynamic loads were also used to evaluate the time-evolution of the specific power required from the flight muscles, and the overall wingbeat power requirements nicely matched previous studies. PMID:28878971
Precision GPS ephemerides and baselines
NASA Technical Reports Server (NTRS)
1992-01-01
The required accuracy of Global Positioning System (GPS) satellite positions can vary depending on the particular application. Application to relative positioning of receiver locations on the ground to infer Earth's tectonic plate motion requires the most accurate knowledge of the GPS satellite orbits. Research directed towards improving and evaluating the accuracy of GPS satellite orbits was conducted at the University of Texas Center for Space Research (CSR). Understanding and modeling the forces acting on the satellites was a major focus of the research. Other aspects of orbit determination, such as the reference frame, time system, measurement modeling, and parameterization, were also investigated. Gravitational forces were modeled by truncated versions of extant gravity fields such as the Goddard Earth Model (GEM-L2), GEM-T1, and TEG-2, together with third body perturbations due to the Sun and Moon. Nongravitational forces considered were solar radiation pressure and perturbations due to thermal venting and thermal imbalance. At the GPS satellite orbit accuracy level required for crustal dynamics applications, models for the nongravitational perturbations play a critical role, since the gravitational forces are well understood and are modeled adequately for GPS satellite orbits.
3D gut-liver chip with a PK model for prediction of first-pass metabolism.
Lee, Dong Wook; Ha, Sang Keun; Choi, Inwook; Sung, Jong Hwan
2017-11-07
Accurate prediction of first-pass metabolism is essential for improving the time and cost efficiency of the drug development process. Here, we have developed a microfluidic gut-liver co-culture chip that aims to reproduce the first-pass metabolism of oral drugs. This chip consists of two separate layers for gut (Caco-2) and liver (HepG2) cell lines, where cells can be co-cultured in both 2D and 3D forms. Both cell lines were maintained well in the chip, as verified by confocal microscopy and measurement of hepatic enzyme activity. We investigated the PK profile of paracetamol in the chip, and a corresponding PK model was constructed, which was used to predict PK profiles for different chip design parameters. Simulation results implied that a larger absorption surface area and a higher metabolic capacity are required to reproduce the in vivo PK profile of paracetamol more accurately. Our study suggests the possibility of reproducing the human PK profile on a chip, contributing to accurate prediction of the pharmacological effects of drugs.
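A toy version of such a PK model is sketched below: a gut compartment feeding a systemic compartment, with only a fraction of the absorbed dose escaping hepatic extraction. The rate constants and extraction ratio are invented, not fitted chip or in vivo values.

```python
# Two-compartment "gut -> liver -> systemic" first-pass sketch solved
# with solve_ivp. ka/ke/eh are illustrative, not fitted parameters.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, eh = 1.2, 0.3, 0.4   # 1/h absorption, 1/h elimination, extraction ratio

def rhs(t, y):
    gut, blood = y
    absorbed = ka * gut
    # only the fraction escaping hepatic extraction reaches the circulation
    return [-absorbed, (1.0 - eh) * absorbed - ke * blood]

sol = solve_ivp(rhs, (0.0, 24.0), [100.0, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 7)
print(np.round(sol.sol(t)[1], 2))  # systemic amount rises, then decays
```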
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle
NASA Astrophysics Data System (ADS)
Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.
2017-04-01
Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design of experiments technique. A regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 percent of the predicted values are within an error range of ±5 K (±5 °C) when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
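The surrogate idea can be sketched in a few lines: fit a fast regression to a small design-of-experiments table of simulation results, then query it instantly. The "CFD" response below is a synthetic stand-in, and the quadratic regression is only one plausible choice.

```python
# Surrogate-model sketch: regress a fast predictor on a small table of
# expensive simulation results. The response function here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
# columns: initial steel temp (K), slag thickness (m), holding time (min)
X = np.column_stack([rng.uniform(1820, 1900, 60),
                     rng.uniform(0.05, 0.20, 60),
                     rng.uniform(10, 120, 60)])
# synthetic "CFD" response: temperature drop at the ladle bottom (K)
y = (0.08 * X[:, 2] - 40 * X[:, 1] + 0.01 * (X[:, 0] - 1850)
     + rng.normal(0, 0.5, 60))

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, y)
print(surrogate.predict([[1860.0, 0.12, 60.0]]))  # instantaneous estimate
```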
Time Manager Software for a Flight Processor
NASA Technical Reports Server (NTRS)
Zoerne, Roger
2012-01-01
Data analysis is a process of inspecting, cleaning, transforming, and modeling data to highlight useful information and suggest conclusions. Accurate timestamps and a timeline of vehicle events are needed to analyze flight data. By moving the timekeeping to the flight processor, there is no longer a need for a redundant time source. If the flight processors are initially synchronized to GPS, they can freewheel and maintain fairly accurate time throughout the flight with no additional GPS time messages received. However, additional GPS time messages will ensure even greater accuracy. When a timestamp is required, a gettime function is called that immediately reads the time-base register.
The cooling rates of pahoehoe flows: The importance of lava porosity
NASA Technical Reports Server (NTRS)
Jones, Alun C.
1993-01-01
Many theoretical models have been put forward to account for the cooling history of a lava flow; however, only limited detailed field data exist to validate these models. To accurately model the cooling of lava flows, data are required, not only on the heat loss mechanisms, but also on the surface skin development and the causes of differing cooling rates. This paper argues that the cause of such variations in the cooling rates are attributed, primarily, to the vesicle content and degassing history of the lava.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.
NASA Astrophysics Data System (ADS)
Ciminelli, Caterina; Dell'Olio, Francesco; Armenise, Mario N.; Iacomacci, Francesco; Pasquali, Franca; Formaro, Roberto
2017-11-01
A fiber optic digital link for on-board data handling is modeled, designed and optimized in this paper. Design requirements and constraints relevant to the link, which is in the frame of novel on-board processing architectures, are discussed. Two possible link configurations are investigated, showing their advantages and disadvantages. An accurate mathematical model of each link component and the entire system is reported and results of link simulation based on those models are presented. Finally, some details on the optimized design are provided.
Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation
Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong
2013-01-01
Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
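A hedged sketch of the single-spot lateral model described above: a double Gaussian core plus a Cauchy-Lorentz-like halo term. The weights and widths are invented commissioning-style parameters, not the paper's fitted values.

```python
# Illustrative single-spot lateral dose model: double Gaussian core plus a
# long-range Cauchy-Lorentz-like halo. All parameters are made up.
import numpy as np

def spot_profile(r, w2=0.05, w3=0.02, s1=0.4, s2=1.2, gamma=2.5):
    """Relative lateral dose at radius r (cm)."""
    g1 = np.exp(-0.5 * (r / s1) ** 2)              # primary core
    g2 = np.exp(-0.5 * (r / s2) ** 2)              # scattered secondary core
    halo = 1.0 / (1.0 + (r / gamma) ** 2) ** 2     # Cauchy-Lorentz-like tail
    return (1.0 - w2 - w3) * g1 + w2 * g2 + w3 * halo

r = np.linspace(0.0, 8.0, 5)
print(spot_profile(r))  # the halo term dominates the far tail
```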
A Smoluchowski model of crystallization dynamics of small colloidal clusters
NASA Astrophysics Data System (ADS)
Beltran-Villegas, Daniel J.; Sehgal, Ray M.; Maroudas, Dimitrios; Ford, David M.; Bevan, Michael A.
2011-10-01
We investigate the dynamics of colloidal crystallization in a 32-particle system at a fixed value of interparticle depletion attraction that produces coexisting fluid and solid phases. Free energy landscapes (FELs) and diffusivity landscapes (DLs) are obtained as coefficients of 1D Smoluchowski equations using as order parameters either the radius of gyration or the average crystallinity. FELs and DLs are estimated by fitting the Smoluchowski equations to Brownian dynamics (BD) simulations using either linear fits to locally initiated trajectories or global fits to unbiased trajectories using Bayesian inference. The resulting FELs are compared to Monte Carlo Umbrella Sampling results. The accuracy of the FELs and DLs for modeling colloidal crystallization dynamics is evaluated by comparing mean first-passage times from BD simulations with analytical predictions using the FEL and DL models. While the 1D models accurately capture dynamics near the free energy minimum fluid and crystal configurations, predictions near the transition region are not quantitatively accurate. A preliminary investigation of ensemble averaged 2D order parameter trajectories suggests that 2D models are required to capture crystallization dynamics in the transition region.
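Once a 1D free energy landscape F(x) and diffusivity landscape D(x) are in hand, the mean first-passage time follows from the standard double-integral expression for Smoluchowski dynamics, as in the sketch below; the double-well landscape and constant diffusivity are illustrative, not the colloidal system's.

```python
# Numeric mean first-passage time for 1D Smoluchowski dynamics:
# tau = int_a^b dy e^{F(y)/kT}/D(y) * int_a^y dz e^{-F(z)/kT},
# with a reflecting boundary at x[0] and absorbing boundary at x[-1].
import numpy as np

def mfpt(x, F, D, kT=1.0):
    dx = np.diff(x)
    boltz = np.exp(-F / kT)
    inner = np.concatenate([[0.0],
                            np.cumsum(0.5 * (boltz[1:] + boltz[:-1]) * dx)])
    integrand = np.exp(F / kT) / D * inner
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dx))

x = np.linspace(-1.5, 1.5, 2001)
F = 2.0 * (x**2 - 1.0) ** 2        # double well, ~2 kT barrier at x = 0
D = np.ones_like(x)                # constant diffusivity for illustration
print(f"mean first-passage time to x=1.5: {mfpt(x, F, D):.2f}")
```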
Computer graphics testbed to simulate and test vision systems for space applications
NASA Technical Reports Server (NTRS)
Cheatham, John B.; Wu, Chris K.; Lin, Y. H.
1991-01-01
A system was developed for displaying computer graphics images of space objects, and the use of the system was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects. Also, precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite are created, for which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing imaging processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
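A minimal sketch of the unscented transform itself, with a simple nonlinear function standing in for the EOL simulation; the sigma-point scaling parameter and the test function are arbitrary choices, not the paper's valve model.

```python
# Unscented transform: propagate 2n+1 sigma points of a Gaussian state
# through a nonlinear map and recover the output mean and covariance.
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Mean/cov of f(x) for x ~ N(mean, cov) via 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)           # L @ L.T == (n+kappa)*cov
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in sigma])
    y_mean = w @ y
    dy = y - y_mean
    return y_mean, (w[:, None] * dy).T @ dy

# a nonlinear scalar map standing in for the EOL simulation
f = lambda x: np.array([np.exp(0.5 * x[0]) + x[1] ** 2])
m, P = unscented_transform(np.array([0.1, 0.2]), np.diag([0.04, 0.01]), f)
print(m, P)  # approximated mean and variance of the transformed state
```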
POD/DEIM reduced-order strategies for efficient four dimensional variational data assimilation
NASA Astrophysics Data System (ADS)
Ştefănescu, R.; Sandu, A.; Navon, I. M.
2015-08-01
This work studies reduced order modeling (ROM) approaches to speed up the solution of variational data assimilation problems with large scale nonlinear dynamical models. It is shown that a key requirement for a successful reduced order solution is that reduced order Karush-Kuhn-Tucker conditions accurately represent their full order counterparts. In particular, accurate reduced order approximations are needed for the forward and adjoint dynamical models, as well as for the reduced gradient. New strategies to construct reduced order bases are developed for proper orthogonal decomposition (POD) ROM data assimilation using both Galerkin and Petrov-Galerkin projections. For the first time POD, tensorial POD, and the discrete empirical interpolation method (DEIM) are employed to develop reduced data assimilation systems for a geophysical flow model, namely, the two dimensional shallow water equations. Numerical experiments confirm the theoretical framework for Galerkin projection. In the case of Petrov-Galerkin projection, stabilization strategies must be considered for the reduced order models. The new reduced order shallow water data assimilation system provides analyses similar to those produced by the full resolution data assimilation system in one tenth of the computational time.
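The POD ingredient can be sketched compactly: stack snapshots as columns, truncate an SVD to obtain the reduced basis, and Galerkin-project an operator onto it. The snapshot matrix below is synthetic, and the paper's DEIM and Petrov-Galerkin machinery is not shown.

```python
# POD basis construction via truncated SVD of a (synthetic) snapshot
# matrix, followed by Galerkin projection of a linear operator.
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 500, 60, 8                      # state size, snapshots, POD modes
snapshots = (rng.standard_normal((n, 3)) @ rng.standard_normal((3, m))
             + 0.01 * rng.standard_normal((n, m)))   # ~rank-3 data + noise

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
phi = U[:, :r]                            # POD basis (n x r)
energy = np.cumsum(s**2) / np.sum(s**2)
print(f"captured energy with {r} modes: {energy[r - 1]:.4f}")

A = rng.standard_normal((n, n)) / np.sqrt(n)   # full-order linear operator
A_r = phi.T @ A @ phi                          # r x r reduced operator
print(A_r.shape)
```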
Evaluation of hydrodynamic ocean models as a first step in larval dispersal modelling
NASA Astrophysics Data System (ADS)
Vasile, Roxana; Hartmann, Klaas; Hobday, Alistair J.; Oliver, Eric; Tracey, Sean
2018-01-01
Larval dispersal modelling, a powerful tool in studying population connectivity and species distribution, requires accurate estimates of the ocean state, on a high-resolution grid in both space (e.g. 0.5-1 km horizontal grid) and time (e.g. hourly outputs), particularly of current velocities and water temperature. These estimates are usually provided by hydrodynamic models based on which larval trajectories and survival are computed. In this study we assessed the accuracy of two hydrodynamic models around Australia - Bluelink ReANalysis (BRAN) and Hybrid Coordinate Ocean Model (HYCOM) - through comparison with empirical data from the Australian National Moorings Network (ANMN). We evaluated the models' predictions of seawater parameters most relevant to larval dispersal - temperature, u and v velocities and current speed and direction - on the continental shelf where spawning and nursery areas for major fishery species are located. The performance of each model in estimating ocean parameters was found to depend on the parameter investigated and to vary from one geographical region to another. Both BRAN and HYCOM models systematically overestimated the mean water temperature, particularly in the top 140 m of the water column, with over 2 °C bias at some of the mooring stations. The HYCOM model was more accurate than BRAN for water temperature predictions in the Great Australian Bight and along the east coast of Australia. Skill scores between each model and the in situ observations showed lower accuracy in the models' predictions of u and v ocean current velocities compared to water temperature predictions. For both models, the lowest accuracy in predicting ocean current velocities, speed and direction was observed at 200 m depth. Low accuracy of both model predictions was also observed in the top 10 m of the water column. BRAN had more accurate predictions of both u and v velocities in the upper 50 m of the water column at all mooring station locations. While HYCOM predictions of ocean current speed were generally more accurate than BRAN, BRAN predictions of both ocean current speed and direction were more accurate than HYCOM along the southeast coast of Australia and Tasmania. This study identified important inaccuracies in the hydrodynamic models' estimates of real ocean parameters on time scales relevant to larval dispersal studies. These findings highlight the importance of the choice and validation of hydrodynamic models, and call for estimates of such bias to be incorporated in dispersal studies.
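The abstract does not specify its skill score, so as one representative choice the sketch below computes the Willmott index of agreement between a biased synthetic model series and observations; note how a constant warm bias, like the temperature bias reported above, degrades the score.

```python
# Willmott index of agreement (1 = perfect, 0 = no skill) as one common
# model-vs-observation skill score; the data below are synthetic.
import numpy as np

def willmott_skill(model, obs):
    obs_mean = obs.mean()
    num = np.sum((model - obs) ** 2)
    den = np.sum((np.abs(model - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den

rng = np.random.default_rng(5)
obs = 15.0 + 2.0 * np.sin(np.linspace(0, 6 * np.pi, 200))    # e.g. SST (deg C)
biased_model = obs + 2.0 + rng.normal(0, 0.3, obs.size)      # constant warm bias
print(f"skill = {willmott_skill(biased_model, obs):.3f}")    # bias lowers the score
```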
Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools
NASA Technical Reports Server (NTRS)
Bis, Rachael; Maul, William A.
2015-01-01
Functional fault models (FFMs) are a directed graph representation of the failure effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone due to the intensive and customized process necessary to verify each individual component model and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
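A functional fault model's directed-graph structure lends itself to simple automated checks. The sketch below, a hypothetical miniature not drawn from NASA's actual FFM tooling, encodes failure-effect propagation as an adjacency map and verifies that every failure mode can reach at least one observable effect.

```python
from collections import deque

# Illustrative sketch of a functional fault model as a directed graph and one
# automated verification check: every failure mode should reach at least one
# observable effect. The graph and rule are hypothetical examples.

ffm = {
    "pump_bearing_wear": ["pump_vibration"],
    "pump_vibration": ["flow_oscillation"],
    "flow_oscillation": ["low_thrust_alarm"],
    "valve_stuck_closed": [],            # deliberately broken: no effect path
}
observables = {"low_thrust_alarm"}

def reachable(graph, start):
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

for mode in ["pump_bearing_wear", "valve_stuck_closed"]:
    ok = reachable(ffm, mode) & observables
    print(mode, "->", "detectable" if ok else "NOT detectable: model gap?")
```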
A New Calibration Method for Commercial RGB-D Sensors
Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu
2017-01-01
Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
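The sketch below illustrates the general idea under stated assumptions: a Metropolis sampler whose likelihood uses a fast surrogate forward model, with the surrogate's error folded in as additional Gaussian noise. The toy forward functions, noise levels, and prior are placeholders, not the paper's trained network or GPR traveltime physics.

```python
import numpy as np

# Minimal Metropolis sketch with a fast surrogate forward model. The surrogate
# error is treated as extra Gaussian noise (variance sigma_model**2) added to
# the data-noise term, standing in for the probabilistic modeling-error
# quantification described above. Forward models here are toy functions.

rng = np.random.default_rng(1)

def forward_true(m):           # expensive "full physics" (toy)
    return np.array([m**2, np.sin(m)])

def forward_surrogate(m):      # fast approximation with a known bias (toy)
    return np.array([m**2 + 0.05, np.sin(m)])

d_obs = forward_true(1.3) + 0.05 * rng.standard_normal(2)
sigma_data, sigma_model = 0.05, 0.05
sigma2 = sigma_data**2 + sigma_model**2    # inflate noise by surrogate error

def log_post(m):
    r = d_obs - forward_surrogate(m)
    return -0.5 * np.sum(r**2) / sigma2 - 0.5 * m**2  # N(0,1) prior

m, samples = 0.0, []
for _ in range(20000):
    prop = m + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop
    samples.append(m)
print("posterior mean ~", np.mean(samples[5000:]))
```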
Revisiting low-fidelity two-fluid models for gas–solids transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adeleke, Najeem, E-mail: najm@psu.edu; Adewumi, Michael, E-mail: m2a@psu.edu; Ityokumbul, Thaddeus
Two-phase gas–solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors and food processing units. Systems automation and real time process optimization stand to benefit a great deal from the availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas–solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for the modulus of elasticity. A new set of Roe–Pike averages are presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux-limiter. The model is evaluated by the prediction of experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately, even under flow regimes involving counter-current flow.
An interfacial mechanism for cloud droplet formation on organic aerosols
Ruehl, C. R.; Davies, J. F.; Wilson, K. R.
2016-03-25
Accurate predictions of aerosol/cloud interactions require simple, physically accurate parameterizations of the cloud condensation nuclei (CCN) activity of aerosols. Current models assume that organic aerosol species contribute to CCN activity by lowering water activity. We measured droplet diameters at the point of CCN activation for particles composed of dicarboxylic acids or secondary organic aerosol and ammonium sulfate. Droplet activation diameters were 40 to 60% larger than predicted if the organic was assumed to be dissolved within the bulk droplet, suggesting that a new mechanism is needed to explain cloud droplet formation. A compressed film model explains how surface tension depression by interfacial organic molecules can alter the relationship between water vapor supersaturation and droplet size (i.e., the Köhler curve), leading to the larger diameters observed at activation.
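The paper's compressed film model is not reproduced here, but the classical Köhler curve it modifies is easy to sketch. The snippet below computes a κ-Köhler curve (a standard single-parameter form) for an illustrative 100 nm particle and locates the critical supersaturation; the surface tension term is where an interfacial organic film would enter.

```python
import numpy as np

# Sketch of a classical Koehler curve for a droplet on a soluble particle,
# the baseline that a surface-tension-lowering organic film would modify.
# Physical constants are standard; the particle parameters are illustrative.

Mw, rho_w, R, T = 0.018, 1000.0, 8.314, 293.15   # kg/mol, kg/m3, J/mol/K, K
sigma = 0.072                                     # N/m, pure-water surface tension
kappa = 0.6                                       # hygroscopicity (illustrative)
Dd = 100e-9                                       # dry diameter, m

D = np.linspace(1.01 * Dd, 2e-6, 2000)
kelvin = np.exp(4 * sigma * Mw / (R * T * rho_w * D))     # curvature term
raoult = (D**3 - Dd**3) / (D**3 - Dd**3 * (1 - kappa))    # solute term
S = raoult * kelvin

i = np.argmax(S)
print(f"critical supersaturation ~ {(S[i]-1)*100:.3f}% at D ~ {D[i]*1e6:.3f} um")
```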
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is in the ability to represent the effects of high frequency linear response with accuracy, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure which truncates the high frequency response, the approximation in the exponential matrix solution is in the time domain. By truncating the series solution to the matrix exponential short, the solution is made inaccurate after a certain time. Yet, up to that time the solution is extremely accurate, including all high frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi degree of freedom models of cantilever beams.
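A minimal sketch of the time-stepping idea: recast M x'' + K x = 0 in first-order form and advance the state with a precomputed matrix exponential, which retains all high-frequency content regardless of step size. The 2-DOF system is a toy stand-in for the cantilever beam models.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of exact time stepping with the matrix exponential for a linear
# MDOF system M x'' + K x = 0, recast in first-order form z' = A z, so that
# z(t+dt) = expm(A*dt) @ z(t). A 2-DOF toy system stands in for a beam model.

M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), np.zeros((n, n))]])

dt = 0.05
Phi = expm(A * dt)              # computed once, reused at every step

z = np.array([0.0, 0.01, 0.0, 0.0])    # initial tip displacement, zero velocity
for _ in range(200):                    # advance 10 s of free vibration
    z = Phi @ z
print("tip displacement after 10 s:", z[1])
```

The one-time cost of `expm` is amortized over all steps, which is the trade the abstract describes: accuracy up to a chosen truncation time without tiny time steps.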
Elementary solutions of coupled model equations in the kinetic theory of gases
NASA Technical Reports Server (NTRS)
Kriese, J. T.; Siewert, C. E.; Chang, T. S.
1974-01-01
The method of elementary solutions is employed to solve two coupled integrodifferential equations sufficient for determining temperature-density effects in a linearized BGK model in the kinetic theory of gases. Full-range completeness and orthogonality theorems are proved for the developed normal modes and the infinite-medium Green's function is constructed as an illustration of the full-range formalism. The appropriate homogeneous matrix Riemann problem is discussed, and half-range completeness and orthogonality theorems are proved for a certain subset of the normal modes. The required existence and uniqueness theorems relevant to the H matrix, basic to the half-range analysis, are proved, and an accurate and efficient computational method is discussed. The half-space temperature-slip problem is solved analytically, and a highly accurate value of the temperature-slip coefficient is reported.
Modelling sub-daily evaporation from a small reservoir.
NASA Astrophysics Data System (ADS)
McGloin, Ryan; McGowan, Hamish; McJannet, David; Burn, Stewart
2013-04-01
Accurate quantification of evaporation from small water storages is essential for water management and is also required as input in some regional hydrological and meteorological models. Global estimates of the number of small storages or lakes (< 0.1 km²) are in the order of 300 million (Downing et al., 2006). However, direct evaporation measurements at small reservoirs using the eddy covariance or scintillometry techniques have been limited due to their expensive and complex nature. To correctly represent the effect that small water bodies have on the regional hydrometeorology, reliable estimates of sub-daily evaporation are necessary; evaporation modelling studies at small reservoirs have, however, so far been limited to daily estimates. In order to ascertain suitable methods for accurately modelling hourly evaporation from a small reservoir, this study compares evaporation measured by the eddy covariance method at a small reservoir in southeast Queensland, Australia, to results from several modelling approaches using both over-water and land-based meteorological measurements. Accurate predictions of hourly evaporation were obtained by a simple theoretical mass transfer model requiring only over-water measurements of wind speed, humidity and water surface temperature. An evaporation model recently developed for small reservoir environments by Granger and Hedstrom (2011) appeared to overestimate the impact of stability on evaporation, while evaporation predictions made by the one-dimensional hydrodynamics model DYRESM (Dynamic Reservoir Simulation Model) (Imberger and Patterson, 1981) showed reasonable agreement with measured values. DYRESM did not show any substantial improvement in evaporation prediction when inflows and outflows were included, and only a slightly better correlation was obtained when over-water meteorological measurements were used in place of land-based measurements. Downing, J. A., Y. T. Prairie, J. J. Cole, C. M. Duarte, L. J. Tranvik, R. G. Striegl, W. H. McDowell, P. Kortelainen, N. F. Caraco, J. M. Melack and J. J. Middelburg (2006), The global abundance and size distribution of lakes, ponds, and impoundments, Limnology and Oceanography, 51, 2388-2397. Granger, R.J. and N. Hedstrom (2011), Modelling hourly rates of evaporation from small lakes, Hydrological and Earth System Sciences, 15, doi:10.5194/hess-15-267-2011. Imberger, J. and J.C. Patterson (1981), Dynamic Reservoir Simulation Model - DYRESM: 5, In: Transport Models for Inland and Coastal Waters. H.B. Fischer (Ed.). Academic Press, New York, 310-361.
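The mass transfer model singled out above has the Dalton form E = f(u)(e_s(T_w) − e_a). The sketch below implements that structure with a Tetens-type saturation vapour pressure curve; the wind-function coefficients are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

# Sketch of a simple mass-transfer (Dalton-type) evaporation estimate,
# E = f(u) * (e_s(T_water) - e_air), from over-water wind speed, humidity
# and water surface temperature. The wind-function coefficients a, b are
# illustrative placeholders only.

def sat_vapour_pressure_kpa(T_c):
    """Tetens-type formula for saturation vapour pressure (kPa)."""
    return 0.6108 * np.exp(17.27 * T_c / (T_c + 237.3))

def evaporation_mm_per_hr(u2, T_water_c, T_air_c, rh):
    a, b = 0.05, 0.04                  # hypothetical wind-function coefficients
    e_s = sat_vapour_pressure_kpa(T_water_c)   # at the water surface
    e_a = rh * sat_vapour_pressure_kpa(T_air_c)  # ambient vapour pressure
    return max(0.0, (a + b * u2) * (e_s - e_a))

print(evaporation_mm_per_hr(u2=3.0, T_water_c=24.0, T_air_c=20.0, rh=0.6))
```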
NASA Astrophysics Data System (ADS)
Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd
2017-09-01
The emerging era of big data in recent years has led to large and complex datasets that demand faster and better decision making. However, small dataset problems still arise in certain areas, making analysis and decision making difficult. To build a prediction model, a large sample is required for training; a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one solution to the small dataset problem.
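As one concrete member of the family of artificial data generation methods reviewed here, the sketch below augments a small dataset by bootstrap resampling with feature-scaled Gaussian jitter; it is a generic illustration, not the specific algorithm of any one surveyed paper.

```python
import numpy as np

# Sketch of a simple artificial-data-generation idea for small datasets:
# bootstrap resampling plus small Gaussian perturbations scaled to each
# feature's spread ("noise injection").

def virtual_samples(X, n_new, noise_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=n_new)        # bootstrap rows
    jitter = rng.standard_normal((n_new, X.shape[1])) * noise_scale * X.std(axis=0)
    return X[idx] + jitter

X_small = np.array([[1.0, 10.0], [1.2, 11.0], [0.9, 9.5], [1.1, 10.4]])
X_aug = np.vstack([X_small, virtual_samples(X_small, n_new=20)])
print(X_aug.shape)   # (24, 2): original plus synthetic training rows
```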
Mathematical Models of Continuous Flow Electrophoresis
NASA Technical Reports Server (NTRS)
Saville, D. A.; Snyder, R. S.
1985-01-01
Development of high resolution continuous flow electrophoresis devices ultimately requires comprehensive understanding of the ways various phenomena and processes facilitate or hinder separation. A comprehensive model of the actual three dimensional flow, temperature and electric fields was developed to provide guidance in the design of electrophoresis chambers for specific tasks and a means of interpreting test data on a given chamber. Part of the process of model development includes experimental and theoretical studies of hydrodynamic stability. This is necessary to understand the origin of mixing flows observed with wide-gap gravitational effects. To ensure that the model accurately reflects the flow field and particle motion requires extensive experimental work. Another part of the investigation is concerned with the behavior of concentrated sample suspensions with regard to sample stream stability and particle-particle interactions which might affect separation in an electric field, especially at high field strengths. Mathematical models will be developed and tested to establish the roles of the various interactions.
Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael
2008-01-01
Background Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed if a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14 407 US participants aged between 25 and 74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset N=6186. We compared how well either method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0·829. The c statistic of the non-laboratory-based model was 0·831. In men, the results were similar (0·784 for the laboratory-based model and 0·783 for the non-laboratory-based model). Results were similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only. Interpretation A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687
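The comparison design reduces to fitting two discrimination models on the same cohort and comparing c statistics (ROC AUC). The sketch below mimics that design on synthetic stand-in data, swapping cholesterol for BMI in the second model; the cohort, coefficients, and correlations are fabricated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch: fit a "laboratory" model and a "non-laboratory" model (cholesterol
# swapped for BMI) on the same synthetic cohort and compare c statistics.

rng = np.random.default_rng(0)
n = 5000
age   = rng.uniform(25, 74, n)
sbp   = rng.normal(125, 18, n)
smoke = rng.integers(0, 2, n)
chol  = rng.normal(5.2, 1.0, n)
bmi   = 22 + 0.3 * chol + rng.normal(0, 2.5, n)  # BMI correlated with cholesterol
logit = -12 + 0.08 * age + 0.02 * sbp + 0.5 * smoke + 0.3 * chol
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_lab    = np.column_stack([age, sbp, smoke, chol])
X_nonlab = np.column_stack([age, sbp, smoke, bmi])
for name, X in [("laboratory", X_lab), ("non-laboratory", X_nonlab)]:
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    print(name, "c statistic:", round(roc_auc_score(y, p), 3))
```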
NASA Astrophysics Data System (ADS)
Satyaramesh, P. V.; RadhaKrishna, C.
2013-06-01
A generalized pricing structure for procurement of power under the frequency ancillary service is developed in this paper. It is a frequency-linked price model suitable for a deregulated market environment. The model takes into consideration governor characteristics and frequency characteristics of generators as additional parameters in the load flow method. The main objective of the new approach proposed in this paper is to establish a bidding price structure for frequency regulation services in competitive ancillary electricity markets under steady-state conditions. A substantial literature exists on calculating frequency deviations with respect to load changes using dynamic simulation methods; in this paper, however, the model computes the frequency deviations for additional power requirements under steady state while accounting for the power system network topology. An attempt is also made to develop an optimal bidding price structure for frequency-regulated systems. It gives a signal to traders or bidders that power demand can be assessed more accurately much closer to real time, and helps participants bid more accurate quantities on the day-ahead market. Recent trends in the frequency-linked price models existing in Indian power systems, and the issues requiring attention, are also dealt with in this paper. Test calculations have been performed on a 30-bus system. The paper also explains the adaptability of this model to the practical Indian power system. The results presented are analyzed and useful conclusions are drawn.
NASA Astrophysics Data System (ADS)
White, R. D.; Cocks, D.; Boyle, G.; Casey, M.; Garland, N.; Konovalov, D.; Philippa, B.; Stokes, P.; de Urquijo, J.; González-Magaña, O.; McEachran, R. P.; Buckman, S. J.; Brunger, M. J.; Garcia, G.; Dujko, S.; Petrovic, Z. Lj
2018-05-01
Accurate modelling of electron transport in plasmas, plasma-liquid and plasma-tissue interactions requires (i) the existence of accurate and complete sets of cross-sections, and (ii) an accurate treatment of electron transport in these gaseous and soft-condensed phases. In this study we present progress towards the provision of self-consistent electron-biomolecule cross-section sets representative of tissue, including water and THF, by comparison of calculated transport coefficients with those measured using a pulsed-Townsend swarm experiment. Water–argon mixtures are used to assess the self-consistency of the electron-water vapour cross-section set proposed in de Urquijo et al (2014 J. Chem. Phys. 141 014308). Modelling of electron transport in liquids and soft-condensed matter is considered through appropriate generalisations of Boltzmann’s equation to account for spatial-temporal correlations and screening of the electron potential. The ab initio formalism is applied to electron transport in atomic liquids and compared with available experimental swarm data for these noble liquids. Issues on the applicability of the ab initio formalism for krypton are discussed and addressed through consideration of the background energy of the electron in liquid krypton. The presence of self-trapping (into bubble/cluster states/solvation) in some liquids requires a reformulation of the governing Boltzmann equation to account for the combined localised–delocalised nature of the resulting electron transport. A generalised Boltzmann equation is presented which is highlighted to produce dispersive transport observed in some liquid systems.
Validity of the Born approximation for beyond Gaussian weak lensing observables
NASA Astrophysics Data System (ADS)
Petri, Andrea; Haiman, Zoltán; May, Morgan
2017-06-01
Accurate forward modeling of weak lensing (WL) observables from cosmological parameters is necessary for upcoming galaxy surveys. Because WL probes structures in the nonlinear regime, analytical forward modeling is very challenging, if not impossible. Numerical simulations of WL features rely on ray tracing through the outputs of N -body simulations, which requires knowledge of the gravitational potential and accurate solvers for light ray trajectories. A less accurate procedure, based on the Born approximation, only requires knowledge of the density field, and can be implemented more efficiently and at a lower computational cost. In this work, we use simulations to show that deviations of the Born-approximated convergence power spectrum, skewness and kurtosis from their fully ray-traced counterparts are consistent with the smallest nontrivial O (Φ3) post-Born corrections (so-called geodesic and lens-lens terms). Our results imply a cancellation among the larger O (Φ4) (and higher order) terms, consistent with previous analytic work. We also find that cosmological parameter bias induced by the Born-approximated power spectrum is negligible even for a LSST-like survey, once galaxy shape noise is considered. When considering higher order statistics such as the κ skewness and kurtosis, however, we find significant bias of up to 2.5 σ . Using the LensTools software suite, we show that the Born approximation saves a factor of 4 in computing time with respect to the full ray tracing in reconstructing the convergence.
A COTS-Based Attitude Dependent Contact Scheduling System
NASA Technical Reports Server (NTRS)
DeGumbia, Jonathan D.; Stezelberger, Shane T.; Woodard, Mark
2006-01-01
The mission architecture of the Gamma-ray Large Area Space Telescope (GLAST) requires a sophisticated ground system component for scheduling the downlink of science data. Contacts between the GLAST satellite and the Tracking and Data Relay Satellite System (TDRSS) are restricted by the limited field-of-view of the science data downlink antenna. In addition, contacts must be scheduled when permitted by the satellite's complex and non-repeating attitude profile. Complicating the matter further, the long lead-time required to schedule TDRSS services, combined with the short duration of the downlink contact opportunities, mandates accurate GLAST orbit and attitude modeling. These circumstances require the development of a scheduling system that is capable of predictively and accurately modeling not only the orbital position of GLAST but also its attitude. This paper details the methods used in the design of a Commercial Off The Shelf (COTS)-based attitude-dependent TDRSS contact scheduling system that meets the unique scheduling requirements of the GLAST mission, and it suggests a COTS-based scheduling approach to support future missions. The scheduling system applies filtering and smoothing algorithms to telemetered GPS data to produce high-accuracy predictive GLAST orbit ephemerides. Next, bus pointing commands from the GLAST Science Support Center are used to model the complexities of the two dynamic science gathering attitude modes. Attitude-dependent view periods are then generated between GLAST and each of the supporting TDRSs. Numerous scheduling constraints are then applied to account for various mission specific resource limitations. Next, an optimization engine is used to produce an optimized TDRSS contact schedule request, which is sent to TDRSS scheduling for confirmation. Lastly, the confirmed TDRSS contact schedule is rectified with an updated ephemeris and adjusted bus pointing commands to produce a final science downlink contact schedule.
Saive, Anne-Lise; Royet, Jean-Pierre; Garcia, Samuel; Thévenet, Marc; Plailly, Jane
2015-01-01
Episodic memory is defined as the conscious retrieval of specific past events. Whether accurate episodic retrieval requires a recollective experience or if a feeling of knowing is sufficient remains unresolved. We recently devised an ecological approach to investigate the controlled cued-retrieval of episodes composed of unnamable odors (What) located spatially (Where) within a visual context (Which context). By combining the Remember/Know procedure with our laboratory-ecological approach in an original way, the present study demonstrated that the accurate odor-evoked retrieval of complex and multimodal episodes overwhelmingly required conscious recollection. A feeling of knowing, even when associated with a high level of confidence, was not sufficient to generate accurate episodic retrieval. Interestingly, we demonstrated that the recollection of accurate episodic memories was promoted by odor retrieval-cue familiarity and describability. In conclusion, our study suggested that semantic knowledge about retrieval-cues increased the recollection which is the state of awareness required for the accurate retrieval of complex episodic memories. PMID:26630170
NASA Astrophysics Data System (ADS)
Nolan, G.; Pinardi, N.; Vukicevic, T.; Le Traon, P. Y.; Fernandez, V.
2016-02-01
Ocean observations are critical to providing accurate ocean forecasts that support operational decision making in European open and coastal seas. Observations are available in many forms, from fixed platforms (e.g. moored buoys and tide gauges), underway measurements from FerryBox systems, and high-frequency radars, and more recently from underwater gliders and profiling floats. Observing System Simulation Experiments have been conducted to examine the relative contribution of each type of platform to an improvement in our ability to accurately forecast the future state of the ocean, with HF radar and gliders showing particular promise in improving model skill. There is considerable demand for ecosystem products and services from today's ocean observing system, and biogeochemical observations are still relatively sparse, particularly in coastal and shelf seas. There is a need to widen the techniques used to assess fitness for purpose and gaps in the ocean observing system. As well as Observing System Simulation Experiments that quantify the effect of observations on overall model skill, we present a gap analysis based on (1) examining where high model skill is required, based on a marine spatial planning analysis of European seas, i.e. where does activity take place that requires more accurate forecasts? and (2) assessing gaps based on the capacity of the observing system to answer key societal challenges, e.g. site suitability for aquaculture and ocean energy, oil spill response, and contextual oceanographic products for fisheries and ecosystems. This broad-based analysis will inform the development of the proposed European Ocean Observing System as a contribution to the Global Ocean Observing System (GOOS).
NASA Astrophysics Data System (ADS)
Shi, Y.; Davis, K. J.; Zhang, F.; Duffy, C.; Yu, X.
2014-12-01
A coupled physically based land surface hydrologic model, Flux-PIHM, has been developed by incorporating a land surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model. Flux-PIHM has been implemented and manually calibrated at the Shale Hills watershed (0.08 km²) in central Pennsylvania. Model predictions of discharge, point soil moisture, point water table depth, sensible and latent heat fluxes, and soil temperature show good agreement with observations. When calibrated only using discharge, and soil moisture and water table depth at one point, Flux-PIHM is able to resolve the observed 10¹ m scale soil moisture pattern at the Shale Hills watershed when an appropriate map of soil hydraulic properties is provided. A Flux-PIHM data assimilation system has been developed by incorporating EnKF for model parameter and state estimation. Both synthetic and real data assimilation experiments have been performed at the Shale Hills watershed. Synthetic experiment results show that the data assimilation system is able to simultaneously provide accurate estimates of multiple parameters. In the real data experiment, the EnKF estimated parameters and manually calibrated parameters yield similar model performances, but the EnKF method significantly decreases the time and labor required for calibration. The data requirements for accurate Flux-PIHM parameter estimation via data assimilation using synthetic observations have been tested. Results show that by assimilating only in situ outlet discharge, soil water content at one point, and the land surface temperature averaged over the whole watershed, the data assimilation system can provide an accurate representation of watershed hydrology. Observations of these key variables are available with national and even global spatial coverage (e.g., MODIS surface temperature, SMAP soil moisture, and the USGS gauging stations). National atmospheric reanalysis products, soil databases and land cover databases (e.g., NLDAS-2, SSURGO, NLCD) can provide high resolution forcing and input data. Therefore the Flux-PIHM data assimilation system could be readily expanded to other watersheds to provide regional scale land surface and hydrologic reanalysis with high spatial and temporal resolution.
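A minimal sketch of the EnKF analysis step used for joint state-parameter estimation: given an ensemble and a linear observation operator, compute the Kalman gain from the sample covariance and update with perturbed observations. Dimensions and observed quantities below are illustrative, not Flux-PIHM's actual state vector.

```python
import numpy as np

# Minimal stochastic EnKF analysis step: given an ensemble of joint
# state+parameter vectors and a linear observation operator H, update the
# ensemble with perturbed observations. Dimensions are toy stand-ins.

def enkf_update(ens, H, y_obs, R, rng):
    """ens: (n_dim, n_ens); H: (n_obs, n_dim); R: (n_obs, n_obs) obs covariance."""
    n_obs, n_ens = H.shape[0], ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return ens + K @ (Y - H @ ens)

rng = np.random.default_rng(2)
ens = rng.normal(1.0, 0.5, size=(3, 50))  # e.g. [discharge, soil moisture, K_sat]
H = np.array([[1.0, 0.0, 0.0],            # observe discharge
              [0.0, 1.0, 0.0]])           # observe point soil moisture
R = np.diag([0.05**2, 0.02**2])
ens = enkf_update(ens, H, np.array([1.2, 0.95]), R, rng)
print("analysis mean:", ens.mean(axis=1))
```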
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory's INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
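The Gelman-Rubin diagnostic itself is compact: compare between-chain and within-chain variance and stop when the potential scale reduction factor (R-hat) approaches 1. A minimal sketch follows; the commonly used stopping threshold of 1.1 is noted as a convention, not as INVERSE's documented setting.

```python
import numpy as np

# Sketch of the Gelman-Rubin potential scale reduction factor (R-hat) for
# m chains of length n; convergence is typically declared once R-hat falls
# below a threshold such as 1.1, ending further transport calculations.

def gelman_rubin(chains):
    """chains: (m, n) array of one scalar quantity across m chains."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    B = n * means.var(ddof=1)                 # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(3)
chains = rng.normal(0, 1, size=(4, 2000)) + rng.normal(0, 0.05, size=(4, 1))
print("R-hat:", round(gelman_rubin(chains), 4))   # ~1.0 -> converged
```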
Genomic signals of selection predict climate-driven population declines in a migratory bird.
Bay, Rachael A; Harrigan, Ryan J; Underwood, Vinh Le; Gibbs, H Lisle; Smith, Thomas B; Ruegg, Kristen
2018-01-05
The ongoing loss of biodiversity caused by rapid climatic shifts requires accurate models for predicting species' responses. Despite evidence that evolutionary adaptation could mitigate climate change impacts, evolution is rarely integrated into predictive models. Integrating population genomics and environmental data, we identified genomic variation associated with climate across the breeding range of the migratory songbird, the yellow warbler (Setophaga petechia). Populations requiring the greatest shifts in allele frequencies to keep pace with future climate change have experienced the largest population declines, suggesting that failure to adapt may have already negatively affected populations. Broadly, our study suggests that the integration of genomic adaptation can increase the accuracy of future species distribution models and ultimately guide more effective mitigation efforts. Copyright © 2018, American Association for the Advancement of Science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, M.T.; Segal, H.M.
1994-06-01
A new complex source microcomputer model has been developed for use at civil airports and Air Force bases. This paper describes both the key features of this model and its application in evaluating the air quality impact of new construction projects at three airports: one in the United States and two in Canada. The single EDMS model replaces the numerous models previously required to assess the air quality impact of pollution sources at airports. EDMS also employs a commercial data base to reduce the time and manpower required to accurately assess and document the air quality impact of airfield operations. On July 20, 1993, the U.S. Environmental Protection Agency (EPA) issued the final rule (Federal Register, 7/20/93, page 38816) to add new models to the Guideline on Air Quality Models. At that time EDMS was incorporated into the Guideline as an Appendix A model. 12 refs., 4 figs., 1 tab.
Models to teach lung sonopathology and ultrasound-guided thoracentesis.
Wojtczak, Jacek A
2014-12-01
Lung sonography allows rapid diagnosis of lung emergencies such as pulmonary edema, hemothorax or pneumothorax. The ability to diagnose an intraoperative pneumothorax in a timely manner is an important skill for the anesthesiologist. However, lung ultrasound exams require an interpretation of not only real images but also complex acoustic artifacts such as A-lines and B-lines; therefore, appropriate training to gain proficiency is important. A simulated environment using ultrasound phantom models allows controlled, supervised learning. We have developed hybrid models that combine dry or wet polyurethane foams, porcine rib cages and a human hand simulating a rib cage. These models simulate pulmonary sonopathology fairly accurately and allow supervised teaching of lung sonography with immediate feedback. In-vitro models can also facilitate the learning of procedural skills, improving transducer and needle positioning and movement, rapid recognition of thoracic anatomy and hand-eye coordination skills. We describe a new model to teach ultrasound-guided thoracentesis. This model consists of the experimenter's hand placed on top of a water-filled container with a wet foam. The metacarpal bones of the human hand simulate a rib cage and the wet foam simulates a diseased lung immersed in pleural fluid. Positive fluid flow offers users feedback when a simulated pleural effusion is accurately assessed.
Vector fields in a tight laser focus: comparison of models.
Peatross, Justin; Berrondo, Manuel; Smith, Dallas; Ware, Michael
2017-06-26
We assess several widely used vector models of a Gaussian laser beam in the context of more accurate vector diffraction integration. For the analysis, we present a streamlined derivation of the vector fields of a uniformly polarized beam reflected from an ideal parabolic mirror, both inside and outside of the resulting focus. This exact solution to Maxwell's equations, first developed in 1920 by V. S. Ignatovsky, is highly relevant to high-intensity laser experiments since the boundary conditions at a focusing optic dictate the form of the focus in a manner analogous to a physical experiment. In contrast, many models simply assume a field profile near the focus and develop the surrounding vector fields consistent with Maxwell's equations. In comparing the Ignatovsky result with popular closed-form analytic vector models of a Gaussian beam, we find that the relatively simple model developed by Erikson and Singh in 1994 provides good agreement in the paraxial limit. Models involving a Lax expansion introduce divergences outside of the focus while providing little if any improvement in the focal region. Extremely tight focusing produces a somewhat complicated structure in the focus, and requires the Ignatovsky model for accurate representation.
NASA Astrophysics Data System (ADS)
Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen
2017-11-01
Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while measuring directly the characteristics of DOI detectors experimentally is hindered by the impossibility to impose the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
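The winning criterion reduces to locating the minimum of the second derivative along a cell's intensity profile and reading off the intensity there. A minimal sketch on a synthetic edge profile follows; the logistic fall-off is an illustrative stand-in for a real fluorescence profile.

```python
import numpy as np

# Sketch of threshold selection from a fluorescent object's radial intensity
# profile: take the intensity at the minimum of the profile's second
# derivative, which the study found best marks the true edge position.

def second_derivative_threshold(profile):
    d2 = np.gradient(np.gradient(profile))
    edge = np.argmin(d2)              # most negative curvature ~ edge location
    return profile[edge]

# Synthetic sphere profile: bright plateau falling off smoothly at the edge.
r = np.linspace(0, 10, 200)
profile = 200.0 / (1.0 + np.exp((r - 5.0) / 0.4))
thr = second_derivative_threshold(profile)
print(f"threshold intensity ~ {thr:.1f} (of 200 peak)")
```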
NASA Astrophysics Data System (ADS)
Putra, MIH; Lewis, SA; Kurniasih, EM; Prabuning, D.; Faiqoh, E.
2016-11-01
Geographic information system and remote sensing techniques can assist with distribution modelling, a useful tool for the strategic design and management plans of MPAs. This study built a pilot model of plankton biomass and distribution in the waters off Solor and Lembata, and is the first study to identify marine megafauna foraging areas in the region. Forty-three samples of zooplankton were collected every 4 km according to the range time and station of Aqua MODIS. A generalized additive model (GAM) was used to model the response of zooplankton biomass to environmental properties. Thirty-one samples were used to build an inverse distance weighting (IDW) model (cell size 0.01°) and 12 samples were used as a control to verify the model's accuracy. Furthermore, Getis-Ord Gi statistics were used to identify significant hotspots and cold-spots of foraging activity. The GAM explained 88.1% of the response of zooplankton biomass, with percent to full moon and phytoplankton biomass being strong predictors. The sampling design was essential in order to build highly accurate models; ours were 96% accurate for phytoplankton and 88% accurate for zooplankton. Foraging behaviour was significantly related to plankton biomass hotspots, where biomass was two times higher than at plankton cold-spots. In addition, the extremely steep slopes of the Lamakera strait support strong upwelling with highly productive waters that affect the presence of marine megafauna. This study shows that the Lamakera strait provides the planktonic requirements for marine megafauna foraging, helping to explain why this region supports such high diversity and abundance of marine megafauna.
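The IDW interpolation step is simple to sketch: each grid point takes a distance-weighted average of station values, here with the classic power p = 2 (the study's exact settings beyond the 0.01° cell size are not restated). Station coordinates and biomass values below are illustrative.

```python
import numpy as np

# Sketch of inverse distance weighting (IDW) interpolation of zooplankton
# biomass from scattered stations onto a target point, with power p = 2.

def idw(xy_obs, z_obs, xy_target, p=2.0, eps=1e-12):
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d < eps):                 # target coincides with a station
        return z_obs[np.argmin(d)]
    w = 1.0 / d**p
    return np.sum(w * z_obs) / np.sum(w)

stations = np.array([[123.0, -8.2], [123.1, -8.4], [123.3, -8.3], [123.2, -8.1]])
biomass  = np.array([14.0, 22.5, 9.8, 17.1])      # illustrative mg/m^3 values
print(idw(stations, biomass, np.array([123.15, -8.25])))
```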
Single Droplet Combustion of Decane in Microgravity: Experiments and Numerical Modeling
NASA Technical Reports Server (NTRS)
Dietrich, D. L.; Struk, P. M.; Ikegam, M.; Xu, G.
2004-01-01
This paper presents experimental data on single droplet combustion of decane in microgravity and compares the results to a numerical model. The primary independent experiment variables are the ambient pressure and oxygen mole fraction, droplet size (over a relatively small range) and ignition energy. The droplet history (D² history) is non-linear, with the burning rate constant increasing throughout the test. The average burning rate constant, consistent with classical theory, increased with increasing ambient oxygen mole fraction and was nearly independent of pressure, initial droplet size and ignition energy. The flame typically increased in size initially, and then decreased in size in response to the shrinking droplet. The flame standoff increased linearly for the majority of the droplet lifetime. The flame surrounding the droplet extinguished at a finite droplet size at lower ambient pressures and an oxygen mole fraction of 0.15; the extinction droplet size increased with decreasing pressure. The model is transient and assumes spherical symmetry, constant thermo-physical properties (specific heat, thermal conductivity and species Lewis number) and single step chemistry. The model includes gas-phase radiative loss and a spherically symmetric, transient liquid phase. The model accurately predicts the droplet and flame histories of the experiments. Good agreement requires that the ignition in the experiment be reasonably approximated in the model and that the model accurately predict the pre-ignition vaporization of the droplet. The model does not accurately predict the dependence of extinction droplet diameter on pressure, a result of the simplified chemistry in the model. The transient flame behavior suggests the potential importance of fuel vapor accumulation. The model results, however, show that the fractional mass consumption rate of fuel in the flame relative to fuel vaporized is close to 1.0 for all but the lowest ambient oxygen mole fractions.
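The classical baseline against which the reported non-linear D² histories are read is the d-squared law, D²(t) = D0² − Kt. A minimal sketch that recovers a burning rate constant K by linear fit on synthetic data of typical magnitude:

```python
import numpy as np

# Sketch of the classical d-squared law: D^2(t) = D0^2 - K t. A linear fit of
# measured D^2 against time recovers a (locally constant) burning rate
# constant K; the study's nonlinear histories correspond to K drifting in time.

t = np.linspace(0.0, 1.0, 25)                     # s, illustrative
D0, K_true = 1.0e-3, 4.0e-7                       # m, m^2/s (typical magnitudes)
D2 = D0**2 - K_true * t + 1e-9 * np.random.default_rng(4).standard_normal(t.size)

K_fit = -np.polyfit(t, D2, 1)[0]                  # negative slope of D^2 vs t
print(f"burning rate constant ~ {K_fit:.2e} m^2/s")
```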
NASA Astrophysics Data System (ADS)
Mali, V. K.; Kuiry, S. N.
2015-12-01
Comprehensive understanding of river flow dynamics over varying topography in a real field setting is intricate and difficult. Conventional experimental methods based on manual data collection are time consuming and prone to many errors. Remotely sensed satellite imagery can provide the necessary information over large areas at high resolution, but such products are expensive and often untimely; consequently, deriving accurate river bathymetry from relatively coarse-resolution, untimely imagery is inaccurate and impractical. Despite this, such data are often used to calibrate river flow models, even though these models require highly accurate morpho-dynamic data in order to predict the flow field precisely. Under these circumstances, such data can be supplemented through experimental observations in a physical model using modern techniques. This paper proposes a methodology to generate highly accurate river bathymetry and water surface (WS) profiles for a physical model of a river network system using the close-range photogrammetry (CRP) technique. For the task accomplishment, a number of DSLR Nikon D5300 cameras (mounted 3.5 m above the river bed) were used to capture images of the physical model and of the flooding scenarios during the experiments. During the experiments, non-specular materials were introduced at the inlet and images were taken simultaneously from different orientations and altitudes with a significant overlap of 80%. Ground control points were surveyed using two ultrasonic sensors with ±0.5 mm vertical accuracy. The captured images were then processed in PhotoScan software to generate the DEM and WS profile, and the generated data were passed through statistical analysis to identify errors. Accuracy of the WS profile was limited by the extent and density of the non-specular powder, by stereo-matching discrepancies, and by several camera factors, including orientation, illumination and altitude. The CRP technique for a large-scale physical model can significantly reduce time and manual labour and avoids the human errors involved in taking data with a point gauge. The highly accurate DEM and WS profiles obtained can be used in mathematical models for accurate prediction of river dynamics. This study would be very helpful for sediment transport studies and can also be extended to real case studies.
Benchmark model correction of monitoring system based on Dynamic Load Test of Bridge
NASA Astrophysics Data System (ADS)
Shi, Jing-xian; Fan, Jiang
2018-03-01
Structural health monitoring (SHM) is an active field of research that aims to support bridge safety and reliability assessment, which must be carried out on the basis of an accurate finite element simulation. A bridge finite element model simplifies the structural section form, support conditions, material properties and boundary conditions, based on the design and construction drawings, to obtain the calculation model and its results. However, a finite element model established purely from the design documents and specification requirements cannot fully reflect the true state of the bridge, so the model must be updated to obtain a more accurate one. Taking the Da-guan river crossing of the Ma-Zhao highway in Yunnan province as the background, a dynamic load test was carried out. We find that the impact coefficient of the theoretical model of the bridge differs considerably from the coefficient obtained in the actual test, and that the variation differs as well; the calculation model was therefore adjusted according to the actual situation to obtain the correct frequencies of the bridge. The revised impact coefficient shows that the updated finite element model is closer to the real state of the structure, providing a basis for benchmark model correction in the monitoring system.
Parametric model of human body shape and ligaments for patient-specific epidural simulation.
Vaughan, Neil; Dubey, Venketesh N; Wee, Michael Y K; Isaacs, Richard
2014-10-01
This work builds upon the concept of matching a person's weight, height and age to their overall body shape to create an adjustable three-dimensional model. A versatile and accurate predictor of body size, shape and ligament thickness is required to improve simulation for medical procedures. A model adjustable for any size, shape, body mass, age or height would provide the ability to simulate procedures on patients of various body compositions. Three methods are provided for estimating body circumferences and ligament thicknesses for each patient. The first method uses empirical relations from body shape and size. The second method loads a dataset from a magnetic resonance imaging (MRI) scan or ultrasound scan containing accurate ligament measurements. The third method is a developed artificial neural network (ANN) which uses the MRI dataset as a training set and improves accuracy using error back-propagation, learning to increase accuracy as more patient data are added. The ANN is trained and tested with clinical data from 23,088 patients. The ANN can predict subscapular skinfold thickness within 3.54 mm, waist circumference within 3.92 cm, thigh circumference 2.00 cm, arm circumference 1.21 cm, calf circumference 1.40 cm, and triceps skinfold thickness 3.43 mm. An alternative regression analysis method gave overall slightly less accurate predictions: subscapular skinfold thickness within 3.75 mm, waist circumference 3.84 cm, thigh circumference 2.16 cm, arm circumference 1.34 cm, calf circumference 1.46 cm, triceps skinfold thickness 3.89 mm. These calculations are used to display a 3D graphics model of the patient's body shape using OpenGL, adjusted by 3D mesh deformations. A patient-specific epidural simulator is presented using the developed body shape model, able to simulate needle insertion procedures on a 3D model of any patient size and shape. The developed ANN gave the most accurate results for body shape, size and ligament thickness. The resulting simulator offers the experience of simulating needle insertions accurately whilst allowing for variation in patient body mass, height or age. Copyright © 2014 Elsevier B.V. All rights reserved.
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
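For readers unfamiliar with the 4PM, the sketch below evaluates the standard four-parameter item response function, in which a lower asymptote g and an upper asymptote u bound the response probability. The parameter values are illustrative only, not estimates from the study.

```python
import numpy as np

def irf_4pm(theta, a, b, g, u):
    """Four-parameter model: discrimination a, difficulty b,
    lower asymptote g ("guessing"), upper asymptote u ("slipping")."""
    return g + (u - g) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 9)
print(irf_4pm(theta, a=1.5, b=0.0, g=0.15, u=0.95))
```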
Forecasting municipal solid waste generation using artificial intelligence modelling approaches.
Abbasi, Maryam; El Hanandeh, Ali
2016-10-01
Municipal solid waste (MSW) management is a major concern to local governments to protect human health, the environment and to preserve natural resources. The design and operation of an effective MSW management system requires accurate estimation of future waste generation quantities. The main objective of this study was to develop a model for accurate forecasting of MSW generation that helps waste related organizations to better design and operate effective MSW management systems. Four intelligent system algorithms including support vector machine (SVM), adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN) and k-nearest neighbours (kNN) were tested for their ability to predict monthly waste generation in the Logan City Council region in Queensland, Australia. Results showed artificial intelligence models have good prediction performance and could be successfully applied to establish municipal solid waste forecasting models. Using machine learning algorithms can reliably predict monthly MSW generation by training with waste generation time series. In addition, results suggest that the ANFIS system produced the most accurate forecasts of the peaks while kNN was successful in predicting the monthly averages of waste quantities. Based on the results, the total annual MSW generated in Logan City will reach 9.4×10⁷ kg by 2020 while the peak monthly waste will reach 9.37×10⁶ kg. Copyright © 2016 Elsevier Ltd. All rights reserved.
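As a minimal illustration of one of the tested approaches, the sketch below applies kNN regression to a synthetic monthly waste series, using the previous twelve months as features. The Logan City data and the study's tuned models are not reproduced; the series, lag depth and neighbour count are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical monthly waste series (kg) with a seasonal cycle plus noise
rng = np.random.default_rng(0)
months = np.arange(120)
series = 7e6 + 1e6 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2e5, 120)

lags = 12  # one year of history per prediction
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

# Train on all but the last year, then score the held-out year
model = KNeighborsRegressor(n_neighbors=5).fit(X[:-12], y[:-12])
print("held-out-year MAE (kg):", np.abs(model.predict(X[-12:]) - y[-12:]).mean())
```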
Methods and benefits of experimental seismic evaluation of nuclear power plants. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-07-01
This study reviews experimental techniques, instrumentation requirements, safety considerations, and benefits of performing vibration tests on nuclear power plant containments and internal components. The emphasis is on testing to improve seismic structural models. Techniques for identification of resonant frequencies, damping, and mode shapes are discussed. The benefits of testing with regard to increased damping and more accurate computer models are outlined. A test plan, schedule and budget are presented for a typical PWR nuclear power plant.
Characterization of Cloud Water-Content Distribution
NASA Technical Reports Server (NTRS)
Lee, Seungwon
2010-01-01
The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
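A minimal sketch of the maximum likelihood step is given below, assuming a lognormal form for the water-content distribution. The distribution family and the synthetic samples are illustrative assumptions; the actual tool fits distributions to CloudSat radar retrievals.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for sub-grid cloud water content samples (g/m^3)
rng = np.random.default_rng(1)
samples = rng.lognormal(mean=-1.0, sigma=0.6, size=5000)

# Maximum likelihood fit of a lognormal PDF (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(samples, floc=0)
print(f"sigma ≈ {shape:.3f}, median ≈ {scale:.3f} g/m^3")
```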
Properties of finite difference models of non-linear conservative oscillators
NASA Technical Reports Server (NTRS)
Mickens, R. E.
1988-01-01
Finite-difference (FD) approaches to the numerical solution of the differential equations describing the motion of a nonlinear conservative oscillator are investigated analytically. A generalized formulation of the Duffing and modified Duffing equations is derived and analyzed using several FD techniques, and it is concluded that, although it is always possible to construct FD models of conservative oscillators which are themselves conservative, caution is required to avoid numerical solutions which do not accurately reflect the properties of the original equation.
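The caution raised above can be made concrete. The sketch below integrates a Duffing-type oscillator x'' + x + εx³ = 0 with the ordinary central-difference model and monitors the discrete energy E = v²/2 + x²/2 + εx⁴/4; the residual drift shows that the naive scheme is not exactly conservative. Step size and initial conditions are illustrative.

```python
import numpy as np

eps, h, n = 1.0, 0.01, 20000
x = np.empty(n)
x[0], x[1] = 1.0, 1.0        # x(0) = 1, v(0) ≈ 0 to first order

# Standard central-difference model of x'' + x + eps*x^3 = 0
for k in range(1, n - 1):
    x[k + 1] = 2 * x[k] - x[k - 1] - h**2 * (x[k] + eps * x[k]**3)

v = (x[2:] - x[:-2]) / (2 * h)                     # centred velocity
E = 0.5 * v**2 + 0.5 * x[1:-1]**2 + 0.25 * eps * x[1:-1]**4
print(f"relative energy drift: {(E.max() - E.min()) / E[0]:.2e}")
```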
Original data preprocessor for Femap/Nastran
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra
2016-12-01
Automatic data processing and visualization in finite element analysis of structural problems is a long-standing concern in mechanical engineering. The paper presents the `common database' concept, according to which the same information may be accessed from an analytical model as well as from a numerical one. Input data expressed as comma-separated-value (CSV) files are loaded into the Femap/Nastran environment using original API codes, which automatically generate the geometry of the model, the loads and the constraints. The original API computer codes are general, so the input data of any model can be generated. In the next stages, the user may create the discretization of the model, set the boundary conditions and perform a given analysis. If additional accuracy is needed, the analyst may delete the previous discretizations and, using the same automatically loaded information, create other discretizations and analyses. Moreover, if new, more accurate information regarding the loads or constraints is acquired, it may be modelled and then implemented in the data-generating program which creates the `common database', so that new, more accurate models may be generated easily. A further facility is the ability to control the CSV input files, so that several loading scenarios can be generated in Femap/Nastran. In this way, using the original intelligent API instruments, the analyst can focus on accurately modelling the phenomena and on creative aspects, the repetitive and time-consuming activities being performed by the original computer-based instruments. This data processing technique applies Asimov's principle of `minimum change required / maximum desired response'.
Davis, Brett; Van Wagtendonk, Jan W.; Beck, Jen; van Wagtendonk, Kent A.
2009-01-01
Surface fuels data are of critical importance for supporting fire incident management, risk assessment, and fuel management planning, but the development of surface fuels data can be expensive and time consuming. The data development process is extensive, generally beginning with acquisition of remotely sensed spatial data such as aerial photography or satellite imagery (Keane and others 2001). The spatial vegetation data are then crosswalked to a set of fire behavior fuel models that describe the available fuels (the burnable portions of the vegetation) (Anderson 1982, Scott and Burgan 2005). Finally, spatial fuels data are used as input to tools such as FARSITE and FlamMap to model current and potential fire spread and behavior (Finney 1998, Finney 2006). The capture date of the remotely sensed data defines the period for which the vegetation, and, therefore, fuels, data are most accurate. The more time that passes after the capture date, the less accurate the data become due to vegetation growth and processes such as fire. Subsequently, the results of any fire simulation based on these data become less accurate as the data age. Because of the amount of labor and expense required to develop these data, keeping them updated may prove to be a challenge. In this article, we describe the Sierra Nevada Fuel Succession Model, a modeling tool that can quickly and easily update surface fuel models with a minimum of additional input data. Although it was developed for use by Yosemite, Sequoia, and Kings Canyon National Parks, it is applicable to much of the central and southern Sierra Nevada. Furthermore, the methods used to develop the model have national applicability.
NASA Astrophysics Data System (ADS)
Rosenbaum, Joyce E.
2011-12-01
Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately. Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.
Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin
2013-11-01
The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated error propagation of temperature uncertainties in parasite habitat suitability models by comparing outcomes of published models. Error estimates reached 36% of annual respective measurements depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles.
Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres
NASA Astrophysics Data System (ADS)
Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.
2016-12-01
Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. On the other hand, it is necessary for the RT models to be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to computer speed increases, and improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute force, line-by-line (LBL) models such as LBLRTM and DISORT, but is orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information. © 2016. All rights reserved.
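A minimal sketch of the compression step follows, assuming profiles of a single optical property and using off-the-shelf PCA; the PC-based RT correction factors themselves are beyond this sketch, and all profile data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for layer profiles of one inherent optical property:
# rows = atmospheric states (7377 in the abstract), columns = layers
rng = np.random.default_rng(2)
base = np.exp(-np.linspace(0.0, 5.0, 60))
profiles = base * (1.0 + 0.05 * rng.normal(size=(7377, 60)).cumsum(axis=1))

pca = PCA(n_components=4).fit(profiles)
scores = pca.transform(profiles)        # expensive multiple-scattering RT runs
recon = pca.inverse_transform(scores)   # are needed only on this reduced grid
print("explained variance:", round(float(pca.explained_variance_ratio_.sum()), 4))
print("max reconstruction error:", float(np.abs(recon - profiles).max()))
```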
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. These parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimations or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently out-performs all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
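For reference, a multi-pole Debye model of the kind being fitted can be evaluated as below; the pole parameters shown are illustrative, not the optimised tissue values reported in the paper.

```python
import numpy as np

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def debye(freq_hz, eps_inf, delta_eps, tau, sigma_s):
    """Complex relative permittivity of a multi-pole Debye model with
    static conductivity; delta_eps and tau are per-pole sequences."""
    w = 2 * np.pi * np.asarray(freq_hz)
    eps = eps_inf + sigma_s / (1j * w * EPS0)
    for de, t in zip(delta_eps, tau):
        eps = eps + de / (1 + 1j * w * t)
    return eps

# Illustrative (not fitted) three-pole parameters over ~0.5-20 GHz
f = np.logspace(8.7, 10.3, 5)
print(debye(f, eps_inf=4.0, delta_eps=[40.0, 8.0, 3.0],
            tau=[8e-12, 60e-12, 500e-12], sigma_s=0.7))
```

An optimiser such as the paper's two-stage genetic algorithm would adjust eps_inf, delta_eps, tau and sigma_s until this function matches measured permittivity data.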
Uncertainty propagation for statistical impact prediction of space debris
NASA Astrophysics Data System (ADS)
Hoogendoorn, R.; Mooij, E.; Geul, J.
2018-01-01
Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
De-embedding technique for accurate modeling of compact 3D MMIC CPW transmission lines
NASA Astrophysics Data System (ADS)
Pohan, U. H.; KKyabaggu, P. B.; Sinulingga, E. P.
2018-02-01
Requirements for high-density and high-functionality microwave and millimeter-wave circuits have led to innovative circuit architectures such as three-dimensional multilayer MMICs. The major advantage of the multilayer techniques is that one can employ passive and active components based on CPW technology. In this work, MMIC coplanar waveguide (CPW) components such as transmission lines (TLs) are modeled in their 3D layouts. The main characteristics of the CPW TL, which suffer from the probe pads' parasitic and resonant-frequency effects, have been studied. By understanding the parasitic effects, a novel de-embedding technique is then developed to accurately predict the high-frequency characteristics of the designed MMICs. The novel de-embedding technique has been shown to be critical in reducing the probe pad parasitics significantly from the model. As a result, the high-frequency characteristics of the designed MMICs are presented with minimal parasitic effects of the probe pads. The de-embedding process optimises the determination of the main characteristics of compact 3D MMIC CPW transmission lines.
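The basic cascade idea behind pad de-embedding can be sketched with ABCD matrices, as below. This is the textbook formulation with identical, pre-characterised pads, not the authors' novel technique; the line parameters are hypothetical.

```python
import numpy as np

def abcd_line(beta_l, z0=50.0):
    """ABCD matrix of a lossless transmission line of electrical length beta_l."""
    return np.array([[np.cos(beta_l), 1j * z0 * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / z0, np.cos(beta_l)]])

# Measured fixture = pad * DUT * pad (simulated here for illustration)
pad = abcd_line(0.12)
dut = abcd_line(0.80)
measured = pad @ dut @ pad

# De-embed by inverting the characterised pad matrices on each side
dut_est = np.linalg.inv(pad) @ measured @ np.linalg.inv(pad)
print(np.allclose(dut_est, dut))   # True: the pad contribution is removed
```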
Seasonal Parameterizations of the Tau-Omega Model Using the ComRAD Ground-Based SMAP Simulator
NASA Technical Reports Server (NTRS)
O'Neill, P.; Joseph, A.; Srivastava, P.; Cosh, M.; Lang, R.
2014-01-01
NASA's Soil Moisture Active Passive (SMAP) mission is scheduled for launch in November 2014. In the prelaunch time frame, the SMAP team has focused on improving retrieval algorithms for the various SMAP baseline data products. The SMAP passive-only soil moisture product depends on accurate parameterization of the tau-omega model to achieve the required accuracy in soil moisture retrieval. During a field experiment (APEX12) conducted in the summer of 2012 under dry conditions in Maryland, the Combined Radar/Radiometer (ComRAD) truck-based SMAP simulator collected active/passive microwave time series data at the SMAP incident angle of 40 degrees over corn and soybeans throughout the crop growth cycle. A similar experiment was conducted only over corn in 2002 under normal moist conditions. Data from these two experiments will be analyzed and compared to evaluate how changes in vegetation conditions throughout the growing season in both a drought and normal year can affect parameterizations in the tau-omega model for more accurate soil moisture retrieval.
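For orientation, a common zeroth-order form of the tau-omega model can be evaluated as in the sketch below; the parameter values are illustrative, and the exact parameterization used by the SMAP team may differ in detail.

```python
import numpy as np

def tau_omega_tb(ts, tc, emis, tau, omega, theta_deg=40.0):
    """Zeroth-order tau-omega brightness temperature (K).
    ts, tc: soil and canopy temperatures; emis: rough-soil emissivity;
    tau: vegetation optical depth; omega: single-scattering albedo."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # canopy transmissivity
    refl = 1.0 - emis                                     # soil reflectivity
    return (ts * emis * gamma                                  # attenuated soil
            + tc * (1 - omega) * (1 - gamma)                   # direct canopy
            + tc * (1 - omega) * (1 - gamma) * refl * gamma)   # canopy via soil

# Illustrative values at the SMAP incidence angle of 40 degrees
print(tau_omega_tb(ts=295.0, tc=297.0, emis=0.85, tau=0.12, omega=0.05))
```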
Quantum Tunneling Affects Engine Performance.
Som, Sibendu; Liu, Wei; Zhou, Dingyu D Y; Magnotti, Gina M; Sivaramakrishnan, Raghu; Longman, Douglas E; Skodje, Rex T; Davis, Michael J
2013-06-20
We study the role of individual reaction rates on engine performance, with an emphasis on the contribution of quantum tunneling. It is demonstrated that the effect of quantum tunneling corrections for the reaction HO2 + HO2 = H2O2 + O2 can have a noticeable impact on the performance of a high-fidelity model of a compression-ignition (e.g., diesel) engine, and that an accurate prediction of ignition delay time for the engine model requires an accurate estimation of the tunneling correction for this reaction. The three-dimensional model includes detailed descriptions of the chemistry of a surrogate for a biodiesel fuel, as well as all the features of the engine, such as the liquid fuel spray and turbulence. This study is part of a larger investigation of how the features of the dynamics and potential energy surfaces of key reactions, as well as their reaction rate uncertainties, affect engine performance, and results in these directions are also presented here.
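The paper's tunneling treatment for HO2 + HO2 is reaction-specific, but the lowest-order Wigner factor, the simplest standard correction, conveys the temperature dependence involved; the barrier frequency below is a hypothetical placeholder.

```python
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e10    # speed of light, cm/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def wigner_kappa(imag_wavenumber_cm, temperature_k):
    """Lowest-order Wigner tunneling correction to a TST rate constant."""
    x = H * C * imag_wavenumber_cm / (KB * temperature_k)
    return 1.0 + x**2 / 24.0

# Illustrative imaginary barrier frequency of 1200 cm^-1
for t in (700.0, 1000.0, 1500.0):
    print(f"T = {t:6.1f} K, kappa = {wigner_kappa(1200.0, t):.4f}")
```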
The effects of strain and stress state in hot forming of mg AZ31 sheet
NASA Astrophysics Data System (ADS)
Sherek, Paul A.; Carpenter, Alexander J.; Hector, Louis G.; Krajewski, Paul E.; Carter, Jon T.; Lasceski, Joshua; Taleff, Eric M.
Wrought magnesium alloys, such as AZ31 sheet, are of considerable interest for light-weighting of vehicle structural components. The poor room-temperature ductility of AZ31 sheet has been a hindrance to forming the complex part shapes necessary for practical applications. However, the outstanding formability of AZ31 sheet at elevated temperature provides an opportunity to overcome that problem. Complex demonstration components have already been produced at 450°C using gas-pressure forming. Accurate simulations of such hot, gas-pressure forming will be required for the design and optimization exercises necessary if this technology is to be implemented commercially. We report on experiments and simulations used to construct the accurate material constitutive models necessary for finite-element-method simulations. In particular, the effects of strain and stress state on plastic deformation of AZ31 sheet at 450°C are considered in material constitutive model development. Material models are validated against data from simple forming experiments.
A unified framework for mesh refinement in random and physical space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jing; Stinis, Panos
In recent work we have shown how an accurate reduced model can be utilized to perform mesh refinement in random space. That work relied on the explicit knowledge of an accurate reduced model which is used to monitor the transfer of activity from the large to the small scales of the solution. Since this is not always available, we present in the current work a framework which shares the merits and basic idea of the previous approach but does not require an explicit knowledge of a reduced model. Moreover, the current framework can be applied for refinement in both random and physical space. In this manuscript we focus on the application to random space mesh refinement. We study examples of increasing difficulty (from ordinary to partial differential equations) which demonstrate the efficiency and versatility of our approach. We also provide some results from the application of the new framework to physical space mesh refinement.
Fyllas, Nikolaos M; Bentley, Lisa Patrick; Shenkin, Alexander; Asner, Gregory P; Atkin, Owen K; Díaz, Sandra; Enquist, Brian J; Farfan-Rios, William; Gloor, Emanuel; Guerrieri, Rossella; Huasco, Walter Huaraca; Ishida, Yoko; Martin, Roberta E; Meir, Patrick; Phillips, Oliver; Salinas, Norma; Silman, Miles; Weerasinghe, Lasantha K; Zaragoza-Castells, Joana; Malhi, Yadvinder
2017-06-01
One of the major challenges in ecology is to understand how ecosystems respond to changes in environmental conditions, and how taxonomic and functional diversity mediate these changes. In this study, we use a trait-spectra and individual-based model to analyse variation in forest primary productivity along a 3.3 km elevation gradient in the Amazon-Andes. The model accurately predicted the magnitude and trends in forest productivity with elevation, with solar radiation and plant functional traits (leaf dry mass per area, leaf nitrogen and phosphorus concentration, and wood density) collectively accounting for productivity variation. Remarkably, explicit representation of temperature variation with elevation was not required to achieve accurate predictions of forest productivity, as trait variation driven by species turnover appears to capture the effect of temperature. Our semi-mechanistic model suggests that spatial variation in traits can potentially be used to estimate spatial variation in productivity at the landscape scale. © 2017 John Wiley & Sons Ltd/CNRS.
Radiative flux and forcing parameterization error in aerosol-free clear skies
Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...
2015-07-03
This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.
Calculation of protein-ligand binding affinities.
Gilson, Michael K; Zhou, Huan-Xiang
2007-01-01
Accurate methods of computing the affinity of a small molecule with a protein are needed to speed the discovery of new medications and biological probes. This paper reviews physics-based models of binding, beginning with a summary of the changes in potential energy, solvation energy, and configurational entropy that influence affinity, and a theoretical overview to frame the discussion of specific computational approaches. Important advances are reported in modeling protein-ligand energetics, such as the incorporation of electronic polarization and the use of quantum mechanical methods. Recent calculations suggest that changes in configurational entropy strongly oppose binding and must be included if accurate affinities are to be obtained. The linear interaction energy (LIE) and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) methods are analyzed, as are free energy pathway methods, which show promise and may be ready for more extensive testing. Ultimately, major improvements in modeling accuracy will likely require advances on multiple fronts, as well as continued validation against experiment.
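As a concrete example of the LIE approach analyzed above, the binding free energy is estimated as a fitted linear combination of average interaction energies from bound and free simulations. The coefficients and energies below are illustrative placeholders, not values from any particular study.

```python
def lie_binding_energy(dvdw, delec, alpha=0.18, beta=0.5, gamma=0.0):
    """Linear interaction energy estimate of binding free energy (kcal/mol).
    dvdw, delec: bound-minus-free average van der Waals and electrostatic
    ligand-environment interaction energies; alpha, beta, gamma are
    empirically fitted coefficients (values here are assumptions)."""
    return alpha * dvdw + beta * delec + gamma

# Illustrative averages from hypothetical simulations
print(f"dG_bind ≈ {lie_binding_energy(dvdw=-35.2, delec=-8.4):.1f} kcal/mol")
```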
NASA Astrophysics Data System (ADS)
Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David
2017-10-01
We report on particle-in-cell developments of the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near perfect energy conservation, and significant computational power. In order to determine initial plasma fill and neutral beam heating, these simulations include ionization, elastic and charge exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of the Poisson-corrected direct implicit, magnetic implicit, as well as second-order cloud-in-cell techniques. The hybrid algorithm, ignoring electron inertial effects, is two orders of magnitude faster than kinetic but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.
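For context, the explicit Boris push that underlies most PIC momentum updates is sketched below; the implicit and hybrid schemes described in the abstract build on related ideas but are not reproduced here. Field values and particle parameters are illustrative.

```python
import numpy as np

def boris_push(x, v, e_field, b_field, q_over_m, dt):
    """One step of the standard explicit Boris particle push."""
    vm = v + 0.5 * q_over_m * e_field * dt          # first half electric kick
    t = 0.5 * q_over_m * b_field * dt               # rotation vector
    s = 2 * t / (1 + np.dot(t, t))
    vprime = vm + np.cross(vm, t)
    vplus = vm + np.cross(vprime, s)                # magnetic rotation
    v_new = vplus + 0.5 * q_over_m * e_field * dt   # second half electric kick
    return x + v_new * dt, v_new

# Proton gyrating in a uniform magnetic field (illustrative values)
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
for _ in range(1000):
    x, v = boris_push(x, v, np.zeros(3), np.array([0.0, 0.0, 1e-2]), 9.58e7, 1e-9)
print(np.linalg.norm(v))   # speed is conserved to round-off in a pure B field
```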
Active machine learning-driven experimentation to determine compound effects on protein patterns
Naik, Armaghan W; Kangas, Joshua D; Sullivan, Devin P; Murphy, Robert F
2016-01-01
High throughput screening determines the effects of many conditions on a given biological target. Currently, estimating the effects of those conditions on other targets requires either strong modeling assumptions (e.g. similarities among targets) or separate screens. Ideally, data-driven experimentation could be used to learn accurate models for many conditions and targets without doing all possible experiments. We have previously described an active machine learning algorithm that can iteratively choose small sets of experiments to learn models of multiple effects. We now show that, with no prior knowledge and with liquid handling robotics and automated microscopy under its control, this learner accurately learned the effects of 48 chemical compounds on the subcellular localization of 48 proteins while performing only 29% of all possible experiments. The results represent the first practical demonstration of the utility of active learning-driven biological experimentation in which the set of possible phenotypes is unknown in advance. DOI: http://dx.doi.org/10.7554/eLife.10047.001 PMID:26840049
Stochasticity in staged models of epidemics: quantifying the dynamics of whooping cough
Black, Andrew J.; McKane, Alan J.
2010-01-01
Although many stochastic models can accurately capture the qualitative epidemic patterns of many childhood diseases, there is still considerable discussion concerning the basic mechanisms generating these patterns; much of this stems from the use of deterministic models to try to understand stochastic simulations. We argue that a systematic method of analysing models of the spread of childhood diseases is required in order to consistently separate out the effects of demographic stochasticity, external forcing and modelling choices. Such a technique is provided by formulating the models as master equations and using the van Kampen system-size expansion to provide analytical expressions for quantities of interest. We apply this method to the susceptible–exposed–infected–recovered (SEIR) model with distributed exposed and infectious periods and calculate the form that stochastic oscillations take in terms of the model parameters. With the use of a suitable approximation, we apply the formalism to analyse a model of whooping cough which includes seasonal forcing. This allows us to more accurately interpret the results of simulations and to make a more quantitative assessment of the predictions of the model. We show that the observed dynamics are a result of a macroscopic limit cycle induced by the external forcing and resonant stochastic oscillations about this cycle. PMID:20164086
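A minimal Gillespie simulation of a closed SEIR model, as sketched below, exhibits the demographic stochasticity that the system-size expansion analyses. Distributed exposed/infectious periods, births/deaths and seasonal forcing are omitted, and all rates are illustrative.

```python
import numpy as np

def gillespie_seir(beta, sigma, gamma, n_pop, s, e, i, t_end, seed=0):
    """Exact stochastic simulation of a closed SEIR model (rates per day)."""
    rng = np.random.default_rng(seed)
    t, times, infecteds = 0.0, [0.0], [i]
    while t < t_end and e + i > 0:
        rates = np.array([beta * s * i / n_pop,   # infection   S -> E
                          sigma * e,              # incubation  E -> I
                          gamma * i])             # recovery    I -> R
        total = rates.sum()
        t += rng.exponential(1.0 / total)         # time to next event
        event = rng.choice(3, p=rates / total)    # which event fires
        if event == 0:
            s, e = s - 1, e + 1
        elif event == 1:
            e, i = e - 1, i + 1
        else:
            i -= 1
        times.append(t); infecteds.append(i)
    return times, infecteds

times, infecteds = gillespie_seir(beta=0.9, sigma=1/8, gamma=1/14,
                                  n_pop=10000, s=9900, e=50, i=50, t_end=200)
print(len(times), "events; peak infecteds:", max(infecteds))
```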
Growth and yield model application in tropical rain forest management
James Atta-Boateng; John W., Jr. Moser
2000-01-01
Analytical tools are needed to evaluate the impact of management policies on the sustainable use of rain forest. Optimal decisions concerning the level of management inputs require accurate predictions of output at all relevant input levels. Using growth data from 40 1-hectare permanent plots obtained from the semi-deciduous forest of Ghana, a system of 77 differential...
A. A. May; G. R. McMeeking; T. Lee; J. W. Taylor; J. S. Craven; I. Burling; A. P. Sullivan; S. Akagi; J. L. Collett; M. Flynn; H. Coe; S. P. Urbanski; J. H. Seinfeld; R. J. Yokelson; S. M. Kreidenweis
2014-01-01
Aerosol emissions from prescribed fires can affect air quality on regional scales. Accurate representation of these emissions in models requires information regarding the amount and composition of the emitted species. We measured a suite of submicron particulate matter species in young plumes emitted from prescribed fires (chaparral and montane ecosystems in California...
Finite element analyses of two dimensional, anisotropic heat transfer in wood
John F. Hunt; Hongmei Gu
2004-01-01
The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require analyses be based on fundamental material properties of the wood structure. Inputting basic orthogonal properties of the wood material alone are not sufficient for accurate modeling because wood is a combination of porous fiber cells that are aligned and mis-...
Collaboration using roles [in computer network security]
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
Segregation of roles into alternative accounts is a model which provides not only the ability to collaborate but also enables accurate accounting of resources consumed by collaborative projects, protects the resources and objects of such a project, and does not introduce new security vulnerabilities. The implementation presented here does not require users to remember additional passwords and provides a very simple consistent interface.
A methodology for adaptable and robust ecosystem services assessment.
Villa, Ferdinando; Bagstad, Kenneth J; Voigt, Brian; Johnson, Gary W; Portela, Rosimeiry; Honzák, Miroslav; Batker, David
2014-01-01
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant "one model fits all" paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to introduce improvements on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES--both its innovative aspects for ES science and as an example of a new strategy to support more accurate decision making in diverse application contexts.
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
Semiautomated model building for RNA crystallography using a directed rotameric approach.
Keating, Kevin S; Pyle, Anna Marie
2010-05-04
Structured RNA molecules play essential roles in a variety of cellular processes; however, crystallographic studies of such RNA molecules present a large number of challenges. One notable complication arises from the low resolutions typical of RNA crystallography, which results in electron density maps that are imprecise and difficult to interpret. This problem is exacerbated by the lack of computational tools for RNA modeling, as many of the techniques commonly used in protein crystallography have no equivalents for RNA structure. This leads to difficulty and errors in the model building process, particularly in modeling of the RNA backbone, which is highly error prone due to the large number of variable torsion angles per nucleotide. To address this, we have developed a method for accurately building the RNA backbone into maps of intermediate or low resolution. This method is semiautomated, as it requires a crystallographer to first locate phosphates and bases in the electron density map. After this initial trace of the molecule, however, an accurate backbone structure can be built without further user intervention. To accomplish this, backbone conformers are first predicted using RNA pseudotorsions and the base-phosphate perpendicular distance. Detailed backbone coordinates are then calculated to conform both to the predicted conformer and to the previously located phosphates and bases. This technique is shown to produce accurate backbone structure even when starting from imprecise phosphate and base coordinates. A program implementing this methodology is currently available, and a plugin for the Coot model building program is under development.
Indoor Modelling Benchmark for 3D Geometry Extraction
NASA Astrophysics Data System (ADS)
Thomson, C.; Boehm, J.
2014-06-01
A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are the data source of choice currently with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result this paper presents freely accessible pointcloud datasets of two typical areas of a building each captured with two different capture methods and each with an accurate wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.
Busing, Richard T.; Solomon, Allen M.
2004-01-01
Two forest dynamics simulators are compared along climatic gradients in the Pacific Northwest. The ZELIG and FORCLIM models are tested against forest survey data from western Oregon. Their ability to generate accurate patterns of forest basal area and species composition is evaluated for series of sites with contrasting climate. Projections from both models approximate the basal area and composition patterns for three sites along the elevation gradient at H.J. Andrews Experimental Forest in the western Cascade Range. The ZELIG model is somewhat more accurate than FORCLIM at the two low-elevation sites. Attempts to project forest composition along broader climatic gradients reveal limitations of ZELIG, however. For example, ZELIG is less accurate than FORCLIM at projecting the average composition of a west Cascades ecoregion selected for intensive analysis. Also, along a gradient consisting of several sites on an east to west transect at 44.1°N latitude, both the FORCLIM model and the actual data show strong changes in composition and total basal area, but the ZELIG model shows a limited response. ZELIG does not simulate the declines in forest basal area and the diminished dominance of mesic coniferous species east of the Cascade crest. We conclude that ZELIG is suitable for analyses of certain sites for which it has been calibrated. FORCLIM can be applied in analyses involving a range of climatic conditions without requiring calibration for specific sites.
Theoretical Clues to the Ultraviolet Diversity of Type Ia Supernovae
NASA Astrophysics Data System (ADS)
Brown, Peter J.; Baron, E.; Milne, Peter; Roming, Peter W. A.; Wang, Lifan
2015-08-01
The effect of metallicity on the observed light of Type Ia supernovae (SNe Ia) could lead to systematic errors as the absolute magnitudes of local and distant SNe Ia are compared to measure luminosity distances and determine cosmological parameters. The UV light may be especially sensitive to metallicity, though different modeling methods disagree as to the magnitude, wavelength dependence, and even the sign of the effect. The outer density structure, ⁵⁶Ni, and to a lesser degree asphericity, also impact the UV. We compute synthetic photometry of various metallicity-dependent models and compare to UV/optical photometry from the Swift Ultra-Violet/Optical Telescope. We find that the scatter in the mid-UV to near-UV colors is larger than predicted by changes in metallicity alone and is not consistent with reddening. We demonstrate that a recently employed method to determine relative abundances using UV spectra can be done using UVOT photometry, but we warn that accurate results require an accurate model of the cause of the variations. The abundance of UV photometry now available should provide constraints on models that typically rely on UV spectroscopy for constraining metallicity, density, and other parameters. Nevertheless, UV spectroscopy for a variety of supernova explosions is still needed to guide the creation of accurate models. A better understanding of the influences affecting the UV is important for using SNe Ia as cosmological probes, as the UV light may test whether SNe Ia are significantly affected by evolutionary effects.
Propeller Study. Part 2: the Design of Propellers for Minimum Noise
NASA Technical Reports Server (NTRS)
Ormsbee, A. I.; Woan, C. J.
1977-01-01
The design of propellers which are efficient and yet produce minimum noise requires accurate determination of both the flow over the propeller and the acoustic field it produces. Topics discussed in relating aerodynamic propeller design and propeller acoustics include the necessary approximations and assumptions involved, the coordinate systems and their transformations, the geometry of the propeller blade, and the problem formulations, including the induced velocity required in the determination of mean lines of blade sections, and the optimization of propeller noise. The numerical formulation for the lifting-line model is given. Some applications and numerical results are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Brian K.; Lewis, Nikole K.; Showman, Adam P.
2012-06-01
We present a new model for Ellipsoidal Variations Induced by a Low-Mass Companion, the EVIL-MC model. We employ several approximations appropriate for planetary systems to substantially increase the computational efficiency of our model relative to more general ellipsoidal variation models and improve upon the accuracy of simpler models. This new approach gives us a unique ability to rapidly and accurately determine planetary system parameters. We use the EVIL-MC model to analyze Kepler Quarter 0-2 (Q0-2) observations of the HAT-P-7 system, an F-type star orbited by a ≈ Jupiter-mass companion. Our analysis corroborates previous estimates of the planet-star mass ratio q = (1.10 ± 0.06) × 10⁻³, and we have revised the planet's dayside brightness temperature to 2680 (+10/−20) K. We also find a large difference between the day- and nightside planetary flux, with little nightside emission. Preliminary dynamical+radiative modeling of the atmosphere indicates that this result is qualitatively consistent with high-altitude absorption of stellar heating. Similar analyses of Kepler and CoRoT photometry of other planets using EVIL-MC will play a key role in providing constraints on the properties of many extrasolar systems, especially given the limited resources for follow-up and characterization of these systems. However, as we highlight, there are important degeneracies between the contributions from ellipsoidal variations and planetary emission and reflection. Consequently, for many of the hottest and brightest Kepler and CoRoT planets, accurate estimates of the planetary emission and reflection, diagnostic of atmospheric heat budgets, will require accurate modeling of the photometric contribution from the stellar ellipsoidal variation.
Thermospheric density and satellite drag modeling
NASA Astrophysics Data System (ADS)
Mehta, Piyush Mukesh
The United States depends heavily on its space infrastructure for a vast number of commercial and military applications. Space Situational Awareness (SSA) and Threat Assessment require maintaining accurate knowledge of the orbits of resident space objects (RSOs) and the associated uncertainties. Atmospheric drag is the largest source of uncertainty for low-perigee RSOs. The uncertainty stems from inaccurate modeling of neutral atmospheric mass density and inaccurate modeling of the interaction between the atmosphere and the RSO. In order to reduce the uncertainty in drag modeling, both atmospheric density and drag coefficient (CD) models need to be improved. Early atmospheric density models were developed from orbital drag data or observations of a few early compact satellites. To simplify calculations, densities derived from orbit data used a fixed CD value of 2.2 measured in a laboratory using clean surfaces. Measurements from pressure gauges obtained in the early 1990s have confirmed the adsorption of atomic oxygen on satellite surfaces. The varying levels of adsorbed oxygen along with the constantly changing atmospheric conditions cause large variations in CD with altitude and along the orbit of the satellite. Therefore, the use of a fixed CD in early development has resulted in large biases in atmospheric density models. A technique for generating corrections to empirical density models using precision orbit ephemerides (POE) as measurements in an optimal orbit determination process was recently developed. The process generates simultaneous corrections to the atmospheric density and ballistic coefficient (BC) by modeling the corrections as statistical exponentially decaying Gauss-Markov processes. The technique has been successfully implemented in generating density corrections using the CHAMP and GRACE satellites. This work examines the effectiveness of the technique, specifically the transfer of density model errors into BC estimates, using the CHAMP and GRACE satellites. Moving toward accurate atmospheric models and absolute densities requires physics-based models for CD. Closed-form solutions of CD have been developed and exist for a handful of simple geometries (flat plate, sphere, and cylinder). However, for complex geometries, the Direct Simulation Monte Carlo (DSMC) method is an important tool for developing CD models. DSMC is computationally intensive and real-time simulations for CD are not feasible. Therefore, parameterized models for CD are required. Modeling CD for an RSO requires knowledge of the gas-surface interaction (GSI) that defines the manner in which the atmospheric particles exchange momentum and energy with the surface. The momentum and energy exchange is further influenced by likely adsorption of atomic oxygen that may partially or completely cover the surface. An important parameter that characterizes the GSI is the energy accommodation coefficient, α. An innovative and state-of-the-art technique of developing parameterized drag coefficient models is presented and validated using the GRACE satellite. The effect of gas-surface interactions on physical drag coefficients is examined. An attempt to reveal the nature of gas-surface interactions at altitudes above 500 km is made using the STELLA satellite.
A model that can accurately estimate CD has the potential to: (i) reduce the sources of uncertainty in the drag model, (ii) improve density estimates by resolving time-varying biases and moving toward absolute densities, and (iii) increase data sources for density estimation by allowing for the use of a wide range of RSOs as information sources. Results from this work have the potential to significantly improve the accuracy of conjunction analysis and SSA.
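A first-order exponentially decaying Gauss-Markov sequence of the kind used above to model the density and BC corrections can be generated as in the sketch below; the step size, time constant and standard deviation are illustrative assumptions.

```python
import numpy as np

def gauss_markov(n_steps, dt, tau, sigma, seed=0):
    """First-order exponentially decaying Gauss-Markov sequence with
    correlation time tau and stationary standard deviation sigma."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1 - phi**2)       # process noise for stationary variance
    x = np.zeros(n_steps)
    for k in range(1, n_steps):
        x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    return x

# e.g. a fractional density correction sampled every 10 s with a 30 min memory
corr = gauss_markov(n_steps=5000, dt=10.0, tau=1800.0, sigma=0.15)
print(round(float(corr.std()), 3))    # ≈ 0.15 once the sequence is stationary
```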
Ravazzani, Giovanni; Ghilardi, Matteo; Mendlik, Thomas; Gobiet, Andreas; Corbari, Chiara; Mancini, Marco
2014-01-01
Assessing the future effects of climate change on water availability requires an understanding of how precipitation and evapotranspiration rates will respond to changes in atmospheric forcing. Use of simplified hydrological models is required because of the lack of meteorological forcings with the high space and time resolutions required to model hydrological processes in mountain river basins, and the necessity of reducing the computational costs. The main objective of this study was to quantify the differences between a simplified hydrological model, which uses only precipitation and temperature to compute the hydrological balance when simulating the impact of climate change, and an enhanced version of the model, which solves the energy balance to compute the actual evapotranspiration. For the meteorological forcing of the future scenario, at-site bias-corrected time series based on two regional climate models were used. A quantile-based error-correction approach was used to downscale the regional climate model simulations to a point scale and to reduce its error characteristics. The study shows that a simple temperature-based approach for computing the evapotranspiration is sufficiently accurate for performing hydrological impact investigations of climate change for the Alpine river basin which was studied. PMID:25285917
Current State of the Art Historic Building Information Modelling
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2017-08-01
In an extensive review of existing literature, a number of observations were made in relation to current approaches for recording and modelling existing buildings and environments: data collection and pre-processing techniques are becoming increasingly automated, allowing near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic, accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.
Feedback control by online learning an inverse model.
Waegeman, Tim; Wyffels, Francis; Schrauwen, Benjamin
2012-10-01
A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge of the plant. The framework uses two learning modules, one for creating an inverse model and one for actually controlling the plant; except for their inputs, they are identical. The inverse model learns from the exploration performed by the not-yet-fully-trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller that can be applied to a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework to several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
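The framework itself uses trained neural modules; as a rough, hypothetical illustration of the core loop, here is a linear stand-in in which one weight vector is updated online from observed (output, command) pairs and simultaneously reused, on the target, as the controller. The plant, learning rate, and exploration noise are all assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)     # shared inverse-model weights: [gain, bias]
eta = 0.05          # LMS learning rate (assumed)

def plant(u):       # hypothetical unknown plant
    return 2.0 * u + 0.5

target = 3.0
u = 0.0
for step in range(200):
    y = plant(u) + 0.01 * rng.standard_normal()      # noisy observation
    # inverse-model update: predict the command that produced y, correct by error
    u_hat = w[0] * y + w[1]
    w += eta * (u - u_hat) * np.array([y, 1.0])      # LMS on the (y, u) pair
    # controller: the identical module applied to the target, plus exploration
    u = w[0] * target + w[1] + 0.1 * rng.standard_normal()
```

The property the sketch preserves is that the same module (here the weight vector w) serves both roles: learning from what the immature controller actually did, and generating the next command.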
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining the thawing time of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed, and the conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always practical, as the calculations take time and the specialized software and equipment are not always affordable. For these reasons, analytical-empirical models are more useful for engineering. It is demonstrated that there is no simple, accurate and generally feasible analytical method for thawing time prediction; consequently, simplified methods are needed for estimating the thawing time of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or the development of new ones, to enable accurate determination of thawing time over a wide range of practical heat transfer conditions during processing. PMID:27904387
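As an example of the relatively simple analytical equations the review refers to, the classic closed form is a Plank-type expression (the attribution is ours; the abstract does not name a specific model). Under the usual assumptions of a sharp thawing front and negligible sensible heat:

```latex
t = \frac{\rho\, L}{T_a - T_f}\left(\frac{P\,D}{h} + \frac{R\,D^{2}}{k}\right)
```

Here t is the thawing time, ρ the product density, L the latent heat of fusion, T_a the ambient medium temperature, T_f the initial thawing point, D the characteristic dimension, h the surface heat transfer coefficient, k the thermal conductivity of the thawed layer, and P and R shape factors (1/2 and 1/8 for an infinite slab, 1/4 and 1/16 for an infinite cylinder, 1/6 and 1/24 for a sphere). The sharp-front assumption is exactly the kind of simplification the review critiques.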
NASA Technical Reports Server (NTRS)
Clarke, John-Paul
2004-01-01
MEANS, the MIT Extensible Air Network Simulation, was created in February of 2001 and has been developed with support from NASA Ames since August of 2001. MEANS is a simulation tool designed to maximize fidelity without requiring data at such a low level as to preclude easy examination of alternative scenarios. To this end, MEANS is structured in a modular fashion to allow more detailed components to be brought in when desired and left out when they would only be an impediment. Traditionally, one of the difficulties with high-fidelity models is that they require a level of detail in their data that is difficult to obtain. For analysis of past scenarios, the required data may not have been collected, or may be considered proprietary and thus difficult for independent researchers to obtain. For hypothetical scenarios, generating the data is sufficiently difficult to be a task in and of itself. Often, simulations designed by a researcher will model exactly one element of the problem well and in detail, while assuming away other parts of the problem that are not of interest or for which data are not available. While these models are useful for the task at hand, they are very often not applicable to future problems. The MEANS simulation attempts to address these problems by using a modular design which provides components of varying fidelity for each aspect of the simulation. This allows the most accurate model for which data are available to be used. It also provides for easy analysis of sensitivity to data accuracy, which can be particularly useful when accurate data are available for only a subset of the situations to be considered. Furthermore, the ability to use the same model while examining effects on different parts of a system reduces the time spent learning the simulation and provides for easier comparisons between changes to different parts of the system.
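A hypothetical sketch of the modular pattern described, in which each aspect of the simulation is an interface and components of different fidelity can be swapped in depending on the data available; the class and method names are invented for illustration and are not MEANS APIs:

```python
from abc import ABC, abstractmethod

class TaxiModel(ABC):
    """One aspect of the simulation, expressed as an interface."""
    @abstractmethod
    def taxi_time(self, airport: str) -> float:
        ...

class FixedTaxiModel(TaxiModel):
    """Low fidelity: a single constant when no data are available."""
    def taxi_time(self, airport):
        return 15.0

class SurfaceTaxiModel(TaxiModel):
    """Higher fidelity: per-airport data, used where it exists."""
    def __init__(self, minutes_by_airport):
        self.minutes = minutes_by_airport
    def taxi_time(self, airport):
        return self.minutes[airport]

def simulate(taxi: TaxiModel):
    # The simulation core is unchanged whichever component is plugged in.
    return taxi.taxi_time("BOS")

simulate(FixedTaxiModel())
simulate(SurfaceTaxiModel({"BOS": 18.5}))
```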
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
An accurate dataset is the key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets have been developed in recent years and have given rise to many powerful algorithms. However, most of these datasets include only images captured in the visible spectrum. This paper focuses on the creation of a multispectral optical flow dataset with accurate ground truth. Generating accurate ground truth optical flow is a complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only sparse ground truth. In this paper a new photogrammetric method for generating accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
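The photogrammetric idea reduces to this: when the 3D scene and the camera motion between frames are known, the ground truth flow at each point is the difference of its two projections. A minimal sketch; the intrinsics, motion, and points are illustrative, and R and t are taken as the frame-1 to frame-2 coordinate transform:

```python
import numpy as np

def ground_truth_flow(X, K, R, t):
    """Ground-truth flow for 3-D points X (N x 3, in the first camera's
    frame) seen by a camera whose pose change is (R, t).
    Flow = projection in frame 2 minus projection in frame 1."""
    def project(P):
        p = (K @ P.T).T              # homogeneous pixel coordinates
        return p[:, :2] / p[:, 2:3]  # perspective divide
    x1 = project(X)
    x2 = project((R @ X.T).T + t)    # points expressed in the second frame
    return x2 - x1

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
R = np.eye(3)                        # assumed pure translation between frames
t = np.array([0.05, 0.0, 0.0])       # 5 cm sideways shift
X = np.array([[0.2, 0.1, 2.0], [-0.3, 0.0, 3.5]])
flow = ground_truth_flow(X, K, R, t)
```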
Statistical Post-Processing of Wind Speed Forecasts to Estimate Relative Economic Value
NASA Astrophysics Data System (ADS)
Courtney, Jennifer; Lynch, Peter; Sweeney, Conor
2013-04-01
The objective of this research is to obtain the best possible wind speed forecasts for the wind energy industry by using an optimal combination of well-established forecasting and post-processing methods. We start with the ECMWF 51-member ensemble prediction system (EPS), which is underdispersive and hence uncalibrated, and aim to produce wind speed forecasts that are more accurate and better calibrated than the EPS. The 51 members of the EPS are clustered to 8 weighted representative members (RMs), chosen to minimize the within-cluster spread while maximizing the inter-cluster spread. The forecasts are then downscaled using two limited area models, WRF and COSMO, at two resolutions, 14 km and 3 km. This process creates four distinguishable ensembles, which are used as input to statistical post-processing methods requiring multi-model forecasts. Two such methods are presented here. The first, Bayesian Model Averaging, has been shown to provide more calibrated and accurate wind speed forecasts than the ECMWF EPS using this multi-model input data. The second, heteroscedastic censored regression, is also showing positive results. We compare the two post-processing methods, applied to a year of hindcast wind speed data around Ireland, using an array of deterministic and probabilistic verification techniques, such as MAE, CRPS, probability integral transforms and verification rank histograms, to show which method provides the most accurate and calibrated forecasts. However, the value of a forecast to an end-user cannot be fully quantified by accuracy and calibration alone, as the relationship between skill and value is complex. Capturing the full potential of the forecast benefits also requires detailed knowledge of the end-users' weather-sensitive decision-making processes and, most importantly, the economic impact on their income. Finally, we present the continuous relative economic value of both post-processing methods to identify which is more beneficial to the wind energy industry of Ireland.
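For reference, the BMA predictive density is a weighted mixture of kernels centred on the bias-corrected member forecasts. The sketch below uses normal kernels for simplicity, although gamma kernels are the common choice for wind speed; the member values and weights are hypothetical (in practice the weights are fit by maximum likelihood, e.g. with EM):

```python
import numpy as np
from scipy import stats

def bma_pdf(y, forecasts, weights, sigma):
    """BMA predictive density: a weighted mixture of kernels centred
    on the member forecasts (normal kernels used here for simplicity)."""
    comps = [w * stats.norm.pdf(y, loc=f, scale=sigma)
             for w, f in zip(weights, forecasts)]
    return np.sum(comps, axis=0)

# Four downscaled ensembles (hypothetical members and weights)
forecasts = np.array([6.2, 7.1, 5.8, 6.6])    # m/s
weights = np.array([0.35, 0.25, 0.20, 0.20])  # sum to 1
y = np.linspace(0, 15, 301)
pdf = bma_pdf(y, forecasts, weights, sigma=1.2)
```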
Learning-based Wind Estimation using Distant Soundings for Unguided Aerial Delivery
NASA Astrophysics Data System (ADS)
Plyler, M.; Cahoy, K.; Angermueller, K.; Chen, D.; Markuzon, N.
2016-12-01
Delivering unguided, parachuted payloads from aircraft requires accurate knowledge of the wind field inside an operational zone. Usually, a dropsonde released from the aircraft over the drop zone gives a more accurate wind estimate than a forecast. Mission objectives occasionally demand releasing the dropsonde away from the drop zone while still requiring accuracy and precision. Barnes interpolation and many other assimilation methods do poorly when the forecast error is inconsistent across the forecast grid. A machine learning approach can better leverage nonlinear relations between different weather patterns and thus provide a better wind estimate at the target drop zone when using data collected up to 100 km away. This study uses the 13 km resolution Rapid Refresh (RAP) dataset available through NOAA, subsampled to an area around Yuma, AZ and up to approximately 10 km AMSL. RAP forecast grids are updated with simulated dropsondes taken from analysis (historical weather maps). We train models using different data mining and machine learning techniques, most notably boosted regression trees, that can accurately assimilate the distant dropsonde. The model takes a forecast grid and simulated remote dropsonde data as input and produces an estimate of the wind stick over the drop zone. Using ballistic winds as the defining metric, we show that our data-driven approach does better than Barnes interpolation under some conditions, most notably when the forecast error differs between the two locations, on test data previously unseen by the model. We study and evaluate the model's performance depending on the size, time lag, drop altitude, and geographic location of the training set, and identify the parameters contributing most to the accuracy of the wind estimation. This study demonstrates a new approach for assimilating remotely released dropsondes, based on boosted regression trees, and shows improvement in wind estimation over currently used methods.
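A minimal sketch of the boosted-regression-tree setup described, using scikit-learn with synthetic stand-in data; the feature layout (flattened forecast-grid winds plus a remote sounding profile) and all hyperparameters are assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in predictors: each row would hold flattened forecast-grid winds
# plus the remote dropsonde profile; the target is a wind component at
# one level over the drop zone.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))
y = 2.0 * X[:, 0] + X[:, 5] + rng.normal(scale=0.5, size=500)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3)
model.fit(X[:400], y[:400])    # train on earlier cases
pred = model.predict(X[400:])  # estimate winds for unseen cases
```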
Foot placement relies on state estimation during visually guided walking.
Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S
2017-02-01
As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment, and we must achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in the prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose the altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task, which leverages state estimation to compensate for noise. Much as when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to accurately guide the foot to the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how the nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
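In its simplest scalar form, the state estimation being tested is a Kalman-style blend of a forward-model prediction with a noisy visual measurement, weighted by their relative uncertainties. A minimal sketch (all variances are illustrative):

```python
def estimate_target(pred, pred_var, seen, seen_var):
    """Blend a forward-model prediction with a visual measurement,
    weighting each by its relative uncertainty (scalar Kalman update)."""
    gain = pred_var / (pred_var + seen_var)  # Kalman gain
    est = pred + gain * (seen - pred)        # blended estimate
    var = (1.0 - gain) * pred_var            # reduced uncertainty
    return est, var

# Noisier vision (larger seen_var) shifts weight toward the prediction:
est, var = estimate_target(pred=1.00, pred_var=0.02, seen=1.10, seen_var=0.08)
```

This is the reweighting the prism-noise manipulation was designed to expose: as visual variance grows, the gain falls and the estimate leans on the forward model.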
Lee, Yong Ju; Jung, Byeong Su; Kim, Kee-Tae; Paik, Hyun-Dong
2015-09-01
A predictive model was developed to describe the growth of Staphylococcus aureus in raw pork using the Integrated Pathogen Modeling Program (IPMP) 2013, with a polynomial model as a secondary predictive model. S. aureus requires approximately 180 h to reach 5-6 log CFU/g at 10 °C. At 15 °C and 25 °C, approximately 48 and 20 h, respectively, are required to cause food poisoning. Predictions using the Gompertz model were the most accurate in this study. For the lag time (LT) model, the bias factor (Bf) and accuracy factor (Af) were both 1.014, showing that the predictions were within a reliable range. For the specific growth rate (SGR) model, Bf and Af were 1.188 and 1.190, respectively. Both Bf and Af values of the LT and SGR models were close to 1, indicating that the IPMP Gompertz model is better suited for predicting the growth of S. aureus on raw pork than the other models. Copyright © 2015 Elsevier Ltd. All rights reserved.
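For reference, the Gompertz curve for microbial growth is commonly written in the Zwietering form, and the bias factor (Bf) and accuracy factor (Af) are geometric-mean (absolute) ratios of predicted to observed values; whether IPMP 2013 uses exactly this parameterization is an assumption here. A sketch with hypothetical parameter values:

```python
import numpy as np

def gompertz(t, A, mu_max, lag):
    """Modified Gompertz curve (Zwietering form): log increase in counts
    versus time t [h], with asymptote A, max rate mu_max, lag time lag."""
    return A * np.exp(-np.exp(mu_max * np.e / A * (lag - t) + 1.0))

def bias_accuracy_factors(pred, obs):
    """Bf and Af: geometric-mean and geometric-mean-absolute ratio of
    predicted to observed values (both equal 1 for a perfect model)."""
    r = np.log10(np.asarray(pred) / np.asarray(obs))
    return 10 ** r.mean(), 10 ** np.abs(r).mean()

# Hypothetical fit at 15 deg C: 5 log increase, 0.3 log/h, 6 h lag
t = np.linspace(0, 48, 7)
pred = gompertz(t, A=5.0, mu_max=0.3, lag=6.0)
```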
Gaoua, Nadia; de Oliveira, Rita F; Hunter, Steve
2017-01-01
Different professional domains require high levels of physical performance alongside fast and accurate decision-making. Construction workers, police officers, firefighters, elite sports men and women, the military and emergency medical professionals are often exposed to hostile environments with limited options for behavioral coping strategies. In this mini-review we use football refereeing as an example to discuss the combined effect of intense physical activity and extreme temperatures on decision-making, and we suggest an explicative model. In professional football, for example, competitions can be played at temperatures ranging from -5°C in Norway to 30°C in Spain. Despite these conditions, the referee's responsibility is to consistently apply the laws fairly and uniformly, and to ensure the rules are followed without wavering or adversely influencing the competitiveness of the play. However, strenuous exercise in extreme environments imposes increased physiological and psychological stress that can affect decision-making. Therefore, the physical exertion required to follow the game and the thermal strain from extreme temperatures may hinder the ability of referees to make fast and accurate decisions. Here, we review the literature on the physical and cognitive requirements of football refereeing and on how extreme temperatures may affect referees' decisions. Research suggests that both hot and cold environments have a negative impact on decision-making, but data specific to decision-making are still lacking. A theoretical model of decision-making under the constraint of intense physical activity and thermal stress is suggested. Future naturalistic studies are needed to validate this model and provide clear recommendations for mitigating strategies. PMID:28912742
Anchoring Atmospheric Density Models Using Observed Shuttle Plume Emissions
NASA Astrophysics Data System (ADS)
Dimpfl, W. L.; Bernstien, L. S.
2010-12-01
Atmospheric number densities at a given low-earth orbit (LEO) altitude can vary by more than an order of magnitude, depending on such parameters as diurnal variations and solar activity. The MSIS atmospheric model, which includes these dependent variables as input, is reported as being accurate to ±15%. Improving such models requires accurate direct atmospheric measurement. Here, a means of anchoring atmospheric models is offered through measuring the size and shape of atomic line or molecular band radiance resulting from the atmospheric interaction with rocket engine plumes or gas releases in LEO. Many discrete line or band emissions, ranging from the infrared to the ultraviolet, may be suitable. For this purpose we are focusing on NH(A→X), centered at 316 nm. This emission is seen in the plumes of the Shuttle Orbiter PRCS engines, is expected in the plume of any amine-fueled engine, and can be observed from remote sensors in space or on the ground. The atmospheric interaction of gas releases or plumes from spacecraft in LEO is understood by comparing observed radiance with that predicted by Direct Simulation Monte Carlo (DSMC) models. The recent Extended Variable Hard Sphere (EVHS) improvements in treating hyperthermal collisions have produced exceptional agreement between measured and modeled steady-state Space Shuttle OMS and PRCS 190-250 nm Cameron band plume radiance from CO(a→X), which is understood to result from a combination of two- and three-step mechanisms. Radiance from NH(A→X) in far-field plumes is understood to result from a simpler single-step process, the reaction of a minor plume species with atomic oxygen, making it more suitable for use in determining atmospheric density. It is recommended that direct retrofire burns of amine-fueled engines be imaged in a narrow band from remote sensors to reveal atmospheric number density. In principle, the simple measurement of the distance between the engine exit and the peak in the steady-state radiance from LEO spacecraft can indicate atmospheric density to ~1% accuracy. Use of this radiance requires calibration by an accurate independent measurement associated with a well-resolved steady-state image of it.