Evaluation of RCAS Inflow Models for Wind Turbine Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tangler, J.; Bir, G.
The finite element structural modeling in the Rotorcraft Comprehensive Analysis System (RCAS) provides a state-of-the-art approach to aeroelastic analysis. This, coupled with its ability to model all turbine components, results in a methodology that can simulate the complex system interactions characteristic of large wind turbines. In addition, RCAS is uniquely capable of modeling advanced control algorithms and the resulting dynamic responses.
On Two-Dimensional ARMA Models for Image Analysis.
1980-03-24
2-D ARMA models for image analysis. Particular emphasis is placed on restoration of noisy images using 2-D ARMA models. Computer results are ... It is concluded that the models are very effective linear models for image analysis. (Author)
TRAC-PD2 posttest analysis of the CCTF Evaluation-Model Test C1-19 (Run 38). [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motley, F.
The results of a Transient Reactor Analysis Code (TRAC) posttest analysis of the Cylindrical Core Test Facility Evaluation-Model Test agree very well with the results of the experiment. The good agreement obtained verifies the multidimensional analysis capability of the TRAC code. Because of the steep radial power profile, the importance of using fine noding in the core region was demonstrated (as compared with the poorer results obtained from an earlier pretest prediction that used a coarsely noded model).
ModelMate - A graphical user interface for model analysis
Banta, Edward R.
2011-01-01
ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.
Application of Interface Technology in Progressive Failure Analysis of Composite Panels
NASA Technical Reports Server (NTRS)
Sleight, D. W.; Lotts, C. G.
2002-01-01
A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.
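A schematic sketch of that iteration may help fix ideas. Everything here is a hypothetical placeholder (static_analysis, model_check, and the result's aliasing attribute are invented names, not an actual tool API); the loop terminates when the static analysis no longer refines the partial order information:

```python
# Schematic sketch of the iterative combination described above.
# All function and attribute names are hypothetical placeholders.

def combined_analysis(program):
    aliasing = None                                   # no aliasing facts yet
    partial_order = static_analysis(program, aliasing)  # optimistic at first
    while True:
        result = model_check(program, partial_order)  # explores reduced space
        refined = static_analysis(program, result.aliasing)
        if refined == partial_order:   # fixed point: reduction is now safe
            return result              # full (reduced) state space explored
        partial_order = refined        # otherwise iterate with refined info
```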
Regression and multivariate models for predicting particulate matter concentration level.
Nazif, Amina; Mohammed, Nurul Izma; Malakahmad, Amirhossein; Abualqumboz, Motasem S
2018-01-01
The devastating health effects of particulate matter (PM10) exposure on susceptible populations have made it necessary to evaluate PM10 pollution. Meteorological parameters and seasonal variation increase PM10 concentration levels, especially in areas with multiple anthropogenic activities. Hence, stepwise regression (SR), multiple linear regression (MLR) and principal component regression (PCR) analyses were used to analyse daily average PM10 concentration levels. The analyses were carried out using daily average PM10 concentration, temperature, humidity, wind speed and wind direction data from 2006 to 2010. The data came from an industrial air quality monitoring station in Malaysia. The SR analysis established that meteorological parameters had limited influence on PM10 concentration levels, with coefficients of determination (R2) from 23 to 29% for both seasoned and unseasoned analyses. The prediction analysis showed that PCR models had better R2 results than MLR methods: for both seasoned and unseasoned data, MLR models had R2 values from 0.50 to 0.60, while PCR models had R2 values from 0.66 to 0.89. In addition, validation against 2016 data confirmed that the PCR model outperformed the MLR model, with the PCR model for the seasoned analysis giving the best result. These analyses will aid in achieving sustainable air quality management strategies.
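The MLR-vs-PCR comparison reads naturally as a pair of scikit-learn pipelines. A minimal sketch, assuming an illustrative CSV layout, column names, and a three-component PCR (none of these are from the paper):

```python
# Minimal sketch of the MLR-vs-PCR comparison; file and column names assumed.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("daily_averages.csv")  # hypothetical file
X = df[["temperature", "humidity", "wind_speed", "wind_direction"]]
y = df["PM10"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
pcr = make_pipeline(StandardScaler(), PCA(n_components=3),
                    LinearRegression()).fit(X_tr, y_tr)

print("MLR R2:", r2_score(y_te, mlr.predict(X_te)))
print("PCR R2:", r2_score(y_te, pcr.predict(X_te)))
```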
COBRA ATD minefield detection model initial performance analysis
NASA Astrophysics Data System (ADS)
Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.
2000-08-01
A statistical performance analysis of the USMC Coastal Battlefield Reconnaissance and Analysis (COBRA) Minefield Detection (MFD) Model has been performed in support of the COBRA ATD Program under execution by the Naval Surface Warfare Center/Dahlgren Division/Coastal Systems Station. This analysis uses the Veridian ERIM International MFD model from the COBRA Sensor Performance Evaluation and Computational Tools for Research Analysis modeling toolbox and a collection of multispectral mine detection algorithm response distributions for mines and minelike clutter objects. These mine detection response distributions were generated from actual COBRA ATD test missions over littoral zone minefields. This analysis serves to validate both the utility and effectiveness of the COBRA MFD Model as a predictive MFD performance tool. COBRA ATD minefield detection model algorithm performance results based on a simulated baseline minefield detection scenario are presented, as well as results of an MFD model algorithm parametric sensitivity study.
Dynamic test/analysis correlation using reduced analytical models
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Angelucci, A. Filippo; Javeed, Mehzad
1992-01-01
Test/analysis correlation is an important aspect of the verification of analysis models which are used to predict on-orbit response characteristics of large space structures. This paper presents results of a study using reduced analysis models for performing dynamic test/analysis correlation. The reduced test-analysis model (TAM) has the same number and orientation of degrees of freedom (DOF) as the test measurements. Two reduction methods, static (Guyan) reduction and the Improved Reduced System (IRS) reduction, are applied to the test/analysis correlation of a laboratory truss structure. Simulated test results and modal test data are used to examine the performance of each method. It is shown that the selection of DOF to be retained in the TAM is critical when large structural masses are involved. In addition, the use of modal test results may introduce difficulties in TAM accuracy even if a large number of DOF are retained in the TAM.
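Both reductions are a few lines of dense linear algebra. A minimal NumPy sketch, assuming a DOF partition into masters m (the instrumented DOF) and slaves s (condensed out); the partition and matrices are illustrative, not the truss model's:

```python
# Static (Guyan) reduction and O'Callahan's IRS correction for a TAM.
import numpy as np

def guyan(K, M, m, s):
    """Reduce K, M to the master DOF set m, condensing out slaves s."""
    Kss_inv = np.linalg.inv(K[np.ix_(s, s)])
    # Transformation x = T @ x_m: slaves follow the static deflection shape
    T = np.vstack([np.eye(len(m)), -Kss_inv @ K[np.ix_(s, m)]])
    idx = np.concatenate([m, s])                 # permute to (masters, slaves)
    Kp, Mp = K[np.ix_(idx, idx)], M[np.ix_(idx, idx)]
    return T.T @ Kp @ T, T.T @ Mp @ T, T

def irs(K, M, m, s):
    """IRS: Guyan plus an inertial correction to the transformation."""
    Kr, Mr, T = guyan(K, M, m, s)
    idx = np.concatenate([m, s])
    Kp, Mp = K[np.ix_(idx, idx)], M[np.ix_(idx, idx)]
    nm = len(m)
    S = np.zeros_like(Kp)
    S[nm:, nm:] = np.linalg.inv(Kp[nm:, nm:])    # [[0, 0], [0, Kss^-1]]
    T_irs = T + S @ Mp @ T @ np.linalg.inv(Mr) @ Kr
    return T_irs.T @ Kp @ T_irs, T_irs.T @ Mp @ T_irs
```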
On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
2009-06-01
simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based ... multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.
Numerous features have been included to facilitate the modeling process, from model setup and data input, presentation and analysis of results, to easy export of results to spreadsheet programs for additional analysis.
ASTROP2 users manual: A program for aeroelastic stability analysis of propfans
NASA Technical Reports Server (NTRS)
Narayanan, G. V.; Kaza, K. R. V.
1991-01-01
A user's manual is presented for the aeroelastic stability and response of propulsion systems computer program called ASTROP2. The ASTROP2 code performs aeroelastic stability analysis of rotating propfan blades. This analysis uses a two-dimensional, unsteady cascade aerodynamics model and a three-dimensional, normal-mode structural model. Analytical stability results from this code are compared with published experimental results for a rotating composite advanced turboprop model and for a nonrotating metallic wing model.
Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results
NASA Technical Reports Server (NTRS)
Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul
1992-01-01
The primary objective of structural analysis of aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, to evaluate structural modifications, or to design control systems. Verification of the FEM is generally obtained by correlating test results with model predictions. A test-analysis model (TAM) is very useful in the correlation process. A TAM is essentially a FEM reduced to the size of the test model that attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creation of the TAM and the correlation of the reduced models with the FEM or the test results. Herein, three methods are discussed, namely Guyan, Improved Reduced System (IRS), and Hybrid. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, application of the TAM process is demonstrated with an experimental test configuration of a ten-bay cantilevered truss structure.
Educational and Scientific Applications of Climate Model Diagnostic Analyzer
NASA Astrophysics Data System (ADS)
Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Zhang, J.; Bao, Q.
2016-12-01
Climate Model Diagnostic Analyzer (CMDA) is a web-based information system designed for the climate modeling and model analysis community to analyze climate data from models and observations. CMDA provides tools to diagnostically analyze climate data for model validation and improvement, and to systematically manage analysis provenance for sharing results with other investigators. CMDA utilizes cloud computing resources, multi-threading computing, machine-learning algorithms, web service technologies, and provenance-supporting technologies to address technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. As CMDA infrastructure and technology have matured, we have developed educational and scientific applications of CMDA. Educationally, CMDA has supported the summer school of the JPL Center for Climate Sciences since 2014. In the summer school, the students work on group research projects for which CMDA provides datasets and analysis tools. Each student is assigned a virtual machine with CMDA installed in Amazon Web Services. A provenance management system for CMDA was developed to keep track of students' usage of CMDA and to recommend datasets and analysis tools for their research topics. The provenance system also allows students to revisit their analysis results and share them with their group. Scientifically, we have developed several science use cases of CMDA covering various topics, datasets, and analysis types. Each use case is described and listed in terms of a scientific goal, the datasets used, the analysis tools used, the scientific results discovered, an analysis result such as output plots and data files, and a link to the exact analysis service call with all the input arguments filled. For example, one science use case is the evaluation of the NCAR CAM5 model with MODIS total cloud fraction. The analysis service used is Difference Plot Service of Two Variables, and the datasets used are NCAR CAM total cloud fraction and MODIS total cloud fraction. The scientific highlight of the use case is that the CAM5 model overall does a fairly decent job at simulating total cloud cover, though it simulates too few clouds especially near and offshore of the eastern ocean basins where low clouds are dominant.
Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun
2007-09-01
Near infrared spectroscopy (NIRS) is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample roughness on the mathematical model for NIR quantitative analysis of wood density was studied. The experiments showed that if the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness models.
Model prototype utilization in the analysis of fault tolerant control and data processing systems
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Tsarev, R. Yu; Gruzenkin, D. V.; Prokopenko, A. V.; Knyazkov, A. N.; Laptenok, V. D.
2016-04-01
A procedure for assessing the profit of control and data processing system implementation is presented in this paper. The reasonability of creating and analysing a model prototype follows from implementing the approach of fault tolerance provision through the inclusion of structural and software redundancy. The developed procedure allows finding the best ratio between the cost of developing and analysing the model prototype and the earnings from utilizing its results and the information produced. The suggested approach is illustrated by a model example of profit assessment and analysis for a control and data processing system.
Performance analysis and dynamic modeling of a single-spool turbojet engine
NASA Astrophysics Data System (ADS)
Andrei, Irina-Carmen; Toader, Adrian; Stroe, Gabriela; Frunzulica, Florin
2017-01-01
The purposes of modeling and simulation of a turbojet engine are steady-state analysis and transient analysis. The steady-state analysis consists in the investigation of operating and equilibrium regimes and is based on appropriate modeling of turbojet operation at design and off-design regimes; it yields the performance analysis, summarized in the engine's operational maps (i.e., the altitude map, velocity map and speed map) and the engine's universal map. The mathematical model that allows the calculation of the design and off-design performances of a single-spool turbojet is detailed. An in-house code was developed and calibrated against the J85 turbojet engine as the test case. The dynamic model of the turbojet engine is obtained from the energy balance equations for the compressor, combustor and turbine, the engine's main parts. The transient analysis expresses the dynamic behavior of the turbojet engine and provides details regarding the engine's control. The aim of the dynamic analysis is to determine a control program for the turbojet, based on the results of the performance analysis. In the case of a single-spool turbojet engine with fixed nozzle geometry, the thrust is controlled by a single parameter, the fuel flow rate. The design and management of aircraft engine controls are based on the results of the transient analysis. The construction of the design model is complex, since it is based on both steady-state and transient analysis, further allowing flight-path cycle analysis and optimization. This paper presents numerical simulations for a single-spool turbojet engine (J85 as test case), with appropriate modeling for steady-state and dynamic analysis.
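At its core, the spool dynamics reduce to a torque balance on the rotor, I dω/dt = τ_turbine − τ_compressor. A toy sketch of that integration follows; the inertia value and the linear torque maps are made-up placeholders, not J85 characteristics, and stand in for the component models the paper derives from energy balances:

```python
# Toy single-spool dynamics: rotor acceleration follows the torque imbalance.
# Inertia and torque maps below are assumed placeholders, not J85 data.

I_spool = 0.05                      # rotor polar moment of inertia [kg m^2]

def turbine_torque(omega, wf):      # hypothetical: rises with fuel flow
    return 900.0 * wf - 1.0e-4 * omega

def compressor_torque(omega):       # hypothetical: grows with speed
    return 2.0e-4 * omega

def step(omega, wf, dt=1e-3):
    domega = (turbine_torque(omega, wf) - compressor_torque(omega)) / I_spool
    return omega + dt * domega

omega = 800.0                       # initial rotor speed [rad/s]
for k in range(5000):               # step change in fuel flow at t = 1 s
    wf = 0.20 if k < 1000 else 0.25
    omega = step(omega, wf)
print(f"final speed: {omega:.1f} rad/s")
```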
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters has great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model results, while RMN, RS and RVC were generally to weakly sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were weakly sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were weakly sensitive to the corresponding results. The simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results have direct reference value for AnnAGNPS parameter selection and calibration; the runoff simulation results also proved that the sensitivity analysis is practicable for parameter adjustment, demonstrated its adaptability to hydrologic simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's use in China.
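The perturbation method used here is a one-at-a-time screening. A generic sketch, assuming a hypothetical run_model() that maps a parameter dictionary to model outputs and a ±10% perturbation (the paper's actual perturbation size is not stated here):

```python
# One-at-a-time perturbation screening: nudge each parameter by +/-delta and
# record the normalized change in each output. run_model() is hypothetical.
import numpy as np

def sensitivity(run_model, params, delta=0.10):
    base = np.asarray(run_model(params))          # baseline outputs
    indices = {}
    for name, value in params.items():
        up, dn = dict(params), dict(params)
        up[name] = value * (1 + delta)
        dn[name] = value * (1 - delta)
        dy = np.asarray(run_model(up)) - np.asarray(run_model(dn))
        # normalized sensitivity: relative output change per relative input change
        indices[name] = (dy / base) / (2 * delta)
    return indices
```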
DOT National Transportation Integrated Search
2014-04-01
This Analysis Brief documents the methodology and results from the Compliance Review Effectiveness Model (CREM) for carriers receiving CRs in fiscal year (FY) 2009. The model measures the effectiveness of the compliance review (CR) program, one of th...
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
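A minimal sketch of the contrast for a single gene-SNP pair, using simulated heavy-tailed data; statsmodels' RLM with a Huber norm is one standard robust implementation (the paper's exact estimator may differ):

```python
# OLS vs robust (Huber) regression of expression on allelic dosage,
# with heavy-tailed (t-distributed) simulated noise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
dosage = rng.integers(0, 3, 200)                      # allelic dosage 0/1/2
expr = 0.4 * dosage + rng.standard_t(df=3, size=200)  # heavy-tailed noise

X = sm.add_constant(dosage.astype(float))
ols = sm.OLS(expr, X).fit()
rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()

print("OLS beta, p:", ols.params[1], ols.pvalues[1])
print("RLM beta, p:", rlm.params[1], rlm.pvalues[1])
```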
Noise Reduction Design of the Volute for a Centrifugal Compressor
NASA Astrophysics Data System (ADS)
Song, Zhen; Wen, Huabing; Hong, Liangxing; Jin, Yudong
2017-08-01
In order to effectively control the aerodynamic noise of a compressor, this paper takes a marine exhaust turbocharger compressor as the research object. Based on different design concepts for the volute section, tongue and exit cone, six volute models were established. The finite volume method is used to calculate the flow field, while the finite element method is used for the acoustic calculation. The structural designs are compared and analysed from three aspects: noise level, isentropic efficiency and static pressure recovery coefficient. The results showed that model 1 yielded the best result among the volute section concepts, model 3 the best result in the tongue analysis, and model 6 the best result in the exit cone analysis.
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis; the effect of correlation strength among input variables on the sensitivity analysis is also assessed.
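The Iman transform at the heart of FASTC induces a target rank correlation on otherwise independent samples. A condensed sketch of that rank-reordering step (a simplified form of the Iman-Conover construction, not the paper's full FASTC design):

```python
# Reorder independent marginal samples so their rank correlation
# approximates a target correlation matrix C (Iman's rank transform).
import numpy as np
from scipy import stats

def iman_transform(samples, C, seed=0):
    """samples: (n, k) independent draws from the k marginals."""
    n, k = samples.shape
    rng = np.random.default_rng(seed)
    # Correlated normal scores carrying the target correlation structure
    z = rng.standard_normal((n, k)) @ np.linalg.cholesky(C).T
    out = np.empty_like(samples)
    for j in range(k):
        ranks = stats.rankdata(z[:, j]).astype(int) - 1
        out[:, j] = np.sort(samples[:, j])[ranks]  # impose the rank pattern
    return out
```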
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
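The underlying idea, stripped of the specialized fast techniques, is uncertainty propagation through a deterministic model. A generic Monte Carlo sketch, where power_capability() and the input distributions are invented placeholders standing in for a large system model such as SPACE:

```python
# Generic Monte Carlo propagation of input uncertainty through a
# deterministic performance model; all inputs and the model are assumed.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000
solar_flux  = rng.normal(1367.0, 5.0, N)     # assumed input distributions
cell_eff    = rng.normal(0.145, 0.004, N)
degradation = rng.uniform(0.95, 1.00, N)

def power_capability(flux, eff, deg):
    return flux * eff * deg * 1000.0         # placeholder model [W]

power = power_capability(solar_flux, cell_eff, degradation)
print("mean:", power.mean(), " 5th percentile:", np.percentile(power, 5))
```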
Multi-Disciplinary, Multi-Fidelity Discrete Data Transfer Using Degenerate Geometry Forms
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
In a typical multi-fidelity design process, different levels of geometric abstraction are used for different analysis methods, and transitioning from one phase of design to the next often requires a complete re-creation of the geometry. To maintain consistency between lower-order and higher-order analysis results, Vehicle Sketch Pad (OpenVSP) recently introduced the ability to generate and export several degenerate forms of the geometry, representing the type of abstraction required to perform low- to medium-order analysis for a range of aeronautical disciplines. In this research, the functionality of these degenerate models was extended, so that in addition to serving as repositories for the geometric information that is required as input to an analysis, the degenerate models can also store the results of that analysis mapped back onto the geometric nodes. At the same time, the results are also mapped indirectly onto the nodes of lower-order degenerate models using a process called aggregation, and onto higher-order models using a process called disaggregation. The mapped analysis results are available for use by any subsequent analysis in an integrated design and analysis process. A simple multi-fidelity analysis process for a single-aisle subsonic transport aircraft is used as an example case to demonstrate the value of the approach.
Design, analysis and verification of a knee joint oncological prosthesis finite element model.
Zach, Lukáš; Kunčická, Lenka; Růžička, Pavel; Kocich, Radim
2014-11-01
The aim of this paper was to design a finite element model for a hinged PROSPON oncological knee endoprosthesis and to verify the model by comparison with ankle flexion angle using knee-bending experimental data obtained previously. Visible Human Project CT scans were used to create a general lower extremity bones model and to compose a 3D CAD knee joint model to which muscles and ligaments were added. Into the assembly the designed finite element PROSPON prosthesis model was integrated and an analysis focused on the PEEK-OPTIMA hinge pin bushing stress state was carried out. To confirm the stress state analysis results, contact pressure was investigated. The analysis was performed in the knee-bending position within 15.4-69.4° hip joint flexion range. The results showed that the maximum stress achieved during the analysis (46.6 MPa) did not exceed the yield strength of the material (90 MPa); the condition of plastic stability was therefore met. The stress state analysis results were confirmed by the distribution of contact pressure during knee-bending. The applicability of our designed finite element model for the real implant behaviour prediction was proven on the basis of good correlation of the analytical and experimental ankle flexion angle data.
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Elrod, David Alan
1988-01-01
The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.
Uncertainty analysis of hydrological modeling in a tropical area using different algorithms
NASA Astrophysics Data System (ADS)
Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh
2018-01-01
Hydrological modeling outputs are subject to uncertainty resulting from different sources of errors (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative and meant to improve reliability of modeling results. The uncertainty analysis must solve difficulties in calibration of hydrological models, which further increase in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor>0.83, R-factor<0.56, R2>0.91, NSE>0.89, and 0.18
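Of the four algorithms, GLUE is the simplest to sketch. A bare-bones version, assuming a hypothetical run_swat() model wrapper, an NSE behavioral threshold of 0.5, and 95% bounds (the study's actual settings are not reproduced here); P-factor is the fraction of observations inside the bounds and R-factor the mean band width relative to the observation standard deviation:

```python
# Bare-bones GLUE: score Monte Carlo parameter sets with NSE, form
# prediction bounds from the behavioral sets, summarize with P/R-factors.
import numpy as np

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(run_swat, sample_params, obs, n=2000, threshold=0.5):
    sims = np.array([run_swat(sample_params()) for _ in range(n)])
    scores = np.array([nse(s, obs) for s in sims])
    behavioral = sims[scores > threshold]
    lo, hi = np.percentile(behavioral, [2.5, 97.5], axis=0)
    p_factor = np.mean((obs >= lo) & (obs <= hi))   # coverage of observations
    r_factor = np.mean(hi - lo) / obs.std()         # relative band width
    return p_factor, r_factor
```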
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
Global Sensitivity Analysis (GSA) helps identify the influence of model parameters and inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
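The Sobol'-vs-FAST comparison is easy to reproduce on a toy function with the SALib library. A small sketch; the bounds and the stand-in model are illustrative (the names UZTWM, LZTWM, PCTIM are real SAC-SMA parameters, but the ranges here are not the study's):

```python
# Compare first-order Sobol' and FAST indices on a toy function with SALib.
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

problem = {"num_vars": 3,
           "names": ["UZTWM", "LZTWM", "PCTIM"],
           "bounds": [[10, 300], [10, 500], [0, 0.1]]}

def model(X):                       # toy stand-in for the hydrologic model
    return 0.1 * X[:, 0] + np.sqrt(X[:, 1]) + 50 * X[:, 2] * X[:, 0] / 100

Xs = saltelli.sample(problem, 1024)
Si_sobol = sobol.analyze(problem, model(Xs))

Xf = fast_sampler.sample(problem, 1024)
Si_fast = fast.analyze(problem, model(Xf))

print("Sobol S1:", Si_sobol["S1"])  # compare first-order rankings
print("FAST  S1:", Si_fast["S1"])
```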
Multiscale Modeling for the Analysis for Grain-Scale Fracture Within Aluminum Microstructures
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Phillips, Dawn R.; Yamakov, Vesselin; Saether, Erik
2005-01-01
Multiscale modeling methods for the analysis of metallic microstructures are discussed. Both molecular dynamics and the finite element method are used to analyze crack propagation and stress distribution in a nanoscale aluminum bicrystal model subjected to hydrostatic loading. Quantitative similarity is observed between the results from the two very different analysis methods. A bilinear traction-displacement relationship that may be embedded into cohesive zone finite elements is extracted from the nanoscale molecular dynamics results.
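The extracted bilinear traction-displacement relationship has a standard piecewise form: a linear ramp to peak traction at an opening delta_0, then linear softening to zero at delta_f. A tiny sketch with illustrative parameter values (not the values fitted from the molecular dynamics results):

```python
# Bilinear cohesive traction-displacement law; parameter values assumed.
def bilinear_traction(delta, t_max=2.0, delta_0=0.5, delta_f=4.0):
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:
        return t_max * delta / delta_0                          # elastic ramp
    if delta <= delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta_0)  # softening
    return 0.0                                                  # decohered
```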
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Hilburger, Mark W.
2013-01-01
This document outlines a Modeling and Analysis Plan (MAP) to be followed by the SBKF analysts. It includes instructions on modeling and analysis formulation and execution, model verification and validation, identifying sources of error and uncertainty, and documentation. The goal of this MAP is to provide a standardized procedure that ensures uniformity and quality of the results produced by the project and corresponding documentation.
Fraysse, Bodvaël; Barthélémy, Inès; Qannari, El Mostafa; Rouger, Karl; Thorin, Chantal; Blot, Stéphane; Le Guiner, Caroline; Chérel, Yan; Hogrel, Jean-Yves
2017-04-12
Accelerometric analysis of gait abnormalities in golden retriever muscular dystrophy (GRMD) dogs is of limited sensitivity, and produces highly complex data. The use of discriminant analysis may enable simpler and more sensitive evaluation of treatment benefits in this important preclinical model. Accelerometry was performed twice monthly between the ages of 2 and 12 months on 8 healthy and 20 GRMD dogs. Seven accelerometric parameters were analysed using linear discriminant analysis (LDA). Manipulation of the dependent and independent variables produced three distinct models. The ability of each model to detect gait alterations and their pattern change with age was tested using a leave-one-out cross-validation approach. Selecting genotype (healthy or GRMD) as the dependent variable resulted in a model (Model 1) allowing a good discrimination between the gait phenotype of GRMD and healthy dogs. However, this model was not sufficiently representative of the disease progression. In Model 2, age in months was added as a supplementary dependent variable (GRMD_2 to GRMD_12 and Healthy_2 to Healthy_9.5), resulting in a high overall misclassification rate (83.2%). To improve accuracy, a third model (Model 3) was created in which age was also included as an explanatory variable. This resulted in an overall misclassification rate lower than 12%. Model 3 was evaluated using blinded data pertaining to 81 healthy and GRMD dogs. In all but one case, the model correctly matched gait phenotype to the actual genotype. Finally, we used Model 3 to reanalyse data from a previous study regarding the effects of immunosuppressive treatments on muscular dystrophy in GRMD dogs. Our model identified significant effect of immunosuppressive treatments on gait quality, corroborating the original findings, with the added advantages of direct statistical analysis with greater sensitivity and more comprehensible data representation. Gait analysis using LDA allows for improved analysis of accelerometry data by applying a decision-making analysis approach to the evaluation of preclinical treatment benefits in GRMD dogs.
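Model 3's structure (seven accelerometric parameters plus age as an explanatory variable, genotype as the class, leave-one-out cross-validation) maps directly onto scikit-learn. A schematic sketch with assumed file and column names:

```python
# Schematic Model 3: LDA on accelerometric features + age, genotype as
# class, assessed by leave-one-out cross-validation. Column names assumed.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import accuracy_score

df = pd.read_csv("gait_sessions.csv")            # hypothetical file
features = ["p1", "p2", "p3", "p4", "p5", "p6", "p7", "age_months"]
X, y = df[features], df["genotype"]              # 'healthy' vs 'GRMD'

lda = LinearDiscriminantAnalysis()
pred = cross_val_predict(lda, X, y, cv=LeaveOneOut())
print("misclassification rate:", 1 - accuracy_score(y, pred))
```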
Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor
NASA Astrophysics Data System (ADS)
Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar
2018-05-01
This manuscript describes the modeling and analysis of a zinc oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer have been described as functions of temperature as well as gas concentration. The analysis has been done for reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, all the simulated results were compared against different reported experimental work. Wolkenstein theory has been used to model the proposed sensor, and the simulation results have been obtained using device simulation software.
Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with locally linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used for eliminating redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling complex samples. The results showed that MCUVE can both extract effective informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models can achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
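The MCUVE step is straightforward to sketch: fit PLS models on random calibration subsets, score each wavelength by the stability (mean over standard deviation) of its regression coefficient, and keep the most stable variables. A compact version with assumed hyperparameters (component count, subset fraction, number kept):

```python
# Monte Carlo UVE-style wavelength selection on top of scikit-learn PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def mcuve(X, y, n_components=5, n_runs=500, frac=0.8, keep=50, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coefs = []
    for _ in range(n_runs):
        idx = rng.choice(n, int(frac * n), replace=False)
        pls = PLSRegression(n_components=n_components).fit(X[idx], y[idx])
        coefs.append(pls.coef_.ravel())
    coefs = np.array(coefs)
    stability = np.abs(coefs.mean(axis=0) / coefs.std(axis=0))
    return np.argsort(stability)[::-1][:keep]  # indices of retained variables
```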
NASA Astrophysics Data System (ADS)
Riccio, A.; Giunta, G.; Galmarini, S.
2007-04-01
In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
Displacement-based back-analysis of the model parameters of the Nuozhadu high earth-rockfill dam.
Wu, Yongkang; Yuan, Huina; Zhang, Bingyin; Zhang, Zongliang; Yu, Yuzhen
2014-01-01
The parameters of the constitutive model, the creep model, and the wetting model of materials of the Nuozhadu high earth-rockfill dam were back-analyzed together based on field monitoring displacement data by employing an intelligent back-analysis method. In this method, an artificial neural network is used as a substitute for time-consuming finite element analysis, and an evolutionary algorithm is applied for both network training and parameter optimization. To avoid simultaneous back-analysis of many parameters, the model parameters of the three main dam materials are decoupled and back-analyzed separately in a particular order. Displacement back-analyses were performed at different stages of the construction period, with and without considering the creep and wetting deformations. Good agreement between the numerical results and the monitoring data was obtained for most observation points, which implies that the back-analysis method and decoupling method are effective for solving complex problems with multiple models and parameters. The comparison of calculation results based on different sets of back-analyzed model parameters indicates the necessity of taking the effects of creep and wetting into consideration in the numerical analyses of high earth-rockfill dams. With the resulting model parameters, the stress and deformation distributions at completion are predicted and analyzed.
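The surrogate-assisted loop described here (a neural network replacing the finite element run, an evolutionary algorithm inverting it against monitored displacements) can be sketched schematically. The file names and the training design are hypothetical, and scipy's differential evolution stands in for the paper's specific evolutionary algorithm:

```python
# Schematic surrogate-assisted back-analysis: an MLP learns the FE mapping
# from parameters to displacements; an evolutionary optimizer inverts it.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

params = np.load("fe_param_samples.npy")     # hypothetical (n, k) design
disps = np.load("fe_displacements.npy")      # hypothetical (n, m) FE results
observed = np.load("monitoring_points.npy")  # hypothetical (m,) field data

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
surrogate.fit(params, disps)

def misfit(p):
    pred = surrogate.predict(p.reshape(1, -1))[0]
    return np.sum((pred - observed) ** 2)

bounds = [(lo, hi) for lo, hi in zip(params.min(0), params.max(0))]
best = differential_evolution(misfit, bounds, seed=1)
print("back-analyzed parameters:", best.x)
```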
Integrated corridor management analysis, modeling, and simulation results for the test corridor.
DOT National Transportation Integrated Search
2008-06-01
This report documents the Integrated Corridor Management (ICM) Analysis Modeling and Simulation (AMS) tools and strategies used on a Test Corridor, presents results and lessons-learned, and documents the relative capability of AMS to support benefit-...
Posttest RELAP4 analysis of LOFT experiment L1-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grush, W.H.; Holmstrom, H.L.O.
Results of a posttest analysis of LOFT loss-of-coolant experiment L1-4 with the RELAP4 code are presented. The results are compared with the pretest prediction and the test data. Differences between the RELAP4 model used for this analysis and that used for the pretest prediction are in the areas of initial conditions, nodalization, emergency core cooling system, broken loop hot leg, and steam generator secondary. In general, these changes made only minor improvement in the comparison of the analytical results to the data. Also presented are the results of a limited study of LOFT downcomer modeling which compared the performance of the conventional single downcomer model with that of the new split downcomer model. A RELAP4 sensitivity calculation with artificially elevated emergency core coolant temperature was performed to highlight the need for an ECC mixing model in RELAP4.
Verification of Orthogrid Finite Element Modeling Techniques
NASA Technical Reports Server (NTRS)
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant to the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
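The Bayesian step reduces, in its simplest form, to sampling a posterior over the uncertain parameters given observations and an error model. A minimal Metropolis sketch, assuming a Gaussian error model, a flat prior, and a hypothetical forward_model() standing in for a Delft3D run (in practice one would use a surrogate or precomputed runs, since each model evaluation is expensive):

```python
# Minimal Metropolis sampler for the parameter posterior; forward_model()
# is a stand-in for the (expensive) hydrodynamic model evaluation.
import numpy as np

def metropolis(forward_model, obs, sigma, theta0, step, n=5000, seed=0):
    rng = np.random.default_rng(seed)
    def log_post(theta):
        resid = obs - forward_model(theta)
        return -0.5 * np.sum((resid / sigma) ** 2)  # flat prior assumed
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n):
        prop = theta + step * rng.standard_normal(np.shape(theta))
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)
```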
Software Safety Analysis of a Flight Guidance System
NASA Technical Reports Server (NTRS)
Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S
2013-01-01
Aim: To develop analytical models and analyse the stress distribution and flexibility of nickel-titanium (NiTi) instruments subject to bending forces. Methodology: The analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler-Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, a nonlinear deformation analysis based on the analytical model and a finite element nonlinear analysis were carried out; numerical results were obtained with the finite element method. Results: According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. The finite element analysis revealed that the position of maximum von Mises stress was also near the instrument tip. Therefore, the proposed analytical model can be used to predict the position of maximum curvature in the instrument, where fracture may occur. The analytical and numerical results were compatible. Conclusion: The proposed analytical model was validated by numerical results in analysing the bending deformation of NiTi instruments and is useful in the design and analysis of instruments. Compared with the finite element method, the analytical model deals conveniently and effectively with the bending behaviour of rotary NiTi endodontic instruments. PMID:23173762
Penetration analysis of projectile with inclined concrete target
NASA Astrophysics Data System (ADS)
Kim, S. B.; Kim, H. W.; Yoo, Y. H.
2015-09-01
This paper presents numerical analysis results of projectile penetration into an inclined concrete target. We applied dynamic material properties of 4340 steel, aluminium and explosive for the projectile body. The dynamic material properties were measured with a static tensile testing machine and Hopkinson pressure bar tests. Moreover, we used three concrete damage models included in LS-DYNA 3D: SOIL_CONCRETE, CSCM (cap model with smooth interaction) and CONCRETE_DAMAGE (K&C concrete). The strain rate effect for concrete is important for predicting the fracture deformation and shape of the concrete, and the penetration depth of projectiles; the CONCRETE_DAMAGE model with strain rate effect was therefore also applied to the penetration analysis. The analysis result with the CSCM model shows good agreement with penetration experimental data. The projectile trace and fracture shapes of the concrete target were compared with experimental data.
Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar
2013-06-24
A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in the training set. Eleven public data sets were chosen as test cases for comparing the performance of the new method with several traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy than Free-Wilson analysis in general. Moreover, the predictions of R-group signature models are comparable to those of models using ECFP6 fingerprints and whole-compound signatures. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient with respect to the R-group signatures; for most of the studied data sets, a significant correlation with the corresponding Free-Wilson contributions is shown. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.
PESTAN: Pesticide Analytical Model Version 4.0 User's Guide
The principal objective of this User's Guide is to provide essential information on aspects such as model conceptualization, model theory, assumptions and limitations, determination of input parameters, analysis of results, and sensitivity analysis.
Analysis on the crime model using dynamical approach
NASA Astrophysics Data System (ADS)
Mohammad, Fazliza; Roslan, Ummu'Atiqah Mohd
2017-08-01
Research was carried out to analyze a dynamical model of a crime-spread system, using a simplified 2-dimensional model. The objectives of this research are to investigate the stability of the crime-spread model, to summarize the stability using a bifurcation analysis, and to study the relationship of the basic reproduction number R0 with the parameters in the model. Our results for the stability of equilibrium points show two types of behaviour: asymptotically stable and saddle. The bifurcation analysis shows that the numbers of criminally active and incarcerated individuals increase as we increase the value of a parameter in the model. The result for the relationship of R0 with the parameter shows that as the parameter increases, R0 increases too, and so does the rate of crime.
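The stability check behind such results is mechanical: find an equilibrium and classify it by the eigenvalues of the Jacobian. A generic sketch for a 2-D system; the dynamics and parameter values below are hypothetical placeholders, not the paper's actual model:

```python
# Generic 2-D equilibrium/stability check via Jacobian eigenvalues.
# The dynamics f() and parameters are assumed, not the paper's model.
import numpy as np
from scipy.optimize import fsolve

beta, gamma, delta = 0.4, 0.2, 0.3               # assumed parameters

def f(x):
    c, i = x                                     # criminally active, incarcerated
    return [beta * c * (1 - c - i) - gamma * c,  # recruitment - incarceration
            gamma * c - delta * i]               # incarceration - release

eq = fsolve(f, [0.5, 0.2])
eps = 1e-6
J = np.array([(np.array(f(eq + eps * e)) - f(eq)) / eps
              for e in np.eye(2)]).T             # numerical Jacobian
eigs = np.linalg.eigvals(J)
print("equilibrium:", eq, "eigenvalues:", eigs)
print("asymptotically stable" if np.all(eigs.real < 0) else "not stable")
```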
NASA Technical Reports Server (NTRS)
Hodges, Robert V.; Nixon, Mark W.; Rehfield, Lawrence W.
1987-01-01
A methodology was developed for the structural analysis of composite rotor blades. This coupled-beam analysis is relatively simple to use compared with alternative analysis techniques. The beam analysis was developed for thin-wall single-cell rotor structures and includes the effects of elastic coupling. This paper demonstrates the effectiveness of the new composite-beam analysis method through comparison of its results with those of an established baseline analysis technique. The baseline analysis is an MSC/NASTRAN finite-element model built up from anisotropic shell elements. Deformations are compared for three linear static load cases of centrifugal force at design rotor speed, applied torque, and lift for an ideal rotor in hover. A D-spar designed to twist under axial loading is the subject of the analysis. Results indicate the coupled-beam analysis is well within engineering accuracy.
A brain-region-based meta-analysis method utilizing the Apriori algorithm.
Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao
2016-05-18
Brain network connectivity modeling is a crucial method for studying the brain's cognitive functions, and meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs) which has shown that meta-analyses can be used to discover brain network connectivity. In this paper, we propose a new meta-analysis method that can be used to find network connectivity models based on the Apriori algorithm, which has the potential to derive brain network connectivity models from activation information in the literature without requiring ROIs. This method first extracts activation information from experimental studies that use cognitive tasks of the same category, and then maps the activation information to the corresponding brain areas by using the automated anatomical labeling (AAL) atlas, after which the activation rate of these brain areas is calculated. Finally, using these brain areas, a potential brain network connectivity model is calculated based on the Apriori algorithm. The present study used this method to conduct a mining analysis on the citations in a language review article by Price (Neuroimage 62(2):816-847, 2012). The results showed that the obtained network connectivity model was consistent with that reported by Price. The proposed method is helpful for finding brain network connectivity by mining the co-activation relationships among brain regions. Furthermore, the results of the co-activation relationship analysis can be used as a priori knowledge for a corresponding dynamic causal modeling analysis, possibly achieving a significant dimension-reducing effect and thus increasing the efficiency of the dynamic causal modeling analysis.
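A minimal sketch of the pair-mining step: each "transaction" is the set of brain regions reported active in one study, and region pairs whose support exceeds a threshold become candidate connectivity edges. The region names, activation sets, and support threshold are illustrative, not the paper's data.

```python
# Minimal sketch: frequent co-activation mining over hypothetical study data,
# the Apriori-style core of the proposed method (pairs only, for brevity).
from itertools import combinations
from collections import Counter

studies = [
    {"IFG", "STG", "MTG"},          # hypothetical per-study activation sets
    {"IFG", "STG", "SMA"},
    {"IFG", "MTG"},
    {"STG", "MTG", "IFG"},
    {"SMA", "STG"},
]
min_support = 0.4  # fraction of studies in which a pair must co-activate

n = len(studies)
pair_counts = Counter(frozenset(p) for s in studies for p in combinations(sorted(s), 2))
edges = {tuple(sorted(p)): c / n for p, c in pair_counts.items() if c / n >= min_support}
for pair, support in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(pair, f"support={support:.2f}")
```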
Analysis of modeling cumulative noise from simultaneous flights volume 2 : supplemental analysis
DOT National Transportation Integrated Search
2012-12-31
This is the second of two volumes of the report on modeling cumulative noise from simultaneous flights. This volume examines the effect of several modeling input cases on Percent Time Audible results calculated by the Integrated Noise Model. The case...
Kumar, Y Kiran; Mehta, Shashi Bhushan; Ramachandra, Manjunath
2017-01-01
The purpose of this work is to provide validation methods for evaluating the hemodynamic assessment of cerebral arteriovenous malformations (CAVMs). This article emphasizes the importance of validating noninvasive measurements for CAVM patients, which are designed using lumped models for the complex vessel structure. The validation of the hemodynamic assessment is based on invasive clinical measurements and on cross-validation against the validated Philips proprietary software Qflow and 2D Perfusion. The modeling results are validated for 30 CAVM patients across 150 vessel locations. Mean flow, diameter, and pressure were compared between the modeling results and the clinical/cross-validation measurements using an independent two-tailed Student t test. Exponential regression analysis was used to assess the relationships among blood flow, vessel diameter, and pressure. Univariate analyses of the relationships among vessel diameter, vessel cross-sectional area, AVM volume, AVM pressure, and AVM flow were performed with linear or exponential regression. Modeling results were compared with clinical measurements from vessel locations in cerebral regions, and the model was cross-validated against the validated Philips proprietary software Qflow and 2D Perfusion. Our results show that the modeling results closely match the clinical results, with only small deviations. In this article, we have validated our modeling results with clinical measurements, and a new approach for cross-validation is proposed by demonstrating the accuracy of our results against a validated product in a clinical environment.
Comprehensive Analysis Modeling of Small-Scale UAS Rotors
NASA Technical Reports Server (NTRS)
Russell, Carl R.; Sekula, Martin K.
2017-01-01
Multicopter unmanned aircraft systems (UAS), or drones, have continued their explosive growth in recent years. With this growth comes demand for increased performance as the limits of existing technologies are reached. In order to better design multicopter UAS aircraft, better performance prediction tools are needed. This paper presents the results of a study aimed at using the rotorcraft comprehensive analysis code CAMRAD II to model a multicopter UAS rotor in hover. Parametric studies were performed to determine the level of fidelity needed in the analysis code inputs to achieve results that match test data. Overall, the results show that CAMRAD II is well suited to model small-scale UAS rotors in hover. This paper presents the results of the parametric studies as well as recommendations for the application of comprehensive analysis codes to multicopter UAS rotors.
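For scale, a first-order hover estimate can be had from momentum theory alone, which is what a comprehensive code like CAMRAD II refines with blade-level aerodynamics and structural modeling; the thrust, radius, and figure-of-merit values below are hypothetical.

```python
# Minimal sketch: momentum-theory hover estimate for a small UAS rotor,
# a much simpler stand-in for a comprehensive analysis such as CAMRAD II.
import math

T = 5.0          # thrust per rotor, N (hypothetical)
R = 0.12         # rotor radius, m (hypothetical)
rho = 1.225      # air density, kg/m^3
FM = 0.65        # assumed figure of merit for a small rotor

A = math.pi * R**2
v_i = math.sqrt(T / (2 * rho * A))   # induced velocity from momentum theory
P_ideal = T * v_i                    # ideal induced power
P_shaft = P_ideal / FM               # shaft power estimate via figure of merit
print(f"induced velocity {v_i:.2f} m/s, ideal power {P_ideal:.1f} W, "
      f"shaft power ~{P_shaft:.1f} W")
```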
Psychometric Properties on Lecturers' Beliefs on Teaching Function: Rasch Model Analysis
ERIC Educational Resources Information Center
Mofreh, Samah Ali Mohsen; Ghafar, Mohammed Najib Abdul; Omar, Abdul Hafiz Hj; Mosaku, Monsurat; Ma'ruf, Amar
2014-01-01
This paper focuses on the psychometric analysis of the lecturers' beliefs on teaching function (LBTF) survey using Rasch Model analysis. The sample comprised 34 community college lecturers. The Rasch Model is applied to produce specific measurements of the lecturers' beliefs on teaching function in order to generalize results and inferential…
NASA Astrophysics Data System (ADS)
Agapov, Vladimir
2018-03-01
The necessity of new approaches to modeling rods in the analysis of high-rise structures is justified. The possibility of applying three-dimensional superelements of rods with rectangular cross section to the static and dynamic calculation of bar and combined structures is considered. The results of a free-vibration analysis of an eighteen-story spatial frame, using both one-dimensional and three-dimensional rod models, are presented. A comparative analysis of the results is carried out, and conclusions on the applicability of three-dimensional superelements to the static and dynamic analysis of high-rise structures are drawn on this basis.
Neutronics Conversion Analyses of the Laue-Langevin Institute (ILL) High Flux Reactor (RHF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergeron, A.; Dionne, B.; Calzavara, Y.
2014-09-30
The following report describes the neutronics results obtained with the MCNP model of the RHF U7Mo LEU reference design that was established in 2010 during the feasibility analysis. This work constitutes a complete and detailed neutronics analysis of that LEU design using models that have been significantly improved since 2010 and the release of the feasibility report. Where possible, the credibility of the neutronics model is tested by comparing the HEU model results with experimental data or with results calculated by other codes. The results obtained with the LEU model are systematically compared to the HEU model. The changes applied to the neutronics model lead to better comparisons with experimental data or improved calculation efficiency, but do not challenge the conclusions of the feasibility analysis. If the U7Mo fuel is commercially available and not cost prohibitive, a back-end solution is established, and it is possible to manufacture the proposed element, the neutronics analyses show that the performance of the reactor would not be challenged by conversion to LEU fuel.
Impact of uncertainty on modeling and testing
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Brown, Kendall K.
1995-01-01
A thorough understanding of the uncertainties associated with the modeling and testing of the Space Shuttle Main Engine (SSME) will greatly aid decisions concerning hardware performance and future development efforts. This report describes the determination of the uncertainties in the modeling and testing of the Space Shuttle Main Engine test program at the Technology Test Bed (TTB) facility at Marshall Space Flight Center. Section 2 presents a summary of the uncertainty analysis methodology used and discusses the specific applications to the TTB SSME test program. Section 3 discusses the application of the uncertainty analysis to the test program and the results obtained. Section 4 presents the results of the analysis of the SSME modeling effort from an uncertainty analysis point of view. The appendices at the end of the report contain a significant amount of information relative to the analysis, including discussions of venturi flowmeter data reduction and uncertainty propagation, bias uncertainty documentation, technical papers published, the computer code generated to determine the venturi uncertainties, and the venturi data and results used in the analysis.
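As a sketch of the propagation step described for the venturi flowmeters, the snippet below pushes 1-sigma input uncertainties through a standard venturi data-reduction equation by root-sum-square (RSS) combination of numerically estimated sensitivities; all input values and uncertainties are hypothetical, not the TTB program's.

```python
# Minimal sketch: RSS uncertainty propagation for a venturi mass-flow
# data reduction equation, in the spirit of the report's methodology.
import numpy as np

def mdot(Cd, A, rho, dp, beta):
    """Venturi data reduction: m = Cd*A*sqrt(2*rho*dp/(1 - beta**4))."""
    return Cd * A * np.sqrt(2.0 * rho * dp / (1.0 - beta**4))

x = dict(Cd=0.98, A=1.2e-3, rho=998.0, dp=2.5e4, beta=0.5)   # nominal inputs
u = dict(Cd=0.005, A=2.0e-6, rho=1.0, dp=250.0, beta=0.002)  # 1-sigma uncertainties

m0 = mdot(**x)
var = 0.0
for name, ux in u.items():
    step = 1e-6 * abs(x[name])
    xp = dict(x); xp[name] += step                 # forward-difference sensitivity
    dm_dx = (mdot(**xp) - m0) / step
    var += (dm_dx * ux) ** 2                       # RSS accumulation
print(f"mdot = {m0:.4f} kg/s +/- {np.sqrt(var):.4f} (1-sigma, RSS)")
```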
Compression After Impact on Honeycomb Core Sandwich Panels with Thin Facesheets, Part 2: Analysis
NASA Technical Reports Server (NTRS)
Mcquigg, Thomas D.; Kapania, Rakesh K.; Scotti, Stephen J.; Walker, Sandra P.
2012-01-01
A two-part research study has been completed on the topic of compression after impact (CAI) of thin facesheet honeycomb core sandwich panels. The research has focused on both experiments and analysis in an effort to establish and validate a new understanding of the damage tolerance of these materials. Part 2, the subject of the current paper, is focused on the analysis corresponding to the CAI tests described in Part 1. Of interest are sandwich panels with aerospace applications, consisting of very thin, woven S2-fiberglass (with MTM45-1 epoxy) facesheets adhered to a Nomex honeycomb core. Two sets of materials, identical except for the density of the honeycomb core, were tested in Part 1. The results highlighted the need for analysis methods which take into account multiple failure modes. A finite element model (FEM) is developed here in Part 2. A commercial implementation of the Multicontinuum Failure Theory (MCT) for progressive failure analysis (PFA) in composite laminates, Helius:MCT, is included in this model. The inclusion of PFA in the present model provides a new, unique ability to account for multiple failure modes. In addition, significant impact damage detail is included in the model. A sensitivity study, used to assess the effect of each damage parameter on overall analysis results, is included in an appendix. Analysis results are compared to the experimental results for each of the 32 CAI sandwich panel specimens tested to failure. The failure of each specimen is predicted using the high-fidelity, physics-based analysis model developed here, and the results highlight key improvements in the understanding of honeycomb core sandwich panel CAI failure. Finally, a parametric study highlights the strength benefits compared to the mass penalty for various core densities.
Dynamical System Analysis of Reynolds Stress Closure Equations
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.
1997-01-01
In this paper, we establish the causality between the model coefficients in the standard pressure-strain correlation model and the predicted equilibrium states for homogeneous turbulence. We accomplish this by performing a comprehensive fixed point analysis of the modeled Reynolds stress and dissipation rate equations. The results from this analysis will be very useful for developing improved pressure-strain correlation models to yield observed equilibrium behavior.
NASA Astrophysics Data System (ADS)
Glocer, A.; Rastätter, L.; Kuznetsova, M.; Pulkkinen, A.; Singer, H. J.; Balch, C.; Weimer, D.; Welling, D.; Wiltberger, M.; Raeder, J.; Weigel, R. S.; McCollough, J.; Wing, S.
2016-07-01
We present the latest result of a community-wide space weather model validation effort coordinated among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), model developers, and the broader science community. Validation of geospace models is a critical activity for both building confidence in the science results produced by the models and in assessing the suitability of the models for transition to operations. Indeed, a primary motivation of this work is supporting NOAA/SWPC's effort to select a model or models to be transitioned into operations. Our validation efforts focus on the ability of the models to reproduce a regional index of geomagnetic disturbance, the local K-index. Our analysis includes six events representing a range of geomagnetic activity conditions and six geomagnetic observatories representing midlatitude and high-latitude locations. Contingency tables, skill scores, and distribution metrics are used for the quantitative analysis of model performance. We consider model performance on an event-by-event basis, aggregated over events, at specific station locations, and separated into high-latitude and midlatitude domains. A summary of results is presented in this report, and an online tool for detailed analysis is available at the CCMC.
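For reference, the contingency-table metrics named above reduce to a few lines; the hit/miss/false-alarm counts here are hypothetical, not the study's validation data.

```python
# Minimal sketch: 2x2 contingency-table skill metrics of the kind used in
# the geospace model validation (counts are hypothetical).
def skill(hits, false_alarms, misses, correct_negatives):
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)                       # probability of detection
    far = b / (a + b)                       # false alarm ratio
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, hss                    # hss: Heidke skill score

pod, far, hss = skill(hits=42, false_alarms=11, misses=9, correct_negatives=138)
print(f"POD={pod:.2f} FAR={far:.2f} HSS={hss:.2f}")
```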
NASA Technical Reports Server (NTRS)
Glocer, A.; Rastaetter, L.; Kuznetsova, M.; Pulkkinen, A.; Singer, H. J.; Balch, C.; Weimer, D.; Welling, D.; Wiltberger, M.; Raeder, J.;
2016-01-01
We present the latest result of a community-wide space weather model validation effort coordinated among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), model developers, and the broader science community. Validation of geospace models is a critical activity for both building confidence in the science results produced by the models and in assessing the suitability of the models for transition to operations. Indeed, a primary motivation of this work is supporting NOAA/SWPC's effort to select a model or models to be transitioned into operations. Our validation efforts focus on the ability of the models to reproduce a regional index of geomagnetic disturbance, the local K-index. Our analysis includes six events representing a range of geomagnetic activity conditions and six geomagnetic observatories representing midlatitude and high-latitude locations. Contingency tables, skill scores, and distribution metrics are used for the quantitative analysis of model performance. We consider model performance on an event-by-event basis, aggregated over events, at specific station locations, and separated into high-latitude and midlatitude domains. A summary of results is presented in this report, and an online tool for detailed analysis is available at the CCMC.
An introduction to Space Weather Integrated Modeling
NASA Astrophysics Data System (ADS)
Zhong, D.; Feng, X.
2012-12-01
The need for a software toolkit that integrates space weather models and data is one of many challenges we face when applying the models to space weather forecasting. To meet this challenge, we have developed Space Weather Integrated Modeling (SWIM), which is capable of analysis and visualization of the results from a diverse set of space weather models. SWIM has a modular design and is written in Python, using NumPy, matplotlib, and the Visualization ToolKit (VTK). SWIM provides a data management module to read a variety of spacecraft data products as well as the specific data format of the Solar-Interplanetary Conservation Element/Solution Element MHD model (SIP-CESE MHD model) for the study of solar-terrestrial phenomena. Data analysis, visualization, and graphical user interface modules are also provided in a user-friendly way to run the integrated models and to visualize 2-D and 3-D data sets interactively. With these tools we can rapidly analyze model results, locally or remotely, for example by extracting data at specific locations in time-sequence data sets, plotting interplanetary magnetic field lines, multi-slicing solar wind speed, volume rendering solar wind density, animating time-sequence data sets, and comparing model results with observational data. To speed up the analysis, an in-situ visualization interface supports visualizing the data 'on the fly'. We have also accelerated some critical, time-consuming analysis and visualization methods with the aid of GPUs and multi-core CPUs. We have used this tool to visualize the data of the SIP-CESE MHD model in real time, and have integrated the database model of shock arrival, the shock propagation model, the Dst forecasting model, and the SIP-CESE MHD model developed by the SIGMA Weather Group at the State Key Laboratory of Space Weather/CAS.
NASA Astrophysics Data System (ADS)
O'Donncha, Fearghal; Hartnett, Michael; Nash, Stephen; Ren, Lei; Ragnoli, Emanuele
2015-02-01
In this study, High Frequency Radar (HFR) observations, in conjunction with numerical model simulations, are used to investigate surface flow dynamics in a tidally active, wind-driven bay: Galway Bay, on the west coast of Ireland. Comparisons against ADCP sensor data permit independent assessments of HFR and model performance. Results show root-mean-square (rms) differences of 10-12 cm/s for the HFR, while the model rms differences equalled 12-14 cm/s. Subsequent analysis focuses on a detailed comparison of HFR and model output. Harmonic analysis decomposes both sets of surface currents into distinct flow processes, enabling a correlation analysis between the resulting output and the dominant forcing parameters. Comparisons of barotropic model simulations and the HFR tidal signal demonstrate consistently high agreement, particularly for the dominant M2 tidal signal. Analysis of residual flows demonstrates considerably poorer agreement, with the model failing to replicate complex flows. A number of hypotheses explaining this discrepancy are discussed, namely: discrepancies between regional-scale coastal-ocean models and globally influenced bay-scale dynamics; model uncertainties arising from highly variable wind-driven flows across a large body of water forced by point measurements of wind vectors; and the high dependence of model simulations on empirical wind-stress coefficients. The research demonstrates that an advanced, widely used hydro-environmental model does not accurately reproduce aspects of surface flow processes, particularly with regard to wind forcing. Considering the significance of surface boundary conditions in both coastal and open-ocean dynamics, the viability of using a systematic analysis of results to improve model predictions is discussed.
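The core of the harmonic decomposition is a least-squares fit of sinusoids at known tidal frequencies. Below is a minimal sketch for the dominant M2 constituent, run on a synthetic current record standing in for HFR or model output.

```python
# Minimal sketch: least-squares extraction of the M2 tidal constituent
# from a surface-current time series (synthetic data, M2 period 12.4206 h).
import numpy as np

t = np.arange(0.0, 30 * 24, 0.5)                  # 30 days sampled every 0.5 h
omega = 2 * np.pi / 12.4206                       # M2 angular frequency, rad/hour
rng = np.random.default_rng(1)
u = 0.25 * np.cos(omega * t - 0.8) + 0.05 * rng.normal(size=t.size)  # synthetic current

# Fit u(t) ~ a*cos(w t) + b*sin(w t) + mean
G = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(G, u, rcond=None)
amp = np.hypot(coef[0], coef[1])                  # M2 amplitude
phase = np.arctan2(coef[1], coef[0])              # M2 phase
print(f"M2 amplitude {amp:.3f} m/s, phase {phase:.3f} rad")
```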
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isabelle L.; O'Brien, T. Kevin; Minguet, Pierre J.
2004-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
Modified optimal control pilot model for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Davidson, John B.; Schmidt, David K.
1992-01-01
This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input-compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm suited to implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably with the measured experimental results than those of the other previously proposed simplified models evaluated.
FAST Mast Structural Response to Axial Loading: Modeling and Verification
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Elliott, Kenny B.; Templeton, Justin D.; Song, Kyongchan; Rayburn, Jeffery T.
2012-01-01
The International Space Station's solar array wing mast shadowing problem is the focus of this paper. A building-block approach to modeling and analysis is pursued for the primary structural components of the solar array wing mast structure. Starting with an ANSYS® finite element model, a verified MSC.Nastran™ model is established for a single longeron. This finite element model translation requires the conversion of several modeling and analysis features for the two structural analysis tools to produce comparable results for the single-longeron configuration. The model is then reconciled using test data. The resulting MSC.Nastran™ model is then extended to a single-bay configuration and verified using single-bay test data. Conversion of the MSC.Nastran™ single-bay model to Abaqus™ is also performed to simulate the elastic-plastic longeron buckling response of the single bay prior to folding.
Pilot-model analysis and simulation study of the effect of control task on desired control response
NASA Technical Reports Server (NTRS)
Adams, J. J.; Gera, J.; Jaudon, J. B.
1978-01-01
A pilot model analysis was performed that relates pilot control compensation, pilot aircraft system response, and aircraft response characteristics for longitudinal control. The results show that a higher aircraft short period frequency is required to achieve superior pilot aircraft system response in an altitude control task than is required in an attitude control task. These results were confirmed by a simulation study of target tracking. It was concluded that the pilot model analysis provides a theoretical basis for determining the effect of control task on pilot opinions.
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies, and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range, and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from it. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I2 from the normal random effects model can be inappropriate summaries, and the proposed model helps reduce this issue. We illustrate the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures, and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
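A rough sketch of the transformation idea, assuming a frequentist shortcut: Box-Cox-transform the (positive) effect estimates, then apply a DerSimonian-Laird random-effects fit on the transformed scale. The paper's actual method is Bayesian and handles the within-study variances properly; here they are simply carried over unadjusted, and all numbers are hypothetical.

```python
# Minimal sketch: Box-Cox transform + DerSimonian-Laird random-effects fit.
import numpy as np
from scipy.stats import boxcox

y = np.array([1.2, 0.8, 2.5, 1.7, 3.1, 0.9])        # positive effect estimates
v = np.array([0.04, 0.03, 0.09, 0.05, 0.12, 0.03])  # within-study variances

z, lam = boxcox(y)                    # transformed effects, fitted lambda
w = 1.0 / v                           # NOTE: variances are not delta-method
ybar = np.sum(w * z) / np.sum(w)      # adjusted here; the paper handles this
Q = np.sum(w * (z - ybar) ** 2)       # within its Bayesian model
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * z) / np.sum(w_re)
print(f"lambda={lam:.2f} tau^2={tau2:.3f} pooled effect (transformed scale)={mu_re:.3f}")
```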
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Parametric sensitivity analysis of an agro-economic model of management of irrigation water
NASA Astrophysics Data System (ADS)
El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse
2015-04-01
The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the issue of the valuation of water by the agricultural sector in particular. To this end, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub in eastern Morocco. This model integrates economic, agronomic, and hydraulic data and simulates agricultural gross margin across the area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results, a parametric sensitivity analysis was performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, of the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) the coefficient of crop yield response to water, ii) the average daily weight gain of livestock, iii) the rate of livestock reproduction, iv) the maximum crop yield, v) the supply of irrigation water, and vi) precipitation. These 6 parameters have sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
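A minimal sketch of the One-Factor-At-A-Time screening loop, using a hypothetical stand-in objective function in place of the (non-public) agro-economic model; the sensitivity index here is a simple normalized local effect, not necessarily the paper's exact definition.

```python
# Minimal sketch: One-Factor-At-A-Time (OFAT) screening around a nominal point.
import numpy as np

def gross_margin(p):
    """Hypothetical stand-in for the farm-model objective function."""
    yield_resp, water, precip = p
    return yield_resp * np.sqrt(water + precip) * 100.0

nominal = np.array([1.1, 40.0, 15.0])   # hypothetical nominal parameter values
names = ["yield response", "water supply", "precipitation"]
f0 = gross_margin(nominal)

for i, name in enumerate(names):
    p = nominal.copy()
    p[i] *= 1.10                                   # perturb one factor by +10%
    S = ((gross_margin(p) - f0) / f0) / 0.10       # normalized sensitivity index
    print(f"{name:>15s}: S = {S:+.2f}")
```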
Combining multiple imputation and meta-analysis with individual participant data
Burgess, Stephen; White, Ian R; Resche-Rigon, Matthieu; Wood, Angela M
2013-01-01
Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within-study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta-analysis, rather than meta-analyzing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration. PMID:23703895
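The recommended ordering (Rubin's rules within each study, then inverse-variance meta-analysis of the pooled study estimates) is easy to state in code; the per-study imputation results below are hypothetical.

```python
# Minimal sketch: Rubin's rules per study, then inverse-variance meta-analysis.
import numpy as np

# per-study: M imputed-analysis results (estimate, variance) from MI
studies = {
    "A": [(0.52, 0.010), (0.48, 0.011), (0.55, 0.009)],
    "B": [(0.30, 0.020), (0.35, 0.018), (0.28, 0.022)],
}

pooled = {}
for name, draws in studies.items():
    q = np.array([d[0] for d in draws])
    u = np.array([d[1] for d in draws])
    m = len(q)
    qbar = q.mean()                               # Rubin: pooled estimate
    T = u.mean() + (1 + 1 / m) * q.var(ddof=1)    # within + between variance
    pooled[name] = (qbar, T)

w = np.array([1 / t for _, t in pooled.values()])  # inverse-variance weights
est = np.array([q for q, _ in pooled.values()])
meta = np.sum(w * est) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"meta-analysis estimate {meta:.3f} (SE {se:.3f})")
```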
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.
2014-01-01
This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. A doublet lattice approach is taken to compute generalized forces, and a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information, so little interaction with the model developer is required, although all parameters can easily be modified if desired. The focus of this paper is on tool presentation, verification and validation. This process is carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool. Therefore the flutter speed and frequency for a clamped plate are computed using V-g and V-f analysis. The computational results are compared to a previously published computational analysis and wind tunnel results for the same structure. Finally a case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to V-g and V-f analysis. This also includes the analysis of the model in response to a 1-cos gust.
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Conyers, Howard J.; Mavris, Dimitri N.
2015-01-01
This paper introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this paper is on tool presentation, verification, and validation. These processes are carried out in stages throughout the paper. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
Rapid State Space Modeling Tool for Rectangular Wing Aeroservoelastic Studies
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Conyers, Howard Jason; Mavris, Dimitri N.
2015-01-01
This report introduces a modeling and simulation tool for aeroservoelastic analysis of rectangular wings with trailing-edge control surfaces. The inputs to the code are planform design parameters such as wing span, aspect ratio, and number of control surfaces. Using this information, the generalized forces are computed using the doublet-lattice method. Using Roger's approximation, a rational function approximation is computed. The output, computed in a few seconds, is a state space aeroservoelastic model which can be used for analysis and control design. The tool is fully parameterized with default information so there is little required interaction with the model developer. All parameters can be easily modified if desired. The focus of this report is on tool presentation, verification, and validation. These processes are carried out in stages throughout the report. The rational function approximation is verified against computed generalized forces for a plate model. A model composed of finite element plates is compared to a modal analysis from commercial software and an independently conducted experimental ground vibration test analysis. Aeroservoelastic analysis is the ultimate goal of this tool, therefore, the flutter speed and frequency for a clamped plate are computed using damping-versus-velocity and frequency-versus-velocity analysis. The computational results are compared to a previously published computational analysis and wind-tunnel results for the same structure. A case study of a generic wing model with a single control surface is presented. Verification of the state space model is presented in comparison to damping-versus-velocity and frequency-versus-velocity analysis, including the analysis of the model in response to a 1-cos gust.
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration, the copula models are fitted to Malaysian motor insurance claims data. We consider copula models from the Archimedean and Elliptical families, and marginal distributions given by Gamma and Inverse Gaussian regression models. The simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under this model most closely approximate the actual claims experience relative to the other copula models.
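A sketch of the dependent-model simulation step, with one deliberate substitution: a Gaussian copula is used here because its sampling is simplest to verify, whereas the paper's best-fitting copula is the Frank copula; the Gamma marginals and all parameters are hypothetical stand-ins for the fitted regression marginals.

```python
# Minimal sketch: dependent claim severities via a Gaussian copula with
# Gamma marginals, then a pure-premium estimate under dependence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rho = 0.5                                          # hypothetical dependence
n = 10_000

# 1) correlated normals -> uniforms (the copula)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)

# 2) uniforms -> Gamma marginals (stand-ins for fitted regression marginals)
x1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=500.0)  # e.g., own-damage claims
x2 = stats.gamma.ppf(u[:, 1], a=1.5, scale=800.0)  # e.g., third-party claims

premium = np.mean(x1 + x2)                         # pure premium under dependence
print(f"simulated pure premium: {premium:.0f}")
```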
Moment method analysis of linearly tapered slot antennas
NASA Technical Reports Server (NTRS)
Koeksal, Adnan
1993-01-01
A method of moments (MoM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs unequal-size rectangular sectioning for the conducting parts of the antenna, with piecewise sinusoidal basis functions used for the expansion of the conductor current. The effect of the dielectric is incorporated by using an equivalent volume polarization current density and solving the equivalent problem in free space. The feed section of the antenna, including the microstripline, is handled rigorously in the MoM model by including the slotline short circuit and microstripline currents among the unknowns. Comparison with measurements demonstrates the validity of the model for both the air case and the dielectric case. Validity is further verified by extending the model to the analysis of the skew-plate antenna and comparing the results to those of a skew-segmentation model of the same structure and to available data in the literature. Variation of the radiation pattern of the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.
Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...
The analysis and modelling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1991-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
The analysis and modeling of dilatational terms in compressible turbulence
NASA Technical Reports Server (NTRS)
Sarkar, S.; Erlebacher, G.; Hussaini, M. Y.; Kreiss, H. O.
1989-01-01
It is shown that the dilatational terms that need to be modeled in compressible turbulence include not only the pressure-dilatation term but also another term - the compressible dissipation. The nature of these dilatational terms in homogeneous turbulence is explored by asymptotic analysis of the compressible Navier-Stokes equations. A non-dimensional parameter which characterizes some compressible effects in moderate Mach number, homogeneous turbulence is identified. Direct numerical simulations (DNS) of isotropic, compressible turbulence are performed, and their results are found to be in agreement with the theoretical analysis. A model for the compressible dissipation is proposed; the model is based on the asymptotic analysis and the direct numerical simulations. This model is calibrated with reference to the DNS results regarding the influence of compressibility on the decay rate of isotropic turbulence. An application of the proposed model to the compressible mixing layer has shown that the model is able to predict the dramatically reduced growth rate of the compressible mixing layer.
Evaluation of The Operational Benefits Versus Costs of An Automated Cargo Mover
2016-12-01
The logistics footprint and life-cycle cost are presented as part of this report. Analysis of modeling and simulation results identified statistically significant differences...
The effects of videotape modeling on staff acquisition of functional analysis methodology.
Moore, James W; Fisher, Wayne W
2007-01-01
Lectures and two types of video modeling were compared to determine their relative effectiveness in training 3 staff members to conduct functional analysis sessions. Video modeling that contained a larger number of therapist exemplars resulted in mastery-level performance eight of the nine times it was introduced, whereas neither lectures nor partial video modeling produced significant improvements in performance. Results demonstrated that video modeling provided an effective training strategy but only when a wide range of exemplars of potential therapist behaviors were depicted in the videotape.
Current Results and Proposed Activities in Microgravity Fluid Dynamics
NASA Technical Reports Server (NTRS)
Polezhaev, V. I.
1996-01-01
The Institute for Problems in Mechanics' laboratory for mathematical and physical modelling in fluid mechanics develops models, methods, and software for the analysis of fluid flow, instability analysis, direct numerical modelling, and semi-empirical models of turbulence, as well as conducting experimental research for the verification of these models and their application in technological fluid dynamics, microgravity fluid mechanics, geophysics, and a number of engineering problems. This paper presents an overview of the results of microgravity fluid dynamics research during the last two years. Nonlinear problems of weakly compressible and compressible fluid flows are discussed.
The Effects of Videotape Modeling on Staff Acquisition of Functional Analysis Methodology
Moore, James W; Fisher, Wayne W
2007-01-01
Lectures and two types of video modeling were compared to determine their relative effectiveness in training 3 staff members to conduct functional analysis sessions. Video modeling that contained a larger number of therapist exemplars resulted in mastery-level performance eight of the nine times it was introduced, whereas neither lectures nor partial video modeling produced significant improvements in performance. Results demonstrated that video modeling provided an effective training strategy but only when a wide range of exemplars of potential therapist behaviors were depicted in the videotape. PMID:17471805
NASA Astrophysics Data System (ADS)
Wray, Richard B.
1991-12-01
A hybrid requirements analysis methodology was developed, based on the practices actually used in developing a Space Generic Open Avionics Architecture. During the development of this avionics architecture, a method of analysis able to effectively define the requirements for this space avionics architecture was developed. In this methodology, external interfaces and relationships are defined, a static analysis resulting in a static avionics model was developed, operating concepts for simulating the requirements were put together, and a dynamic analysis of the execution needs for the dynamic model operation was planned. The systems engineering approach was used to perform a top down modified structured analysis of a generic space avionics system and to convert actual program results into generic requirements. CASE tools were used to model the analyzed system and automatically generate specifications describing the model's requirements. Lessons learned in the use of CASE tools, the architecture, and the design of the Space Generic Avionics model were established, and a methodology notebook was prepared for NASA. The weaknesses of standard real-time methodologies for practicing systems engineering, such as Structured Analysis and Object Oriented Analysis, were identified.
NASA Technical Reports Server (NTRS)
Wray, Richard B.
1991-01-01
A hybrid requirements analysis methodology was developed, based on the practices actually used in developing a Space Generic Open Avionics Architecture. During the development of this avionics architecture, a method of analysis able to effectively define the requirements for this space avionics architecture was developed. In this methodology, external interfaces and relationships are defined, a static analysis resulting in a static avionics model was developed, operating concepts for simulating the requirements were put together, and a dynamic analysis of the execution needs for the dynamic model operation was planned. The systems engineering approach was used to perform a top down modified structured analysis of a generic space avionics system and to convert actual program results into generic requirements. CASE tools were used to model the analyzed system and automatically generate specifications describing the model's requirements. Lessons learned in the use of CASE tools, the architecture, and the design of the Space Generic Avionics model were established, and a methodology notebook was prepared for NASA. The weaknesses of standard real-time methodologies for practicing systems engineering, such as Structured Analysis and Object Oriented Analysis, were identified.
Perandini, Simone; Soardi, Gian Alberto; Motton, Massimiliano; Rossi, Arianna; Signorini, Manuel; Montemezzi, Stefania
2016-09-01
The aim of this study was to compare classification results from four major risk prediction models in a wide population of incidentally detected solitary pulmonary nodules (SPNs) which were selected to crossmatch inclusion criteria for the selected models. A total of 285 solitary pulmonary nodules with a definitive diagnosis were evaluated by means of four major risk assessment models developed from non-screening populations, namely the Mayo, Gurney, PKUPH and BIMC models. Accuracy was evaluated by receiver operating characteristic (ROC) area under the curve (AUC) analysis. Each model's fitness to provide reliable help in decision analysis was primarily assessed by adopting a surgical threshold of 65 % and an observation threshold of 5 % as suggested by ACCP guidelines. ROC AUC values, false positives, false negatives and indeterminate nodules were respectively 0.775, 3, 8, 227 (Mayo); 0.794, 41, 6, 125 (Gurney); 0.889, 42, 0, 144 (PKUPH); 0.898, 16, 0, 118 (BIMC). Resultant data suggests that the BIMC model may be of greater help than Mayo, Gurney and PKUPH models in preoperative SPN characterization when using ACCP risk thresholds because of overall better accuracy and smaller numbers of indeterminate nodules and false positive results. • The BIMC and PKUPH models offer better characterization than older prediction models • Both the PKUPH and BIMC models completely avoided false negative results • The Mayo model suffers from a large number of indeterminate results.
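A minimal sketch of the evaluation pipeline (AUC plus the ACCP-style three-way decision: observe below 5 %, indeterminate between, surgery above 65 %); the labels and predicted probabilities are synthetic, not the study's 285 nodules or the four models' outputs.

```python
# Minimal sketch: ROC AUC and ACCP-threshold triage counts for a risk model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=285)                          # 1 = malignant (synthetic)
p = np.clip(0.25 + 0.3 * y + 0.25 * rng.normal(size=285), 0, 1)  # synthetic probabilities

auc = roc_auc_score(y, p)
observe = p < 0.05
surgery = p > 0.65
indeterminate = ~(observe | surgery)
false_neg = np.sum(observe & (y == 1))                    # malignant sent to observation
false_pos = np.sum(surgery & (y == 0))                    # benign sent to surgery
print(f"AUC={auc:.3f} FN={false_neg} FP={false_pos} "
      f"indeterminate={indeterminate.sum()}")
```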
Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.
2008-01-01
Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
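The single-subsystem SEA prediction mentioned above rests on the power balance P_in = ω·η·E with subsystem energy E = M·⟨v²⟩, so the space-averaged mean-square velocity follows directly from the input power; the loss factor, plate mass, and band powers below are hypothetical.

```python
# Minimal sketch: single-subsystem SEA estimate of plate mean-square velocity.
import numpy as np

f = np.array([250.0, 500.0, 1000.0, 2000.0])   # band center frequencies, Hz
P_in = np.array([1e-3, 8e-4, 6e-4, 4e-4])      # shaker input power per band, W
eta = 0.01                                     # assumed damping loss factor
M = 2.5                                        # plate mass, kg (hypothetical)

omega = 2 * np.pi * f
v2 = P_in / (omega * eta * M)                  # <v^2> from P_in = omega*eta*M*<v^2>
for fi, vi in zip(f, v2):
    print(f"{fi:6.0f} Hz: <v^2> = {vi:.3e} (m/s)^2")
```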
Finite element modelling of crash response of composite aerospace sub-floor structures
NASA Astrophysics Data System (ADS)
McCarthy, M. A.; Harte, C. G.; Wiggenraad, J. F. M.; Michielsen, A. L. P. J.; Kohlgrüber, D.; Kamoulakos, A.
Composite energy-absorbing structures for use in aircraft are being studied within a European Commission research programme (CRASURV - Design for Crash Survivability). One of the aims of the project is to evaluate the current capabilities of crashworthiness simulation codes for composites modelling. This paper focuses on the computational analysis, using explicit finite element analysis, of a number of quasi-static and dynamic tests carried out within the programme. It describes the design of the structures, the analysis techniques used, and the results of the analyses in comparison with the experimental test results. It has been found that current multi-ply shell models are capable of modelling the main energy-absorbing processes at work in such structures. However, some deficiencies exist, particularly in modelling fabric composites. Developments within the finite element code are taking place as a result of this work, which will enable better representation of composite fabrics.
Quadrupedal rodent gait compensations in a low dose monoiodoacetate model of osteoarthritis.
Lakes, Emily H; Allen, Kyle D
2018-06-01
Rodent gait analysis provides robust, quantitative results for preclinical musculoskeletal and neurological models. In prior work, surgical models of osteoarthritis have been found to result in a hind limb shuffle-stepping gait compensation, while a high dose monoiodoacetate (MIA, 3 mg) model resulted in a hind limb antalgic gait. However, it is unknown whether the antalgic gait caused by MIA is associated with severity of degeneration from the high dosage or the whole-joint degeneration associated with glycolysis inhibition. This study evaluates rodent gait changes resulting from a low dose, 1 mg unilateral intra-articular injection of MIA compared to saline injected and naïve rats. Spatiotemporal and dynamic gait parameters were collected from a total of 42 male Lewis rats spread across 3 time points: 1, 2, and 4 weeks post-injection. To provide a detailed analysis of this low dose MIA model, gait analysis was used to uniquely quantify both fore and hind limb gait parameters. Our data indicate that 1 mg of MIA caused relatively minor degeneration and a shuffle-step gait compensation, similar to the compensation observed in prior surgical models. These data from a 1 mg MIA model show a different gait compensation compared to a previously studied 3 mg model. This 1 mg MIA model resulted in gait compensations more similar to a previously studied surgical model of osteoarthritis. Additionally, this study provides detailed 4 limb analysis of rodent gait that includes spatiotemporal and dynamic data from the same gait trial. These data highlight the importance of measuring dynamic data in combination with spatiotemporal data, since compensatory gait patterns may not be captured by spatial, temporal, or dynamic characterizations alone.
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance.
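A compact sketch of Morris elementary-effects screening (a simplified version of the revised method the paper uses), applied to a hypothetical stand-in response rather than the actual AnMBR filtration model with its 14 factors.

```python
# Minimal sketch: Morris elementary effects. mu* (mean |effect|) flags
# influential factors; sigma indicates interactions or nonlinearity.
import numpy as np

rng = np.random.default_rng(11)

def response(x):
    """Hypothetical stand-in for the filtration-model response."""
    return 3 * x[0] + x[1] ** 2 + 0.1 * x[2] + 0.5 * x[0] * x[1]

k, r, delta = 3, 20, 0.1               # factors, trajectories, step size
effects = np.zeros((r, k))
for t in range(r):
    x = rng.random(k)
    for j in rng.permutation(k):       # move one factor at a time
        step = delta if x[j] + delta <= 1.0 else -delta
        x_new = x.copy()
        x_new[j] = x[j] + step
        effects[t, j] = (response(x_new) - response(x)) / step
        x = x_new

mu_star = np.abs(effects).mean(axis=0)
sigma = effects.std(axis=0)
print("mu* =", np.round(mu_star, 2), " sigma =", np.round(sigma, 2))
```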
Nonlinear Poisson Equation for Heterogeneous Media
Hu, Langhua; Wei, Guo-Wei
2012-01-01
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to account for hyperpolarization effects due to intensive charges and possible nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by solvation analysis of a set of 17 small molecules for which experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. PMID:22947937
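A nonlinear Poisson problem of this general kind is typically solved by iterating a sequence of linearized solves. The 1-D sketch below uses Picard (fixed-point) iteration on a finite-difference grid; the field-dependent permittivity eps(u) is an illustrative stand-in for the hyperpolarization closure derived in the paper, not the paper's actual model.

```python
# Minimal 1-D sketch: solve  -d/dx( eps(u) du/dx ) = rho(x),  u(0)=u(1)=0,
# by Picard iteration, with an illustrative nonlinear medium eps(u).
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
rho = np.exp(-100.0 * (x - 0.5) ** 2)       # localized charge density
eps = lambda u: 1.0 + 0.5 * u ** 2           # illustrative nonlinear medium

u = np.zeros(n)
for it in range(200):
    e = eps(u)
    e_half = 0.5 * (e[:-1] + e[1:])          # permittivity at cell faces
    # Assemble the tridiagonal system for the interior nodes.
    A = np.zeros((n - 2, n - 2))
    b = rho[1:-1] * h ** 2
    for i in range(n - 2):
        A[i, i] = e_half[i] + e_half[i + 1]
        if i > 0:
            A[i, i - 1] = -e_half[i]
        if i < n - 3:
            A[i, i + 1] = -e_half[i + 1]
    u_new = np.zeros(n)
    u_new[1:-1] = np.linalg.solve(A, b)
    if np.max(np.abs(u_new - u)) < 1e-10:    # fixed point reached
        break
    u = u_new
print(f"converged in {it} iterations; max potential = {u.max():.4f}")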
High Fidelity System Simulation of Multiple Components in Support of the UEET Program
NASA Technical Reports Server (NTRS)
Plybon, Ronald C.; VanDeWall, Allan; Sampath, Rajiv; Balasubramaniam, Mahadevan; Mallina, Ramakrishna; Irani, Rohinton
2006-01-01
The High Fidelity System Simulation effort has addressed various important objectives to enable additional capability within the NPSS framework. The scope emphasized the High Pressure Turbine and High Pressure Compressor components. Initial effort was directed at developing and validating an intermediate-fidelity NPSS model using PD geometry, and was extended to a high-fidelity NPSS model by overlaying detailed geometry to validate CFD against rig data. Both "feed-forward" and "feedback" approaches to analysis zooming were employed to enable system simulation capability in NPSS. These approaches have different benefits and applicability to specific applications: "feedback" zooming allows the flow-up of information from high-fidelity analysis to be used to update the NPSS model results by forcing the NPSS solver to converge to the high-fidelity analysis predictions. This approach is effective in improving the accuracy of the NPSS model; however, it can only be used in circumstances where there is a clear physics-based strategy for flowing the high-fidelity analysis results up to update the NPSS system model. The "feed-forward" zooming approach is more broadly useful in that it enables detailed analysis at early stages of design for a specified set of critical operating points and uses these analysis results to drive design decisions early in the development process.
Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-01-01
Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160
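The core of the workflow AutoVAR automates — fitting VAR models over a search space, comparing them by information criteria, and running Granger causality tests — can be sketched with the statsmodels library. The EMA-like series below are synthetic stand-ins, not the study's data.

```python
# Minimal sketch of automated VAR model selection and Granger testing.
# Requires: pip install statsmodels pandas numpy
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 120                                     # e.g. 120 momentary assessments
mood = np.zeros(n)
activity = np.zeros(n)
for t in range(1, n):
    mood[t] = 0.5 * mood[t-1] + 0.2 * activity[t-1] + rng.normal()
    activity[t] = 0.4 * activity[t-1] + rng.normal()
data = pd.DataFrame({"mood": mood, "activity": activity})

best = None
for p in range(1, 6):                       # candidate lag orders
    res = VAR(data).fit(p)
    if best is None or res.aic < best.aic:  # could equally rank by res.bic
        best = res
print("selected lag order:", best.k_ar)
# Granger causality test, as reported in the AutoVAR output:
print(best.test_causality("mood", ["activity"]).summary())
```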
Testing students' e-learning via Facebook through Bayesian structural equation modeling.
Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook is re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019
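The ML-versus-Bayesian comparison the study performs can be illustrated on a single structural path rather than the full model. The sketch below contrasts an OLS (maximum likelihood) estimate with a Bayesian posterior for one path coefficient, using statsmodels and PyMC on synthetic data; it is a stand-in for the study's Bayesian SEM, not a reproduction of it.

```python
# Minimal sketch: ML vs Bayesian estimation of one regression path.
# Requires: pip install pymc statsmodels numpy
import numpy as np
import pymc as pm
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 170                                   # echoes the study's sample size
performance_expectancy = rng.normal(size=n)
intention = 0.6 * performance_expectancy + rng.normal(scale=0.8, size=n)

# Maximum likelihood (here OLS) estimate of the path coefficient.
ml = sm.OLS(intention, sm.add_constant(performance_expectancy)).fit()
print("ML estimate:", round(ml.params[1], 3))

# Bayesian estimate of the same path with weakly informative priors.
with pm.Model():
    a = pm.Normal("a", 0.0, 10.0)
    b = pm.Normal("b", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y", mu=a + b * performance_expectancy, sigma=sigma,
              observed=intention)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)
print("Bayesian posterior mean:", round(idata.posterior["b"].mean().item(), 3))
```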
Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo
2013-01-01
Background This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and to compare the prediction models built with the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. Performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed statistically significant association with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, but a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values between the LR and ANN prediction models. Conclusion The prediction models for CA dysfunction were developed using ANN and LR. ANN and LR are two effective tools for developing prediction models based on our dataset. PMID:23940593
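The model comparison the study reports — LR versus an ANN, scored by area under the ROC curve — is straightforward to sketch with scikit-learn. The data below are synthetic (the CA-dysfunction dataset is not public); the 2,092 samples and 14 features merely echo the study's dimensions.

```python
# Minimal sketch of the LR-versus-ANN AUC comparison on synthetic data.
# Requires: pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2092, n_features=14, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

for name, clf in [("LR", lr), ("ANN", ann)]:
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```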
NASA Astrophysics Data System (ADS)
Tian, F.; Sivapalan, M.; Li, H.; Hu, H.
2007-12-01
The importance of diagnostic analysis of hydrological models is increasingly recognized by the scientific community (Sivapalan et al., 2003; Gupta et al., 2007). Model diagnosis refers to model structures and parameters being identified not only by statistical comparison of system state variables and outputs but also by process understanding in a specific watershed. Process understanding can be gained by the analysis of observational data and model results at the specific watershed as well as through regionalization. Although remote sensing technology can provide valuable data about the inputs, state variables, and outputs of the hydrological system, observational rainfall-runoff data still constitute the most accurate, reliable, and direct component of hydrology-related databases, and thus a basic one. One critical question in model diagnostic analysis is, therefore, what signature characteristics we can extract from rainfall and runoff data. To date, only a few studies have focused on this question (e.g., Merz et al., 2006; Lana-Renault et al., 2007), and none of them has related event analysis to model diagnosis in an explicit, rigorous, and systematic manner. Our work focuses on the identification of the dominant runoff generation mechanisms from event analysis of rainfall-runoff data, including correlation analysis and analysis of timing patterns. The correlation analysis involves the identification of the complex relationship among rainfall depth, intensity, runoff coefficient, and antecedent conditions, and the timing pattern analysis aims to identify the clustering pattern of runoff events in relation to the patterns of rainfall events. Our diagnostic analysis illustrates the changing pattern of runoff generation mechanisms in the DMIP2 test watersheds located in the Oklahoma region, a pattern also reproduced by numerical simulations based on the TsingHua Representative Elementary Watershed (THREW) model. The result suggests the usefulness of rainfall-runoff event analysis for model development as well as model diagnostics.
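The event-based correlation analysis described above reduces, operationally, to building a table of per-event attributes and examining their relationships. The sketch below does this with pandas on a synthetic event table; the attribute names and the toy runoff-coefficient relationship are illustrative assumptions, not the DMIP2 data.

```python
# Minimal sketch of correlation analysis over rainfall-runoff events.
# Requires: pip install pandas numpy
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 80
events = pd.DataFrame({
    "rain_depth_mm": rng.gamma(2.0, 15.0, n),
    "rain_intensity_mm_h": rng.gamma(2.0, 3.0, n),
    "antecedent_index": rng.uniform(0.0, 1.0, n),
})
# Toy runoff coefficient: wetter antecedent state and larger storms yield
# proportionally more runoff (saturation-excess-like behavior).
events["runoff_coef"] = np.clip(
    0.1 + 0.5 * events["antecedent_index"]
    + 0.003 * events["rain_depth_mm"] + rng.normal(0, 0.05, n), 0, 1)

# Correlation matrix of event attributes -- the kind of signature used to
# separate saturation-excess from infiltration-excess mechanisms.
print(events.corr(method="spearman").round(2))
```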
Numerical prediction of Pelton turbine efficiency
NASA Astrophysics Data System (ADS)
Jošt, D.; Mežnar, P.; Lipej, A.
2010-08-01
This paper presents a numerical analysis of the flow in a 2-jet Pelton turbine with a horizontal axis. The analysis was done for the model turbine at several operating points in different operating regimes, and the results were compared to the results of a test of the model. The analysis was performed using the ANSYS CFX-12.1 computer code. A k-ω SST turbulence model was used. Free-surface flow was modelled by a two-phase homogeneous model. At first, a steady-state analysis of the flow in the distributor with two injectors was performed for several needle strokes. This provided data on flow energy losses in the distributor and on the shape and velocity of the jets. The second step was an unsteady analysis of the runner with the jets. Torque on the shaft was then calculated from the pressure distribution data. The averaged torque values are smaller than the measured ones; consequently, the calculated turbine efficiency is also smaller than the measured values, the difference being about 4%. The shape of the efficiency diagram conforms well to the measurements.
Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscous damped constitutive model. When submitted to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post yielding shear stress versus shear strain curve. This curve is input with tabular data points. When submitted to a sine wave motion, this constitutive model produces a hysteresis loop that looks similar in shape to the input tabular data points on the sides with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results are changed when a significant structural mass is added to the top of the soil column.
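The pointed-ended hysteresis loop described above can be reproduced with a tabular backbone curve and the classical Masing unloading/reloading rule. The sketch below is a simplified, single-reversal-memory version (full Masing memory of nested reversals is omitted), with illustrative backbone values that are not from the paper.

```python
# Minimal sketch: piecewise-linear backbone tau = f(gamma) for virgin
# loading, with the Masing rule tau = tau_r + 2*f((gamma - gamma_r)/2)
# after a strain reversal. A steady sine strain history produces the
# pointed-ended hysteresis loop the paper describes.
import numpy as np

gam_tab = np.array([0.0, 1e-4, 5e-4, 2e-3, 1e-2])    # illustrative strains
tau_tab = np.array([0.0, 10.0, 30.0, 45.0, 50.0])     # illustrative kPa

def backbone(g):
    # Odd extension so f(-g) = -f(g); flat beyond the last tabulated point.
    return np.sign(g) * np.interp(np.abs(g), gam_tab, tau_tab)

t = np.linspace(0.0, 2.0, 2001)
gamma = 5e-3 * np.sin(2.0 * np.pi * t)                # sine strain history

tau = np.zeros_like(gamma)
g_r, t_r = 0.0, 0.0              # most recent reversal point (strain, stress)
virgin = True
for i in range(1, len(gamma)):
    dg = gamma[i] - gamma[i - 1]
    dg_prev = gamma[i - 1] - gamma[i - 2] if i > 1 else dg
    if dg * dg_prev < 0:         # strain rate changed sign: a reversal
        g_r, t_r = gamma[i - 1], tau[i - 1]
        virgin = False
    if virgin:
        tau[i] = backbone(gamma[i])
    else:
        tau[i] = t_r + 2.0 * backbone((gamma[i] - g_r) / 2.0)
print(f"peak stress {tau.max():.1f} kPa at peak strain {gamma.max():.4f}")
```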
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be a compulsory part of every work based on an in silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.
Radom, Marcin; Rybarczyk, Agnieszka; Szawulak, Bartlomiej; Andrzejewski, Hubert; Chabelski, Piotr; Kozak, Adam; Formanowicz, Piotr
2017-12-01
Model development and analysis is a fundamental step in systems biology, and the theory of Petri nets offers a tool for this task. With the rapid development of computer science, a variety of Petri net tools have emerged, offering various analytical algorithms. From this follows the problem of using several different programs to analyse a single model: many file formats and different representations of results make the analysis much harder. Especially for larger nets, the ability to visualize the results in a proper form is a huge help in understanding their significance. We present a new tool for Petri net development and analysis called Holmes. Our program contains algorithms for model analysis based on different types of Petri nets, e.g. an invariant generator, Maximum Common Transitions (MCT) sets and cluster modules, simulation algorithms, and knockout analysis tools. A very important feature is the ability to visualize the results of almost all analytical modules. The integration of such modules into one graphical environment allows a researcher to fully devote his or her time to model building and analysis. Available at http://www.cs.put.poznan.pl/mradom/Holmes/holmes.html; contact: piotr@cs.put.poznan.pl.
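One of the analyses Holmes provides, invariant generation, amounts to finding integer vectors in the null space of the net's incidence matrix. The sketch below computes P-invariants for a toy two-place cycle with sympy; the net is an illustrative example, not a model from the paper.

```python
# Minimal sketch of P-invariant generation: vectors y with y^T C = 0,
# where C is the incidence matrix (places x transitions).
# Requires: pip install sympy
import sympy as sp

# Incidence matrix C[p, t] = tokens produced minus tokens consumed.
# Places: A, B; transitions: t1 (A -> B), t2 (B -> A).
C = sp.Matrix([
    [-1,  1],   # place A
    [ 1, -1],   # place B
])

# P-invariants span the left null space of C, i.e. the null space of C^T.
for v in C.T.nullspace():
    v = v * sp.lcm([term.q for term in v])   # scale to integer entries
    print("P-invariant:", list(v))
# Prints [1, 1]: the token count over {A, B} is conserved.
```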
Monir, Md. Mamun; Zhu, Jun
2017-01-01
Most of the genome-wide association studies (GWASs) for human complex diseases have ignored dominance, epistasis and ethnic interactions. We conducted comparative GWASs for total cholesterol using full and additive models, which illustrate the impact of ignoring these genetic variants on analysis results and demonstrate how the genetic effects of multiple loci can differ across ethnic groups. There were 15 quantitative trait loci (13 individual loci and 3 pairs of epistatic loci) identified by the full model, whereas only 14 loci (9 common loci and 5 different loci) were identified by the multi-locus additive model. Four of the loci detected by the full model were not detected using the multi-locus additive model. PLINK analysis identified two loci, and GCTA analysis detected only one locus with genome-wide significance. The full model identified three previously reported genes as well as several new genes. Bioinformatics analysis showed that some of the new genes are related to cholesterol-related chemicals and/or diseases. Analyses of the cholesterol data and simulation studies revealed that the full model performed better than the additive model in terms of detection power and unbiased estimation of the genetic variants of complex traits. PMID:28079101
Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi
2017-01-01
Background Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies, with no methods currently available for the meta-analysis of joint model estimates from multiple studies. Methods We propose a 2-stage method for meta-analysis of joint model estimates. The method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained using the separate and joint analyses. However, the simulation study indicated a benefit of using joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Conclusions Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models, rather than standalone analyses, should be pooled in 2-stage meta-analyses. PMID:29250814
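The second stage of a 2-stage meta-analysis pools per-study estimates by inverse-variance weighting. The sketch below shows a fixed-effect pool plus the DerSimonian-Laird random-effects extension; the per-study estimates and standard errors are illustrative numbers, not values from the INDANA dataset.

```python
# Minimal sketch of stage two of a 2-stage meta-analysis.
import numpy as np

beta = np.array([0.12, 0.08, 0.15, 0.10])   # per-study estimates (stage 1)
se   = np.array([0.04, 0.05, 0.06, 0.03])   # their standard errors

w = 1.0 / se**2                              # fixed-effect weights
pooled = np.sum(w * beta) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"fixed-effect pooled: {pooled:.3f} (SE {pooled_se:.3f})")

# DerSimonian-Laird random effects: estimate between-study variance tau^2
# from Cochran's Q, then re-weight.
Q = np.sum(w * (beta - pooled)**2)
k = len(beta)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)
print(f"random-effects pooled: {np.sum(w_re*beta)/np.sum(w_re):.3f} "
      f"(tau^2 = {tau2:.4f})")
```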
Portable Life Support Subsystem Thermal Hydraulic Performance Analysis
NASA Technical Reports Server (NTRS)
Barnes, Bruce; Pinckney, John; Conger, Bruce
2010-01-01
This paper presents the current state of the thermal hydraulic modeling efforts being conducted for the Constellation Space Suit Element (CSSE) Portable Life Support Subsystem (PLSS). The goal of these efforts is to provide realistic simulations of the PLSS under various modes of operation. The PLSS thermal hydraulic model simulates the thermal, pressure, and flow characteristics of the PLSS, as well as the related human thermal comfort. This paper presents the modeling approaches and assumptions as well as component model descriptions. Results from the models are presented that show PLSS operation at steady-state and transient conditions. Finally, conclusions and recommendations are offered that summarize the results, identify PLSS design weaknesses uncovered during review of the analysis results, and propose areas for improvement to increase model fidelity and accuracy.
NASA Astrophysics Data System (ADS)
Al-Baarri, A. N.; Legowo, A. M.; Widayat
2018-01-01
D-glucose is understood to have various effects on reactivity in the Maillard reaction, resulting in changes in the physical performance of food products. Therefore, this research was done to analyse the physical appearance of a Maillard reaction product made of D-glucose and methionine as a model system. The changes in browning value and in the spectral profile of the model system were determined. The glucose-methionine model system was produced through heat treatment at 50°C and 70% RH for 24 hours. The data were collected every three hours using a spectrophotometer. As a result, the browning value was elevated with increasing heating time and was remarkably high compared to D-glucose alone. Furthermore, the spectral analysis showed that methionine changed the pattern of peak appearance. In conclusion, methionine raised the browning value and changed the pattern of the spectral profile in the Maillard reaction model system.
NASA Astrophysics Data System (ADS)
Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik
2018-03-01
Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A. in winter 2013. Observational nudging substantially improved WRF model performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, correcting overestimation of the planetary boundary layer height (PBLH) and correcting underestimation of inversion strengths produced by regular WRF model physics without nudging. However, the combined effects of poor performance of WRF meteorological model physical parameterization schemes in simulating low clouds, and warm and moist biases in the temperature and moisture initialization and subsequent simulation fields, likely amplified the overestimation of warm clouds during inversion days when observational nudging was applied, impacting the resulting O3 photochemical formation in the chemistry model. To reduce the impact of a moist bias in the simulations on warm cloud formation, nudging with the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, due to poor analysis vertical temperature profiles, applying analysis nudging also increased the errors in the modeled inversion layer vertical structure compared to observational nudging. Combining both observational and analysis nudging methods resulted in unrealistically extreme stratified stability that trapped pollutants at the lowest elevations at the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.
NASA Astrophysics Data System (ADS)
Islam, Muhammad Rabiul; Sakib-Ul-Alam, Md.; Nazat, Kazi Kaarima; Hassan, M. Munir
2017-12-01
FEA results greatly depend on analysis parameters. The MSC NASTRAN nonlinear implicit analysis code has been used in large-deformation finite element analysis of a pitted marine SM490A steel rectangular plate. The effect of two types of actual pit shape on structural integrity parameters has been analyzed. For 3-D modeling, a proposed method for simulating the pitted surface with a probabilistic corrosion model has been used. The result has been verified against an empirical formula proposed from finite element analyses of steel surfaces generated with different pit data, where the analyses were carried out with the LS-DYNA 971 code. In both solvers, an elasto-plastic material has been used in which an arbitrary stress-versus-strain curve can be defined. In the latter, the material model is based on the J2 flow theory with isotropic hardening, where a radial return algorithm is used. The comparison shows good agreement between the two results, which ensures successful simulation with comparatively little computational effort and time.
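The radial return algorithm mentioned above is the standard return-mapping scheme for J2 plasticity. The 1-D sketch below implements it with linear isotropic hardening; the material constants are illustrative, not an SM490A calibration.

```python
# Minimal 1-D sketch of the J2 return-mapping ("radial return") algorithm
# with linear isotropic hardening.
import numpy as np

E, sigma_y0, H = 205e3, 325.0, 1500.0   # MPa: modulus, yield stress, hardening

def radial_return(strain_history):
    sigma, alpha = 0.0, 0.0             # stress and hardening variable
    eps_prev, out = 0.0, []
    for eps in strain_history:
        # Elastic trial state for this strain increment.
        sigma_trial = sigma + E * (eps - eps_prev)
        f_trial = abs(sigma_trial) - (sigma_y0 + H * alpha)
        if f_trial > 0.0:               # plastic step: return to yield surface
            dgamma = f_trial / (E + H)
            sigma = sigma_trial - E * dgamma * np.sign(sigma_trial)
            alpha += dgamma
        else:                           # elastic step
            sigma = sigma_trial
        eps_prev = eps
        out.append(sigma)
    return np.array(out)

strains = np.concatenate([np.linspace(0, 0.01, 50),
                          np.linspace(0.01, -0.01, 100)])
print(f"peak stress: {radial_return(strains).max():.1f} MPa")
```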
SensA: web-based sensitivity analysis of SBML models.
Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W
2014-10-01
SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license).
An interactive graphics system to facilitate finite element structural analysis
NASA Technical Reports Server (NTRS)
Burk, R. C.; Held, F. H.
1973-01-01
The characteristics of an interactive graphics system to facilitate the finite element method of structural analysis are described. The finite element model analysis consists of three phases: (1) preprocessing (model generation), (2) problem solution, and (3) postprocessing (interpretation of results). The advantages of interactive graphics for finite element structural analysis are defined.
Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam
2017-01-01
Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images relative to plaster models for mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were obtained from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analyses along with Student's t-test were performed to evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant differences were found on comparing data from the CBCT-derived images and the plaster models; the mean for Moyer's analysis in the left and right lower arch was 21.2 mm and 21.1 mm for CBCT and 22.5 mm and 22.5 mm for the plaster models, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639
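The Tanaka-Johnston analysis used in the study rests on a simple regression formula: the predicted combined width of the unerupted canine and premolars in one quadrant is half the summed mesiodistal width of the four mandibular incisors, plus 10.5 mm for the mandibular arch or 11.0 mm for the maxillary arch. A minimal sketch (the incisor widths are illustrative):

```python
# Minimal sketch of the Tanaka-Johnston mixed dentition prediction.
def tanaka_johnston(sum_mand_incisors_mm: float,
                    arch: str = "mandibular") -> float:
    constant = 10.5 if arch == "mandibular" else 11.0
    return sum_mand_incisors_mm / 2.0 + constant

# Example: incisor widths measured on a plaster model (or CBCT image).
incisors = [5.3, 5.5, 5.6, 5.4]   # illustrative widths in mm
print(f"predicted quadrant width: {tanaka_johnston(sum(incisors)):.1f} mm")
```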
Reusable Rocket Engine Operability Modeling and Analysis
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Komar, D. R.
1998-01-01
This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed for a given engine concept, and the predicted operations and maintenance processes were incorporated into simulation models. Historical operations data at a level of detail suited to the model objectives were collected, analyzed, and formatted for use with the models; the simulations were run; and the results were collected and presented. The input data included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. The results highlight the importance not only of reliable hardware but also of operations and corrective maintenance process improvements.
Neural system modeling and simulation using Hybrid Functional Petri Net.
Tang, Yin; Wang, Fei
2012-02-01
The Petri net formalism has proved to be powerful in biological modeling. It not only offers an intuitive graphical presentation but also combines the methods of classical systems biology with discrete modeling techniques. The Hybrid Functional Petri Net (HFPN) was proposed specifically for biological system modeling, and an array of well-constructed biological models using HFPN has yielded very interesting results. In this paper, we propose a method to represent neural system behavior in which both biochemistry and electrochemistry are included using the Petri net formalism. We built a model of the adrenergic system using HFPN and performed quantitative analysis. Our simulation results match the biological data well, showing that the model is very effective. Predictions made with our model further demonstrate the modeling power of HFPN and improve the understanding of the adrenergic system. The model file and further results with their analysis are available in our supplementary material.
[Numerical simulation of the effect of virtual stent release pose on the expansion results].
Li, Jing; Peng, Kun; Cui, Xinyang; Fu, Wenyu; Qiao, Aike
2018-04-01
Current finite element analyses of vascular stent expansion do not take into account the effect of the stent release pose on the expansion results. In this study, stent and vessel models were established in Pro/E. Five kinds of finite element assembly models were constructed in ABAQUS: a 0-degree model without eccentricity, a 3-degree model without eccentricity, a 5-degree model without eccentricity, a 0-degree model with axial eccentricity, and a 0-degree model with radial eccentricity. These models were divided into two groups of experiments for numerical simulation with respect to angle and eccentricity. Mechanical parameters such as the foreshortening rate, radial recoil rate, and dog boning rate were calculated. The influence of angle and eccentricity on the numerical simulation was obtained by comparative analysis. The calculated residual stenosis rates were 38.3%, 38.4%, 38.4%, 35.7%, and 38.2%, respectively, for the 5 models. The results indicate that the release pose has little effect on the numerical simulation results, so it can be neglected when high accuracy is not required, and the basic model (0 degrees, without eccentricity) is feasible for numerical simulation.
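The mechanical parameters compared in the study have simple geometric definitions, though their exact formulations vary across the stent literature. The sketch below uses one common set of definitions; the input dimensions are illustrative, not simulation output.

```python
# Minimal sketch of common stent expansion metrics (one convention of
# several found in the literature).
def foreshortening_rate(length_initial: float, length_expanded: float) -> float:
    """Relative axial shortening of the stent on expansion (%)."""
    return 100.0 * (length_initial - length_expanded) / length_initial

def radial_recoil_rate(d_loaded: float, d_unloaded: float) -> float:
    """Relative loss of diameter when the expansion load is removed (%)."""
    return 100.0 * (d_loaded - d_unloaded) / d_loaded

def dog_boning_rate(d_end: float, d_central: float) -> float:
    """Relative over-expansion of the stent ends versus its center (%)."""
    return 100.0 * (d_end - d_central) / d_central

print(f"foreshortening: {foreshortening_rate(10.0, 9.6):.1f}%")
print(f"radial recoil:  {radial_recoil_rate(3.2, 3.0):.1f}%")
print(f"dog boning:     {dog_boning_rate(3.4, 3.1):.1f}%")
```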
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting, and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than suggested in the literature for achieving convergence with this method. In fact, the results show that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
NASA Technical Reports Server (NTRS)
Levy, A.; Zalesak, J.; Bernstein, M.; Mason, P. W.
1974-01-01
A NASTRAN analysis was performed of the solid rocket booster (SRB) substructure of the space shuttle 1/8-scale structural dynamics model. The NASTRAN finite element modeling capability was first used to formulate a model of a cylinder of 10-in. radius and 200-in. length to investigate the accuracy and adequacy of the proposed grid point spacing. Results were compared with a shell analysis and demonstrated relatively accurate results for NASTRAN for the lower modes, which were of primary interest. A finite element model of the full SRB was then formed using CQUAD2 plate elements, containing membrane and bending stiffness, and CBAR offset bar elements to represent the longerons and frames. Three layers of three-dimensional CHEXA1 elements were used to model the propellant. This model, initially consisting of 4000 degrees of freedom (DOF), was reduced to 176 DOF using Guyan reduction. The model was then submitted for complex eigenvalue analysis. After considerable difficulty with attempts to run the complete model, it was split into two substructures. These were run separately and combined into a single 116-DOF analysis set (A-set), which was successfully run. Results are reported.
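Guyan reduction, used above to condense the ~4000-DOF model to 176 DOF, is a static condensation: retained "master" DOFs are kept, and "slave" DOFs are eliminated through the stiffness coupling. A minimal sketch on a toy 4-DOF spring-mass chain (the matrices are illustrative, not the SRB model):

```python
# Minimal sketch of Guyan (static) reduction: T = [I; -Kss^-1 Ksm],
# K_red = T^T K T, M_red = T^T M T.
import numpy as np

def guyan_reduce(K, M, masters):
    n = K.shape[0]
    slaves = np.setdiff1d(np.arange(n), masters)
    Kss = K[np.ix_(slaves, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    T = np.zeros((n, len(masters)))
    T[masters, np.arange(len(masters))] = 1.0            # identity on masters
    T[np.ix_(slaves, np.arange(len(masters)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# Toy 4-DOF spring-mass chain, keeping DOFs 0 and 3 as masters.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
M = np.eye(4)
Kr, Mr = guyan_reduce(K, M, np.array([0, 3]))
print("reduced K:\n", Kr.round(3))
```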
A Generalized Partial Credit Model: Application of an EM Algorithm.
ERIC Educational Resources Information Center
Muraki, Eiji
1992-01-01
The partial credit model with a varying slope parameter is developed and called the generalized partial credit model (GPCM). Analysis results for data simulated by this and other polytomous item-response models demonstrate that the rating formulation of the GPCM is adaptable to the analysis of polytomous item responses. (SLD)
Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis
ERIC Educational Resources Information Center
Young, Cristobal; Holsteen, Katherine
2017-01-01
Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…
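The abstract is truncated above, but the first step it names — estimating the distribution of a focal coefficient across all sensible model specifications — can be sketched directly: fit the regression under every combination of control variables and summarize the spread of estimates. The data and variable names below are synthetic stand-ins.

```python
# Minimal sketch of combinatorial multimodel analysis with statsmodels.
from itertools import chain, combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
controls = {f"z{i}": rng.normal(size=n) for i in range(4)}
x = rng.normal(size=n) + 0.3 * controls["z0"]
y = 0.5 * x + 0.4 * controls["z0"] - 0.2 * controls["z1"] + rng.normal(size=n)

Z = np.column_stack(list(controls.values()))
estimates = []
# All 2^4 subsets of the control variables.
for subset in chain.from_iterable(combinations(range(4), k) for k in range(5)):
    X = sm.add_constant(np.column_stack([x] + [Z[:, j] for j in subset])
                        if subset else x)
    estimates.append(sm.OLS(y, X).fit().params[1])  # coefficient on x

estimates = np.array(estimates)
print(f"{len(estimates)} models; estimate range "
      f"[{estimates.min():.3f}, {estimates.max():.3f}], "
      f"mean {estimates.mean():.3f}")
```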
ERIC Educational Resources Information Center
Nee, John G.; Kare, Audhut P.
1987-01-01
Explores several concepts in computer-aided design/computer-aided manufacturing (CAD/CAM). Defines, evaluates, reviews and compares advanced computer-aided geometric modeling and analysis techniques. Presents the results of a survey to establish the capabilities of minicomputer-based systems with the CAD/CAM packages evaluated. (CW)
Sensitivity Analysis of Launch Vehicle Debris Risk Model
NASA Technical Reports Server (NTRS)
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss-of-crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the risk of the debris resulting from an explosion of the launch vehicle impacting the crew module. The model consisted of a debris catalog describing the number, size, and imparted velocity of each piece of debris; a method to compute the trajectories of the debris; and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort, and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
Factor Analysis of Drawings: Application to college student models of the greenhouse effect
NASA Astrophysics Data System (ADS)
Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel
2015-09-01
Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance, suggesting that 4 archetype models of the greenhouse effect dominate thinking within this population. Factor scores, indicating the extent to which each student's drawing aligned with representative models, were compared to performance on conceptual understanding and attitudes measures, demographics, and non-cognitive features of drawings. Student drawings were also compared to drawings made by scientists to ascertain the extent to which models reflect more sophisticated and accurate models. Results indicate that student and scientist drawings share some similarities, most notably the presence of some features of the most sophisticated non-scientific model held among the study population. Prior knowledge, prior attitudes, gender, and non-cognitive components are also predictive of an individual student's model. This work presents a new technique for analyzing drawings, with general implications for the use of drawings in investigating student conceptions.
NASTRAN applications to aircraft propulsion systems
NASA Technical Reports Server (NTRS)
White, J. L.; Beste, D. L.
1975-01-01
The use of NASTRAN in propulsion system structural integration analysis is described. Computer support programs for modeling, substructuring, and plotting analysis results are discussed. Requirements on interface information and data exchange by participants in a NASTRAN substructure analysis are given. Static and normal-mode vibration analysis results are given, with comparisons to test data and other analytical results.
Wang, Gaoqi; Zhang, Song; Bian, Cuirong; Kong, Hui
2016-01-01
The purpose of the study was to verify a finite element analysis model of a three-unit fixed partial denture with in vitro electronic strain analysis and to analyze the clinical situation with the verified model. First, strain gauges were attached to the critical areas of a three-unit fixed partial denture, and strain values were measured under a 300 N load perpendicular to the occlusal plane. Secondly, a three-dimensional finite element model matching the electronic strain analysis experiment was constructed from the scanning data, and the strain values obtained by finite element analysis and in vitro measurement were compared. Finally, the clinical failure of the fixed partial denture was evaluated with the verified finite element analysis model. There was good agreement and consistency between the finite element analysis results and the experimental data. The finite element analysis revealed that failure will occur in the veneer layer on the buccal surface of the connector under an occlusal force of 570 N. The results indicate that electronic strain analysis is an appropriate and cost-saving method for verifying a finite element model, and that the veneer layer on the buccal surface of the connector is the weakest area of the fixed partial denture.
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.
2012-01-01
The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models rather than the standard uncorrelated models. The X/Ka astrometric scatter improved by approx. 10% and the systematic Δδ versus δ slope decreased by approx. 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.
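The gain from correlated-noise weighting can be illustrated with generalized least squares: when the noise covariance C is not diagonal, the GLS estimator (X^T C^-1 X)^-1 X^T C^-1 y and its covariance differ from the ordinary least-squares results. The exponential covariance below is a generic stand-in for the Kolmogorov/frozen-flow troposphere spectrum discussed above, not that model itself.

```python
# Minimal sketch: OLS versus GLS under temporally correlated noise.
import numpy as np

rng = np.random.default_rng(3)
n = 200
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), t])          # simple clock-like model
# Correlated noise with covariance C_ij = exp(-|t_i - t_j| / 0.1).
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.1)
noise = np.linalg.cholesky(C) @ rng.normal(size=n)
y = X @ np.array([1.0, 2.0]) + 0.3 * noise

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
Ci = np.linalg.inv(C)
beta_gls = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
# GLS also yields an honest parameter covariance: sigma^2 (X^T C^-1 X)^-1.
cov_gls = 0.3**2 * np.linalg.inv(X.T @ Ci @ X)
print("OLS:", beta_ols.round(3), " GLS:", beta_gls.round(3))
```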
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most of the existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides model users with a confidence level for the developed load model. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
Automated Loads Analysis System (ATLAS)
NASA Technical Reports Server (NTRS)
Gardner, Stephen; Frere, Scot; O’Reilly, Patrick
2013-01-01
ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Space Transportation System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henager, Charles H.; Nguyen, Ba Nghiep; Kurtz, Richard J.
2016-03-31
Finite element continuum damage models (FE-CDM) have been developed to simulate and model dual-phase joints and cracked joints for improved analysis of SiC materials in nuclear environments. This report extends the analysis from the last reporting cycle by including results from dual-phase models and from cracked joint models.
NASA Astrophysics Data System (ADS)
Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo
2017-04-01
In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increased model complexity. In order to make use of the increasing data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool for time-series analysis. The tool imports time-series data into QGIS from local CSV files, from online sensors using the istSOS service, or from MODFLOW model result files, and enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, integrating the creation of observations for calibration directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated in the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.
Comprehensive analysis of a Metabolic Model for lipid production in Rhodosporidium toruloides.
Castañeda, María Teresita; Nuñez, Sebastián; Garelli, Fabricio; Voget, Claudio; Battista, Hernán De
2018-05-19
The yeast Rhodosporidium toruloides has been extensively studied for its application in biolipid production. Knowledge of its metabolic capabilities and the application of constraint-based flux analysis methodology provide useful information for process prediction and optimization. The accuracy of the resulting predictions is highly dependent on the metabolic models. A metabolic reconstruction of R. toruloides metabolism has recently been published. On the basis of this model, we developed a curated version that unblocks the central nitrogen metabolism and, in addition, completes charge and mass balances in some reactions neglected in the former model. A comprehensive analysis of network capability was then performed with the curated model and compared with the published metabolic reconstruction. The flux distribution obtained by lipid optimization with Flux Balance Analysis was able to replicate the internal biochemical changes that lead to lipogenesis in oleaginous microorganisms. These results motivate the development of a genome-scale model for complete elucidation of R. toruloides metabolism.
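The constraint-based workflow described above — load a reconstruction, choose an objective, and solve the Flux Balance Analysis problem — is commonly done with COBRApy. The sketch below runs on COBRApy's bundled toy E. coli core model, since the curated R. toruloides reconstruction is not bundled; the reaction ID used as the alternative objective belongs to that toy model and merely stands in for a lipid objective.

```python
# Minimal sketch of Flux Balance Analysis with COBRApy.
# Requires: pip install cobra
from cobra.io import load_model

model = load_model("textbook")            # small built-in E. coli core model
solution = model.optimize()               # FBA with the default growth objective
print("objective (growth) flux:", round(solution.objective_value, 4))

# Re-optimizing for a different objective, analogous to lipid optimization:
with model:                               # changes are reverted on exit
    model.objective = "EX_ac_e"           # e.g. maximize acetate exchange
    print("max acetate flux:", round(model.optimize().objective_value, 4))
```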
Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui
2016-01-01
In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein by use of the Fourier transform near infrared (FT-NIR) spectroscopy analysis technique. More specifically, FT-NIR spectroscopy combined with an Adaboost-SRDA-NN integrated learning algorithm was used as an analysis tool to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. Firstly, the raw spectra of all 140 fermentation samples were collected using a Fourier transform near infrared spectrometer (Antaris II), and the raw spectra obtained were preprocessed using the standard normal variate transformation (SNV) spectral preprocessing algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted using spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the base classifier, and a state recognition model was built to identify the different fermentation samples in the validation set. Experimental results showed the following: the SRDA-NN model revealed superior performance compared with two other NN models, developed using feature information from principal component analysis (PCA) and linear discriminant analysis (LDA), and the correct recognition rate of the SRDA-NN model reached 94.28% in the validation set. In this work, to further improve the recognition accuracy of the final model, the Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct the online monitoring model of the process state of SSF of feed protein. The prediction performance of the SRDA-NN model was further enhanced by the Adaboost boosting algorithm, and the correct recognition rate of the Adaboost-SRDA-NN model reached 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information and reduce the spectral dimensionality in the model calibration process of qualitative NIR spectroscopy analysis. In addition, the Adaboost boosting algorithm can improve the classification accuracy of the final model. The results obtained in this work can provide a research foundation for developing online monitoring instruments for the SSF process.
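The chemometric pipeline described above — SNV preprocessing, a discriminant projection, and a nearest-neighbor classifier — can be sketched with scikit-learn. SRDA is not available in scikit-learn, so LDA (one of the study's comparison methods) stands in for the projection step; the spectra are synthetic stand-ins for the 140 fermentation samples.

```python
# Minimal sketch: SNV preprocessing + LDA projection + kNN classification.
# Requires: pip install scikit-learn
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def snv(X):
    # Standard normal variate: center and scale each spectrum (row-wise).
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_per_state, n_wl = 47, 256               # samples per state, wavelengths
X, y = [], []
for state in range(3):                    # three hypothetical process states
    base = np.sin(np.linspace(0, 3 + state, n_wl))
    X.append(base + 0.3 * rng.normal(size=(n_per_state, n_wl)))
    y += [state] * n_per_state
X, y = np.vstack(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=1, stratify=y)
clf = make_pipeline(FunctionTransformer(snv),
                    LinearDiscriminantAnalysis(n_components=2),
                    KNeighborsClassifier(n_neighbors=3))
print("validation accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```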
NASA Technical Reports Server (NTRS)
Lallemand, Pierre; Luo, Li-Shi
2000-01-01
The generalized hydrodynamics (the wave vector dependence of the transport coefficients) of a generalized lattice Boltzmann equation (LBE) is studied in detail. The generalized lattice Boltzmann equation is constructed in moment space rather than in discrete velocity space. The generalized hydrodynamics of the model is obtained by solving the dispersion equation of the linearized LBE either analytically by using perturbation technique or numerically. The proposed LBE model has a maximum number of adjustable parameters for the given set of discrete velocities. Generalized hydrodynamics characterizes dispersion, dissipation (hyper-viscosities), anisotropy, and lack of Galilean invariance of the model, and can be applied to select the values of the adjustable parameters which optimize the properties of the model. The proposed generalized hydrodynamic analysis also provides some insights into stability and proper initial conditions for LBE simulations. The stability properties of some 2D LBE models are analyzed and compared with each other in the parameter space of the mean streaming velocity and the viscous relaxation time. The procedure described in this work can be applied to analyze other LBE models. As examples, LBE models with various interpolation schemes are analyzed. Numerical results on shear flow with an initially discontinuous velocity profile (shock) with or without a constant streaming velocity are shown to demonstrate the dispersion effects in the LBE model; the results compare favorably with our theoretical analysis. We also show that whereas linear analysis of the LBE evolution operator is equivalent to Chapman-Enskog analysis in the long wave-length limit (wave vector k = 0), it can also provide results for large values of k. Such results are important for the stability and other hydrodynamic properties of the LBE method and cannot be obtained through Chapman-Enskog analysis.
Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations
NASA Astrophysics Data System (ADS)
Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad
2012-11-01
This research paper is focused on a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e. from within the communication management knowledge area). The experimental analysis encompasses two tiers: the first tier applies a weighted scoring model, while the second tier focuses on the development of SWOT matrices, on the basis of the weighted scoring model's findings, for selecting an appropriate requirements negotiation model. Finally, the results are visualized with the help of statistical pie charts. On the basis of these results for the prevalent models and approaches to negotiation, a unified approach for requirements negotiation and stakeholder collaboration is proposed in which the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters such as MBTI and opportunity analysis.
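The first-tier weighted scoring model reduces to a weighted sum of criterion scores per candidate, ranked by total. The sketch below shows the arithmetic; the candidate names, criteria weights, and scores are illustrative placeholders, not the paper's data.

```python
# Minimal sketch of a weighted scoring model for candidate ranking.
import numpy as np

criteria_weights = np.array([0.4, 0.3, 0.2, 0.1])   # must sum to 1
candidates = {
    "Negotiation model A": np.array([4, 3, 5, 4]),  # scores on a 1-5 scale
    "Negotiation model B": np.array([3, 4, 4, 3]),
    "Negotiation model C": np.array([5, 3, 3, 4]),
}
totals = {name: float(criteria_weights @ s) for name, s in candidates.items()}
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{total:.2f}  {name}")
```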
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Frainier, Richard; Colombano, Silvano; Hazelton, Lyman; Szolovits, Peter
1993-01-01
This paper describes portions of a novel system called MARIKA (Model Analysis and Revision of Implicit Key Assumptions) to automatically revise a model of the normal human orientation system. The revision is based on analysis of discrepancies between experimental results and computer simulations. The discrepancies are calculated from qualitative analysis of quantitative simulations. The experimental and simulated time series are first discretized into time segments. Each segment is then approximated by linear combinations of simple shapes. The domain theory and knowledge are represented as a constraint network. Incompatibilities detected during constraint propagation within the network yield both parameter and structural model alterations. Interestingly, MARIKA diagnosed a data set from the Massachusetts Eye and Ear Infirmary Vestibular Laboratory as abnormal even though the data were tagged as normal. Published results from other laboratories confirmed the finding. These encouraging results could lead to a useful clinical vestibular tool and to a scientific discovery system for space vestibular adaptation.
Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey
Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.
2014-01-01
We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time of day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time of day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year to year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight to midday counts in all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallard and American black ducks and significant increases in wood ducks and Canada geese, a trend that had not been significant for 3 of these 4 species in the prior analysis. We recommend using hierarchical models for analysis of the Atlantic Flyway Breeding Waterfowl Survey.
Using structural equation modeling for network meta-analysis.
Tu, Yu-Kang; Wu, Yun-Chun
2017-07-14
Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM; it contains the results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) method can likewise be undertaken using SEM. For both the fixed and random effect network meta-analyses, SEM yielded coefficients and confidence intervals similar to those reported in the previous literature. The point estimates of the two UWLS models were identical to those of the fixed effect model, but the confidence intervals were wider, which is consistent with results from traditional pairwise meta-analyses. Compared with the UWLS model with a common variance adjustment factor, the UWLS model with unique variance adjustment factors has wider confidence intervals when the heterogeneity is larger within a pairwise comparison; the unique-variance model thus reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis remains to be explored.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. These quantities can be model results or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear.
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system.
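A minimal sketch of the two numerical building blocks described above, central-difference perturbation sensitivities and a Gauss-Newton step for a weighted least-squares objective; the toy process model is an assumption for illustration, not UCODE_2005 code.

```python
import numpy as np

def central_diff_jacobian(f, p, rel_step=1e-6):
    """Perturbation sensitivities: J[i, j] = d f_i / d p_j."""
    p = np.asarray(p, float)
    f0 = np.asarray(f(p))
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        h = rel_step * max(abs(p[j]), 1.0)
        pp, pm = p.copy(), p.copy()
        pp[j] += h
        pm[j] -= h
        J[:, j] = (np.asarray(f(pp)) - np.asarray(f(pm))) / (2 * h)
    return J

def gauss_newton(f, y_obs, w, p0, n_iter=20):
    """Minimize the weighted least-squares objective sum(w*(y - f(p))**2)."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y_obs - f(p)                      # residuals
        J = central_diff_jacobian(f, p)
        W = np.diag(w)
        # Gauss-Newton normal equations: (J^T W J) dp = J^T W r
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + dp
    return p

# Toy "process model": exponential decay with two parameters.
t = np.linspace(0, 5, 30)
truth = np.array([2.0, 0.7])
model = lambda p: p[0] * np.exp(-p[1] * t)
rng = np.random.default_rng(1)
y = model(truth) + 0.02 * rng.standard_normal(t.size)
print(gauss_newton(model, y, np.ones_like(y), p0=[1.0, 1.0]))
```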
Modelling the degree of porosity of the ceramic surface intended for implants.
Stach, Sebastian; Kędzia, Olga; Garczyk, Żaneta; Wróbel, Zygmunt
2018-05-18
The main goal of the study was to develop a model of the degree of surface porosity of a biomaterial intended for implants. The model was implemented using MATLAB. A computer simulation was carried out based on the developed model, which resulted in a two-dimensional image of the modelled surface. Then, an algorithm for computerised image analysis of the surface of the actual oxide bioceramic layer was developed, which enabled determining its degree of porosity. In order to obtain the confocal micrographs of a few areas of the biomaterial, measurements were performed using the LEXT OLS4000 confocal laser microscope. The image analysis was carried out using MountainsMap Premium and SPIP. The obtained results allowed determining the input parameters of the program, on the basis of which porous biomaterial surface images were generated. The last part of the study involved verification of the developed model. The modelling method was tested by comparing the obtained results with the experimental data obtained from the analysis of surface images of the test material.
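A hedged stand-in for the porosity computation: generate a synthetic surface image, threshold it, and take the pore-pixel fraction. The smoothing kernel and threshold are illustrative choices, not the paper's MATLAB implementation.

```python
import numpy as np
from scipy.signal import convolve2d

# Synthetic porous-surface image: smoothed random heights, thresholded
# so that low regions count as pores (illustrative stand-in only).
rng = np.random.default_rng(42)
surface = rng.random((512, 512))
kernel = np.ones((5, 5)) / 25.0
smooth = convolve2d(surface, kernel, mode="same", boundary="symm")

threshold = 0.45                 # illustrative cut-off, not the paper's
pores = smooth < threshold
porosity = pores.mean()          # pore pixels / total pixels
print(f"degree of porosity: {porosity:.1%}")
```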
Review and developments of dissemination models for airborne carbon fibers
NASA Technical Reports Server (NTRS)
Elber, W.
1980-01-01
Dissemination prediction models were reviewed to determine their applicability to a risk assessment for airborne carbon fibers. The review showed that the Gaussian prediction models using partial reflection at the ground agreed very closely with a more elaborate diffusion analysis developed for the study. For distances beyond 10,000 m the Gaussian models predicted a slower fall-off in exposure levels than the diffusion models. This resulting level of conservatism was preferred for the carbon fiber risk assessment. The results also showed that the perfect vertical-mixing models developed herein agreed very closely with the diffusion analysis for all except the most stable atmospheric conditions.
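The Gaussian model with partial ground reflection has the standard closed form sketched below; the dispersion-coefficient power laws are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z, h, sigma_y, sigma_z, alpha=0.5):
    """Gaussian plume concentration with a partial ground-reflection
    coefficient alpha (alpha=1 is perfect reflection, alpha=0 is total
    deposition). q: source strength, u: wind speed, h: release height;
    sigma_y and sigma_z are dispersion coefficients as functions of x.
    """
    sy, sz = sigma_y(x), sigma_z(x)
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sz**2))
                + alpha * np.exp(-(z + h)**2 / (2 * sz**2)))
    return q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Illustrative power-law dispersion coefficients (site-specific in practice).
sigma_y = lambda x: 0.08 * x**0.9
sigma_z = lambda x: 0.06 * x**0.85

c = gaussian_plume(q=1.0, u=5.0, x=1000.0, y=0.0, z=1.5, h=50.0,
                   sigma_y=sigma_y, sigma_z=sigma_z, alpha=0.5)
print(f"concentration at 1 km downwind: {c:.3e}")
```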
NASA Technical Reports Server (NTRS)
Hall, Laverne
1995-01-01
Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.
NASA Technical Reports Server (NTRS)
Baron, S.; Levison, W. H.
1977-01-01
Application of the optimal control model of the human operator to problems in display analysis is discussed. Those aspects of the model pertaining to the operator-display interface and to operator information processing are reviewed and discussed. The techniques are then applied to the analysis of advanced display/control systems for a Terminal Configured Vehicle. Model results are compared with those obtained in a large, fixed-base simulation.
Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall
2016-01-01
Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) the classical statistical approach, represented by the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three methods. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome to obtain with classical statistical approaches.
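For the classical side of the comparison, a minimal sketch using the lifelines library (a software assumption; the paper does not name its tooling) fits a Kaplan-Meier estimator and a Cox proportional hazards model to synthetic follow-up data with invented column names.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Illustrative screening-style data: follow-up time (months), event flag,
# and one binary covariate; all values and names are hypothetical.
df = pd.DataFrame({
    "months":            [6, 14, 25, 31, 40, 47, 60, 72, 80, 95],
    "event":             [1,  0,  1,  0,  1,  0,  0,  1,  0,  1],
    "abnormal_cytology": [1,  0,  1,  0,  1,  1,  0,  1,  0,  1],
})

# Kaplan-Meier estimate of the survival function.
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["event"])
print(kmf.survival_function_.tail())

# Cox proportional hazards regression on the covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```

The dynamic Bayesian network side of the comparison requires dedicated tooling and is not sketched here.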
Model-Derived Global Aerosol Climatology for MISR Analysis ("Clim-Likely" Data Set)
Atmospheric Science Data Center
2018-04-19
Model-Derived Global Aerosol Climatology for MISR Analysis: Multi-angle Imaging SpectroRadiometer (MISR) monthly, global 1° x 1° "Clim-Likely" aerosol climatology, derived from 'typical-year' aerosol transport model results, available for individual 1° x 1° boxes or ...
An effective convolutional neural network model for Chinese sentiment analysis
NASA Astrophysics Data System (ADS)
Zhang, Yu; Chen, Mengdong; Liu, Lianzhong; Wang, Yadong
2017-06-01
Nowadays microblogging is getting more and more popular: people are increasingly accustomed to expressing their opinions on Twitter, Facebook and Sina Weibo. Sentiment analysis of microblogs has received significant attention in both academia and industry, yet Chinese microblog analysis still requires substantial further work. In recent years CNNs have also been used for NLP tasks and have achieved good results. However, these methods ignore the effective use of the large body of existing sentiment resources. For this purpose, we propose a Lexicon-based Sentiment Convolutional Neural Network (LSCNN) model for Weibo sentiment analysis, which combines two CNNs, trained individually on sentiment features and on word embeddings, at the fully connected hidden layer. The experimental results show that our model outperforms a CNN using only word-embedding features on the microblog sentiment analysis task.
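A hedged PyTorch sketch of the dual-channel idea: one CNN over word embeddings, one over lexicon-derived sentiment features, concatenated at the fully connected layer. Dimensions, pooling, and layer counts are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class LSCNN(nn.Module):
    """Sketch of a lexicon-augmented sentiment CNN: two parallel CNNs,
    one over word embeddings and one over lexicon-based sentiment
    features, fused at the fully connected layer (all sizes are
    illustrative, not taken from the paper)."""
    def __init__(self, vocab_size=10000, emb_dim=128, lex_dim=8,
                 n_filters=100, kernel_size=3, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv_word = nn.Conv1d(emb_dim, n_filters, kernel_size)
        self.conv_lex = nn.Conv1d(lex_dim, n_filters, kernel_size)
        self.fc = nn.Linear(2 * n_filters, n_classes)

    def forward(self, tokens, lex_feats):
        # tokens: (batch, seq_len); lex_feats: (batch, seq_len, lex_dim)
        w = self.emb(tokens).transpose(1, 2)                 # (batch, emb, seq)
        l = lex_feats.transpose(1, 2)                        # (batch, lex, seq)
        w = torch.relu(self.conv_word(w)).max(dim=2).values  # global max pool
        l = torch.relu(self.conv_lex(l)).max(dim=2).values
        return self.fc(torch.cat([w, l], dim=1))             # fuse at FC layer

model = LSCNN()
logits = model(torch.randint(0, 10000, (4, 50)), torch.rand(4, 50, 8))
print(logits.shape)  # torch.Size([4, 2])
```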
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
1976-03-01
This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency. The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is ...
Factor Analysis of Drawings: Application to College Student Models of the Greenhouse Effect
ERIC Educational Resources Information Center
Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel
2015-01-01
Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance,…
A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.
ERIC Educational Resources Information Center
Harrop, John W.; Velicer, Wayne F.
1985-01-01
Computer-generated data representative of 16 Autoregressive Integrated Moving Average (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)
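A small sketch of the interrupted time-series setup with statsmodels: simulate an AR(1) series with a level shift and fit the assumed (1,0,0) model with the intervention as an exogenous regressor. The data and effect size are invented.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulated interrupted time series: AR(1) noise plus a level shift of
# +2.0 at observation 60 (all values illustrative).
rng = np.random.default_rng(7)
n, cut = 120, 60
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + e[t]
y[cut:] += 2.0
step = (np.arange(n) >= cut).astype(float)  # intervention indicator

# Fit the assumed (1,0,0) model with the intervention as a regressor.
res = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
print(res.params)   # includes the estimated intervention effect
```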
Renewable Energy Deployment in Colorado and the West: A Modeling Sensitivity and GIS Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrows, Clayton; Mai, Trieu; Haase, Scott
2016-03-01
The Resource Planning Model is a capacity expansion model designed for a regional power system, such as a utility service territory, state, or balancing authority. We apply a geospatial analysis to Resource Planning Model renewable energy capacity expansion results to understand the likelihood of renewable development on various lands within Colorado.
Decision modeling for fire incident analysis
Donald G. MacGregor; Armando González-Cabán
2009-01-01
This paper reports on methods for representing and modeling fire incidents based on concepts and models from the decision and risk sciences. A set of modeling techniques are used to characterize key fire management decision processes and provide a basis for incident analysis. The results of these methods can be used to provide insights into the structure of fire...
Elastic-plastic models for multi-site damage
NASA Technical Reports Server (NTRS)
Actis, Ricardo L.; Szabo, Barna A.
1994-01-01
This paper presents recent developments in advanced analysis methods for the computation of stresses in structures with multi-site damage. The method of solution is based on the p-version of the finite element method. Its implementation was designed to permit extraction of linear stress intensity factors using a superconvergent extraction method (known as the contour integral method) and evaluation of the J-integral following an elastic-plastic analysis. Coarse meshes are adequate for obtaining accurate results supported by p-convergence data. The elastic-plastic analysis is based on the deformation theory of plasticity and the von Mises yield criterion. The model problem consists of an aluminum plate with six equally spaced holes and a crack emanating from each hole. The cracks are of different sizes. The panel is subjected to a remote tensile load, and experimental results are available for the panel. The plasticity analysis provided the same limit load as the experimentally determined load. The results of elastic-plastic analysis were compared with the results of linear elastic analysis in an effort to evaluate how plastic zone sizes influence the crack growth rates. The onset of net-section yielding was also determined. The results show that crack growth rate is accelerated by the presence of adjacent damage, and the critical crack size is shorter when the effects of plasticity are taken into consideration. This work also addresses the effects of alternative stress-strain laws: the elastic-ideally-plastic material model is compared against the Ramberg-Osgood model.
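For reference, one common form of the Ramberg-Osgood stress-strain law compared above is

```latex
\varepsilon \;=\; \frac{\sigma}{E} \;+\; \alpha\,\frac{\sigma_0}{E}\left(\frac{\sigma}{\sigma_0}\right)^{n}
```

where E is Young's modulus, σ₀ a reference (yield) stress, and α and n hardening parameters; several equivalent parameterizations exist, so the exact form used in the paper may differ. The elastic-ideally-plastic model is the limiting case in which stress is capped at the yield stress.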
NASA Technical Reports Server (NTRS)
Mei, Chuh; Pates, Carl S., III
1994-01-01
A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior, two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simple model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended to random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.
van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-08-07
Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
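The flavor of what AutoVAR automates can be sketched with statsmodels (a stand-in; AutoVAR's own implementation differs): fit a VAR with the lag order chosen by AIC and run a Granger causality test on synthetic diary-style data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Illustrative EMA-style diary data: two daily variables over 90 days,
# where yesterday's activity influences today's mood (synthetic).
rng = np.random.default_rng(3)
n = 90
activity = np.zeros(n)
mood = np.zeros(n)
for t in range(1, n):
    activity[t] = 0.5 * activity[t-1] + rng.standard_normal()
    mood[t] = 0.3 * mood[t-1] + 0.4 * activity[t-1] + rng.standard_normal()
data = pd.DataFrame({"activity": activity, "mood": mood})

# Automated lag-order selection by information criterion.
res = VAR(data).fit(maxlags=5, ic="aic")
print("selected lag order:", res.k_ar)
print("AIC:", res.aic, "BIC:", res.bic)

# Granger causality test: does activity help predict mood?
print(res.test_causality("mood", ["activity"], kind="f").summary())
```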
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.
2017-07-28
Effects of the chromaticity on head-tail instabilities for broadband impedances are comprehensively studied, using the two particle model, the Vlasov analysis and computer simulations. We show both in the two particle model and the Vlasov analysis with the trapezoidal (semiconstant) wake model that we can derive universal contour plots for the growth factor as a function of the two dimensionless parameters: the wakefield strength, Υ, and the difference of the betatron phase advances between the head and the tail, χ. They reveal how the chromaticity affects strong head-tail instabilities and excites head-tail instabilities. We also apply the LEP (Large Electron-Positron Collider) broadband resonator model to the Vlasov approach and find that the results are in very good agreement with those of the trapezoidal wake model. The theoretical findings are also reinforced by the simulation results. In conclusion, the trapezoidal wake model turns out to be a very useful tool since it significantly simplifies the time domain analysis and provides well-behaved impedance at the same time.
NASA Astrophysics Data System (ADS)
Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.
2012-05-01
The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observations, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. We therefore develop a method combining model ensemble simulations with uncertainty/sensitivity analysis to estimate the probability distribution of crop production. In our studies, the uncertainty analysis is used to reveal the risk of a loss of crop production as irrigation decreases. The global sensitivity analysis is used to test the coupled model and to further quantitatively analyse the impact of the uncertainty of the coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, an important method for screening out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, through a comparison between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters that strongly influence NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST method could quantitatively measure the impact of a single parameter on the simulation result as well as the interactions between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.
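A hedged sketch of Morris screening with the SALib library (a software assumption; the paper does not state its tooling), applied to a toy NPP function of two illustrative parameters standing in for the ones named above.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

# Stand-in for BIOME-BGC: a toy NPP function of two hypothetical
# parameters (allocation ratio and leaf C:N); ranges are illustrative.
problem = {
    "num_vars": 2,
    "names": ["stem_to_leaf_alloc", "leaf_CN"],
    "bounds": [[0.5, 2.5], [20.0, 60.0]],
}

def toy_npp(x):
    alloc, cn = x
    return 800.0 * np.exp(-0.3 * alloc) * (40.0 / cn)

# Morris trajectories, model evaluations, elementary-effect statistics.
X = morris_sample(problem, N=100, num_levels=4)
Y = np.apply_along_axis(toy_npp, 1, X)
Si = morris_analyze(problem, X, Y, num_levels=4)
print(dict(zip(problem["names"], Si["mu_star"])))  # mean |elementary effect|
```

EFAST is available in the same library (SALib.sample.fast_sampler / SALib.analyze.fast) and yields variance-based first-order and total indices.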
Nonlinear Poisson equation for heterogeneous media.
Hu, Langhua; Wei, Guo-Wei
2012-08-22
The Poisson equation is a widely accepted model for electrostatic analysis. However, the Poisson equation is derived based on electric polarizations in a linear, isotropic, and homogeneous dielectric medium. This article introduces a nonlinear Poisson equation to take into consideration hyperpolarization effects due to intensive charges and possibly nonlinear, anisotropic, and heterogeneous media. A variational principle is utilized to derive the nonlinear Poisson model from an electrostatic energy functional. To apply the proposed nonlinear Poisson equation to solvation analysis, we also construct a nonpolar solvation energy functional based on the nonlinear Poisson equation by using geometric measure theory. At a fixed temperature, the proposed nonlinear Poisson theory is extensively validated by the electrostatic analysis of the Kirkwood model and a set of 20 proteins, and by the solvation analysis of a set of 17 small molecules whose experimental measurements are also available for comparison. Moreover, the nonlinear Poisson equation is further applied to the solvation analysis of 21 compounds at different temperatures. Numerical results are compared to theoretical predictions, experimental measurements, and those obtained from other theoretical methods in the literature. A good agreement between our results and experimental data as well as theoretical results suggests that the proposed nonlinear Poisson model is a potentially useful model for electrostatic analysis involving hyperpolarization effects. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanty, Subhasish; Soppet, William; Majumdar, Saurin
This report provides an update on an assessment of environmentally assisted fatigue for light water reactor components under extended service conditions. This report is a deliverable in April 2015 under the work package for environmentally assisted fatigue under DOE's Light Water Reactor Sustainability program. In this report, updates are discussed related to a system-level preliminary finite element model of a two-loop pressurized water reactor (PWR). Based on this model, system-level heat transfer analysis and subsequent thermal-mechanical stress analysis were performed for typical design-basis thermal-mechanical fatigue cycles. The in-air fatigue lives of components, such as the hot and cold legs, were estimated on the basis of stress analysis results, ASME in-air fatigue life estimation criteria, and fatigue design curves. Furthermore, environmental correction factors and associated PWR-environment fatigue lives for the hot and cold legs were estimated by using the estimated stress and strain histories and the approach described in NUREG-6909. The discussed models and results are very preliminary, and further advancement of the model is required for more accurate life prediction of reactor components. This report presents only the work related to finite element modelling activities; multiple tensile and fatigue tests were conducted in the interim, and the related experimental results will be presented in the year-end report.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
An analysis of the Petri net based model of the human body iron homeostasis process.
Sackmann, Andrea; Formanowicz, Dorota; Formanowicz, Piotr; Koch, Ina; Blazewicz, Jacek
2007-02-01
In this paper a Petri net based model of human body iron homeostasis is presented and analyzed. Body iron homeostasis is an important but not fully understood complex process. The modeling of the process presented in the paper is expressed in the language of Petri net theory. An application of this theory to the description of biological processes allows for very precise analysis of the resulting models. Here, such an analysis of the body iron homeostasis model from a mathematical point of view is given.
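The Petri net machinery itself is easy to state in code. The sketch below implements token markings and transition firing for a toy two-place net; the place names are illustrative labels, not the published iron-homeostasis net.

```python
# Minimal Petri net machinery: places hold tokens, and a transition
# fires only when every input place holds enough tokens.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count

    def enabled(self, pre):
        return all(self.marking[p] >= n for p, n in pre.items())

    def fire(self, pre, post):
        if not self.enabled(pre):
            raise ValueError("transition not enabled")
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

net = PetriNet({"plasma_iron": 2, "ferritin_bound": 0})
# Toy "storage" transition: one token of plasma iron becomes ferritin-bound.
net.fire(pre={"plasma_iron": 1}, post={"ferritin_bound": 1})
print(net.marking)   # {'plasma_iron': 1, 'ferritin_bound': 1}
```

Analyses such as invariant computation and constraint propagation build on exactly this marking/firing semantics.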
Brazilian Soybean Production: Emergy Analysis with an Expanded Scope
ERIC Educational Resources Information Center
Ortega, Enrique; Cavalett, Otavio; Bonifacio, Robert; Watanabe, Marcos
2005-01-01
This article offers the results of emergy analysis used to evaluate four different soybean production systems in Brazil that were divided into two main categories: biological models (organic and ecological farms) and industrial models (green-revolution chemical farms and herbicide with no-tillage farms). The biological models show better…
An uncertainty analysis of wildfire modeling [Chapter 13
Karin Riley; Matthew Thompson
2017-01-01
Before fire models can be understood, evaluated, and effectively applied to support decision making, model-based uncertainties must be analyzed. In this chapter, we identify and classify sources of uncertainty using an established analytical framework, and summarize results graphically in an uncertainty matrix. Our analysis facilitates characterization of the...
Comparative analysis of zonal systems for macro-level crash modeling.
Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen
2017-06-01
Macro-level traffic safety analysis has been undertaken at different spatial configurations. However, clear guidelines for selecting an appropriate zonal system for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling considering census tracts (CTs), state-wide traffic analysis zones (STAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) are developed based on the three zonal systems, without and with consideration of spatial autocorrelation. The study proposes a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework: the study region is partitioned into grids of various sizes, and the prediction accuracy of the various macro models is assessed within these grids. The comparison results for all crash types indicate that the models based on TADs consistently offer better performance than the others, and the models considering spatial autocorrelation outperform the ones that do not. Based on the modeling results and the motivation for developing the different zonal systems, it is recommended to use CTs for socio-demographic data collection, TAZs for transportation demand forecasting, and TADs for transportation safety planning. The findings from this study can help practitioners select appropriate zonal systems for traffic crash modeling, which can lead to more effective policies for enhancing transportation safety. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.
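A minimal PyMC sketch of a Poisson lognormal crash model: the log-rate receives a zone-level normal error term, which is what distinguishes it from a plain Poisson regression. The covariate, priors, and data are assumptions for illustration, not the paper's specification.

```python
import numpy as np
import pymc as pm

# Synthetic zone-level crash counts with one exposure-style covariate.
rng = np.random.default_rng(11)
n = 200
log_vmt = rng.normal(10, 0.5, n)          # hypothetical log vehicle-miles
eps = rng.normal(0, 0.3, n)               # lognormal heterogeneity
crashes = rng.poisson(np.exp(-6.0 + 0.6 * log_vmt + eps))

with pm.Model():
    b0 = pm.Normal("b0", 0, 10)
    b1 = pm.Normal("b1", 0, 10)
    sigma = pm.HalfNormal("sigma", 1)
    u = pm.Normal("u", 0, sigma, shape=n)  # zone-level error on the log-rate
    pm.Poisson("y", mu=pm.math.exp(b0 + b1 * log_vmt + u), observed=crashes)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["b1"].mean().item())
```

A spatially autocorrelated variant would replace the independent zone effects with a spatially structured prior such as a CAR term.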
Tabatabaie, Seyed Mohammad Hossein; Bolte, John P; Murthy, Ganti S
2018-06-01
The goal of this study was to integrate a crop model, DNDC (DeNitrification-DeComposition), with life cycle assessment (LCA) and economic analysis models using a GIS-based integrated platform, ENVISION. The integrated model enables LCA practitioners to conduct integrated economic analysis and LCA on a regional scale while capturing the variability of soil emissions due to variation in regional factors during production of crops and biofuel feedstocks. In order to evaluate the integrated model, the corn-soybean cropping system in Eagle Creek Watershed, Indiana was studied, and the integrated model was used to first model the soil emissions and then conduct the LCA and economic analysis. The results showed that the variation in soil emissions due to variation in weather is high, causing some locations to be a carbon sink in some years and a source of CO2 in other years. In order to test the model under different scenarios, two tillage scenarios were defined, 1) conventional tillage (CT) and 2) no tillage (NT), and analyzed with the model. The overall GHG emissions for the corn-soybean cropping system were simulated, and the results showed that the NT scenario resulted in lower soil GHG emissions than the CT scenario. Moreover, the global warming potential (GWP) of corn ethanol from well to pump varied between 57 and 92 g CO2-eq./MJ, and GWP under the NT system was lower than that of the CT system. The cost break-even point was calculated as $3612.5/ha in a two-year corn-soybean cropping system, and the results showed that under low and medium prices for corn and soybean most of the farms did not meet the break-even point. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokół, J. M.; Kubiak, M. A.; Bzowski, M.
We have developed a refined and optimized version of the Warsaw Test Particle Model of interstellar neutral gas in the heliosphere, specially tailored for analysis of IBEX-Lo observations. The former version of the model was used in the analysis of neutral He observed by IBEX that resulted in an unexpected conclusion that the interstellar neutral He flow vector was different than previously thought and that a new population of neutral He, dubbed the Warm Breeze, exists in the heliosphere. It was also used in the reanalysis of Ulysses observations that confirmed the original findings on the flow vector, but suggested a significantly higher temperature. The present version of the model has two strains targeted for different applications, based on an identical paradigm, but differing in the implementation and in the treatment of ionization losses. We present the model in detail and discuss numerous effects related to the measurement process that potentially modify the resulting flux of ISN He observed by IBEX, and identify those of them that should not be omitted in the simulations to avoid biasing the results. This paper is part of a coordinated series of papers presenting the current state of analysis of IBEX-Lo observations of ISN He. Details of the analysis method are presented by Swaczyna et al. and results of the analysis are presented by Bzowski et al.
Equivalent plate modeling for conceptual design of aircraft wing structures
NASA Technical Reports Server (NTRS)
Giles, Gary L.
1995-01-01
This paper describes an analysis method that generates conceptual-level design data for aircraft wing structures. A key requirement is that this data must be produced in a timely manner so that it can be used effectively by multidisciplinary synthesis codes for performing systems studies. Such a capability is being developed by enhancing an equivalent plate structural analysis computer code to provide a more comprehensive, robust and user-friendly analysis tool. The paper focuses on recent enhancements to the Equivalent Laminated Plate Solution (ELAPS) analysis code that significantly expand the modeling capability and improve the accuracy of results. Modeling additions include use of out-of-plane plate segments for representing winglets and advanced wing concepts such as C-wings, along with a new capability for modeling the internal rib and spar structure. The accuracy of calculated results is improved by including transverse shear effects in the formulation and by using multiple sets of assumed displacement functions in the analysis. Typical results are presented to demonstrate these new features. Example configurations include a C-wing transport aircraft, a representative fighter wing and a blended-wing-body transport. These applications are intended to demonstrate and quantify the benefits of using equivalent plate modeling of wing structures during conceptual design.
Numerical bifurcation analysis of immunological models with time delays
NASA Astrophysics Data System (ADS)
Luzyanina, Tatyana; Roose, Dirk; Bocharov, Gennady
2005-12-01
In recent years, a large number of mathematical models that are described by delay differential equations (DDEs) have appeared in the life sciences. To analyze the models' dynamics, numerical methods are necessary, since analytical studies can only give limited results. In turn, the availability of efficient numerical methods and software packages encourages the use of time delays in mathematical modelling, which may lead to more realistic models. We outline recently developed numerical methods for bifurcation analysis of DDEs and illustrate the use of these methods in the analysis of a mathematical model of human hepatitis B virus infection.
A Model Comparison for Characterizing Protein Motions from Structure
NASA Astrophysics Data System (ADS)
David, Charles; Jacobs, Donald
2011-10-01
A comparative study is made using three computational models that characterize native state dynamics starting from known protein structures taken from four distinct SCOP classifications. A geometrical simulation is performed, and the results are compared to the elastic network model and molecular dynamics. The essential dynamics is quantified by a direct analysis of a mode subspace constructed from ANM and a principal component analysis on both the FRODA and MD trajectories, using the root mean square inner product and principal angles. Relative subspace sizes and overlaps are visualized using the projection of displacement vectors on the model modes. Additionally, a mode subspace is constructed from PCA on an exemplar set of X-ray crystal structures in order to determine similarity with respect to the generated ensembles. Quantitative analysis reveals that there is significant overlap across the three model subspaces and the model-independent subspace. These results indicate that structure is the key determinant of native state dynamics.
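Both subspace-comparison measures are a few lines of NumPy/SciPy. In the sketch below, random orthonormal bases stand in for, e.g., the first ten ANM modes versus the first ten MD principal components of a 300-coordinate system.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(5)

# Two mode subspaces (columns = orthonormalized modes); random stand-ins
# here, real eigenvectors / principal components in practice.
A, _ = np.linalg.qr(rng.standard_normal((300, 10)))
B, _ = np.linalg.qr(rng.standard_normal((300, 10)))

# Principal angles between the two subspaces (radians).
print(np.degrees(subspace_angles(A, B)))

def rmsip(U, V):
    """Root mean square inner product of two sets of k modes."""
    k = U.shape[1]
    return np.sqrt(np.sum((U.T @ V) ** 2) / k)

print(f"RMSIP = {rmsip(A, B):.3f}")   # approaches 1 for identical subspaces
```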
An analysis of urban collisions using an artificial intelligence model.
Mussone, L; Ferrari, A; Oneta, M
1999-11-01
Traditional studies on road accidents estimate the effect of variables (such as vehicular flows, road geometry, and vehicle characteristics) on the number of accidents. A descriptive statistical analysis of the accidents (those used in the model) over the period 1992-1995 is proposed. The paper describes an alternative method based on artificial neural networks (ANN) to work out a model for the analysis of vehicular accidents in Milan. The degree of danger of urban intersections under different scenarios is quantified by the ANN model. The first result is methodological: the innovative use of ANNs allows the modelling of urban vehicular accidents to be tackled. Other results concern the model outputs: intersection complexity may determine a higher accident index depending on the regulation of the intersection, and the highest index for running over of pedestrians occurs at non-signalised intersections at night-time.
A modeling analysis program for the JPL table mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, W. H.; Goldberg, B. A.
1984-01-01
A detailed review of 110 of the 263 Region B/C images of the 1981 data set is undertaken and a preliminary assessment of 39 images of the 1976-79 data set is presented. The basic spatial characteristics of these images are discussed. Modeling analysis of these images after further data processing will provide useful information about Io and the planetary magnetosphere. Plans for data processing and modeling analysis are outlined. Results of very preliminary modeling activities are presented.
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The NASA Structural Analysis System (NASTRAN) Model 1 finite element idealization, input data, and detailed analytical results are presented. The data presented include: substructuring analysis for normal modes, plots of member data, plots of symmetric free-free modes, plots of antisymmetric free-free modes, analysis of the wing, analysis of the cargo doors, analysis of the payload, and analysis of the orbiter.
Application of CAD/CAE class systems to aerodynamic analysis of electric race cars
NASA Astrophysics Data System (ADS)
Grabowski, L.; Baier, A.; Buchacz, A.; Majzner, M.; Sobek, M.
2015-11-01
Aerodynamics is one of the most important factors influencing every aspect of a car's design and its driving parameters. Aerodynamics has the biggest influence on the design of the race car body shape, especially when the main objective of the race is the longest distance driven in a period of time, which cannot be achieved without low energy consumption and low drag. The vehicle body shape must therefore generate the lowest possible drag force without compromising the other parameters of the drive. The article "Application of CAD/CAE class systems to aerodynamic analysis of electric race cars" presents problems solved by computer analysis of car aerodynamics and free-form modelling. The analyses were performed on the existing race car of the Silesian Greenpower Race Team. Based on the analysis of the presence of the Kammback aerodynamic effect, an innovative car body was modelled. Aerodynamic analyses were then performed to verify the existence of the aerodynamic effect for the innovative shape and to determine the aerodynamic parameters of the shape. The analyses yield the values of the aerodynamic coefficients and drag forces. The resulting drag forces Fx, drag coefficients Cx (Cd) and aerodynamic factors Cx*A allowed all of the shapes to be compared with each other. Pressure distributions, air velocities and stream courses were useful in determining the aerodynamic features of the analyzed shape. The Ansys Fluent CFD software was used for the aerodynamic tests. The paper presents surface modeling with the Realize Shape module as well as classic surface modeling; Siemens NX 9.0 software was used for shape modeling. The obtained results were used to evaluate the existing shapes and to draw appropriate conclusions.
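The quantities compared above follow from the standard drag relation Fx = ½ρv²CxA, so the product Cx*A is the shape figure of merit at fixed speed. The sketch below uses illustrative coefficients, not the team's measured values.

```python
# Comparing body shapes by drag; values are illustrative assumptions.
RHO = 1.225          # air density, kg/m^3

def drag_force(cx, area, v):
    """Aerodynamic drag (N) at speed v (m/s) for drag coefficient cx
    and frontal area (m^2): Fx = 0.5 * rho * v**2 * cx * A."""
    return 0.5 * RHO * v**2 * cx * area

shapes = {"baseline body": (0.30, 0.45), "Kammback body": (0.24, 0.44)}
for name, (cx, a) in shapes.items():
    print(f"{name}: Cx*A = {cx*a:.3f} m^2, "
          f"Fx at 10 m/s = {drag_force(cx, a, 10.0):.2f} N")
```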
NASA Astrophysics Data System (ADS)
Vitillo, F.; Vitale Di Maio, D.; Galati, C.; Caruso, G.
2015-11-01
A CFD analysis has been carried out to study the thermal-hydraulic behavior of liquid metal coolant in a fuel assembly of triangular lattice. In order to obtain fast and accurate results, the isotropic two-equation RANS approach is often used in nuclear engineering applications. A different approach is provided by Non-Linear Eddy Viscosity Models (NLEVM), which try to take into account anisotropic effects by a nonlinear formulation of the Reynolds stress tensor. This approach is very promising, as it results in very good numerical behavior and a potentially better fluid flow description than classical isotropic models. An Anisotropic Shear Stress Transport (ASST) model, implemented into a commercial software, has been applied in previous studies, showing very reliable results for a large variety of flows and applications. In the paper, the ASST model has been used to perform an analysis of the fluid flow inside the fuel assembly of the ALFRED lead-cooled fast reactor. Then, a comparison between the results of wall-resolved conjugated heat transfer computations and the results of a decoupled analysis using a suitable thermal wall-function previously implemented into the solver has been performed and presented.
NASA Astrophysics Data System (ADS)
Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.
2018-02-01
The increasing number and activity of the population, in meeting the needs of daily life, greatly affect the utilization of land resources. Land requirements for human activities continue to grow while the availability of land is limited; land use therefore changes, and the resulting problems are land degradation and the conversion of agricultural land to non-agricultural uses. The objectives of this research are: (1) to determine the parameters of a spatial numerical classification of sustainable food agriculture in Badung Regency and Denpasar City; (2) to project the food balance in Badung Regency and Denpasar City in 2020, 2030, 2040 and 2050; (3) to specify the role of the spatial numerical classification in building a zonation model of sustainable agricultural land in Badung Regency and Denpasar City; and (4) to determine an appropriate model for protecting sustainable agricultural land in space and time in Badung and Denpasar. The quantitative methods used in this research include: survey, soil analysis, spatial database development, geoprocessing analysis (spatial overlay analysis and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography). Qualitative methods consisted of literature studies and interviews. A total of 11 parameters were observed for Badung Regency and 9 parameters for Denpasar City. The numerical classification analysis used the standard deviation and mean of the population data, and the relationship of rice fields to the projected food balance was modelled. The results showed that the number of numerical classification parameters differs between the rural area (Badung) and the urban area (Denpasar); in the urban area the number of parameters is smaller. Weighting and scoring of the numerical classification parameters produced population distributions characterized by their standard deviations and mean values. The numerical classification produced five models, divided into three zones (sustainable, buffer and convertible) in Denpasar and Badung. The population curves from the parameter analysis are normal in Denpasar but not in Badung; modelling in Denpasar was therefore carried out for the whole region, while in Badung it was done district by district. The modelled relationship between land and the projected food balance is viewed in Badung in terms of sustainable land area, whereas in Denpasar it is viewed in terms of linkages to green open space in the Denpasar spatial plan for 2011-2031.
Alarcón, Tomás; Marches, Radu; Page, Karen M
2006-05-07
We formulate models of the mechanism(s) by which B cell lymphoma cells stimulated with an antibody specific to the B cell receptor (IgM) become quiescent or apoptotic. In particular, we aim to reproduce experimental results by Marches et al. according to which the fate of the targeted cells (Daudi) depends on the levels of expression of p21(Waf1) (p21) cell-cycle inhibitor. A simple model is formulated in which the basic ingredients are p21 and caspase activity, and their mutual inhibition. We show that this model does not reproduce the experimental results and that further refinement is needed. A second model successfully reproduces the experimental observations, for a given set of parameter values, indicating a critical role for Myc in the fate decision process. We use bifurcation analysis and objective sensitivity analysis to assess the robustness of our results. Importantly, this analysis yields experimentally testable predictions on the role of Myc, which could have therapeutic implications.
An extended car-following model to describe connected traffic dynamics under cyberattacks
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Yu, Guizhen; Wu, Xinkai; Qin, Hongmao; Wang, Yunpeng
2018-04-01
In this paper, the impacts of potential cyberattacks on vehicles are modeled through an extended car-following model. To better understand the mechanism of traffic disturbance under cyberattacks, linear and nonlinear stability analyses are conducted. Linear stability analysis is performed to obtain different neutral stability conditions with various parameters, and nonlinear stability analysis is carried out by using the reductive perturbation method to derive the soliton solution of the modified Korteweg-de Vries (mKdV) equation near the critical point, which is used to draw coexisting stability lines. By applying linear and nonlinear stability analysis, the traffic flow state can be divided into three states, i.e., stable, metastable and unstable, which are useful for describing shockwave dynamics and driving behaviors under cyberattacks. The theoretical results show that the proposed car-following model is capable of describing the car-following behavior of connected vehicles under cyberattacks. Finally, numerical simulation using real values has confirmed the validity of the theoretical analysis. The results further demonstrate that our model can be used to help avoid collisions and relieve traffic congestion under cybersecurity threats.
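A hedged numerical sketch of the idea: a standard optimal-velocity car-following model in which one vehicle's headway measurement is spoofed for a short interval. The OV function and parameters are textbook choices, not the paper's extended model.

```python
import numpy as np

def ov(headway, v_max=30.0, hc=25.0):
    """Optimal velocity as a function of headway (Bando-type tanh form)."""
    return v_max / 2.0 * (np.tanh(headway - hc) + np.tanh(hc))

n_cars, dt, steps = 20, 0.1, 2000
a = 1.0                                           # driver sensitivity
x = np.cumsum(np.full(n_cars, 30.0))[::-1].copy() # car 0 leads
v = np.full(n_cars, ov(30.0))
max_spread = 0.0

for step in range(steps):
    headway = np.empty(n_cars)
    headway[1:] = x[:-1] - x[1:]
    headway[0] = 1e9                   # free-flowing leader
    if 500 <= step < 600:
        headway[10] += 5.0             # spoofed headway on car 10 ("attack")
    dv = a * (ov(headway) - v)         # relax toward the optimal velocity
    v = np.maximum(v + dv * dt, 0.0)
    x = x + v * dt
    max_spread = max(max_spread, v.max() - v.min())

print("largest speed spread induced:", round(max_spread, 3))
```

With parameters in the linearly stable regime the spoofed disturbance decays; near the neutral stability boundary it would instead grow into a stop-and-go wave.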
Moderation analysis using a two-level regression model.
Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott
2014-10-01
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
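For contrast with the two-level NML estimator, the baseline MMR model is a one-line LS fit. The sketch below uses synthetic data with a race-by-education interaction echoing the paper's example; all variable names and values are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: the education coefficient depends on a 0/1 moderator.
rng = np.random.default_rng(2)
n = 500
educ = rng.normal(13, 3, n)
race = rng.integers(0, 2, n)                      # moderator (coded 0/1)
prestige = 10 + (2.0 + 0.8 * race) * educ + rng.normal(0, 8, n)
df = pd.DataFrame({"prestige": prestige, "educ": educ, "race": race})

# LS estimation of the MMR model: prestige ~ educ + race + educ:race.
fit = smf.ols("prestige ~ educ * race", data=df).fit()
print(fit.params)        # the educ:race term is the moderation effect
```

The two-level formulation instead treats the education coefficient itself as an outcome regressed on the moderator, which is what allows the variance decomposition and the efficiency gain under heteroscedasticity described above.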
NASA Astrophysics Data System (ADS)
Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua
2017-10-01
Wet gluten is a useful quality indicator for wheat, and short-wave near infrared spectroscopy (NIRS) is a high-performance technique with the advantages of being economical, rapid and nondestructive. To study the feasibility of short-wave NIRS for analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization was the optimal pretreatment method; 17 wet-gluten-sensitive variables were selected by GA, and the GA model performed better than the all-variable model, with R2V=0.88 and RMSEV=1.47. For qualitative analysis, automatic weighted least squares baseline correction was the optimal pretreatment method, and the all-variable models performed better than the GA models. The correct classification rates for the three classes of <24%, 24-30% and >30% wet gluten content were 95.45%, 84.52% and 90.00%, respectively. The short-wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.
Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel
2016-01-01
One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need to transfer the latest results in the field of machine learning to biomedical researchers. We propose a web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time, predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
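As an illustration of the standard analyses OSA automates, a small sketch using the lifelines package as a stand-in (the data are invented; OSA itself is a web application, and its internals are not shown here):

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Kaplan-Meier curve and Cox regression on a tiny illustrative dataset.
df = pd.DataFrame({
    "time":  [5, 8, 12, 20, 24, 30, 33, 40],  # follow-up (months), invented
    "event": [1, 1, 0, 1, 0, 1, 0, 1],        # 1 = event observed, 0 = censored
    "age":   [61, 70, 54, 66, 48, 73, 59, 65],
})

km = KaplanMeierFitter().fit(df["time"], df["event"])   # survival curve
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()    # hazard ratio for age
```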
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
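The adjoint method returns the gradient of the data misfit with respect to all parameters in one backward solve; a finite-difference stand-in for that gradient, of the kind often used to verify adjoint codes and rank sensitivities, might look like this (the forward model and parameters are placeholders, not the sediment model):

```python
import numpy as np

def model(p):
    # stand-in forward model: maps parameters (e.g., settling velocity,
    # resuspension rate) to predicted concentrations at observation points
    return np.array([2.0 * p[0] + p[1] ** 2, p[0] - 0.5 * p[1]])

obs = np.array([1.0, 0.2])

def cost(p):
    return 0.5 * np.sum((model(p) - obs) ** 2)   # data misfit J

def fd_gradient(p, eps=1e-6):
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p); dp[i] = eps
        g[i] = (cost(p + dp) - cost(p - dp)) / (2 * eps)
    return g

p0 = np.array([0.4, 0.3])
print("gradient dJ/dp:", fd_gradient(p0))  # large |dJ/dp_i| => sensitive parameter
```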
Model wall and recovery temperature effects on experimental heat transfer data analysis
NASA Technical Reports Server (NTRS)
Throckmorton, D. A.; Stone, D. R.
1974-01-01
Basic analytical procedures are used to illustrate, both qualitatively and quantitatively, the relative impact upon heat transfer data analysis of certain factors which may affect the accuracy of experimental heat transfer data. Inaccurate knowledge of adiabatic wall conditions results in a corresponding inaccuracy in the measured heat transfer coefficient. The magnitude of the resulting error is extreme for data obtained at wall temperatures approaching the adiabatic condition. High model wall temperatures and wall temperature gradients affect the level and distribution of heat transfer to an experimental model. The significance of each of these factors is examined and its impact upon heat transfer data analysis is assessed.
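The error amplification described here follows directly from h = q/(T_aw - T_w): an error dT in the assumed adiabatic wall temperature propagates as dh/h = dT/(T_aw - T_w), which blows up as T_w approaches T_aw. A small numeric illustration with invented temperatures:

```python
# Relative error in the inferred heat transfer coefficient h for a fixed
# uncertainty dT in the adiabatic wall temperature T_aw (values illustrative).
T_aw = 600.0    # adiabatic wall temperature (K)
dT = 5.0        # uncertainty in T_aw (K)
for T_w in (300.0, 450.0, 550.0, 590.0):
    rel_err = dT / (T_aw - T_w)
    print(f"T_w = {T_w:5.0f} K  ->  relative error in h = {100*rel_err:6.1f} %")
```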
Scientific Ballooning Technologies Workshop STO-2 Thermal Design and Analysis
NASA Technical Reports Server (NTRS)
Ferguson, Doug
2016-01-01
The heritage thermal model for the full STO-2 (Stratospheric Terahertz Observatory II), vehicle has been updated to model the CSBF (Columbia Scientific Balloon Facility) SIP-14 (Scientific Instrument Package) in detail. Analysis of this model has been performed for the Antarctica FY2017 launch season. Model temperature predictions are compared to previous results from STO-2 review documents.
NASA Technical Reports Server (NTRS)
Howland, G. R.; Durno, J. A.; Twomey, W. J.
1990-01-01
Sikorsky Aircraft, together with the other major helicopter airframe manufacturers, is engaged in a study to improve the use of finite element analysis to predict the dynamic behavior of helicopter airframes, under a rotorcraft structural dynamics program called DAMVIBS (Design Analysis Methods for VIBrationS), sponsored by NASA Langley. The test plan and test results are presented for a shake test of the UH-60A BLACK HAWK helicopter. A comparison is also presented of the test results with results obtained from analysis using a NASTRAN finite element model.
Segregation analysis of cryptogenic epilepsy and an empirical test of the validity of the results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ottman, R.; Hauser, W.A.; Barker-Cummings, C.
1997-03-01
We used POINTER to perform segregation analysis of cryptogenic epilepsy in 1,557 three-generation families (probands and their parents, siblings, and offspring) ascertained from voluntary organizations. Analysis of the full data set indicated that the data were most consistent with an autosomal dominant (AD) model with 61% penetrance of the susceptibility gene. However, subsequent analyses revealed that the patterns of familial aggregation differed markedly between siblings and offspring of the probands. Risks in siblings were consistent with an autosomal recessive (AR) model and inconsistent with an AD model, whereas risks in offspring were inconsistent with an AR model and more consistent with an AD model. As a further test of the validity of the AD model, we used sequential ascertainment to extend the family history information in the subset of families judged likely to carry the putative susceptibility gene because they contained at least three affected individuals. Prevalence of idiopathic/cryptogenic epilepsy was only 3.7% in newly identified relatives expected to have a 50% probability of carrying the susceptibility gene under an AD model. Approximately 30% (i.e., 50% × 61%) were expected to be affected under the AD model resulting from the segregation analysis. These results suggest that the familial distribution of cryptogenic epilepsy is inconsistent with any conventional genetic model. The differences between siblings and offspring in the patterns of familial risk are intriguing and should be investigated further. 28 refs., 6 tabs.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, the Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and the parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
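A sketch of the least-squares forms of the information criteria named above, assuming n observations, k estimated parameters, and a sum of squared weighted residuals SSE; the three candidate models and their SSE values are invented:

```python
import numpy as np

def aic(sse, n, k):
    return n * np.log(sse / n) + 2 * k

def aicc(sse, n, k):   # corrected AIC, small-sample form
    return aic(sse, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(sse, n, k):
    return n * np.log(sse / n) + k * np.log(n)

# rank three hypothetical alternative models: lower is better
models = {"M1": (12.4, 3), "M2": (10.1, 5), "M3": (9.8, 9)}
n = 60
for name, (sse, k) in models.items():
    print(name, "AICc =", round(aicc(sse, n, k), 2),
          " BIC =", round(bic(sse, n, k), 2))
```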
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
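A sketch of variance-based GSA with the SALib package on a stand-in model; the problem definition, parameter bounds, and the model itself are illustrative, not the paper's agent-based model:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["half_life", "exo_activity", "composition"],
    "bounds": [[0.1, 10.0], [0.1, 5.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)                 # Sobol sampling scheme
Y = X[:, 0] * X[:, 1] + np.sin(np.pi * X[:, 2])    # stand-in model output

Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["ST"])))       # total-order indices
```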
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. The new approach also allows substantial model checking, model diagnostics, and model selection. Statistical computations are implemented in public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
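A deliberately crude classical sketch of the bivariate idea: pool study-level sensitivities and specificities jointly on the logit scale. The Bayesian model in the paper adds hierarchical random-effects structure and full posterior inference on top of this; the study results below are invented:

```python
import numpy as np

sens = np.array([0.90, 0.83, 0.95, 0.78, 0.88])   # illustrative study results
spec = np.array([0.80, 0.91, 0.75, 0.89, 0.85])

logit = lambda p: np.log(p / (1 - p))
expit = lambda x: 1 / (1 + np.exp(-x))

Z = np.column_stack([logit(sens), logit(spec)])
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False)       # between-study (co)variation

print("pooled sensitivity:", expit(mu[0]))
print("pooled specificity:", expit(mu[1]))
print("logit-scale correlation:", cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]))
```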
2013-09-30
accuracy of the analysis. Root mean square difference (RMSD) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental ... Analysis Update globally for temperature as well as salinity. Regionally the same results were found, with only one exception in which the salinity RMSD ... short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called "analysis" ...
Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.
Kanso, A; Chebbo, G; Tassin, B
2005-01-01
Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of applying a Markov chain Monte Carlo (MCMC) method based on Bayesian theory to the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion, and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions. They demonstrated that the model's predictive capacity is very low.
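A minimal random-walk Metropolis sampler of the kind underlying such a calibration, with an invented one-parameter washoff model standing in for the water-quality model:

```python
import numpy as np

rng = np.random.default_rng(2)
obs = np.array([2.1, 2.9, 4.2, 5.1])     # invented observations
t = np.array([1.0, 2.0, 3.0, 4.0])

def log_post(theta, sigma=0.5):
    if not (0.0 < theta < 10.0):          # uniform prior bounds
        return -np.inf
    pred = theta * t                      # stand-in washoff model
    return -0.5 * np.sum((obs - pred) ** 2) / sigma**2

theta, chain = 1.0, []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.2)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                              # accept
    chain.append(theta)

chain = np.array(chain[5000:])                    # discard burn-in
print("posterior mean:", chain.mean(),
      " 95% CI:", np.percentile(chain, [2.5, 97.5]))
```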
NASA Astrophysics Data System (ADS)
Keating, Elizabeth H.; Doherty, John; Vrugt, Jasper A.; Kang, Qinjun
2010-10-01
Highly parameterized and CPU-intensive groundwater models are increasingly being used to understand and predict flow and transport through aquifers. Despite their frequent use, these models pose significant challenges for parameter estimation and predictive uncertainty analysis algorithms, particularly global methods which usually require very large numbers of forward runs. Here we present a general methodology for parameter estimation and uncertainty analysis that can be utilized in these situations. Our proposed method includes extraction of a surrogate model that mimics key characteristics of a full process model, followed by testing and implementation of a pragmatic uncertainty analysis technique, called null-space Monte Carlo (NSMC), that merges the strengths of gradient-based search and parameter dimensionality reduction. As part of the surrogate model analysis, the results of NSMC are compared with a formal Bayesian approach using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. Such a comparison has never been accomplished before, especially in the context of high parameter dimensionality. Despite the highly nonlinear nature of the inverse problem, the existence of multiple local minima, and the relatively large parameter dimensionality, both methods performed well and results compare favorably with each other. Experiences gained from the surrogate model analysis are then transferred to calibrate the full highly parameterized and CPU intensive groundwater model and to explore predictive uncertainty of predictions made by that model. The methodology presented here is generally applicable to any highly parameterized and CPU-intensive environmental model, where efficient methods such as NSMC provide the only practical means for conducting predictive uncertainty analysis.
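The core NSMC step can be sketched in a few lines: compute the null space of the Jacobian by SVD and perturb the calibrated parameters only along those directions, so each perturbed set remains near-calibrated. All matrices and dimensions below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(8, 20))      # Jacobian: 8 observations, 20 parameters
p_cal = rng.normal(size=20)       # calibrated parameter values

U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-8))
V_null = Vt[rank:].T              # basis of the (20 - rank)-dim null space

samples = [p_cal + V_null @ rng.normal(size=V_null.shape[1])
           for _ in range(100)]
# residuals change only negligibly along null-space directions:
print(np.max(np.abs(J @ (samples[0] - p_cal))))   # ~0
```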
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
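The inverse-variance weighted mean used here is simple to state in code; the repeated estimates and their uncertainties below are invented:

```python
import numpy as np

# Each measurement x_i is weighted by 1/sigma_i**2; the combined uncertainty
# is sqrt(1/sum(w)).
x = np.array([4.8, 5.6, 5.1, 4.3])        # Al/Ca estimates, ug Al / g Ca
sigma = np.array([1.2, 0.9, 1.5, 1.1])    # their uncertainties

w = 1.0 / sigma**2
mean = np.sum(w * x) / np.sum(w)
err = np.sqrt(1.0 / np.sum(w))
print(f"weighted mean = {mean:.2f} +/- {err:.2f} ug Al / g Ca")
```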
Stiffness analysis of glued connection of the timber-concrete structure
NASA Astrophysics Data System (ADS)
Daňková, Jana; Mec, Pavel; Majstríková, Tereza
2016-01-01
This paper presents results of experimental and mathematical analysis of the stiffness characteristics of a composite timber-concrete structure. The composite timber-concrete structure presented herein is non-typical compared to similar types of building structures: the interaction between the timber and concrete parts of the composite cross-section is not based on metal connecting elements but is ensured by a glued-in perforated mesh made of plywood. The paper presents results of experimental and mathematical analysis for material alternatives of the glued joint. The slip modulus values were determined experimentally, and the data obtained from the experiment were evaluated by means of regression analysis. Test results were also used as input data for the compilation of a 3D finite element model of the composite structure. On the basis of the result evaluation, it can be stated that the stress-deformation behaviour under shear loading of this specific timber-concrete composite structure can be affected by the type of glue used. The parameters of the 3D model of both alternatives of the structure represent the behaviour of the composite structure well, and the model can be used for predicting design parameters of a building structure.
NASA Astrophysics Data System (ADS)
Savitri, D.
2018-01-01
This article discusses a predator-prey model with anti-predator behavior in the intermediate predator, using ratio-dependent functional responses. Dynamical analysis performed on the model includes determination of the equilibrium points, their stability, and simulation. Three kinds of equilibrium points are discussed, namely the prey-extinction point, the intermediate-predator-extinction point, and the predator-extinction point, which exist under certain conditions. It can be shown that the results of the numerical simulations are in accordance with the analytical results.
NASA Astrophysics Data System (ADS)
Klimczak, Marcin; Bojarski, Jacek; Ziembicki, Piotr; Kęskiewicz, Piotr
2017-11-01
The requirements concerning the energy performance of buildings and their internal installations, particularly HVAC systems, have been growing continuously in Poland and all over the world. The existing, traditional calculation methods based on a static heat exchange model are frequently insufficient for a reasonable heating design of a building. Both in Poland and elsewhere in the world, methods and software are available that allow a detailed simulation of the heating and moisture conditions in a building, as well as an analysis of the performance of HVAC systems within a building. However, these tools are usually complex and difficult to use. In addition, developing a simulation model that is sufficiently faithful to the real building requires considerable time and effort from the designer. Simplifying the simulation model of a building makes it possible to reduce the cost of computer simulations. The paper analyses in detail the effect of introducing a number of different variants of the simulation model developed in Design Builder on the quality of the final results. The objective of this analysis is to find simplifications whose results deviate from the detailed model by an acceptable amount, thus facilitating a quick energy performance analysis of a given building.
Sensitivity of a numerical wave model on wind re-analysis datasets
NASA Astrophysics Data System (ADS)
Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel
2017-03-01
Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcasted and forecasted, providing reliable characterisations. This study reports the sensitivity of a numerical wave model to wind inputs for the Scottish region. Two re-analysis wind datasets with different spatio-temporal characteristics are used, the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis dataset. Different wind products alter the results, affecting the accuracy obtained. The scope of this study is to assess the available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial, and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a high scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance and that, depending on the location and study needs, different wind inputs should be considered.
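The bias, RMSE, and scatter-index statistics referred to here can be sketched as follows; the wave heights are toy numbers, and SI definitions vary slightly across studies:

```python
import numpy as np

# Validation of modelled significant wave height against buoy observations.
obs = np.array([1.2, 2.5, 3.1, 1.8, 2.2])
mod = np.array([1.1, 2.8, 3.4, 1.6, 2.3])

bias = np.mean(mod - obs)
rmse = np.sqrt(np.mean((mod - obs) ** 2))
si = rmse / np.mean(obs)                    # one common definition of SI
print(f"bias = {bias:.3f} m, RMSE = {rmse:.3f} m, SI = {si:.3f}")
```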
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haverkamp, B.; Krone, J.; Shybetskyi, I.
2013-07-01
The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low and intermediate level waste (LILW) is still being operated, but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in the results. Comparison of the results shows that - depending on the boundary conditions - simplifications like modeling the multi-trench repository as one generic trench might have very limited influence on the overall results compared to the general uncertainties associated with such long-term calculations. In addition to their value for the verification of more complex models, which is important to increase confidence in the overall results, such simplified models can also offer the possibility to carry out time-consuming calculations, like probabilistic calculations or detailed sensitivity analyses, in an economic manner. (authors)
Separate-channel analysis of two-channel microarrays: recovering inter-spot information.
Smyth, Gordon K; Altman, Naomi S
2013-05-26
Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intra-spot correlation. A new separate channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model, and common correlation methods are compared using three case studies. The results show that separate channel analyses that borrow strength between genes are more powerful than log-ratio analyses. The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
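The M/A transformation at the heart of the reformulation is a two-liner; the channel intensities below are invented:

```python
import numpy as np

# Per-spot log-ratios (M) and average log-intensities (A) from the two channels.
R = np.array([1500.0, 800.0, 200.0, 4000.0])    # red channel intensities
G = np.array([1400.0, 1600.0, 210.0, 1000.0])   # green channel intensities

M = np.log2(R) - np.log2(G)           # within-spot contrast (log-ratio)
A = 0.5 * (np.log2(R) + np.log2(G))   # within-spot average log-expression
print("M:", M.round(2))
print("A:", A.round(2))
```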
Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason P.; Modenese, Luca; Reinbolt, Jeffrey A.; Thelen, Darryl G.; Umberger, Brian R.
2016-01-01
Objective The overall goal of this document is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. Methods As part of a special issue on model sharing and reproducibility in IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: A. Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and A. Schmitz and D. Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. Results There was general agreement between simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers’ feedback; changes that may otherwise have been missed if explicit model sharing and simulation reproducibility analysis were not conducted in the review process. Increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, were noted. Conclusion When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Significance Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models. PMID:28072567
NASA Technical Reports Server (NTRS)
Amundsen, R. M.; Feldhaus, W. S.; Little, A. D.; Mitchum, M. V.
1995-01-01
Electronic integration of design and analysis processes was achieved and refined at Langley Research Center (LaRC) during the development of an optical bench for a laser-based aerospace experiment. Mechanical design has been integrated with thermal, structural, and optical analyses. Electronic import of the model geometry eliminates the repetitive steps of geometry input in developing each analysis model, leading to faster and more accurate analyses. Guidelines for integrated model development are given. This integrated analysis process has been built around software that was already in use by designers and analysts at LaRC. The process as currently implemented uses Pro/Engineer for design, Pro/Manufacturing for fabrication, PATRAN for solid modeling, NASTRAN for structural analysis, SINDA-85 and P/Thermal for thermal analysis, and Code V for optical analysis. Currently, the only analysis model that must be built manually is the Code V model; all others can be imported from the Pro/E geometry. The translator from PATRAN results to Code V optical analysis (PATCOD) was developed and tested at LaRC. Directions for use of the translator and the other models are given.
Jerosch-Herold, Christina; Chester, Rachel; Shepstone, Lee
2017-09-01
Study Design Cross-sectional secondary analysis of a prospective cohort study. Background The shortened version of the Disabilities of the Arm, Shoulder and Hand questionnaire (QuickDASH) is a widely used outcome measure that has been extensively evaluated using classical test theory. Rasch model analysis can identify strengths and weaknesses of rating scales and goes beyond classical test theory approaches. It uses a mathematical model to test the fit between the observed data and expected responses and converts ordinal-level scores into interval-level measurement. Objective To test the structural validity of the QuickDASH using Rasch analysis. Methods A prospective cohort study of 1030 patients with shoulder pain provided baseline data. Rasch analysis was conducted to (1) assess how the QuickDASH fits the Rasch model, (2) identify sources of misfit, and (3) explore potential solutions to these. Results There was evidence of multidimensionality and significant misfit to the Rasch model (χ2 = 331.09, P < .001). Two items had disordered threshold responses with strong floor effects. Response bias was detected in most items for age and sex. Rescoring resulted in ordered thresholds; however, the 11-item scale still did not meet the expectations of the Rasch model. Conclusion Rasch model analysis of the QuickDASH has identified a number of problems that cannot be easily detected using traditional analyses. While revisions to the QuickDASH resulted in better fit, a "shoulder-specific" version is not advocated at present. Caution needs to be exercised when interpreting results of the QuickDASH outcome measure, as it does not meet the criteria for interval-level measurement and shows significant response bias by age and sex. J Orthop Sports Phys Ther 2017;47(9):664-672. Epub 13 Jul 2017. doi:10.2519/jospt.2017.7288.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
1981-03-01
overcome the shortcomings of this system. A phase III study develops the breakup model of the Space Shuttle cluster at various times into flight. The ... [remainder is table-of-contents residue: ROCKET MODEL; COMBUSTION CHAMBER OPERATION; RESULTS]
Structural Health Monitoring Analysis for the Orbiter Wing Leading Edge
NASA Technical Reports Server (NTRS)
Yap, Keng C.
2010-01-01
This viewgraph presentation reviews Structural Health Monitoring Analysis for the Orbiter Wing Leading Edge. The Wing Leading Edge Impact Detection System (WLE IDS) and the Impact Analysis Process are also described to monitor WLE debris threats. The contents include: 1) Risk Management via SHM; 2) Hardware Overview; 3) Instrumentation; 4) Sensor Configuration; 5) Debris Hazard Monitoring; 6) Ascent Response Summary; 7) Response Signal; 8) Distribution of Flight Indications; 9) Probabilistic Risk Analysis (PRA); 10) Model Correlation; 11) Impact Tests; 12) Wing Leading Edge Modeling; 13) Ascent Debris PRA Results; and 14) MM/OD PRA Results.
A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network
NASA Astrophysics Data System (ADS)
Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.
2018-02-01
Oil pipeline networks are among the most important facilities for energy transportation, but an accident in an oil pipeline network may result in serious disasters. Analysis models for these accidents have been established mainly using three methods: event trees, accident simulation, and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but not all of the important influencing factors have been considered, and a deployment rule for the factors has not been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used for probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
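A tiny hand-rolled illustration of the kind of chain-rule calculation a Bayesian network automates; the structure, variables, and all probabilities below are invented for illustration, not taken from the paper:

```python
# P(accident) by marginalizing over corrosion, leak, and emergency response.
p_corrosion = 0.10
p_leak_given = {True: 0.30, False: 0.02}    # P(leak | corrosion)
p_response_ok = 0.8                         # P(emergency response succeeds)
p_acc_given = {                             # P(accident | leak, response_ok)
    (True, True): 0.10, (True, False): 0.60,
    (False, True): 0.001, (False, False): 0.01,
}

p_accident = 0.0
for corr, p_c in ((True, p_corrosion), (False, 1 - p_corrosion)):
    for leak in (True, False):
        p_l = p_leak_given[corr] if leak else 1 - p_leak_given[corr]
        for resp in (True, False):
            p_r = p_response_ok if resp else 1 - p_response_ok
            p_accident += p_c * p_l * p_r * p_acc_given[(leak, resp)]

print(f"P(accident) = {p_accident:.4f}")
```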
Evans, Alistair R.; McHenry, Colin R.
2015-01-01
The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of the input geometry, and influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity studies of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower-resolution surfaces in a comparative FEA study. As surface mesh resolution links input geometry with the resulting solid mesh, the implication of these results is that low-resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620
Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.
Heo, Go Eun; Kang, Keun Young; Song, Min; Lee, Jeong-Hoon
2017-05-31
Bioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as a convergent domain, researchers have used bibliometrics, augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying the topic structure of subject areas. However, as opposed to revealing the topic structure in relation to metadata such as authors, publication date, and journals, LDA only displays the simple topic structure. In this paper, we adopt Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model is capable of incorporating the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bag-of-words. For analysis, we used PubMed to collect forty-six bioinformatics journals from the MEDLINE database. We conducted a time series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics, analyzing the ACT model results in each period. Additionally, for further integrated analysis, we conducted a time series analysis among the top-ranked keyphrases, journals, and authors according to their frequency. We also examined the patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics have become more clearly represented. The results of our analysis imply that over time the field of bioinformatics has become more interdisciplinary, with a steady increase in peripheral fields such as conceptual, mathematical, and systems biology. These results are confirmed by integrated analysis of the topic distribution as well as of the top-ranked keyphrases, authors, and journals.
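As a point of reference, the plain-LDA baseline that ACT extends can be fit in a few lines with scikit-learn; the documents below are placeholders, and this is not the ACT model itself:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "sequence alignment algorithm for protein structure",
    "gene expression microarray statistical analysis",
    "protein structure prediction with neural networks",
    "pathway analysis of gene expression data",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                          # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-4:][::-1]                 # top words per topic
    print(f"topic {k}:", [terms[i] for i in top])
```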
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (the median recovery duration is 0 seconds).
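The k-out-of-n comparison can be reproduced in miniature with a binomial model, assuming independent, identical machines; the per-machine availability value is invented:

```python
from scipy.stats import binom

# Probability that at least k of n identical machines are up, given
# per-machine availability p.
n, p = 7, 0.95
for k in range(3, 8):
    avail = binom.sf(k - 1, n, p)   # P(X >= k) for X ~ Binomial(n, p)
    print(f"{k}-out-of-{n}: {avail:.6f}")
```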
Appliance of Independent Component Analysis to System Intrusion Analysis
NASA Astrophysics Data System (ADS)
Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji
In order to analyze the output of intrusion detection systems and firewalls, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, service and vulnerability models for each server, and action models for the clients and the intruder. We applied ICA to analyze the audit trail of the simulated information system, and we report the evaluation results of ICA for intrusion analysis. In the simulated case, ICA separated two attacks correctly and related an attack to the anomalies produced in a normal application under the influence of that attack.
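A sketch of the separation step with scikit-learn's FastICA, using synthetic mixed signals to stand in for overlapping attack traces in an audit trail:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))           # "attack 1": square-wave-like bursts
s2 = np.sin(7 * t)                    # "attack 2": periodic probe
S = np.column_stack([s1, s2]) + 0.05 * rng.normal(size=(2000, 2))

A = np.array([[1.0, 0.5], [0.4, 1.0]])   # unknown mixing in the audit trail
X = S @ A.T                              # observed mixed measurements

S_est = FastICA(n_components=2, random_state=0).fit_transform(X)
print("recovered component shapes:", S_est.shape)  # two separated sources
```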
The Impact of Measurement Noise in GPA Diagnostic Analysis of a Gas Turbine Engine
NASA Astrophysics Data System (ADS)
Ntantis, Efstratios L.; Li, Y. G.
2013-12-01
The performance diagnostic analysis of a gas turbine is accomplished by estimating a set of internal engine health parameters from available sensor measurements. However, no physical measuring instrument can ever completely eliminate measurement uncertainties: sensor measurements are often distorted by noise and bias, leading to inaccurate estimation results. This paper explores the impact of measurement noise on gas turbine GPA analysis. The analysis is demonstrated with a test case in which the gas turbine performance simulation and diagnostics code TURBOMATCH is used to build a performance model of an engine similar to the Rolls-Royce Trent 500 turbofan, and to carry out the diagnostic analysis in the presence of different levels of measurement noise. Finally, to improve the reliability of the diagnostic results, a statistical analysis of the data scattering caused by sensor uncertainties is made. The diagnostic tool used for the statistical analysis of measurement noise impact is a model-based method utilizing non-linear GPA.
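A sketch of how measurement noise propagates through a linear GPA inversion; the influence coefficient matrix, noise level, and health-parameter values are invented, not TURBOMATCH values:

```python
import numpy as np

# Health-parameter deltas x are recovered from measurement deltas z via the
# pseudo-inverse of an influence coefficient matrix H; Monte Carlo over the
# sensor noise shows how noise inflates the scatter of the estimates.
rng = np.random.default_rng(5)
H = np.array([[1.2, 0.3],        # d(measurement)/d(health parameter)
              [0.4, 1.5],
              [0.8, 0.6]])
x_true = np.array([-0.02, 0.01])          # true efficiency/flow deltas
z_clean = H @ x_true

ests = []
for _ in range(2000):
    z = z_clean + rng.normal(scale=0.005, size=3)   # sensor noise
    ests.append(np.linalg.pinv(H) @ z)
ests = np.array(ests)
print("mean estimate:", ests.mean(axis=0))
print("std of estimates:", ests.std(axis=0))
```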
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
Linear Instability Analysis of non-uniform Bubbly Mixing layer with Two-Fluid model
NASA Astrophysics Data System (ADS)
Sharma, Subash; Chetty, Krishna; Lopez de Bertodano, Martin
We examine the inviscid instability of a non-uniform adiabatic bubbly shear layer with a two-fluid model. The two-fluid model is made well-posed with closure relations for the interfacial forces. First, a characteristic analysis is carried out to study the well-posedness of the model over a range of void fractions, with interfacial forces for virtual mass, interfacial drag, and interfacial pressure. A dispersion analysis then allows us to obtain the growth rate and wavelength. Finally, the well-posed two-fluid model is solved using CFD to validate the results obtained with the linear stability analysis. The effects of the void fraction and of the distribution profile on stability are analyzed.
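The characteristic analysis can be illustrated on a generic linearized system du/dt + A du/dx = 0: real eigenvalues of A mean hyperbolic (well-posed) behavior, while complex eigenvalues mean ill-posed growth at all wavelengths. The matrix entries below are placeholders, not the paper's closure:

```python
import numpy as np

A = np.array([[0.8, 0.1, 0.0, 0.0],
              [0.2, 0.9, 0.1, 0.0],
              [0.0, 0.3, 1.1, 0.2],
              [0.0, 0.0, 0.1, 1.0]])

eig = np.linalg.eigvals(A)           # characteristic speeds
print("characteristics:", eig)
print("well-posed (all real):", bool(np.all(np.abs(eig.imag) < 1e-12)))
```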
DOE Office of Scientific and Technical Information (OSTI.GOV)
MACKEY, T.C.
M&D Professional Services, Inc. (M&D) is under subcontract to Pacific Northwest National Laboratories (PNNL) to perform seismic analysis of the Hanford Site Double-Shell Tanks (DSTs) in support of a project entitled ''Double-Shell Tank (DST) Integrity Project - DST Thermal and Seismic Analyses''. The overall scope of the project is to complete an up-to-date comprehensive analysis of record of the DST System at Hanford in support of Tri-Party Agreement Milestone M-48-14. The work described herein was performed in support of the seismic analysis of the DSTs. The thermal and operating loads analysis of the DSTs is documented in Rinker et al. (2004). The overall seismic analysis of the DSTs is being performed with the general-purpose finite element code ANSYS. The overall model used for the seismic analysis of the DSTs includes the DST structure, the contained waste, and the surrounding soil. The seismic analysis of the DSTs must address the fluid-structure interaction behavior and sloshing response of the primary tank and contained liquid. ANSYS has demonstrated capabilities for structural analysis, but the capabilities and limitations of ANSYS to perform fluid-structure interaction are less well understood. The purpose of this study is to demonstrate the capabilities and investigate the limitations of ANSYS for performing a fluid-structure interaction analysis of the primary tank and contained waste. To this end, the ANSYS solutions are benchmarked against theoretical solutions appearing in BNL 1995, when such theoretical solutions exist. When theoretical solutions were not available, comparisons were made to theoretical solutions of similar problems and to the results from Dytran simulations. The capabilities and limitations of the finite element code Dytran for performing a fluid-structure interaction analysis of the primary tank and contained waste were explored in a parallel investigation (Abatt 2006). In conjunction with the results of the global ANSYS analysis reported in Carpenter et al. (2006), the results of the two investigations will be compared to help determine if a more refined sub-model of the primary tank is necessary to capture the important fluid-structure interaction effects in the tank and, if so, how to best utilize a refined sub-model of the primary tank. Both rigid tank and flexible tank configurations were analyzed with ANSYS. The response parameters of interest are total hydrodynamic reaction forces, impulsive and convective mode frequencies, waste pressures, and slosh heights. To a limited extent, tank stresses are also reported. The results of this study demonstrate that the ANSYS model has the capability to adequately predict global responses such as frequencies and overall reaction forces. Thus, the model is suitable for predicting the global response of the tank and contained waste. On the other hand, while the ANSYS model is capable of adequately predicting waste pressures and primary tank stresses in a large portion of the waste tank, the model does not accurately capture the convective behavior of the waste near the free surface, nor did the model give accurate predictions of slosh heights. Based on the ability of the ANSYS benchmark model to accurately predict frequencies and global reaction forces, and on the results presented in Abatt et al. (2006), the global ANSYS model described in Carpenter et al. (2006) is sufficient for the seismic evaluation of all tank components except for local areas of the primary tank.
Due to the limitations of the ANSYS model in predicting the convective response of the waste, the evaluation of primary tank stresses near the waste free surface should be supplemented by results from an ANSYS sub-model of the primary tank that incorporates pressures from theoretical solutions or from Dytran solutions. However, the primary tank is expected to have low demand-to-capacity ratios in the upper wall. Moreover, due to the less than desired mesh resolution in the primary tank knuckle of the global ANSYS model, the evaluation of the primary tank stresses in the lower knuckle should be supplemented by results from a more refined ANSYS sub-model of the primary tank that incorporates pressures from theoretical solutions or from Dytran solutions.
Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.
Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W
2016-08-18
A fundamental challenge in quantitation of biomolecules for cancer biomarker discovery is owing to the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation of the data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis, as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios (<7%) between estimation and ground truth. By applying the topic model-based purification to mass spectrometric data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under the ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed that incorporation of scan-level features has the potential to lead to more accurate purification results by alleviating the loss in information that results from integrating peaks. We believe cancer biomarker discovery studies that use mass spectrometric analysis of human biospecimens can greatly benefit from topic model-based purification of the data prior to statistical and pathway analyses.
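A matrix-factorization analogue of the purification idea, sketched with scikit-learn's NMF on synthetic mixtures; the paper's models are probabilistic generative topic models, not NMF, so this only illustrates the deconvolution concept:

```python
import numpy as np
from sklearn.decomposition import NMF

# Decompose observed spectra (samples x features) into two non-negative
# "sources" (e.g., cancer profile vs. contaminant) and per-sample weights.
rng = np.random.default_rng(6)
true_sources = rng.uniform(size=(2, 50))         # invented pure profiles
weights = rng.dirichlet([2.0, 2.0], size=30)     # per-sample mixture proportions
V = weights @ true_sources + 0.01 * rng.uniform(size=(30, 50))

nmf = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = nmf.fit_transform(V)                         # estimated weights (unscaled)
W = W / W.sum(axis=1, keepdims=True)             # normalize to proportions
print("first five estimated mixture proportions:\n", W[:5].round(2))
```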
Passage Key Inlet, Florida; CMS Modeling and Borrow Site Impact Analysis
2016-06-01
Impact Analysis by Kelly R. Legault and Sirisha Rayaprolu. PURPOSE: This Coastal and Hydraulics Engineering Technical Note (CHETN) describes the ... use of a nested Coastal Modeling System (CMS) model for Passage Key Inlet, which is one of the connections between the Gulf of Mexico and Tampa Bay ... driven sediment transport at Passage Key Inlet. This analysis resulted in issuing a new Florida Department of Environmental Protection (FDEP) permit to ...
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web, and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations, and this large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g., sum of squared errors, sum of absolute differences, etc.), top-ten simulation tables and graphs, graphs of an individual simulation using time step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error surface graphs of the parameter space. IHM scales from the simplest bucket model to the largest set of Monte Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time-saving and money-saving alternative to producing graphs by hand, to conducting analysis that may not be informative, or to purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
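A dotty plot of the kind IHM generates is straightforward to produce from batch results; the parameter samples and objective values below are synthetic:

```python
import numpy as np
import matplotlib.pyplot as plt

# One dot per Monte Carlo run: parameter value vs. objective function.
rng = np.random.default_rng(7)
param = rng.uniform(0.0, 2.0, size=5000)                    # sampled parameter
sse = (param - 1.3) ** 2 + 0.1 * rng.gamma(2.0, size=5000)  # toy objective

plt.scatter(param, sse, s=2, alpha=0.3)
plt.xlabel("parameter value")
plt.ylabel("sum of squared errors")
plt.title("dotty plot")
plt.show()
```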
New Insight into Combined Model and Revised Model for RTD Curves in a Multi-strand Tundish
NASA Astrophysics Data System (ADS)
Lei, Hong
2015-12-01
The analysis of the residence time distribution (RTD) curve is one of the important experimental techniques for optimizing tundish design, but several issues arise in RTD analysis models. First, the combined (or mixed) model and the revised model give different analysis results for the same RTD curve. Second, different upper limits of the integral in the numerator of the mean residence time give different results for the same RTD curve. Third, a negative dead volume fraction sometimes appears at the outer strand of a multi-strand tundish. To solve these problems, it is necessary to gain deeper insight into the RTD curve and to propose a reasonable method for analyzing it. The results show that (1) the revised model is not appropriate for treating the RTD curve; (2) the concept of the virtual single-strand tundish, together with the combined model using the dimensionless time at the cut-off point, can be applied to estimate the flow characteristics in a multi-strand tundish; and (3) the mean residence time at each exit is the key parameter for estimating the similarity of fluid flow among strands.
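For reference, the textbook definitions behind the second issue read as follows; the dependence on the upper limit of integration T is exactly what makes different cut-off choices give different mean residence times for the same measured curve (this is the standard form, not the paper's revised derivation):

```latex
% Mean residence time from a measured tracer concentration C(t), truncated at T:
\[
  \bar{t} \;=\; \frac{\int_{0}^{T} t\, C(t)\, \mathrm{d}t}{\int_{0}^{T} C(t)\, \mathrm{d}t},
  \qquad
  \theta \;=\; \frac{t}{\tau}, \qquad \tau \;=\; \frac{V}{Q}.
\]
% Combined models typically truncate at a dimensionless cut-off (e.g. theta = 2);
% moving T changes \bar{t} and hence the inferred dead volume fraction.
```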
A constitutive model and numerical simulation of sintering processes at macroscopic level
NASA Astrophysics Data System (ADS)
Wawrzyk, Krzysztof; Kowalczyk, Piotr; Nosewicz, Szymon; Rojek, Jerzy
2018-01-01
This paper presents modelling of both single- and double-phase powder sintering processes at the macroscopic level; in particular, its constitutive formulation, numerical implementation and numerical tests are described. The macroscopic constitutive model is based on the assumption that the sintered material is a continuous medium. The parameters of the constitutive model for the material under sintering are determined by simulating sintering at the microscopic level using a micro-scale model. Numerical tests were carried out for a cylindrical specimen under hydrostatic and uniaxial pressure. Results of the macroscopic analysis are compared against the microscopic model results. Moreover, the numerical simulations are validated by comparison with experimental results. The simulations and preparation of the model are carried out with Abaqus FEA, a software package for finite element analysis and computer-aided engineering. The mechanical model is defined by the user subroutine "VUMAT", developed by the first author in the Fortran programming language. The modelling presented in the paper can be used to optimize and to better understand the process.
Comparative Modelling of the Spectra of Cool Giants
NASA Technical Reports Server (NTRS)
Lebzelter, T.; Heiter, U.; Abia, C.; Eriksson, K.; Ireland, M.; Neilson, H.; Nowotny, W.; Maldonado, J.; Merle, T.; Peterson, R.;
2012-01-01
Our ability to extract information from the spectra of stars depends on reliable models of stellar atmospheres and appropriate techniques for spectral synthesis. Various model codes and strategies for the analysis of stellar spectra are available today. Aims. We aim to compare the results of deriving stellar parameters using different atmosphere models and different analysis strategies. The focus is set on high-resolution spectroscopy of cool giant stars. Methods. Spectra representing four cool giant stars were made available to various groups and individuals working in the area of spectral synthesis, asking them to derive stellar parameters from the data provided. The results were discussed at a workshop in Vienna in 2010. Most of the major codes currently used in the astronomical community for analyses of stellar spectra were included in this experiment. Results. We present the results from the different groups, as well as an additional experiment comparing the synthetic spectra produced by various codes for a given set of stellar parameters. Similarities and differences of the results are discussed. Conclusions. Several valid approaches to analyze a given spectrum of a star result in quite a wide range of solutions. The main causes for the differences in parameters derived by different groups seem to lie in the physical input data and in the details of the analysis method. This clearly shows how far from a definitive abundance analysis we still are.
NASA Technical Reports Server (NTRS)
Sun, C. T.; Yoon, K. J.
1990-01-01
A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.
Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model
NASA Technical Reports Server (NTRS)
Sun, C. T.; Yoon, K. J.
1992-01-01
A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.
NASA Technical Reports Server (NTRS)
Hackert, Eric C.; Busalacchi, Antonio J.
1997-01-01
The goal of this paper is to compare TOPEX/Poseidon (T/P) sea level with sea level results from linear ocean model experiments forced by several different wind products for the tropical Pacific. During the period of this study (October 1992 - October 1995), available wind products include satellite winds from the ERS-1 scatterometer product of [HALP 97] and the passive microwave analysis of SSMI winds produced using the variational analysis method (VAM) of [ATLA 91]. In addition, atmospheric GCM winds from the NCEP reanalysis [KALN 96], the ECMWF analysis [ECMW 94], and the Goddard EOS-1 (GEOS-1) reanalysis experiment [SCHU 93] are available for comparison. The observed ship wind analysis of FSU [STRI 92] is also included in this study. The linear model of [CANE 84] is used as a transfer function to test the quality of each of these wind products for the tropical Pacific. The various wind products are judged by comparing the wind-forced model sea level results against the T/P sea level anomalies. Correlation and RMS difference maps show how well each wind product reproduces the T/P sea level signal, and these results are summarized in a table of area-average correlations and RMS differences. The large-scale low-frequency temporal signal is reproduced by all of the wind products. However, significant differences exist in both amplitude and phase on regional scales. In general, the model results forced by satellite winds do a better job of reproducing the T/P signal (i.e., have a higher average correlation and lower RMS difference) than the results forced by atmospheric model winds.
Developing a new solar radiation estimation model based on Buckingham theorem
NASA Astrophysics Data System (ADS)
Ekici, Can; Teke, Ismail
2018-06-01
While solar radiation can be expressed physically on cloudless days, this becomes difficult under cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used; these models estimate solar radiation from the other meteorological parameters measured at the stations. In this study, a solar radiation estimation model was derived using the Buckingham theorem, which is shown to be useful for predicting solar radiation. The theorem is used to express solar radiation through the derivation of dimensionless pi parameters. The derived model is compared with temperature-based models from the literature, namely the Allen, Hargreaves, Chen and Bristow-Campbell models, using the MPE, RMSE, MBE and NSE error analysis methods. The comparisons were made using meteorological data obtained from North Dakota's agricultural climate network. In these applications, the model obtained within the scope of the study gives better results than the other models; in particular, its short-term performance is satisfactory, as the RMSE analysis results show. In terms of long-term performance and percentage errors, the model also gives good results, and the Buckingham theorem was found useful for estimating solar radiation.
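For concreteness, the four error measures used in the comparison can be computed as below, under their usual textbook definitions (the paper's exact conventions may differ slightly).

```python
# Minimal sketch of the four error measures named above, computed between
# observed and estimated solar radiation series (toy values at the end).
import numpy as np

def error_metrics(obs, est):
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    mpe  = np.mean((est - obs) / obs) * 100.0    # mean percentage error
    rmse = np.sqrt(np.mean((est - obs) ** 2))    # root mean square error
    mbe  = np.mean(est - obs)                    # mean bias error
    nse  = 1.0 - np.sum((est - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    return mpe, rmse, mbe, nse

print(error_metrics([10.0, 12.0, 14.0], [11.0, 12.0, 13.0]))
```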
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornton, Peter E; Wang, Weile; Law, Beverly E.
2009-01-01
The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrating and analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
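The two model classes discussed can be illustrated with a short fit on simulated data; the sketch below uses the statsmodels package rather than the paper's FORTRAN algorithm, and the orders and coefficient are arbitrary choices for illustration.

```python
# Minimal sketch: simulate an AR(1) process and fit both autoregressive and
# moving-average representations to it (statsmodels stands in for the
# paper's own algorithm; orders and phi are illustrative).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
n, phi = 2000, 0.8
x = np.zeros(n)
for t in range(1, n):               # AR(1): x_t = phi * x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.normal()

ar_fit = ARIMA(x, order=(1, 0, 0)).fit()   # autoregressive model
ma_fit = ARIMA(x, order=(0, 0, 3)).fit()   # moving-average approximation
print(ar_fit.params[1])                    # estimate of phi, close to 0.8
print(ma_fit.aic > ar_fit.aic)             # AR should fit this process better
```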
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne
A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64-bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
RVA: A Plugin for ParaView 3.14
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-04
RVA is a plugin developed for the 64-bit Windows version of the ParaView 3.14 visualization package. RVA is designed to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoir visualization and analysis needs, including: sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.
NASA Technical Reports Server (NTRS)
Hairr, John W.; Dorris, William J.; Ingram, J. Edward; Shah, Bharat M.
1993-01-01
Interactive Stiffened Panel Analysis (ISPAN) modules, written in FORTRAN, were developed to provide an easy-to-use tool for creating finite element models of composite material stiffened panels. The modules allow the user to interactively construct, solve and post-process finite element models of four general types of structural panel configurations using only the panel dimensions and properties as input data. Linear, buckling and post-buckling solution capability is provided. This interactive input allows rapid model generation and solution by users without finite element expertise. The results of a parametric study of a blade-stiffened panel are presented to demonstrate the usefulness of the ISPAN modules. In addition, a nonlinear analysis of a test panel was conducted and the results compared to measured data and to a previous correlation analysis.
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and the forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
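A minimal version of the local (one-at-a-time) sensitivity analysis described above is sketched below, applied to a toy activator/repressor expression function; the function, parameter names and values are illustrative assumptions, not the thermodynamic models analyzed in the paper.

```python
# Minimal sketch of local sensitivity analysis: perturb each parameter of a
# toy expression model and report normalized sensitivities (central differences).
import numpy as np

def expression(params):
    K_act, K_rep, coop = params          # hypothetical binding/cooperativity constants
    return (K_act * coop) / (1.0 + K_act * coop + K_rep)

def local_sensitivity(f, p0, rel_step=1e-3):
    p0 = np.asarray(p0, float)
    base = f(p0)
    sens = np.empty_like(p0)
    for i in range(p0.size):             # one parameter at a time
        dp = rel_step * p0[i]
        hi, lo = p0.copy(), p0.copy()
        hi[i] += dp
        lo[i] -= dp
        sens[i] = (f(hi) - f(lo)) / (2 * dp) * p0[i] / base  # normalized sensitivity
    return sens

print(local_sensitivity(expression, [2.0, 0.5, 4.0]))
```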
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model-based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of model checking, the formal-methods analysis technique used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique for discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
NASA Astrophysics Data System (ADS)
Darma Tarigan, Suria
2016-01-01
Flooding is caused by excessive rainfall flowing downstream as cumulative surface runoff. A flooding event results from the complex interaction of natural system components such as rainfall events, land use, soil, topography and channel characteristics. Modeling flooding events as the result of the interaction of these components is a central theme in watershed management, and such models are typically used to test the performance of various management practices, both vegetative and structural, in flood mitigation. Existing hydrological models such as SWAT and HEC-HMS have limited ability to accommodate discrete management practices such as infiltration wells, small farm reservoirs and silt pits because of their lumped structure. The aim of this research is to use the raster spatial analysis functions of a Geo-Information System (RGIS-HM) to model a flooding event in the Ciliwung watershed and to simulate the impact of discrete management practices on surface runoff reduction. The model was validated using data from the Ciliwung watershed flooding event of 29 January 2004; hourly hydrograph and rainfall data were available for the validation period. The validation gave good results, with a Nash-Sutcliffe efficiency of 0.8. We also compared the RGIS-HM with the Netlogo Hydrological Model (NL-HM); the RGIS-HM has similar capability to the NL-HM in simulating discrete management practices at the watershed scale.
Thermal analysis of combinatorial solid geometry models using SINDA
NASA Technical Reports Server (NTRS)
Gerencser, Diane; Radke, George; Introne, Rob; Klosterman, John; Miklosovic, Dave
1993-01-01
Algorithms have been developed using Monte Carlo techniques to determine the thermal network parameters necessary to perform a finite difference analysis on Combinatorial Solid Geometry (CSG) models. Orbital and laser fluxes as well as internal heat generation are modeled to facilitate satellite modeling. The results of the thermal calculations are used to model the infrared (IR) images of targets and assess target vulnerability. Sample analyses and validation are presented which demonstrate code products.
User's Manual and Final Report for Hot-SMAC GUI Development
NASA Technical Reports Server (NTRS)
Yarrington, Phil
2001-01-01
A new software package called the Higher Order Theory-Structural/Micro Analysis Code (HOT-SMAC) has been developed as an effective alternative to the finite element approach for Functionally Graded Material (FGM) modeling. HOT-SMAC is a self-contained package including pre- and post-processing through an intuitive graphical user interface, along with the well-established Higher Order Theory for Functionally Graded Materials (HOTFGM) thermomechanical analysis engine. This document serves as a getting-started/user's manual for HOT-SMAC and a final report on its development. First, the features of the software are presented in a simple step-by-step example in which a HOT-SMAC model representing a functionally graded material is created, mechanical and thermal boundary conditions are applied, the model is analyzed, and the results are reviewed. In a second step-by-step example, a HOT-SMAC model of an actively cooled metallic channel with a ceramic thermal barrier coating is built and analyzed. HOT-SMAC results from this model are compared to recently published results (NASA/TM-2001-210702) for two grid densities. Finally, a prototype integration of HOT-SMAC with the commercially available HyperSizer(R) structural analysis and sizing software is presented. In this integration, local strain results from HyperSizer's structural analysis are fed to a detailed HOT-SMAC model of the flange-to-facesheet bond region of a stiffened panel. HOT-SMAC is then used to determine the peak shear and peel (normal) stresses between the facesheet and bonded flange of the panel and to determine the "free edge" effects.
Development of the mathematical model for design and verification of acoustic modal analysis methods
NASA Astrophysics Data System (ADS)
Siner, Alexander; Startseva, Maria
2016-10-01
To reduce turbofan noise it is necessary to develop methods for analyzing the sound field generated by the blade machinery, known as modal analysis. Because modal analysis methods are complex, and testing them against full-scale measurements is expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. This work presents a model that allows single modes to be set in the channel and the generated sound field to be analyzed. Modal analysis of the sound generated by a ring array of point sound sources is performed, and a comparison of experimental and numerical modal analysis results is presented.
A dc model for power switching transistors suitable for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Wilson, P. M.; George, R. T., Jr.; Owen, H. A.; Wilson, T. G.
1979-01-01
A model for bipolar junction power switching transistors whose parameters can be readily obtained by the circuit design engineer, and which can be conveniently incorporated into standard computer-based circuit analysis programs is presented. This formulation results from measurements which may be made with standard laboratory equipment. Measurement procedures, as well as a comparison between actual and computed results, are presented.
Steven F. Railsback; Bret C. Harvey; Jason L. White
2015-01-01
We address the question of spatial extent: how model results depend on the amount and type of space represented. For models of how stream habitat affects fish populations, how do the amount and characteristics of habitat represented in the model affect its results and how well do those results represent the whole stream? Our analysis used inSalmo, an individual-based...
Sexing California gulls using morphometrics and discriminant function analysis
Herring, Garth; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Takekawa, John Y.
2010-01-01
A discriminant function analysis (DFA) model was developed with DNA sex verification so that external morphology could be used to sex 203 adult California Gulls (Larus californicus) in San Francisco Bay (SFB). The best model was 97% accurate and included head-to-bill length, culmen depth at the gonys, and wing length. Using an iterative process, the model was simplified to a single measurement (head-to-bill length) that still assigned sex correctly 94% of the time. A previous California Gull sex determination model developed for a population in Wyoming was then assessed by fitting SFB California Gull measurement data to the Wyoming model; this new model failed to converge on the same measurements as those originally used by the Wyoming model. Results from the SFB discriminant function model were compared to the Wyoming model results (by using SFB data with the Wyoming model); the SFB model was 7% more accurate for SFB California gulls. The simplified DFA model (head-to-bill length only) provided highly accurate results (94%) and minimized the measurements and time required to accurately sex California Gulls.
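A discriminant function of the kind described can be reproduced with standard tools; the sketch below uses scikit-learn's linear discriminant analysis on synthetic morphometric measurements standing in for the three best-model variables, not the San Francisco Bay data.

```python
# Minimal sketch of a morphometric DFA: fit a linear discriminant on synthetic
# head-to-bill, culmen depth, and wing length measurements, then classify.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
males   = rng.normal([105.0, 17.0, 400.0], [2.5, 0.8, 8.0], (100, 3))  # mm, hypothetical
females = rng.normal([ 97.0, 15.0, 380.0], [2.5, 0.8, 8.0], (100, 3))
X = np.vstack([males, females])
y = np.array([1] * 100 + [0] * 100)        # 1 = male, 0 = female

dfa = LinearDiscriminantAnalysis().fit(X, y)
print(dfa.score(X, y))                     # resubstitution accuracy
print(dfa.predict([[101.0, 16.2, 392.0]])) # sex assignment for a new bird
```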
Visual modeling in an analysis of multidimensional data
NASA Astrophysics Data System (ADS)
Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.
2018-01-01
The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created to solve analysis problems are described. The active use of factors of subjective perception and of dynamic visualization is suggested as a promising direction for the development of visual analysis tools for multidimensional and voluminous data. Practical results of solving a multidimensional data analysis problem are shown using the example of a visual model of empirical data on the current state of research into processes for obtaining silicon carbide by the electric arc method. Solving this problem yields several results: first, an idea of the possibilities for determining a development strategy for the domain; second, an assessment of the reliability of the published data on this subject; and third, a picture of how the areas of attention of researchers change over time.
Dynamics analysis of the fast-slow hydro-turbine governing system with different time-scale coupling
NASA Astrophysics Data System (ADS)
Zhang, Hao; Chen, Diyi; Wu, Changzhi; Wang, Xiangyu
2018-01-01
Multi-time-scale modeling of the hydro-turbine governing system is crucial for precise modeling of a hydropower plant and provides support for stability analysis of the system. Considering the inertia and response time of the hydraulic servo system, the hydro-turbine governing system is transformed into a fast-slow hydro-turbine governing system. The effects of the time scale on the dynamical behavior of the system are analyzed, and the fast-slow dynamical behaviors of the system are investigated for different time scales. Furthermore, a theoretical analysis of the stable regions is presented, and the influence of the time scale on the stable region is analyzed by simulation. The simulation results confirm the correctness of the theoretical analysis. More importantly, the methods and results of this paper provide a perspective on multi-time-scale modeling of hydro-turbine governing systems and contribute to the optimization analysis and control of the system.
Decision curve analysis: a novel method for evaluating prediction models.
Vickers, Andrew J; Elkin, Elena B
2006-01-01
Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
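The net-benefit quantity behind the decision curve follows directly from this description: at threshold probability pt, net benefit = TP/n - (FP/n) * pt/(1 - pt). A minimal sketch with synthetic predictions and outcomes (not the prostate cancer data) is shown below.

```python
# Minimal sketch of the net-benefit computation underlying a decision curve.
import numpy as np

def net_benefit(p_pred, outcome, thresholds):
    n = len(outcome)
    nb = []
    for pt in thresholds:
        treat = p_pred >= pt                    # patients who would opt for treatment
        tp = np.sum(treat & (outcome == 1))     # true positives
        fp = np.sum(treat & (outcome == 0))     # false positives
        nb.append(tp / n - (fp / n) * pt / (1.0 - pt))
    return np.array(nb)

rng = np.random.default_rng(4)
outcome = rng.integers(0, 2, 500)                               # synthetic events
p_pred = np.clip(outcome * 0.3 + rng.uniform(0, 0.7, 500), 0, 1)  # synthetic model output
print(net_benefit(p_pred, outcome, np.arange(0.05, 0.95, 0.1)))
```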
Analysis of structural dynamic data from Skylab. Volume 1: Technical discussion
NASA Technical Reports Server (NTRS)
Demchak, L.; Harcrow, H.
1976-01-01
The results of a study to analyze data and document dynamic program highlights of the Skylab Program are presented. Included are structural model sources, illustration of the analytical models, utilization of models and the resultant derived data, data supplied to organization and subsequent utilization, and specifications of model cycles.
[Design of a conceptual model on the transference of public health research results in Honduras].
Macías-Chapula, César A
2012-01-01
To design a conceptual model of the transference of public health research results at the local context level. Using systems thinking concepts, a soft systems methodology (SSM) approach was used to analyze and resolve what was perceived as a problem situation related to the transference of research results within the Honduran public health system. A bibliometric analysis was also conducted to enrich the picture of the problem situation. Six root definitions were defined and modeled as relevant to the expressed problem situation, which led to the development of the conceptual model. The model identified four levels of resolution derived from the human activities involved in the transference of research results: 1) the researchers; 2) the information/documentation professionals; 3) health staff; and 4) the population/society. These actors/clients and their activities were essential to the functioning of the model, since they represent what the model is and does. SSM helped to design the conceptual model, and the bibliometric analysis was relevant to constructing the rich picture of the problem situation.
Development of a comprehensive urban commodity/freight movement model for Texas.
DOT National Transportation Integrated Search
2006-01-01
The Texas Department of Transportation (TxDOT) developed the Texas Statewide Analysis Model (SAM) to provide analysis and : forecasting capabilities of passenger and commodity/freight movements in Texas. The SAM provides data and results at a level :...
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that produces the usual fast movement to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Modeling and dynamic environment analysis technology for spacecraft
NASA Astrophysics Data System (ADS)
Fang, Ren; Zhaohong, Qin; Zhong, Zhang; Zhenhao, Liu; Kai, Yuan; Long, Wei
Spacecraft sustain complex and severe vibration and acoustic environments during flight. Predicting the resulting structural responses, including numerical prediction of fluctuating pressure, model updating, and random vibration and acoustic analysis, plays an important role during the design, manufacture and ground testing of spacecraft. In this paper, Monotonically Integrated Large Eddy Simulation (MILES) is introduced to predict the fluctuating pressure of the fairing. The exact flow structures of the fairing wall surface under different Mach numbers are obtained, and a spacecraft model is then constructed using the finite element method (FEM). According to the modal test data, the model is updated by the penalty method. On this basis, the random vibration and acoustic responses of the fairing and satellite are analyzed by different methods. The simulated results agree well with the experimental ones, which shows the validity of the modeling and dynamic environment analysis technology. This information can better support test planning, defining test conditions and designing optimal structures.
NASA Technical Reports Server (NTRS)
Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.
2004-01-01
This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as "STOP", analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described; it consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models, and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first-order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.
Mach 14 Flow Restrictor Thermal Stress Analysis
1984-08-01
transfer analysis, thermal stress analysis, results translation from ABAQUS to PATRAN-G, and the method used to determine the heat transfer film...G, model translation into ABAQUS format, transient heat transfer analysis and thermal stress analysis input decks, results translation from ABAQUS... Contents include: translation from PATRAN-G to ABAQUS; ABAQUS considerations; material properties of Columbium C-103; user subroutine FILM; transient...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, J.; Mazumder, J.
1996-12-31
Networking three fields of welding--thermal, microstructure, and stress--was attempted and produced a reliable model using a numerical method with the finite element analysis technique. Model prediction was compared with experimental data in order to validate the model. The effects of welding process parameters on these welding fields were analyzed and reported. The effort to correlate the residual stress and solidification was initiated, with some valuable results. The solidification process was simulated using the formulation based on the Hunt-Trivedi model. Based on the temperature history, solidification speed and primary dendrite arm spacing were predicted at given nodes of interest. Results show that the variation during solidification is usually within an order of magnitude. The temperature gradient was generally in the range of 10^4 to 10^5 K/m for the given welding conditions (welding power = 6 kW and welding speed = 3.3867 to 7.62 mm/sec), while solidification speed appeared to slow down from an order of 10^-1 to 10^-2 m/sec during solidification. SEM images revealed that the primary dendrite arm spacing (PDAS) fell in the range of 10^1 to 10^2 µm. For grain growth at the heat affected zone (HAZ), Ashby's model was employed. The prediction was in agreement with experimental results. For the residual stress calculation, the same mesh generation used in the heat transfer analysis was applied to make the simulation consistent. The analysis consisted of a transient heat analysis followed by a thermal stress analysis. An experimentally measured strain history was compared with the simulated result. The relationship between microstructure and the stress/strain field of welding was also obtained. 64 refs., 18 figs., 9 tabs.
Seo, Jeong-Woo; Kang, Dong-Won; Kim, Ju-Young; Yang, Seung-Tae; Kim, Dae-Hyeok; Choi, Jin-Seung; Tack, Gye-Rae
2014-01-01
In this study, the accuracy of the inputs required for finite element analysis, which is widely used for the biomechanical analysis of bones, was improved. To ensure muscle forces and joint contact forces similar to the actual values, a musculoskeletal model based on an actual gait experiment was used. Gait data were obtained from a healthy 29-year-old male adult with no history of musculoskeletal disease and a normal gait (height 171 cm, weight 72 kg), and were used as inputs for the musculoskeletal model simulation to determine the muscle forces and joint contact forces. Gait is the most common activity in daily life, and among its phases the stance phase is the most affected by load. Results were extracted at five events in the stance phase: heel contact (ST1), loading response (ST2), early mid-stance (ST3), late mid-stance (ST4), and terminal stance (ST5). These results were used as inputs for the finite element model, which was built from computed tomography (CT) images at 1.5 mm intervals, and the maximum von Mises stress and maximum von Mises strain of the right femur were examined. The maximum stress and strain were lowest at ST4. The maximum values for the femur occurred in the medial part, and then in the lateral part after mid-stance. In this study, the results of the musculoskeletal model simulation using inverse-dynamics analysis were utilized to improve the accuracy of the inputs, which affect the finite element analysis results, and the possibility of bone-specific analysis over time was examined.
NASA Astrophysics Data System (ADS)
Zheng, Guang; Nie, Hong; Luo, Min; Chen, Jinbao; Man, Jianfeng; Chen, Chuanzhi; Lee, Heow Pueh
2018-07-01
The purpose of this paper is to obtain the design parameter-landing response relation for quickly designing the configuration of the landing gear in a planet lander. To achieve this, parametric studies of the landing gear are carried out using the response surface method (RSM), based on a single-landing-gear landing model validated by experimental results. From the design of experiments (DOE) results of the landing model, the response surface (RS) functions of the three crucial landing responses are obtained, and a sensitivity analysis (SA) of the corresponding parameters is performed. Two multi-objective optimization designs of the landing gear are also carried out. The analysis results show that the RS model performs well for the landing response design process, with a minimum fitting accuracy of 98.99%. The most sensitive parameters for the three landing responses are the design size of the buffers, the strut friction and the diameter of the bending beam. Moreover, good agreement between the simulated-model and RS-model results is obtained in the two optimized designs, which shows that the RS model coupled with the finite element (FE) method is an efficient way to obtain the design configuration of the landing gear.
Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A
2016-01-01
Two accurate, sensitive, and selective stability-indicating methods are developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models that were subjected to a comparative study through the handling of UV spectral data in the 215-350 nm range. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to those of a reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components.
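As an illustration of the PLSR branch of the comparison, the sketch below builds a 16-mixture training set and an 8-mixture test set, mirroring the experimental design described, with synthetic spectra standing in for the measured UV data.

```python
# Minimal sketch of PLSR calibration: fit spectra -> concentrations on a
# training set and predict a held-out test set (all data synthetic).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
wavelengths = 136                            # e.g. 215-350 nm at ~1 nm steps
pure = rng.random((3, wavelengths))          # stand-ins for AGM, Deg I, Deg II spectra
C_train = rng.uniform(0.2, 1.0, (16, 3))     # 16 training mixtures (as in the design)
X_train = C_train @ pure + rng.normal(0, 0.01, (16, wavelengths))
C_test = rng.uniform(0.2, 1.0, (8, 3))       # 8 validation mixtures
X_test = C_test @ pure + rng.normal(0, 0.01, (8, wavelengths))

pls = PLSRegression(n_components=3).fit(X_train, C_train)
print(np.abs(pls.predict(X_test) - C_test).mean(axis=0))  # mean absolute error per analyte
```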
Crustal Structure Beneath Taiwan Using Frequency-band Inversion of Receiver Function Waveforms
NASA Astrophysics Data System (ADS)
Tomfohrde, D. A.; Nowack, R. L.
Receiver function analysis is used to determine local crustal structure beneath Taiwan. We have performed preliminary data processing and polarization analysis for the selection of stations and events and to increase overall data quality. Receiver function analysis is then applied to data from the Taiwan Seismic Network to obtain radial and transverse receiver functions. Due to the limited azimuthal coverage, only the radial receiver functions are analyzed in terms of horizontally layered crustal structure for each station. In order to improve convergence of the receiver function inversion, frequency-band inversion (FBI) is implemented, in which an iterative inversion procedure with sequentially higher low-pass corner frequencies is used to stabilize the waveform inversion. Frequency-band inversion is applied to receiver functions at six stations of the Taiwan Seismic Network. Initial 20-layer crustal models are inverted for, using prior tomographic results for the initial models. The resulting 20-layer models are then simplified to 4- to 5-layer models and input into an alternating depth and velocity frequency-band inversion. For the six stations investigated, the resulting simplified models provide an average Moho depth estimate of 38 km surrounding the Central Range of Taiwan. The individual station estimates also compare well with the recent tomographic model, the refraction results of Rau and Wu (1995), and the refraction results of Ma and Song (1997).
Discrete time modeling and stability analysis of TCP Vegas
NASA Astrophysics Data System (ADS)
You, Byungyong; Koo, Kyungmo; Lee, Jin S.
2007-12-01
This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Some papers have shown global stability of several network models, but those models are not dual problems where dynamics exist in both sources and links, as in TCP Vegas. Other papers have studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which gives a necessary and sufficient condition. Using a discrete-time state-space model and Jury's criterion, we find an asymptotic stability region of the TCP Vegas network model. This result is verified by ns-2 simulation, and comparison with other results shows that our method performs well.
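The stability condition that Jury's criterion certifies, namely that all roots of the characteristic polynomial lie strictly inside the unit circle, can be checked numerically for a given linearization; the polynomials below are illustrative, not the TCP Vegas characteristic polynomial.

```python
# Minimal sketch: a discrete-time system is asymptotically stable iff all roots
# of its characteristic polynomial lie strictly inside the unit circle (the
# property Jury's tabular test verifies without computing roots explicitly).
import numpy as np

def is_stable(coeffs):
    """coeffs: characteristic polynomial coefficients, highest order first."""
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

# z^2 - 1.2 z + 0.35 = 0 has roots 0.5 and 0.7, inside the unit circle:
print(is_stable([1.0, -1.2, 0.35]))   # True
# z^2 - 2.0 z + 1.02 = 0 has complex roots with magnitude > 1:
print(is_stable([1.0, -2.0, 1.02]))   # False
```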
NASA Technical Reports Server (NTRS)
Defelice, David M.; Aydelott, John C.
1987-01-01
The resupply of cryogenic propellants is an enabling technology for space-based orbit transfer vehicles. As part of NASA Lewis' ongoing efforts in microgravity fluid management, thermodynamic analysis and subscale modeling techniques were developed to support an on-orbit test bed for cryogenic fluid management technologies. Analytical results have shown that subscale experimental modeling of liquid resupply can be used to validate analytical models when the appropriate target temperature is selected to relate the model to its prototype system. Further analyses were used to develop a thermodynamic model of the tank chilldown process, which is required prior to the no-vent fill operation. These efforts were incorporated into two FORTRAN programs, which were used to generate preliminary analytical results.
Finite Element Analysis of the Microisolation Valve
NASA Technical Reports Server (NTRS)
Man, K.; Mueller, J.; Forgrave, J.
1998-01-01
Agenda: Design and Use of the Microisolation Valve; Geometry of the Microisolation Valve; FEA Model Objectives; Results of the 10- and 50-Micron Thick-Wall Models; Results of the Thermally-Induced Stresses.
NASA Astrophysics Data System (ADS)
Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd
2018-04-01
A simple structure with bolted joints consists of structural components, bolts and nuts. There are several methods for modelling structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. Explained in this paper is an investigation conducted to obtain an appropriate modelling method for bolted joints. This was carried out by evaluating four different finite element (FE) models of the assembled plates and bolts, namely the solid plates-bolts model, the plates-without-bolts model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints, and the results were compared with experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation compared the number of nodes, number of elements, elapsed computer processing unit (CPU) time, and total percentage error of each initial FE model against the EMA results. The evaluation showed that the simplified plates-bolts model most accurately predicts the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in the Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, greatly reducing the number of parameters from eighty-three to sixteen. The sixteen parameters are then further analyzed with a variance-based global sensitivity analysis, Sobol's method, to achieve robust sensitivity rankings and parameter contributions; parallel computing is applied to reduce the computational burden of the variance-based analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. The results show that high values of the efficiency criteria did not guarantee excellent performance on the hydrological signatures. For most samples from the Sobol analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still occurs in a number of samples. Analysis of peak flows shows that small and medium floods are simulated very well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
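The variance-based step can be reproduced in outline with the SALib package; the sketch below uses a stand-in function where the study ran DHSVM, and the parameter names and bounds are illustrative assumptions.

```python
# Minimal sketch of Sobol' sensitivity analysis with SALib: sample parameter
# sets, evaluate a model for each, and compute first- and total-order indices.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-4, 1e-2], [0.3, 0.6], [0.1, 0.4]],
}

X = saltelli.sample(problem, 1024)                  # N*(2D+2) parameter sets
Y = X[:, 0] * 1e3 + X[:, 1] ** 2 + 0.1 * X[:, 2]    # stand-in for one DHSVM run per row
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])                           # first-order and total-order indices
```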
NASA Technical Reports Server (NTRS)
Woods-Vedeler, Jessica A.; Rombado, Gabriel
1997-01-01
The purpose of this paper is to provide final results of a pointing stability analysis for external payload attachment sites (PAS) on the International Space Station (ISS). As a specific example, the pointing stability requirement of the SAGE III atmospheric science instrument was examined. The instrument requires 10 arcsec stability over 2 second periods. SAGE III will be mounted on the ISS starboard side at the lower, outboard PAS. In this engineering analysis, an open-loop DAC-3 finite element model of ISS was used by the Microgravity Group at Johnson Space Center to generate transient responses at the PAS to a limited number of disturbances. The model included dynamics up to 50 Hz. Disturbance models considered included operation of the solar array rotary joints, thermal radiator rotary joints, and control moment gyros. Responses were filtered to model the anticipated vibration attenuation effects of active control systems on the solar and thermal radiator rotary joints. A pointing stability analysis was conducted by double integrating the acceleration transients over a 2 second period. Results of the analysis are tabulated for ISS X-, Y-, and Z-axis rotations. These results indicate that the largest excursions in rotation during pointing occurred due to rapid slewing of the thermal radiator. Even without attenuation at the rotary joints, the resulting pointing error was limited to less than 1.6 arcsec; with vibration control at the joints, it was limited to a maximum of 0.5 arcsec over a 2 second period. Based on this current level of model definition, it was concluded that between 0 and 50 Hz, the pointing stability requirement for SAGE III will not be exceeded by the disturbances evaluated in this study.
HESS Opinions: Repeatable research: what hydrologists can learn from the Duke cancer research scandal
Fienen, Michael; Bakker, Mark
2016-01-01
In the past decade, difficulties encountered in reproducing the results of a cancer study at Duke University resulted in a scandal and an investigation which concluded that tools used for data management, analysis, and modeling were inappropriate for the documentation of the study, let alone the reproduction of the results. New protocols were developed which require that data analysis and modeling be carried out with scripts that can be used to reproduce the results and are a record of all decisions and interpretations made during an analysis or a modeling effort. In the hydrological sciences, we face similar challenges and need to develop similar standards for transparency and repeatability of results. A promising route is to start making use of open-source languages (such as R and Python) to write scripts and to use collaborative coding environments (such as Git) to share our codes for inspection and use by the hydrological community. An important side-benefit to adopting such protocols is consistency and efficiency among collaborators.
Lee, Jaeyoung; Yasmin, Shamsunnahar; Eluru, Naveen; Abdel-Aty, Mohamed; Cai, Qing
2018-02-01
In the traffic safety literature, crash frequency variables are analyzed using univariate or multivariate count models. In this study, we propose an alternative approach to modeling multiple crash frequency dependent variables. Instead of modeling the frequency of crashes, we propose to analyze the proportion of crashes by vehicle type. A flexible mixed multinomial logit fractional split model is employed for analyzing the proportions of crashes by vehicle type at the macro-level. In this model, the proportion allocated to an alternative is probabilistically determined based on the alternative's propensity as well as the propensities of all other alternatives; thus, exogenous variables directly affect all alternatives. The approach is well suited to accommodating a large number of alternatives without a sizable increase in computational burden. The model was estimated using crash data at the Traffic Analysis Zone (TAZ) level from Florida. The modeling results clearly illustrate the applicability of the proposed framework for crash proportion analysis. Further, the Excess Predicted Proportion (EPP), a screening performance measure analogous to the Highway Safety Manual (HSM) Excess Predicted Average Crash Frequency, is proposed for hot zone identification. Using EPP, a statewide screening exercise was undertaken for the various vehicle types considered in our analysis. The screening results revealed that the spatial pattern of hot zones differs substantially across the various vehicle types considered.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data gives a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions; however, even though a deterministic optimization yields cost-effective structures, the designs become highly unreliable if the uncertainty associated with the system (material properties, loading, etc.) is not represented in the solution process. Reliable and optimal solutions can be obtained by performing reliability analysis along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis methods, such as first- and second-order reliability methods, followed by simulation techniques used to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation including sensitivity analysis, which removes highly reliable constraints from the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in 2D truss and planar beam finite element problems are presented and discussed.
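As a minimal illustration of the Monte Carlo step, the sketch below samples a random material strength and a random load effect and estimates a probability of failure; the distributions and their parameters are illustrative assumptions, not the Kevlar 49 data used in the study.

```python
# Monte Carlo estimate of probability of failure: sample random strength
# and stress, count cases where stress exceeds strength. Distribution
# parameters are illustrative, not the Kevlar 49 data from the study.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000
strength = rng.normal(3.0e9, 2.5e8, size=n)             # Pa, assumed
stress = rng.lognormal(np.log(2.0e9), 0.15, size=n)     # Pa, assumed

pf = np.mean(stress > strength)          # Monte Carlo probability of failure
beta = -norm.ppf(pf)                     # corresponding reliability index
print(f"pf = {pf:.2e}, beta = {beta:.2f}")
```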
NASA Astrophysics Data System (ADS)
Alonso-Contes, C.; Gerber, S.; Bliznyuk, N.; Duerr, I.
2017-12-01
Wetlands contribute approximately 20-40% of global methane emissions. We built a methane model for tropical and subtropical forests that allows inundated conditions, following the approaches used in more complex global biogeochemical emission models (LPJ-WHyMe and CLM4Me). The model was designed to replace model formulations with field-collected and remotely sensed data for two essential drivers: plant productivity and hydrology. This allows us to focus directly on the central processes of methane production, consumption and transport. One of our long-term goals is to make the model available to scientists interested in including methane modeling in their location of study. Sensitivity analysis results help in focusing field data collection efforts. Here, we present results from a pilot global sensitivity analysis of the model, conducted to determine which parameters and processes contribute most to the model's uncertainty in methane emissions. Results show that parameters related to water table behavior, carbon input (in the form of plant productivity) and rooting depth affect simulated methane emissions the most. Current efforts include repeating the sensitivity analysis on methane emission outputs from an updated model that incorporates a soil heat flux routine, to determine the extent to which soil temperature parameters affect CH4 emissions. We are currently conducting field data collection during summer 2017 for comparison among 3 different landscapes located in the Ordway-Swisher Biological Station in Melrose, FL, gathering soil moisture and CH4 emission data from 4 different wetland types. Having data from 4 wetland types allows calibration of the model to diverse soil, water and vegetation characteristics.
Is job a viable unit of analysis? A multilevel analysis of demand-control-support models.
Morrison, David; Payne, Roy L; Wall, Toby D
2003-07-01
The literature has ignored the fact that the demand-control (DC) and demand-control-support (DCS) models of stress are about jobs and not individuals' perceptions of their jobs. Using multilevel modeling, the authors report results of individual- and job-level analyses from a study of over 6,700 people in 81 different jobs. Support for additive versions of the models came when individuals were the unit of analysis. DC and DCS models are only helpful for understanding the effects of individual perceptions of jobs and their relationship to psychological states. When job perceptions are aggregated and their relationship to the collective experience of jobholders is assessed, the models prove of little value. Role set may be a better unit of analysis.
NASA Astrophysics Data System (ADS)
Weatherill, Graeme; Burton, Paul W.
2010-09-01
The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined into the epistemic uncertainty analysis alongside existing source models for the region and models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility in the otherwise subjective approach of delineating seismic sources using expert judgment. Similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models and a Next Generation Attenuation model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced, and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European and Next Generation Attenuation (NGA) models produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s^-2 using European/NGA attenuation models, and 400-500 cm s^-2 using Greek attenuation models.
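A Monte Carlo PSHA of the kind described can be sketched as follows: simulate many years of seismicity from a Gutenberg-Richter recurrence model, attenuate each event to the site with a ground-motion model including lognormal aleatory variability, and read off the PGA exceeded with the target probability. All numerical values below (rates, b-value, distances, attenuation coefficients) are illustrative placeholders, not the Aegean-specific models.

```python
# Monte Carlo PSHA sketch: simulate synthetic catalogs, attenuate to a
# site, and estimate the PGA with 10% exceedance probability in 50 years
# (the 475-year return period). All coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(1)
years = 100_000                      # total simulated catalog length
rate = 5.0                           # events/year with M >= 4.5 (assumed)
b, m_min, m_max = 1.0, 4.5, 7.5

n = rng.poisson(rate * years)
u = rng.random(n)                    # truncated Gutenberg-Richter magnitudes
beta = b * np.log(10.0)
m = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
r = rng.uniform(10.0, 150.0, size=n)         # source distance, km (assumed)

# Toy ground-motion model: ln PGA(g) = c0 + c1*M - c2*ln(R) + sigma*eps
eps = rng.standard_normal(n)
ln_pga = -4.0 + 1.0 * m - 1.2 * np.log(r) + 0.6 * eps

annual_max = np.full(years, -np.inf)         # annual maxima of ln PGA
yr = rng.integers(0, years, size=n)          # assign events to years
np.maximum.at(annual_max, yr, ln_pga)
pga_475 = np.quantile(np.exp(annual_max), 1 - 1.0 / 475.0)
print(f"475-year PGA ~ {pga_475 * 981:.0f} cm s^-2")
```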
NASTRAN analysis for the Airmass Sunburst model 'C' Ultralight Aircraft
NASA Technical Reports Server (NTRS)
Verbestel, John; Smith, Howard W.
1993-01-01
The purpose of this project was to create a three-dimensional NASTRAN model of the Airmass Sunburst ultralight comparable to one made previously for finite element analysis. A two-dimensional sample problem is first calculated by hand and by NASTRAN to verify that NASTRAN produces similar results. A three-dimensional model, similar to the one analyzed by the finite element program, is then run in NASTRAN, and the NASTRAN results are compared with the finite element program results. This study deals mainly with the aerodynamic loads on the wing and surrounding support structure at an angle of attack of 10 degrees.
Complex dynamics of an SEIR epidemic model with saturated incidence rate and treatment
NASA Astrophysics Data System (ADS)
Khan, Muhammad Altaf; Khan, Yasir; Islam, Saeed
2018-03-01
In this paper, we describe the dynamics of an SEIR epidemic model with saturated incidence, a treatment function, and optimal control. Rigorous mathematical results have been established for the model. The stability of the model is investigated, and the model is found to be locally asymptotically stable at the disease-free equilibrium when R0 < 1. The model is locally as well as globally asymptotically stable at the endemic equilibrium when R0 > 1. The proposed model may possess a backward bifurcation. The optimal control problem is formulated and the necessary optimality conditions are obtained. Numerical results are presented to support the theoretical findings.
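A minimal numerical sketch of an SEIR model with a saturated incidence term of the common form beta*S*I/(1 + alpha*I) is given below; the parameter values and the exact functional forms of the incidence and treatment terms are assumptions for illustration, not those derived in the paper.

```python
# SEIR model with saturated incidence beta*S*I/(1+alpha*I) and a simple
# saturated treatment term; all parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

beta, alpha, sigma, gamma, mu, Lam = 0.6, 0.05, 0.2, 0.1, 0.01, 0.01
r, k = 0.3, 0.2                      # treatment rate and saturation (assumed)

def seir(t, y):
    S, E, I, R = y
    inc = beta * S * I / (1.0 + alpha * I)      # saturated incidence
    treat = r * I / (1.0 + k * I)               # saturated treatment
    dS = Lam - inc - mu * S
    dE = inc - (sigma + mu) * E
    dI = sigma * E - (gamma + mu) * I - treat
    dR = gamma * I + treat - mu * R
    return [dS, dE, dI, dR]

sol = solve_ivp(seir, (0.0, 400.0), [0.9, 0.05, 0.05, 0.0])
print("final state (S, E, I, R):", sol.y[:, -1])
```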
A data model and database for high-resolution pathology analytical image informatics.
Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel
2011-01-01
The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. The paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). The main contributions are: (1) development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information; and (2) development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications, such as shape and texture, and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei.
Modeling and managing pathology image analysis results in a database provide immediate benefits to the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support with other approaches such as programming languages. Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.
Development of an integrated aeroservoelastic analysis program and correlation with test data
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Brenner, M. J.; Voelker, L. S.
1991-01-01
Details and results are presented for the general-purpose finite element STructural Analysis RoutineS (STARS) program, which performs complete linear aeroelastic and aeroservoelastic analyses. The earlier version of the STARS computer program enabled effective finite element modeling as well as static, vibration, buckling, and dynamic response analysis of damped and undamped systems, including those with pre-stressed and spinning structures. Additions to the STARS program include aeroelastic modeling for flutter and divergence solutions, and hybrid control system augmentation for aeroservoelastic analysis. Numerical results for the X-29A aircraft pertaining to vibration, flutter-divergence, and open- and closed-loop aeroservoelastic controls analysis are compared with ground vibration, wind-tunnel, and flight-test results. The open- and closed-loop aeroservoelastic control analyses are based on a hybrid formulation representing the interaction of structural, aerodynamic, and flight-control dynamics.
Modeling vertebrate diversity in Oregon using satellite imagery
NASA Astrophysics Data System (ADS)
Cablk, Mary Elizabeth
Vertebrate diversity was modeled for the state of Oregon using a parametric approach to regression tree analysis. This exploratory data analysis effectively modeled the non-linear relationships between vertebrate richness and phenology, terrain, and climate. Phenology was derived from time-series NOAA-AVHRR satellite imagery for the year 1992 using two methods: principal component analysis and derivation of EROS Data Center greenness metrics. These two measures of spatial and temporal vegetation condition incorporated the critical temporal element in this analysis. The first three principal components were shown to contain spatial and temporal information about the landscape and discriminated phenologically distinct regions in Oregon. Principal components 2 and 3, six greenness metrics, elevation, slope, aspect, annual precipitation, and annual seasonal temperature difference were investigated as correlates of richness for amphibians, birds, all vertebrates, reptiles, and mammals. The variation explained by each regression tree was: amphibians (91%), birds (67%), all vertebrates (66%), reptiles (57%), and mammals (55%). Spatial statistics were used to quantify the pattern of each taxon and to assess the validity of the predictions from the regression tree models. Regression tree analysis was relatively robust against spatial autocorrelation in the response data, and graphical results indicated the models fit the data well.
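Regression tree analysis of this kind can be reproduced with standard tooling; the sketch below fits a tree to synthetic richness data, with variable names echoing the study's correlates but all values randomly generated.

```python
# Regression tree sketch: fit species richness against terrain/climate
# correlates. Data are synthetic; variable names only echo the study.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.uniform(0, 3000, n),        # elevation, m
    rng.uniform(0, 45, n),          # slope, degrees
    rng.uniform(200, 3000, n),      # annual precipitation, mm
])
richness = 120 - 0.02 * X[:, 0] + 0.01 * X[:, 2] + rng.normal(0, 5, n)

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=50).fit(X, richness)
print("R^2 on training data:", round(tree.score(X, richness), 2))
```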
Modeling and Analysis of Actinide Diffusion Behavior in Irradiated Metal Fuel
NASA Astrophysics Data System (ADS)
Edelmann, Paul G.
There have been numerous attempts to model fast reactor fuel behavior in the last 40 years. The US currently does not have a fully reliable tool to simulate the behavior of metal fuels in fast reactors. The experimental database necessary to validate the codes is also very limited. The DOE-sponsored Advanced Fuels Campaign (AFC) has performed various experiments that are ready for analysis. Current metal fuel performance codes are either not available to the AFC or have limitations and deficiencies in predicting AFC fuel performance. A modified version of a new fuel performance code, FEAST-Metal, was employed in this investigation with useful results. This work explores the modeling and analysis of AFC metallic fuels using FEAST-Metal, particularly in the area of constituent actinide diffusion behavior. The FEAST-Metal code calculations for this work were conducted at Los Alamos National Laboratory (LANL) in support of on-going activities related to sensitivity analysis of fuel performance codes. A sensitivity analysis of FEAST-Metal was completed to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. A modification was made to the FEAST-Metal constituent redistribution model to enable accommodation of newer AFC metal fuel compositions with verified results. Applicability of this modified model for sodium fast reactor metal fuel design is demonstrated.
NASA Astrophysics Data System (ADS)
Suhartono; Lee, Muhammad Hisyam; Rezeki, Sri
2017-05-01
Intervention analysis is a time series modeling technique widely used to describe the effect of an intervention caused by external or internal factors. An example of an external factor that often occurs in Indonesia is a disaster, whether natural or man-made. The main purpose of this paper is to provide the results of theoretical studies on the identification step for determining the order of a multi-input intervention analysis, used to evaluate the magnitude and duration of the impact of interventions on time series data. The theoretical results showed that the standardized residuals can properly be used as the response function for determining the order of a multi-input intervention model. These results are then applied to evaluate the impact of a disaster in a real case in Indonesia, i.e., the magnitude and duration of the impact of the Lapindo mudflow on traffic volumes on the highway. Moreover, the empirical results showed that the multi-input intervention model can accurately describe and explain the magnitude and duration of the impact of disasters on time series data.
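In practice, an intervention model of this kind can be fitted as an ARIMA model with intervention regressors; the sketch below uses a single step input at a known event time on synthetic data (the series, event time, and model orders are illustrative assumptions, not the Lapindo traffic data).

```python
# Intervention analysis sketch: ARIMA errors plus a step regressor at a
# known event time. Series, event time, and model orders are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, t0 = 200, 120                            # series length, intervention time
noise = sm.tsa.arma_generate_sample(ar=[1, -0.6], ma=[1], nsample=n, scale=1.0)
step = (np.arange(n) >= t0).astype(float)   # permanent level-shift input
y = 50.0 + noise - 8.0 * step               # drop of 8 units after the event

model = sm.tsa.SARIMAX(y, exog=step[:, None], order=(1, 0, 0))
res = model.fit(disp=False)
print(res.summary().tables[1])              # exog coefficient should be ~ -8
```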
Numerical analysis and experimental research of the rubber boot of the joint drive vehicle
NASA Astrophysics Data System (ADS)
Ziobro, Jan
2016-04-01
The article presents numerical studies and experimental research of the rubber boot of a vehicle drive joint. Performance requirements are discussed and the coefficients of the mathematical model required for numerical simulation are determined. The behavior of the boot was examined in the MSC.MARC environment. The analysis used a hyperelastic two-parameter Mooney-Rivlin material model, a large-displacement procedure, a safe contact condition, and friction on the sides of the boot. The 3D numerical model of the joint boot was analyzed under the influence of tensile, compressive, centrifugal and angular forces. Numerous study results are presented. An appropriate test stand was built, and the results of the numerical analysis were compared with the results of the experimental studies. Numerous practical conclusions and recommendations are presented.
Cognitive task analysis: harmonizing tasks to human capacities.
Neerincx, M A; Griffioen, E
1996-04-01
This paper presents the development of a cognitive task analysis that assesses the task load of jobs and provides indicators for the redesign of jobs. General principles of human task performance were selected and subsequently integrated into current task modelling techniques. The resulting cognitive task analysis centres around four aspects of task load: the number of actions in a period, the ratio between knowledge- and rule-based actions, lengthy uninterrupted actions, and momentary overloading. The method consists of three stages: (1) construction of a hierarchical task model, (2) a time-line analysis and task load assessment, and (3) if necessary, adjustment of the task model. An application of the cognitive task analysis in railway traffic control showed its benefits over the 'old' task load analysis of the Netherlands Railways. It provided a provisional standard for traffic control jobs, conveyed two load risks -- momentary overloading and underloading -- and resulted in proposals to satisfy the standard and to diminish the two load risks.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2006-01-01
A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis - Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.
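The fiber-level strength randomness described above is commonly represented with a Weibull distribution; the sketch below samples per-fiber strengths and evaluates the classical equal-load-sharing fiber bundle, a much simpler cousin of the fiber-level damage model inside MAC/GMC, with all parameter values assumed for illustration.

```python
# Illustrative stochastic fiber-strength model: Weibull-distributed fiber
# strengths in an equal-load-sharing bundle. A toy stand-in, not the
# MAC/GMC damage model itself; Weibull parameters are assumed.
import numpy as np

rng = np.random.default_rng(11)
n = 10_000
m, s0 = 10.0, 3.5e9                 # Weibull modulus and scale, Pa (assumed)
strengths = np.sort(s0 * rng.weibull(m, size=n))

# Equal load sharing: if the applied per-fiber stress reaches the k-th
# weakest strength, the k-1 weaker fibers have failed and n-k+1 survive,
# so the bundle carries at most strengths[k] * (n - k) / n per fiber.
carry = strengths * (n - np.arange(n)) / n
print(f"bundle strength ~ {carry.max() / 1e9:.2f} GPa "
      f"(weakest fiber {strengths[0] / 1e9:.2f} GPa)")
```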
New Results in Software Model Checking and Analysis
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.
2010-01-01
This introductory article surveys new techniques, supported by automated tools, for the analysis of software to ensure reliability and safety. Special focus is on model checking techniques. The article also introduces the five papers that are enclosed in this special journal volume.
DOT National Transportation Integrated Search
2004-02-17
This document presents the results of the analysis of baseline, or "pre-enhancement," data describing the operation of the existing 511 telephone traveler information system operated by the Arizona Department of Transportation (ADOT). The model deplo...
Floquet stability analysis of the longitudinal dynamics of two hovering model insects
Wu, Jiang Hao; Sun, Mao
2012-01-01
Because of the periodically varying aerodynamic and inertial forces of the flapping wings, a hovering or constant-speed flying insect is a cyclically forcing system, and, generally, the flight is not in a fixed-point equilibrium, but in a cyclic-motion equilibrium. Current stability theory of insect flight is based on the averaged model and treats the flight as a fixed-point equilibrium. In the present study, we treated the flight as a cyclic-motion equilibrium and used the Floquet theory to analyse the longitudinal stability of insect flight. Two hovering model insects were considered—a dronefly and a hawkmoth. The former had relatively high wingbeat frequency and small wing-mass to body-mass ratio, and hence very small amplitude of body oscillation; while the latter had relatively low wingbeat frequency and large wing-mass to body-mass ratio, and hence relatively large amplitude of body oscillation. For comparison, analysis using the averaged-model theory (fixed-point stability analysis) was also made. Results of both the cyclic-motion stability analysis and the fixed-point stability analysis were tested by numerical simulation using complete equations of motion coupled with the Navier–Stokes equations. The Floquet theory (cyclic-motion stability analysis) agreed well with the simulation for both the model dronefly and the model hawkmoth; but the averaged-model theory gave good results only for the dronefly. Thus, for an insect with relatively large body oscillation at wingbeat frequency, cyclic-motion stability analysis is required, and for their control analysis, the existing well-developed control theories for systems of fixed-point equilibrium are no longer applicable and new methods that take the cyclic variation of the flight dynamics into account are needed. PMID:22491980
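For a linearized system with T-periodic coefficients, the Floquet procedure described above reduces to integrating the state-transition matrix over one period and examining the eigenvalues (Floquet multipliers) of the resulting monodromy matrix; the sketch below applies this to a damped Mathieu oscillator as a stand-in for the periodic flight-dynamics equations, with all coefficients assumed.

```python
# Floquet stability sketch: build the monodromy matrix of a T-periodic
# linear system by integrating the fundamental matrix over one period;
# the cycle is stable if all Floquet multipliers satisfy |mu| < 1.
# A damped Mathieu oscillator stands in for the flight dynamics here.
import numpy as np
from scipy.integrate import solve_ivp

delta, eps, zeta, T = 1.0, 0.4, 0.05, 2.0 * np.pi   # assumed coefficients

def A(t):
    return np.array([[0.0, 1.0],
                     [-(delta + eps * np.cos(t)), -2.0 * zeta]])

def rhs(t, phi_flat):                 # evolve the 2x2 fundamental matrix
    phi = phi_flat.reshape(2, 2)
    return (A(t) @ phi).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(monodromy)
print("Floquet multipliers:", multipliers)
print("stable:", bool(np.all(np.abs(multipliers) < 1.0)))
```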
NASA Astrophysics Data System (ADS)
Everaers, Ralf
2012-08-01
We show that the front factor appearing in the shear modulus of a phantom network, G_ph = (1 - 2/f) ρ k_B T / N_s, also controls the ratio of the strand length, N_s, to the number of monomers per Kuhn length of the primitive paths, N_ph^(PP,Kuhn), characterizing the average network conformation. In particular, N_ph^(PP,Kuhn) = N_s / (1 - 2/f) and G_ph = ρ k_B T / N_ph^(PP,Kuhn). Neglecting the difference between cross-links and slip-links, these results can be transferred to entangled systems and the interpretation of primitive path analysis data. In agreement with the tube model, the analogy to phantom networks suggests that the rheological entanglement length, N_e^(rheo) = ρ k_B T / G_e, should equal N_e^(PP,Kuhn). Assuming binary entanglements with f = 4 functional junctions, we expect that N_e^(rheo) should be twice as large as the topological entanglement length, N_e^(topo). These results are in good agreement with reported primitive path analysis results for model systems and a wide range of polymeric materials. Implications for tube and slip-link models are discussed.
Testing and Analysis of Sensor Ports
NASA Technical Reports Server (NTRS)
Zhang, M.; Frendi, A.; Thompson, W.; Casiano, M. J.
2016-01-01
This Technical Publication summarizes the work focused on the testing and analysis of sensor ports. The tasks under this contract were divided into three areas: (1) Development of an Analytical Model, (2) Conducting a Set of Experiments, and (3) Obtaining Computational Solutions. Results from the experiment using both short and long sensor ports were obtained using harmonic, random, and frequency sweep plane acoustic waves. An amplification factor of the pressure signal between the port inlet and the back of the port is obtained and compared to models. Comparisons of model and experimental results showed very good agreement.
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework.
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that is best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In this paper, Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool widely used during the design of new engines. In this paper the influence of model parameter variability on the results obtained from multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to improve the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
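The Wishart construction used here draws random system matrices whose mean equals the nominal matrix, with dispersion controlled by the degrees of freedom; a minimal sketch, assuming a small toy stiffness matrix rather than a powertrain model, is given below.

```python
# Wishart random-matrix sketch: draw random stiffness matrices whose mean
# is the nominal K; larger df means less dispersion. K is a toy example.
import numpy as np
from scipy.stats import wishart

K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]]) * 1e6      # nominal stiffness, N/m (toy)

df = 50                                     # degrees of freedom (dispersion)
rv = wishart(df=df, scale=K / df)           # E[sample] = df * scale = K

samples = rv.rvs(size=1000)                 # 1000 random stiffness matrices
print("relative mean error:", np.abs(samples.mean(axis=0) - K).max() / K.max())
eig_first = np.array([np.linalg.eigvalsh(S)[0] for S in samples])
print("first-eigenvalue scatter:", eig_first.std() / eig_first.mean())
```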
Analysis of out-of-plane thermal microactuators
NASA Astrophysics Data System (ADS)
Atre, Amarendra
2006-02-01
Out-of-plane thermal microactuators find applications in optical switches to actuate micromirrors. Accurate analysis of such actuators is beneficial for improving existing designs and constructing more energy-efficient actuators. However, the analysis is complicated by the nonlinear deformation of the thermal actuators along with the temperature-dependent properties of polysilicon. This paper describes the development, modeling issues and results of a three-dimensional multiphysics nonlinear finite element model of surface-micromachined out-of-plane thermal actuators. The model includes conductive and convective cooling effects and takes into account the effect of the variable air gap on the response of the actuator. The model is implemented to investigate the characteristics of two diverse MUMPs-fabricated out-of-plane thermal actuators. Reasonable agreement is observed between simulated and measured results for the model that considers the influence of the air gap on actuator response. The usefulness of the model is demonstrated by implementing it to observe the effect of actuator geometry variation on steady-state deflection response.
Semiparametric mixed-effects analysis of PK/PD models using differential equations.
Wang, Yi; Eskridge, Kent M; Zhang, Shunpu
2008-08-01
Motivated by the use of semiparametric nonlinear mixed-effects modeling on longitudinal data, we develop a new semiparametric modeling approach to address potential structural model misspecification for population pharmacokinetic/pharmacodynamic (PK/PD) analysis. Specifically, we use a set of ordinary differential equations (ODEs) with form dx/dt = A(t)x + B(t) where B(t) is a nonparametric function that is estimated using penalized splines. The inclusion of a nonparametric function in the ODEs makes identification of structural model misspecification feasible by quantifying the model uncertainty and provides flexibility for accommodating possible structural model deficiencies. The resulting model will be implemented in a nonlinear mixed-effects modeling setup for population analysis. We illustrate the method with an application to cefamandole data and evaluate its performance through simulations.
Tensorial extensions of independent component analysis for multisubject FMRI analysis.
Beckmann, C F; Smith, S M
2005-03-01
We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.
An Integrated Solution for Performing Thermo-fluid Conjugate Analysis
NASA Technical Reports Server (NTRS)
Kornberg, Oren
2009-01-01
A method has been developed which integrates a fluid flow analyzer and a thermal analyzer to produce both steady-state and transient results for 1-D, 2-D, and 3-D analysis models. The Generalized Fluid System Simulation Program (GFSSP) is a one-dimensional, general-purpose fluid analysis code which computes pressures and flow distributions in complex fluid networks. The MSC Systems Improved Numerical Differencing Analyzer (MSC.SINDA) is a one-dimensional general-purpose thermal analyzer that solves network representations of thermal systems. Both GFSSP and MSC.SINDA have graphical user interfaces which are used to build the respective models and prepare them for analysis. The SINDA/GFSSP Conjugate Integrator (SGCI) is a form-based graphical integration program used to set input parameters for the conjugate analyses and run the models. This paper describes SGCI and its thermo-fluid conjugate analysis techniques and capabilities by presenting results from example models, including the cryogenic chill-down of a copper pipe, a bar between two walls in a fluid stream, and a solid plate creating a phase change in a flowing fluid.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
NASA Astrophysics Data System (ADS)
Davis, D. D., Jr.; Krishnamurthy, T.; Stroud, W. J.; McCleary, S. L.
1991-05-01
State-of-the-art nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis.
Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi
2018-04-15
Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses, as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies, with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. The method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke, and the results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained by the separate and joint analyses. However, the simulation study indicated a benefit of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of such association exists, results from joint models, rather than standalone analyses, should be pooled in 2-stage meta-analyses.
Cheng, Karen Elizabeth; Crary, David J; Ray, Jaideep; Safta, Cosmin
2013-01-01
Objective: We discuss the use of structural models for the analysis of biosurveillance related data. Methods and results: Using a combination of real and simulated data, we have constructed a data set that represents a plausible time series resulting from surveillance of a large-scale bioterrorist anthrax attack in Miami. We discuss the performance of anomaly detection with structural models for these data using receiver operating characteristic (ROC) and activity monitoring operating characteristic (AMOC) analysis. In addition, we show that these techniques provide a method for predicting the level of the outbreak, valid for approximately 2 weeks post-alarm. Conclusions: Structural models provide an effective tool for the analysis of biosurveillance data, in particular for time series with noisy, non-stationary background and missing data. PMID:23037798
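Anomaly detection with a structural model can be sketched as follows: fit a local-level model to the background series and flag observations whose one-step-ahead forecast error is extreme. The data, model form, and threshold below are illustrative assumptions, not the Miami scenario data.

```python
# Structural-model anomaly detection sketch: fit a local level model and
# flag observations with extreme one-step-ahead forecast errors.
# Data, model form, and the 3-sigma threshold are illustrative choices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
background = 100 + np.cumsum(rng.normal(0, 0.5, n)) + rng.normal(0, 3, n)
background[250:] += np.linspace(0, 40, 50)       # injected outbreak signal

model = sm.tsa.UnobservedComponents(background, level="local level")
res = model.fit(disp=False)

pred = res.get_prediction()                      # one-step-ahead, in sample
z = (background - pred.predicted_mean) / pred.se_mean
alarms = np.where(z > 3.0)[0]                    # one-sided 3-sigma rule
print("first alarm at day:", alarms[0] if alarms.size else "none")
```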
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach
Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco
2015-01-01
Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. With the purpose of detecting recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected-modules approach and then used as a reference network to perform gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information and to assess its impact on gap-filling results. The analysis performed on the metamodel allowed us to conclude that: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) the presence of inconsistencies in a metabolic database can propagate to the reconstructed models; 3) there are reactions not manifested as blocked which are active as a consequence of some classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
NASA Astrophysics Data System (ADS)
Lin, J. W. B.
2015-12-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation Model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran, to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)
2001-01-01
Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized. The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are interfaced. This capability rapidly provides the high-fidelity results needed in the early design phase. Moreover, the capability is applicable to the general field of engineering science and mechanics. Hence, it provides a collaborative capability that accounts for interactions among engineering analysis methods.
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high-dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system; it hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high-dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history-matching procedure: an efficient method to search high-dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
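A single wave of the emulator-based history match can be sketched as follows: fit a fast emulator to a handful of model runs, then discard parameter-space points whose implausibility I(x) = |z - E[f(x)]| / sqrt(Var[f(x)] + Var[obs]) exceeds the conventional cutoff of 3. The one-dimensional toy model, design size, and observation error below are illustrative assumptions, not the Arabidopsis model.

```python
# One wave of history matching with a Gaussian-process emulator: rule out
# inputs whose implausibility exceeds 3. The toy model, design size, and
# observation error are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def slow_model(x):                     # stand-in for a systems-biology model
    return np.sin(3.0 * x) + 0.5 * x

X_design = np.linspace(0.0, 5.0, 12).reshape(-1, 1)   # a few "slow" runs
y_design = slow_model(X_design).ravel()

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=1.0),
                              normalize_y=True).fit(X_design, y_design)

z_obs = slow_model(np.array([[2.3]]))[0, 0]           # "observed" data point
var_obs = 0.01 ** 2                                   # observation error

X_test = np.linspace(0.0, 5.0, 2001).reshape(-1, 1)
mean, sd = gp.predict(X_test, return_std=True)
impl = np.abs(z_obs - mean) / np.sqrt(sd ** 2 + var_obs)
kept = X_test[impl < 3.0]                             # non-implausible inputs
if kept.size:
    print(f"non-implausible inputs span {kept.min():.2f}-{kept.max():.2f} "
          f"({100 * kept.size / X_test.shape[0]:.1f}% of the range)")
else:
    print("all inputs ruled out in this wave")
```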
Björ, Ove; Damber, Lena; Jonsson, Håkan; Nilsson, Tohr
2015-07-01
Iron-ore miners are exposed to extremely dusty and physically arduous work environments. The demanding activities of mining select healthier workers with longer work histories (ie, the healthy worker survivor effect (HWSE)), which can have a reversing effect on the exposure-response association. The objective of this study was to evaluate an iron-ore mining cohort to determine whether the effect of respirable dust was confounded by the presence of an HWSE. When an HWSE exists, standard modelling methods, such as Cox regression analysis, produce biased results. We compared results from g-estimation of accelerated failure-time modelling adjusted for the HWSE with corresponding unadjusted Cox regression modelling results. For all-cause mortality, when adjusting for the HWSE, cumulative exposure to respirable dust was associated with a 6% decrease in life expectancy for those exposed ≥15 years compared with those never exposed. Respirable dust continued to be associated with mortality after censoring outcomes known to be associated with dust when adjusting for the HWSE. In contrast, results based on Cox regression analysis did not support the presence of an association. The adjustment for the HWSE made a difference when estimating the risk of mortality from respirable dust. The results of this study therefore support the recommendation that standard methods of analysis should be complemented with structural modelling techniques, such as g-estimation of accelerated failure-time modelling, to adjust for the HWSE.
ERIC Educational Resources Information Center
de Rooij, Mark; Heiser, Willem J.
2005-01-01
Although RC(M)-association models have become a generally useful tool for the analysis of cross-classified data, the graphical representation resulting from such an analysis can at times be misleading. The relationships present between row category points and column category points cannot be interpreted by inter point distances but only through…
Following the Part I paper that described an application of the U.S. EPA Models-3/Community Multiscale Air Quality (CMAQ) modeling system to the 1999 Southern Oxidants Study episode, this paper presents results from process analysis (PA) using the PA tool embedded in CMAQ and s...
Computer code for off-design performance analysis of radial-inflow turbines with rotor blade sweep
NASA Technical Reports Server (NTRS)
Meitner, P. L.; Glassman, A. J.
1983-01-01
The analysis procedure of an existing computer program was extended to include rotor blade sweep, to model the flow more accurately at the rotor exit, and to provide more detail to the loss model. The modeling changes are described and all analysis equations and procedures are presented. Program input and output are described and are illustrated by an example problem. Results obtained from this program and from a previous program are compared with experimental data.
Automation for System Safety Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul
2009-01-01
This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities by executing global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses; it also handles systems with discontinuous events and provides an intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
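To make the global sensitivity analysis concrete, here is a minimal Sobol analysis in the same spirit (a sketch, not SBML-SAT itself; the three-parameter model and its bounds are invented, and the SALib Python package is used for Saltelli sampling and Sobol index estimation):

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Hypothetical 3-parameter kinetic model standing in for an SBML model.
    problem = {
        "num_vars": 3,
        "names": ["k1", "k2", "k3"],
        "bounds": [[0.1, 1.0], [0.01, 0.1], [0.5, 2.0]],
    }

    def model_output(params):
        k1, k2, k3 = params
        return k1 * k3 / (k2 + k1)   # e.g., a steady-state concentration

    X = saltelli.sample(problem, 1024)           # Saltelli sampling scheme
    Y = np.apply_along_axis(model_output, 1, X)  # one output per parameter set
    Si = sobol.analyze(problem, Y)

    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order = {s1:.3f}, total-order = {st:.3f}")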
Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D
2018-08-01
Hydrothermal carbonization (HTC) is a wet, low-temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The importance of specific process conditions and feedstock properties for hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from the HTC-related literature. A Sobol analysis was subsequently conducted to identify the parameters that most influence hydrochar characteristics. Results from this analysis indicate that, for each investigated hydrochar property, the model fit and predictive capability of the random forest models are superior to those of both the linear and regression tree models. Based on the results of the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions.
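As a small sketch of the model-comparison step described above (synthetic data; the column meanings and coefficients are invented, and sklearn is used in place of the authors' tooling):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the literature-compiled HTC dataset: feedstock
    # properties and process conditions (columns) vs hydrochar yield (target).
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(300, 5))   # e.g., temperature, time, pH, C content, ash
    y = 60 - 30 * X[:, 0] + 10 * X[:, 1] * X[:, 2] + rng.normal(0, 2, 300)

    models = [("linear", LinearRegression()),
              ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]
    for name, model in models:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.3f}")

The nonlinear interaction term is what the random forest can capture and the purely linear model cannot, mirroring the superiority the study reports.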
P.L. Tedder; R.N. La Mont; J.C. Kincaid
1987-01-01
TRIM (Timber Resource Inventory Model) is a yield table projection system developed for timber supply projections and policy analysis. TRIM simulates timber growth, inventories, management and area changes, and removals over the projection period. Programs in the TRIM system, card-by-card descriptions of required inputs, table formats, and sample results are presented...
ERIC Educational Resources Information Center
Ferrando, Pere J.
2008-01-01
This paper develops results and procedures for obtaining linear composites of factor scores that maximize: (a) test information, and (b) validity with respect to external variables in the multiple factor analysis (FA) model. I treat FA as a multidimensional item response theory model, and use Ackerman's multidimensional information approach based…
Multivariate Probabilistic Analysis of an Hydrological Model
NASA Astrophysics Data System (ADS)
Franceschini, Samuela; Marani, Marco
2010-05-01
Model predictions derived from rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes, and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses of the hydrologic response to selected meteorological events in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and a quite heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response with probabilistic methods. In particular, we compare the results of Monte Carlo simulation (MCS) with the results obtained, under the same conditions, using Li's point estimate method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This allows results to be reproduced satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method. The LiM is less computationally demanding than MCS, but has limited applicability, especially when the model response is highly nonlinear. Higher-order approximations can provide more accurate estimates, but reduce the numerical advantage of the LiM. The results of the uncertainty analysis identify the main sources of uncertainty in the computation of river discharge. In this particular case, the spatial variability of rainfall and the uncertainty in the model parameters are shown to have the greatest impact on discharge evaluation. This, in turn, highlights the need to accompany any estimated hydrological response with probability information and risk analysis results in order to provide a robust, systematic framework for decision making.
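The contrast between the two approaches can be sketched with Rosenblueth's classical two-point estimate method, a close relative of the LiM (illustrative only: the discharge function, moments, and independence assumption are invented, and the exact LiM point/weight construction is not reproduced):

    import numpy as np

    # Nonlinear "hydrologic response" standing in for the rainfall-runoff model.
    def discharge(rain, k):
        return k * rain ** 1.5

    mu = np.array([50.0, 0.8])     # means of rainfall depth and a model parameter
    sigma = np.array([10.0, 0.1])  # standard deviations (assumed independent)

    # Monte Carlo simulation: many model evaluations.
    rng = np.random.default_rng(0)
    samples = rng.normal(mu, sigma, size=(100000, 2))
    q_mc = discharge(samples[:, 0], samples[:, 1])
    print(f"MCS: mean={q_mc.mean():.1f}, std={q_mc.std():.1f} (100000 runs)")

    # Two-point estimate: 2^n equally weighted evaluations at mu +/- sigma.
    points = np.array([[mu[0] + s0 * sigma[0], mu[1] + s1 * sigma[1]]
                       for s0 in (-1, 1) for s1 in (-1, 1)])
    q_pe = discharge(points[:, 0], points[:, 1])
    print(f"PEM: mean={q_pe.mean():.1f}, std={q_pe.std():.1f} (4 runs)")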
NASA Astrophysics Data System (ADS)
Vukovic, Ana; Vujadinovic, Mirjam; Djurdjevic, Vladimir; Cvetkovic, Bojan; Djordjevic, Marija; Ruml, Mirjana; Rankovic-Vasic, Zorica; Przic, Zoran; Stojicic, Djurdja; Krzic, Aleksandra; Rajkovic, Borivoj
2015-04-01
Serbia is a country with relatively small-scale terrain features and an economy largely based on local landowners' agricultural production. Climate change analysis must be downscaled accordingly to capture the climatological features of the farmlands. Climate model simulations and impact studies contribute significantly to future strategic planning for economic development, so impact analysis must be approached with a high level of confidence. This paper presents research on climate change and its impacts in Serbia, resulting from the cooperative work of the modeling and user communities. Dynamical downscaling of climate projections for the 21st century with a multi-model approach and statistical bias correction was performed to prepare model results for impact studies. The presented results come from simulations with the regional EBU-POM model, forced with the A1B and A2 SRES scenarios (IPCC, 2007) and compared against other regional models, and from the latest high-resolution NMMB simulations forced with the RCP8.5 scenario (IPCC, 2012). Bias correction of the model results is necessary when the calculated indices do not depend linearly on the model results and the delta approach of presenting results relative to present-climate simulations is insufficient. This is most important during summer over the northern part of the country, where model bias produces much higher temperatures and less precipitation, a deficiency known as the "summer drying problem" that is common in regional model simulations over the Pannonian valley. Some of the projected changes, already observed in the present climate, such as higher temperatures and disturbances in the precipitation pattern, lead to an advancement of the start of the vegetation period toward earlier dates, associated with an increased risk of late spring frost, an extended vegetation period, disturbed preparation for the rest period, and increased duration and frequency of drought periods. Based on the projected climate changes, the use of ensemble seasonal forecasts is proposed for early preparation in case of upcoming unfavorable weather conditions. This paper was realized as part of the projects "Studying climate change and its influence on the environment: impacts, adaptation and mitigation" (43007) and "Assessment of climate change impacts on water resources in Serbia" (37005), financed by the Ministry of Education and Science of the Republic of Serbia within the framework of integrated and interdisciplinary research for the period 2011-2015.
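For readers unfamiliar with the bias-correction step, here is a minimal empirical quantile-mapping sketch (a common simple variant, not necessarily the correction used in the paper; all data are synthetic):

    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        # Empirical quantile mapping: pass values through the historical model
        # CDF and read them back off the observed distribution.
        q = np.linspace(0.0, 1.0, 101)
        mq = np.quantile(model_hist, q)
        oq = np.quantile(obs_hist, q)
        return np.interp(model_future, mq, oq)

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 2.0, 5000)     # "observed" summer precipitation
    model = rng.gamma(1.5, 1.5, 5000)   # biased model, same historical period
    future = rng.gamma(1.5, 1.8, 5000)  # model projection
    corrected = quantile_map(model, obs, future)
    print(f"raw future mean {future.mean():.2f} -> corrected {corrected.mean():.2f}")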
Modeling and analysis of visual digital impact model for a Chinese human thorax.
Zhu, Jin; Wang, Kai-Ming; Li, Shu; Liu, Hai-Yan; Jing, Xiao; Li, Xiao-Fang; Liu, Yi-He
2017-01-01
The aim of this work was to establish a three-dimensional finite element model of the human chest for engineering research on individual protection. Computed tomography (CT) scanning data were used for three-dimensional reconstruction with the medical image reconstruction software Mimics. The finite element method (FEM) preprocessing software ANSYS ICEM CFD was used for mesh generation, and the relevant material behavior parameters of all of the model's parts were specified. The finite element model was constructed with the FEM software, and its validity was verified against previous cadaver experimental data. A finite element model approximating the anatomical structure of the human chest was established, and the model's simulation results conformed overall to the results of the cadaver experiment. Segment data of the human body and specialized software can be utilized for FEM model reconstruction to satisfy the need for numerical analysis of shocks to the human chest in engineering research on body mechanics.
Gaze distribution analysis and saliency prediction across age groups.
Krishna, Onkar; Helo, Andrea; Rämä, Pia; Aizawa, Kiyoharu
2018-01-01
Knowledge of the human visual system helps in developing better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults, but they largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age and propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center-bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center-bias tendency. The analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
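The two quantities used above can be computed directly (a sketch with random stand-ins for a saliency map and a fixation mask; real maps would come from eye-tracking data and a saliency model):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    saliency = rng.random((48, 64))          # hypothetical saliency map in [0, 1]
    fixations = rng.random((48, 64)) < 0.02  # hypothetical fixation mask

    # Explorativeness proxy: Shannon entropy of the normalized saliency map.
    p = saliency / saliency.sum()
    entropy = -(p * np.log2(p + 1e-12)).sum()
    print(f"saliency-map entropy: {entropy:.2f} bits")

    # Agreement: AUC of the saliency map as a classifier of fixated pixels.
    auc = roc_auc_score(fixations.ravel(), saliency.ravel())
    print(f"AUC against fixations: {auc:.3f}")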
TASS Model Application for Testing the TDWAP Model
NASA Technical Reports Server (NTRS)
Switzer, George F.
2009-01-01
One of the operational modes of the Terminal Area Simulation System (TASS) model simulates the three-dimensional interaction of wake vortices within turbulent domains in the presence of thermal stratification. The model allows investigation of the effects of turbulence and stratification on vortex transport and decay. The model simulations for this work all assumed fully periodic boundary conditions to remove the effects of any surface interaction. During the base period of this contract, NWRA completed generation of these datasets but presented analysis only for the neutral-stratification runs of the set (Task 3.4.1). Phase 1 work began with the analysis of the remaining stratification datasets, in which we discovered discrepancies in the predicted vortex time to link. This finding necessitated investigating the source of the anomaly, which we traced to a problem with the background turbulence. Using the most up-to-date version of TASS, with some important defect fixes, we regenerated a larger turbulence domain, verified the vortex time to link on a few cases, and then regenerated the entire 25-case set (Task 3.4.2). The effort of Phase 2 (Task 3.4.3) concentrated on the analysis of several scenarios investigating the effects of closely spaced aircraft. The objective was to quantify the minimum aircraft separations necessary to avoid vortex interactions between neighboring aircraft. The results consist of spreadsheets of wake data and presentation figures prepared for NASA technical exchanges. For these formation cases, NASA carried out the actual TASS simulations and NWRA performed the analysis of the results by making animations, line plots, and other presentation figures. This report describes the work performed during this final phase of the contract, the analysis procedures adopted, and sample plots of the results.
Integration of GIS and Bim for Indoor Geovisual Analytics
NASA Astrophysics Data System (ADS)
Wu, B.; Zhang, S.
2016-06-01
This paper presents an effort to integrate GIS (Geographical Information System) and BIM (Building Information Modelling) for indoor geovisual analytics. The merits of the two technologies, GIS and BIM, are first analysed in the context of indoor environments: GIS has well-developed capabilities for spatial analysis, such as network analysis, while BIM has advantages for indoor 3D modelling and dynamic simulation. The paper then investigates the important aspects of integrating GIS and BIM. Different data standards and formats, such as IFC (Industry Foundation Classes) and GML (Geography Markup Language), are discussed, and their merits and limitations for data transformation between GIS and BIM are analysed in terms of semantic and geometric information. An optimized approach for data exchange between GIS and BIM datasets is then proposed. After that, a strategy of using BIM for 3D indoor modelling, GIS for spatial analysis, and BIM again for visualization and dynamic simulation of the analysis results is presented. Based on these developments, the paper selects a typical problem, optimized indoor emergency evacuation, to demonstrate the integration of GIS and BIM for indoor geovisual analytics. Block Z of the Hong Kong Polytechnic University is selected as a test site. Detailed indoor and outdoor 3D models of block Z are created using the BIM software Revit. The 3D models are transferred to the GIS software ArcGIS for spatial analysis. Optimized evacuation plans considering dynamic constraints are generated based on network analysis in ArcGIS, assuming a fire accident inside the building. The analysis results are then transferred back to the BIM software for visualization and dynamic simulation. The developed methods and results should facilitate future development of integrated GIS and BIM solutions in various applications.
A study of zodiacal light models
NASA Technical Reports Server (NTRS)
Gary, G. A.; Craven, P. D.
1973-01-01
A review is presented of the basic equations used in the analysis of photometric observations of zodiacal light. A survey of the methods used to model the zodiacal light in and out of the ecliptic is given. Results and comparison of various models are presented, as well as recent results by the authors.
Validation of numerical models for flow simulation in labyrinth seals
NASA Astrophysics Data System (ADS)
Frączek, D.; Wróblewski, W.
2016-10-01
CFD results were compared with experimental results for the flow through a labyrinth seal. RANS turbulence models (k-epsilon, k-omega, SST and SST-SAS) were selected for the study, and both steady and transient results were analyzed. ANSYS CFX was used for the numerical computations. The analysis included flow through a sealing section with a honeycomb land. Leakage flows and velocity profiles in the seal were compared. In addition to the comparison of computational models, the divergence between modeling and experimental results was determined, and guidelines for modeling such problems were formulated.
Combat Simulation Using Breach Computer Language
1979-09-01
BREACH is a simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The duel model validates the BREACH approach by comparing results with mathematical solutions; the dynamic model demonstrates the capability of the BREACH language for larger engagements. (The report covers background on BREACH and the language, the static duel with its methodology and validation, results, a tank duel simulation, and a dynamic assault model.)
Wang, Chong; Sun, Qun; Wahab, Magd Abdel; Zhang, Xingyu; Xu, Limin
2015-09-01
Rotary cup brushes mounted on each side of a road sweeper undertake heavy debris removal tasks, but their characteristics have not been well understood until recently. A finite element (FE) model that can analyze brush deformation and predict brush characteristics has been developed to investigate sweeping efficiency and to assist controller design. However, the FE model requires a large amount of CPU time to simulate each brush design and operating scenario, which may limit its applications in a real-time system. This study develops a mathematical regression model that summarizes the FE-modeled results. The complex brush load characteristic curves were statistically analyzed to quantify the effects of cross-section, length, mounting angle, displacement, rotational speed, and other factors. The data were then fitted by a multiple-variable regression model using the maximum likelihood method. The fitted results showed good agreement with the FE analysis results and experimental results, suggesting that the mathematical regression model may be used directly in a real-time system to predict the characteristics of different brushes under varying operating conditions. The methodology may also be used in the design and optimization of rotary brush tools.
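A minimal sketch of this kind of regression fit (synthetic FE-style data with invented factors; under Gaussian errors, ordinary least squares coincides with the maximum likelihood fit):

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical FE-generated samples: brush load vs design/operating factors.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 4))  # e.g., bristle length, angle, displacement, speed
    load = 5 + 3 * X[:, 0] - 2 * X[:, 1] + 4 * X[:, 2] * X[:, 3] + rng.normal(0, 0.3, 200)

    # Include an interaction term, since brush load curves are typically
    # nonlinear in the factors; sm.add_constant appends the intercept column.
    design = sm.add_constant(np.column_stack([X, X[:, 2] * X[:, 3]]))
    fit = sm.OLS(load, design).fit()
    print(fit.params.round(2), f"R^2 = {fit.rsquared:.3f}")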
NASA Technical Reports Server (NTRS)
Kenigsberg, I. J.; Dean, M. W.; Malatino, R.
1974-01-01
The correlation achieved with each program provides the material for a discussion of modeling techniques developed for general application to finite-element dynamic analyses of helicopter airframes. Included are the selection of static and dynamic degrees of freedom, cockpit structural modeling, and the extent of flexible-frame modeling in the transmission support region and in the vicinity of large cut-outs. The sensitivity of the predicted results to these modeling assumptions is discussed. Both the Sikorsky Finite-Element Airframe Vibration Analysis Program (FRAN/Vibration Analysis) and the NASA Structural Analysis Program (NASTRAN) have been correlated with data taken in full-scale vibration tests of a modified CH-53A helicopter.
Validation of the replica trick for simple models
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-04-01
We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
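For reference, the replica trick under discussion rests on the standard identity (generic notation, not specific to this paper):

    \langle \ln Z \rangle \;=\; \lim_{n \to 0} \frac{\langle Z^n \rangle - 1}{n}
    \;=\; \lim_{n \to 0} \frac{\partial}{\partial n} \ln \langle Z^n \rangle ,

where the moments \langle Z^n \rangle are computed for integer n and then analytically continued to real n near zero; it is precisely this continuation step whose validity the paper examines on solvable models.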
A Field-Effect Transistor (FET) model for ASAP
NASA Technical Reports Server (NTRS)
Ming, L.
1965-01-01
The derivation of the circuitry of a field effect transistor (FET) model, the procedure for adapting the model to automated statistical analysis program (ASAP), and the results of applying ASAP on this model are described.
NASA Astrophysics Data System (ADS)
Alekseenko, M. A.; Gendrina, I. Yu.
2017-11-01
Given the recent abundance of various types of observational data in systems for vision through the atmosphere and the need to process these data, statistical research methods such as correlation-regression analysis, time-series analysis, and analysis of variance have become relevant to the study of such systems. We have attempted to apply elements of correlation-regression analysis to the study and subsequent prediction of the patterns of radiation transfer in these systems, as well as to the construction of radiation models of the atmosphere. In this paper, we present some results of the statistical processing of numerically simulated characteristics of vision systems through the atmosphere, obtained with the help of a special software package.
Cui, Shuqi; Hong, Ning; Shi, Baochang; Chai, Zhenhua
2016-04-01
In this paper, we focus on the multiple-relaxation-time (MRT) lattice Boltzmann model for two-dimensional convection-diffusion equations (CDEs) and analyze the discrete effect of the halfway bounce-back (HBB) boundary condition (sometimes simply called the bounce-back boundary condition) of the MRT model, where three different discrete-velocity models are considered. We first present a theoretical analysis of the discrete effect of the HBB boundary condition for simple problems with a parabolic distribution in the x or y direction; a numerical slip proportional to the second order of the lattice spacing is observed at the boundary, which means that the MRT model has a second-order convergence rate in space. The theoretical analysis also shows that the numerical slip can be eliminated in the MRT model by tuning the free relaxation parameter corresponding to the second-order moment, while it cannot be removed in the single-relaxation-time (Bhatnagar-Gross-Krook) model unless the relaxation parameter related to the diffusion coefficient is set to a special value. We then perform simulations to confirm our theoretical results and find that the numerical results are consistent with the theoretical analysis. Finally, we point out that the present analysis can be extended to other boundary conditions of lattice Boltzmann models for CDEs.
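For context, the halfway bounce-back rule analyzed above takes the standard form (generic notation, not tied to the paper's specific velocity sets): a post-collision population leaving a boundary-adjacent fluid node is returned along the reversed direction one time step later,

    f_{\bar{k}}(\mathbf{x}_f,\, t + \delta t) \;=\; f_k^{+}(\mathbf{x}_f,\, t),

where f_k^{+} is the post-collision distribution, \bar{k} is the direction opposite to k, and the wall is located halfway between \mathbf{x}_f and \mathbf{x}_f + \mathbf{c}_k \delta t; the numerical slip discussed in the abstract is an O(\delta x^2) residual of the solution at this halfway wall.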
Peeters, Yvette; Boersma, Sandra N; Koopman, Hendrik M
2008-01-01
Background: The aim of this study is to further explore predictors of health-related quality of life (HRQoL) in children with asthma, using factors derived from the extended stress-coping model. While the stress-coping model has often been used as a frame of reference in studying health-related quality of life in chronic illness, few have actually tested the model in children with asthma. Method: In this survey study, data were obtained by means of self-report questionnaires from seventy-eight children with asthma and their parents. Based on these questionnaires, the constructs of the extended stress-coping model were assessed using regression analysis and path analysis. Results: The results of both regression analysis and path analysis provide tentative support for the proposed relationships between predictors and health-related quality of life in the stress-coping model. Moreover, as indicated by the stress-coping model, HRQoL is directly predicted only by coping; both coping strategies, 'emotional reaction' (significantly) and 'avoidance', are directly related to HRQoL. Conclusion: In children with asthma, the extended stress-coping model appears to be a useful theoretical framework for understanding the impact of the illness on their quality of life. Consequently, the factors suggested by this model should be taken into account when designing optimal psychosocial-care interventions. PMID:18366753
Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank
2017-01-01
Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and yield accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates ordinary differential equation and stochastic differential equation models to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies them to the optimization of possible treatment and to dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for a division of labour between the two approaches is explored.
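A minimal sketch of the equation-based side (the two-compartment ODE below is invented for illustration and is not the paper's model):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical compartments: differentiated tumour cells (T) reprogramming
    # into stem-like cells (S) at rate `reprog`; a drug kills T at rate `kill`.
    def rhs(t, y, growth=0.5, reprog=0.02, kill=0.3, capacity=1e4):
        T, S = y
        crowding = 1 - (T + S) / capacity
        dT = growth * T * crowding - reprog * T - kill * T
        dS = 0.4 * growth * S * crowding + reprog * T
        return [dT, dS]

    sol = solve_ivp(rhs, (0, 100), [1000.0, 10.0])
    T_end, S_end = sol.y[:, -1]
    print(f"final populations: T={T_end:.0f}, S={S_end:.0f}")

Because such a system integrates in milliseconds, treatment parameters like `kill` can be swept or optimized far faster than re-running the agent-based model.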
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.
2008-12-01
Designing and implementing a hydro-economic computer model to support or facilitate collaborative decision making among multiple stakeholders or users can be challenging and daunting. Collaborative modeling is distinguished from, and more difficult than, non-collaborative efforts because of the large number of users with different backgrounds; disagreement or conflict among stakeholders regarding problem definitions, modeling roles, and analysis methods; and evolving ideas of model scope and scale and needs for information and analysis as stakeholders interact, use the model, and learn about the underlying water system. This presentation reviews the lifecycle of collaborative model making and identifies some key design decisions that stakeholders and model developers must make to develop robust and trusted, verifiable and transparent, integrated and flexible, and ultimately useful models. It advances some best practices for implementing and programming these decisions. Among these best practices are: 1) modular development of data-aware input, storage, manipulation, results-recording, and presentation components, plus ways to couple and link to other models and tools; 2) explicit structuring of both input data and the metadata that describe data sources, who acquired the data, gaps, and modifications or translations made to put the data in a form usable by the model; 3) in-line documentation of model inputs, assumptions, calculations, and results, plus ways for stakeholders to document their own model use and share results with others; and 4) flexible programming with graphical object-oriented properties and elements that allow users or the model maintainers to easily see and modify the spatial, temporal, or analysis scope as the collaborative process moves forward. We draw on examples of these best practices from the existing literature, the author's prior work, and some new applications just underway. The presentation concludes by identifying some future directions for collaborative modeling, including geo-spatial display and analysis, real-time operations, and internet-based tools, plus the design and programming needed to implement these capabilities.
Nonlinear multi-analysis of agent-based financial market dynamics by epidemic system
NASA Astrophysics Data System (ADS)
Lu, Yunfan; Wang, Jun; Niu, Hongli
2015-10-01
Based on an epidemic dynamical system, we construct a new agent-based financial time series model. To check and verify its rationality, we compare the statistical properties of the time series model with those of real stock market indices, the Shanghai Stock Exchange Composite Index and the Shenzhen Stock Exchange Component Index. For analyzing the statistical properties, we combine multi-parameter analysis with tail distribution analysis, modified rescaled range analysis, and multifractal detrended fluctuation analysis. For a better perspective, three-dimensional diagrams are used to present the analysis results. The empirical research in this paper indicates that the long-range dependence property and the multifractal phenomenon exist both in the real returns and in the proposed model. The new agent-based financial model can therefore reproduce some important features of real stock markets.
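One of the named techniques, rescaled range (R/S) analysis, is compact enough to sketch (an illustrative implementation run on i.i.d. noise, where the Hurst exponent should come out near 0.5; real return series would replace the synthetic input):

    import numpy as np

    def rescaled_range(series, window):
        # Mean R/S statistic over non-overlapping windows of a given size.
        rs = []
        for start in range(0, len(series) - window + 1, window):
            w = series[start:start + window]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()   # range of cumulative deviations
            s = w.std()
            if s > 0:
                rs.append(r / s)
        return np.mean(rs)

    rng = np.random.default_rng(0)
    returns = rng.normal(size=4096)
    windows = np.array([16, 32, 64, 128, 256, 512])
    rs_vals = np.array([rescaled_range(returns, w) for w in windows])

    # Hurst exponent: slope of log(R/S) against log(window size).
    H = np.polyfit(np.log(windows), np.log(rs_vals), 1)[0]
    print(f"estimated Hurst exponent: {H:.2f}")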
[Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].
Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping
2014-06-01
The stability and adaptability of models for near-infrared spectra qualitative analysis were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. Joint modeling can improve both the adaptability and the stability of the model; at the same time, compared with separate modeling, it shortens the modeling time, reduces the modeling workload, extends the term of validity of the model, and improves modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances recognition. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, and that the method has good application value.
Dynamical analysis of cigarette smoking model with a saturated incidence rate
NASA Astrophysics Data System (ADS)
Zeb, Anwar; Bano, Ayesha; Alzahrani, Ebraheem; Zaman, Gul
2018-04-01
In this paper, we consider a delayed smoking model in which the potential smokers are assumed to satisfy the logistic equation. We discuss the dynamical behavior of the proposed model, formulated as a system of delay differential equations (DDEs), and give conditions for the asymptotic stability of the model's steady states. We also present a Hopf bifurcation analysis of the model. Finally, we use a nonstandard finite difference (NSFD) scheme to illustrate the results graphically with the help of MATLAB.
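The NSFD idea mentioned above can be sketched for the logistic part alone (a Python sketch of the Mickens-type scheme for du/dt = r u (1 - u); the paper's full delayed model and its MATLAB code are not reproduced):

    import numpy as np

    def nsfd_logistic(u0, r, h, steps):
        # Mickens-type NSFD scheme for du/dt = r*u*(1-u), carrying capacity 1.
        phi = (np.exp(r * h) - 1.0) / r   # denominator function replacing the step h
        u = np.empty(steps + 1)
        u[0] = u0
        for n in range(steps):
            # Nonlocal discretization u*u -> u_n * u_{n+1} keeps the scheme
            # positive and stable.
            u[n + 1] = (1 + phi * r) * u[n] / (1 + phi * r * u[n])
        return u

    print(nsfd_logistic(0.1, r=0.5, h=2.0, steps=10))  # well-behaved even for large h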
Dynamic modeling of brushless dc motors for aerospace actuation
NASA Technical Reports Server (NTRS)
Demerdash, N. A.; Nehl, T. W.
1980-01-01
A discrete-time model for simulating the dynamics of samarium-cobalt-type permanent magnet brushless dc machines is presented. The simulation model includes the interaction between these machines and their attached power conditioners, which are transistorized conditioner units. This model is part of an overall discrete-time analysis of the dynamic performance of electromechanical actuators, conducted as part of the prototype development of such actuators, studied and built for NASA-Johnson Space Center as a prospective alternative to the hydraulic actuators presently used in shuttle orbiter applications. The resulting numerical simulations of the various machine and power conditioner current and voltage waveforms correlated excellently with the waveforms collected in hardware testing. These numerical and experimental results are presented here for machine motoring, regeneration, and dynamic braking modes. Application of the model to the determination of machine current and torque profiles during closed-loop actuator operation was also analyzed, and the results are given in the context of an overall view of the actuator system components. The applicability of this method of analysis to design optimization and troubleshooting in such prototype development is also discussed in light of the results at hand.
Model analysis for the MAGIC telescope
NASA Astrophysics Data System (ADS)
Mazin, D.; Bigongiari, C.; Goebel, F.; Moralejo, A.; Wittek, W.
The MAGIC Collaboration operates a 17 m imaging Cherenkov telescope on the Canary island of La Palma. The main goal of the experiment is an energy threshold below 100 GeV for primary gamma rays. The new analysis technique (model analysis) takes advantage of the high-resolution (both in space and time) camera by fitting averaged expected templates of the shower development to the measured shower images in the camera. This approach allows images to be recognized and reconstructed just above the level of the night-sky background light fluctuations. Progress and preliminary results of the model analysis technique are presented.
Analysis and topology optimization design of high-speed driving spindle
NASA Astrophysics Data System (ADS)
Wang, Zhilin; Yang, Hai
2018-04-01
The three-dimensional model of a high-speed driving spindle is established using SOLIDWORKS and imported through the ABAQUS interface. A finite element analysis model of the high-speed driving spindle was established, using spring elements to simulate the bearing boundary conditions. A static analysis of the spindle yielded stress, strain, and displacement contour plots; on the basis of these results, topology optimization was performed on the spindle, completing its lightweight design. The design scheme provides guidance for the design of shaft parts with similar structures.
NASA Technical Reports Server (NTRS)
Lung, Shun-fat; Pak, Chan-gi
2008-01-01
Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.
Vahedi, Shahram; Farrokhi, Farahman
2011-01-01
Objective: The aim of this study is to explore the confirmatory factor analysis results of the Persian adaptation of the Statistics Anxiety Measure (SAM), proposed by Earp. Method: The validity and reliability assessments of the scale were performed on 298 college students chosen randomly from Tabriz University in Iran. Confirmatory factor analysis (CFA) was carried out to determine the factor structure of the Persian adaptation of the SAM. Results: As expected, the second-order model provided a better fit to the data than the three alternative models. Conclusions: Hence, the SAM provides an equally valid measure for use among college students. The study both expands and adds support to the existing body of math anxiety literature. PMID:22952530
von Eye, Alexander; Mun, Eun Young; Bogat, G Anne
2008-03-01
This article reviews the premises of configural frequency analysis (CFA), including methods of choosing significance tests and base models, as well as protecting alpha, and discusses why CFA is a useful approach when conducting longitudinal person-oriented research. CFA operates at the manifest variable level. Longitudinal CFA seeks to identify those temporal patterns that stand out as more frequent (CFA types) or less frequent (CFA antitypes) than expected with reference to a base model. A base model that has been used frequently in CFA applications, prediction CFA, and a new base model, auto-association CFA, are discussed for analysis of cross-classifications of longitudinal data. The former base model takes the associations among predictors and among criteria into account. The latter takes the auto-associations among repeatedly observed variables into account. Application examples of each are given using data from a longitudinal study of domestic violence. It is demonstrated that CFA results are not redundant with results from log-linear modeling or multinomial regression and that, of these approaches, CFA shows particular utility when conducting person-oriented research.
Static Aeroelastic Analysis with an Inviscid Cartesian Method
NASA Technical Reports Server (NTRS)
Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian; Smith, Stephen C.
2014-01-01
An embedded-boundary Cartesian-mesh flow solver is coupled with a three-degree-of-freedom structural model to perform static aeroelastic analysis of complex aircraft geometries. The approach solves the complete system of aero-structural equations using a modular, loosely coupled strategy which allows the lower-fidelity structural model to deform the high-fidelity CFD model. The approach uses an open-source, 3-D discrete-geometry engine to deform a triangulated surface geometry according to the shape predicted by the structural model under the computed aerodynamic loads. The deformation scheme is capable of modeling large deflections and is applicable to the design of modern, very flexible transport wings. The interface is modular so that aerodynamic or structural analysis methods can be easily swapped or enhanced. This extended abstract includes a brief description of the architecture, along with some preliminary validation of underlying assumptions and early results on a generic 3D transport model. The final paper will present more concrete cases and validation of the approach. Preliminary results demonstrate convergence of the complete aero-structural system and investigate the accuracy of the approximations used in the formulation of the structural model.
APPLE - An aeroelastic analysis system for turbomachines and propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral
1992-01-01
This paper reviews aeroelastic analysis methods for propulsion elements (advanced propellers, compressors and turbines) being developed and used at NASA Lewis Research Center. These aeroelastic models include both structural and aerodynamic components. The structural models include the typical section model, the beam model with and without disk flexibility, and the finite element blade model with plate bending elements. The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation for a cascade to the three-dimensional Euler equations for multi-blade configurations. Typical results are presented for each aeroelastic model. Suggestions for further research are indicated. All the available aeroelastic models and analysis methods are being incorporated into a unified computer program named APPLE (Aeroelasticity Program for Propulsion at LEwis).
Towards accurate modeling of noncovalent interactions for protein rigidity analysis
2013-01-01
Background: Protein rigidity analysis is an efficient computational method for extracting flexibility information from static X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. Results: To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. Conclusion: To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions, and showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu. PMID:24564209
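The B-cubed comparison of two rigid-cluster decompositions can be sketched directly (a generic implementation of the averaged B-cubed F-score; the ten-atom labelings are hypothetical):

    import numpy as np

    def b_cubed(labels_a, labels_b):
        # Average B-cubed F-score between two cluster decompositions.
        labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
        f_scores = []
        for i in range(len(labels_a)):
            in_a = labels_a == labels_a[i]   # items sharing i's cluster in A
            in_b = labels_b == labels_b[i]   # items sharing i's cluster in B
            overlap = np.sum(in_a & in_b)    # always >= 1 (the item itself)
            precision = overlap / in_a.sum()
            recall = overlap / in_b.sum()
            f_scores.append(2 * precision * recall / (precision + recall))
        return float(np.mean(f_scores))

    print(b_cubed([0, 0, 0, 1, 1, 1, 2, 2, 2, 2],
                  [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]))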
qtcm 0.1.2: A Python Implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation model
NASA Astrophysics Data System (ADS)
Lin, J. W.-B.
2008-10-01
Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available in modern computer languages. Here we describe an effort to use the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
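The wrapping pattern described above can be sketched with numpy's f2py (a toy example, not the qtcm build: the module name `core`, the routine, and its dynamics are all hypothetical):

    # Compile a Fortran routine into a Python extension module (run once):
    #   python -m numpy.f2py -c core.f90 -m core
    # where core.f90 might contain, e.g.:
    #   subroutine step(u, n, dt)
    #     integer, intent(in) :: n
    #     real(8), intent(inout) :: u(n)
    #     real(8), intent(in) :: dt
    #     u = u + dt * (1.0d0 - u)   ! toy dynamics standing in for QTCM1 numerics
    #   end subroutine
    import numpy as np
    import core  # the f2py-generated wrapper (hypothetical module name)

    u = np.zeros(10)
    for _ in range(100):   # Python controls the order and choice of execution...
        core.step(u, 0.1)  # ...while compiled Fortran performs the numerics in place
    print(u.mean())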
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
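A compact sketch of the third formulation, assuming the PyMC library and an invented dataset structure (three datasets sharing a growth-rate hyperdistribution; not the authors' plankton model):

    import numpy as np
    import pymc as pm

    # Hypothetical growth-rate observations from three datasets.
    rng = np.random.default_rng(0)
    data = [rng.normal(m, 0.2, size=20) for m in (0.8, 1.0, 1.2)]

    with pm.Model():
        # Hyperparameters: the shared distribution behind dataset-level rates.
        mu_global = pm.Normal("mu_global", mu=1.0, sigma=1.0)
        tau = pm.HalfNormal("tau", sigma=0.5)
        # Dataset-level parameters drawn from the shared distribution.
        mu_k = pm.Normal("mu_k", mu=mu_global, sigma=tau, shape=3)
        sigma = pm.HalfNormal("sigma", sigma=0.5)
        for k, obs in enumerate(data):
            pm.Normal(f"obs_{k}", mu=mu_k[k], sigma=sigma, observed=obs)
        trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

    print(trace.posterior["mu_k"].mean(dim=("chain", "draw")).values)

Forcing tau toward zero recovers the global analysis (no variation), while dropping mu_global and giving each mu_k an independent prior recovers the separate analysis.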
Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors.
Ge, Xiaoliang; Theuwissen, Albert J P
2018-02-27
This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the readout path. Such a readout topology, however, operates in a non-stationary, large-signal regime, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models, and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using the design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measured results verifies the effectiveness of the proposed noise analysis models. PMID:29495496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.
Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology becomes non-obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in the modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe-handling machine is selected as a case study, for which both models are created. In contrast to some other works, the current paper verifies both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and result accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.
Post, Ellen S.; Grambsch, Anne; Weaver, Chris; Morefield, Philip; Leung, Lai-Yung; Nolte, Christopher G.; Adams, Peter; Liang, Xin-Zhong; Zhu, Jin-Hong; Mahoney, Hardee
2012-01-01
Background: Future climate change may cause air quality degradation via climate-induced changes in meteorology, atmospheric chemistry, and emissions into the air. Few studies have explicitly modeled the potential relationships between climate change, air quality, and human health, and fewer still have investigated the sensitivity of estimates to the underlying modeling choices. Objectives: Our goal was to assess the sensitivity of estimated ozone-related human health impacts of climate change to key modeling choices. Methods: Our analysis included seven modeling systems in which a climate change model is linked to an air quality model, five population projections, and multiple concentration–response functions. Using the U.S. Environmental Protection Agency’s (EPA’s) Environmental Benefits Mapping and Analysis Program (BenMAP), we estimated future ozone (O3)-related health effects in the United States attributable to simulated climate change between the years 2000 and approximately 2050, given each combination of modeling choices. Health effects and concentration–response functions were chosen to match those used in the U.S. EPA’s 2008 Regulatory Impact Analysis of the National Ambient Air Quality Standards for O3. Results: Different combinations of methodological choices produced a range of estimates of national O3-related mortality from roughly 600 deaths avoided as a result of climate change to 2,500 deaths attributable to climate change (although the large majority produced increases in mortality). The choice of the climate change and the air quality model reflected the greatest source of uncertainty, with the other modeling choices having lesser but still substantial effects. Conclusions: Our results highlight the need to use an ensemble approach, instead of relying on any one set of modeling choices, to assess the potential risks associated with O3-related human health effects resulting from climate change. PMID:22796531
Robustness analysis of a green chemistry-based model for the ...
This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to those of an earlier model developed for the same purpose, to investigate concordance between the models and potential decision-support synergies. A three-phase procedure was adopted to achieve the research objectives. First, an ordinal ranking of the evaluation criteria used to characterize the implementation of green chemistry principles was identified through relative ranking analysis. Second, a structured selection process for an MCDA classification method was conducted, which resulted in the identification of Stochastic Multi-criteria Acceptability Analysis (SMAA). Lastly, the agreement between the classifications of the two MCDA models and the resulting synergistic role of decision recommendations were studied. This comparison showed that the results of the two models agree in between 76% and 93% of the simulation set-ups, and it confirmed that different MCDA models provide a more inclusive and transparent set of recommendations. This integrative research confirmed the beneficial complementary use of MCDA methods to aid the responsible development of nanosynthesis, by accounting for multiple objectives and helping communicate complex information in a comprehensive and traceable format, suitable for stakeholders and
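The core SMAA computation is easy to sketch (a generic rank-acceptability calculation on an invented impact matrix; not the paper's criteria or data):

    import numpy as np

    # Hypothetical impact matrix: 4 synthesis routes x 3 green-chemistry
    # criteria, normalized so that higher is better on every criterion.
    impacts = np.array([[0.8, 0.4, 0.6],
                        [0.5, 0.9, 0.3],
                        [0.6, 0.6, 0.7],
                        [0.3, 0.5, 0.9]])

    rng = np.random.default_rng(0)
    n_draws = 100000
    # SMAA idea: with unknown preferences, sample weight vectors uniformly
    # from the simplex and record how often each alternative ranks first.
    w = rng.dirichlet(np.ones(impacts.shape[1]), size=n_draws)
    winners = (w @ impacts.T).argmax(axis=1)
    acceptability = np.bincount(winners, minlength=len(impacts)) / n_draws
    print("rank-1 acceptability indices:", acceptability.round(3))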
Analysis of Polder Polarization Measurements During Astex and Eucrex Experiments
NASA Technical Reports Server (NTRS)
Chen, Hui; Han, Qingyuan; Chou, Joyce; Welch, Ronald M.
1997-01-01
Polarization is more sensitive than intensity to cloud microstructure such as particle size and shape, and multiple scattering does not wash out features in polarization as effectively as it does in intensity. Polarization measurements, particularly in the near IR, are potentially a valuable tool for cloud identification and for studies of cloud microphysics. The POLDER instrument is designed to provide wide field-of-view bidirectional images in polarized light. During the ASTEX-SOFIA campaign on June 12th, 1992, over the Atlantic Ocean (near the Azores Islands), images of homogeneous thick stratocumulus cloud fields were acquired. During the EUCREX'94 campaign (April 1994), the POLDER instrument flew over the region of Brittany (France), taking observations of cirrus clouds. This work involves model studies and data analysis of POLDER observations. Both the models and the data analysis show that POLDER can be used to detect cloud thermodynamic phase. Model results show that polarized reflection in the λ = 0.86 μm band is sensitive to cloud droplet size but not to cloud optical thickness. Comparison between model and data analysis reveals that cloud droplet sizes during ASTEX were about 5 microns, which agrees very well with in situ measurements (4-5 microns). Given the retrieved cloud droplet sizes, the total reflected intensity of the POLDER measurements can then be used to retrieve cloud optical thickness. The close agreement between data analysis and model results during ASTEX also suggests the homogeneity of the cloud layer during that campaign.
NASA Astrophysics Data System (ADS)
Zubir, S. N. A.; Thiruchelvam, S.; Mustapha, K. N. M.; Che Muda, Z.; Ghazali, A.; Hakimie, H.
2017-12-01
For the past few years, natural disaster has been a subject of debate in disaster management, especially flood disaster. Each year, natural disasters result in significant loss of life, destruction of homes and public infrastructure, and economic hardship. Hence, effective and efficient flood disaster management would ensure that life-saving efforts are not futile. The aim of this article is to examine the relationship between approach, decision maker, influence factor, result, and ethic in decision making for flood disaster management in Malaysia. The key elements of decision making in disaster management were studied based on the literature. Questionnaire surveys were administered among lead agencies on the East Coast of Malaysia in the states of Kelantan and Pahang. A total of 307 valid responses were obtained for further analysis. Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were carried out to analyse the measurement model involved in the study. The CFA for the second-order reflective and first-order reflective measurement model indicates that approach, decision maker, influence factor, result, and ethic have a significant and direct effect on decision making during disaster. The results from this study showed that decision-making during disaster is an important element of disaster management and necessitates successful collaborative decision making. The measurement model was accepted for further analysis using Structural Equation Modeling (SEM) and can be assessed in future research.
Yang, Lijuan; Zhou, Xianghai; Luo, Yingying; Sun, Xiuqin; Tang, Yong; Guo, Wulan; Han, Xueyao; Ji, Linong
2012-01-01
A number of studies have been performed to identify the association between the potassium inwardly-rectifying channel, subfamily J, member 11 (KCNJ11) gene and type 2 diabetes mellitus (T2DM) in East Asian populations, with inconsistent results. The main aim of this work was to evaluate more precisely the genetic influence of KCNJ11 on T2DM in East Asian populations by means of a meta-analysis. By database searching up to May 2010, we identified 20 articles for qualitative analysis, of which 16 were eligible for quantitative analysis (meta-analysis). The association was assessed under different genetic models, and the pooled odds ratios (ORs) with 95% confidence intervals (95% CIs) were calculated. The allelic and genotypic contrasts demonstrated that the association between KCNJ11 and T2DM was significant for rs5210; however, not all results for rs5215 and rs5218 showed significant associations. For rs5219, the combined ORs (95% CIs) for the allelic contrast and for the dominant and recessive model contrasts (with allelic frequency and genotypic distribution data) were 1.139 (1.093-1.188), 1.177 (1.099-1.259) and 1.207 (1.094-1.332), respectively (random effect model). Analysis of the most completely covariate-adjusted ORs (95% CIs) for rs5219 showed significant associations under all genetic models. Population-stratified analysis (Korean, Japanese and Chinese) and sensitivity analysis verified the significant results. Cumulative meta-analysis by publication time and sample size illustrated an exaggerated genetic effect in the earliest studies. Heterogeneity and publication bias were assessed. Our study verified that single nucleotide polymorphisms (SNPs) of the KCNJ11 gene are significantly associated with the risk of T2DM in East Asian populations.
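As an illustration of the pooling step, the sketch below implements DerSimonian-Laird random-effects pooling of odds ratios, the standard technique behind random-effect-model estimates like those reported for rs5219. The per-study ORs and CIs in the example are hypothetical, not taken from the studies above.

```python
import numpy as np

def random_effects_pool(or_values, ci_lower, ci_upper):
    """DerSimonian-Laird random-effects pooling of odds ratios.
    Per-study variances are recovered from the 95% CI on the log scale."""
    y = np.log(or_values)                      # log odds ratios
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1.0 / se**2                            # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed)**2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (se**2 + tau2)              # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# Hypothetical per-study ORs and 95% CIs for an allelic contrast.
print(random_effects_pool(np.array([1.15, 1.10, 1.22]),
                          np.array([1.02, 0.98, 1.05]),
                          np.array([1.30, 1.24, 1.42])))
```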
Transportation Systems Evaluation
NASA Technical Reports Server (NTRS)
Fanning, M. L.; Michelson, R. A.
1972-01-01
A methodology for the analysis of transportation systems consisting of five major interacting elements is reported. The analysis begins with the causes of travel demand: geographic, economic, and demographic characteristics, as well as attitudes toward travel. Through the analysis, the interaction of these factors with the physical and economic characteristics of the transportation system is determined. The result is an evaluation of the system from the point of view of both passenger and operator. The methodology is applicable to intraurban transit systems as well as to major airlines. Applications of the technique to the analysis of a PRT system and to a study of intraurban air travel are given. In the discussion several unique models or techniques are mentioned, i.e., passenger preference modeling, an integrated intraurban transit model, and a series of models to perform airline analysis.
NASA Technical Reports Server (NTRS)
Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.
1974-01-01
A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.
NASA Astrophysics Data System (ADS)
Niwa, Yuta; Akiyama, Yuji; Naruta, Tomokazu
We carried out FEM simulations for modeling ultra-high-speed universal motors by using the state function method and analyzed the phenomenon of commutator sparking, the characteristics of the air gap surface, and the contact condition or contact resistance of the brushes and commutator bars. Thus, we could quantitatively analyze commutator sparking and investigate the configuration of the iron core. The results of the FEM analysis were used to develop a model for predicting the configuration of the iron core and for estimating the electromotive force generated by the transformer, the armature reaction field, the spark voltage, the contact resistance between the rotating brushes, and changes in the gap permeance. The results of our simulation were consistent with experimental results, confirming the validity of our analysis method. Thus, an ultra-high-speed, high-capacity (1.5 kW) motor rotating at 30,000 rpm can be designed for use in vacuum cleaners.
Source apportionment of atmospheric particulate matter: a joint Eulerian/Lagrangian approach.
Riccio, A; Chianese, E; Agrillo, G; Esposito, C; Ferrara, L; Tirimberio, G
2014-12-01
PM2.5 samples were collected during an annual monitoring campaign (January 2012-January 2013) in the urban area of Naples, one of the major cities in Southern Italy. Samples were collected by means of a standard gravimetric sampler (Tecora Echo model) and characterized chemically by ion chromatography. As a result, 143 samples together with their ionic composition were collected. We extend traditional source apportionment techniques, usually based on multivariate factor analysis, by interpreting the chemical analysis results within a Lagrangian framework. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model was used, providing linkages to the source regions in the upwind areas. Results were analyzed in order to quantify the relative weight of different source types/areas. Model results suggested that PM concentrations are strongly affected not only by local emissions but also by transboundary emissions, especially from Eastern and Northern European countries and African Saharan dust episodes.
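One common way to turn back-trajectory output into a source map is the potential source contribution function (PSCF); the sketch below shows that weighting scheme, assuming HYSPLIT back-trajectory endpoints and an episode flag per sample are already available. This is a generic companion technique, not necessarily the exact procedure used by the authors.

```python
import numpy as np

def pscf(endpoints, high_episode, lat_edges, lon_edges):
    """Potential Source Contribution Function on a lat/lon grid:
    PSCF(i, j) = m_ij / n_ij, where n_ij counts all trajectory endpoints
    falling in cell (i, j) and m_ij counts endpoints of trajectories that
    arrived during high-PM2.5 episodes."""
    n = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    m = np.zeros_like(n)
    for (lats, lons), polluted in zip(endpoints, high_episode):
        h, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
        n += h
        if polluted:
            m += h
    with np.errstate(invalid="ignore"):
        return np.where(n > 0, m / n, np.nan)
```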
NASA Astrophysics Data System (ADS)
Adib, A.; Afzal, P.; Heydarzadeh, K.
2015-01-01
The aim of this study is to classify the site effect using the concentration-area (C-A) fractal model in Meybod city, central Iran, based on microtremor data analysis. Log-log plots of the frequency, amplification and vulnerability index (k-g) indicate a multifractal nature for the parameters in the area. The results obtained from the C-A fractal modelling reveal that proper soil types are located around the central city. The results derived via the fractal modelling were utilized to improve the Nogoshi and Igarashi (1970, 1971) classification results in Meybod city. The resulting categories are: (1) hard soil and weak rock with frequency of 6.2 to 8 Hz, (2) stiff soil with frequency of about 4.9 to 6.2 Hz, (3) moderately soft soil with frequency of 2.4 to 4.9 Hz, and (4) soft soil with frequency lower than 2.4 Hz.
Site effect classification based on microtremor data analysis using concentration-area fractal model
NASA Astrophysics Data System (ADS)
Adib, A.; Afzal, P.; Heydarzadeh, K.
2014-07-01
The aim of this study is to classify the site effect using the concentration-area (C-A) fractal model in Meybod city, Central Iran, based on microtremor data analysis. Log-log plots of the frequency, amplification and vulnerability index (k-g) indicate a multifractal nature for the parameters in the area. The results obtained from the C-A fractal modeling reveal that proper soil types are located around the central city. The results derived via the fractal modeling were utilized to improve Nogoshi's classification results in Meybod city. The resulting categories are: (1) hard soil and weak rock with frequency of 6.2 to 8 Hz, (2) stiff soil with frequency of about 4.9 to 6.2 Hz, (3) moderately soft soil with frequency of 2.4 to 4.9 Hz, and (4) soft soil with frequency lower than 2.4 Hz.
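A minimal sketch of the C-A fitting step follows: on a log-log plot of threshold versus enclosed area, each straight segment A(v >= s) ~ s^(-alpha) is fitted by least squares, and the breakpoints separating classes are supplied by the analyst. All names and inputs are illustrative, assuming positive gridded values.

```python
import numpy as np

def ca_fractal_exponents(values, breakpoints):
    """Concentration-area (C-A) model: A(v >= s) ~ s^(-alpha) on each
    log-log segment. 'values' are positive gridded amplification (or
    frequency) values; 'breakpoints' split the sorted thresholds into
    segments whose slopes -alpha are estimated by least squares."""
    s = np.sort(values)
    area = len(s) - np.arange(len(s))   # cells with value >= s (area proxy)
    log_s, log_a = np.log10(s), np.log10(area)
    slopes = []
    edges = [s[0]] + list(breakpoints) + [s[-1]]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (s >= lo) & (s <= hi)
        slope, _ = np.polyfit(log_s[mask], log_a[mask], 1)
        slopes.append(-slope)           # alpha for this segment
    return slopes
```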
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and results estimated on the basis of simulated data are presented and compared with those obtained by conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are more consistent with the recent supposition that the brain's dynamic behavior is nonlinear.
NASA Astrophysics Data System (ADS)
Cunningham, Jessica D.
Newton's Universe (NU), an innovative teacher training program, strives to obtain measures from rural, middle school science teachers and their students to determine the impact of its distance learning course on understanding of temperature. No consensus exists on the most appropriate and useful method of analysis to measure change in psychological constructs over time. Several item response theory (IRT) models have been deemed useful in measuring change, which makes the choice of an IRT model not obvious. The appropriateness and utility of each model, including a comparison to a traditional analysis of variance approach, was investigated using middle school science student performance on an assessment over an instructional period. Predetermined criteria were outlined to guide model selection based on several factors including research questions, data properties, and meaningful interpretations to determine the most appropriate model for this study. All methods employed in this study reiterated one common interpretation of the data -- specifically, that the students of teachers with any NU course experience had significantly greater gains in performance over the instructional period. However, clear distinctions were made between an analysis of variance and the racked and stacked analysis using the Rasch model. Although limited research exists examining the usefulness of the Rasch model in measuring change in understanding over time, this study applied these methods and detailed plausible implications for data-driven decisions based upon results for NU and others. Being mindful of the advantages and usefulness of each method of analysis may help others make informed decisions about choosing an appropriate model to depict changes to evaluate other programs. Results may encourage other researchers to consider the meaningfulness of using IRT for this purpose. Results have implications for data-driven decisions for future professional development courses, in science education and other disciplines. KEYWORDS: Item Response Theory, Rasch Model, Racking and Stacking, Measuring Change in Student Performance, Newton's Universe teacher training
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang
2014-05-01
Higher-order ionospheric (HOI) corrections are proposed to become a standard part of precise GPS data analysis. In this study, we investigate the impacts of HOI corrections on coordinate time series by re-processing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in our three processing runs: (a) run NO, without HOI corrections; (b) run IG, with both second- and third-order corrections modeled using the International Geomagnetic Reference Field 11 (IGRF11) for the magnetic field; (c) run ID, the same as IG but with a dipole magnetic model. Both spectral analysis and noise analysis are adopted to investigate these effects. Results show that for CMONOC stations, HOI corrections bring an overall improvement. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing a more remarkable variation. Low-latitude sites are more affected, and the impacts vary among the coordinate components. The results of an analysis of stacked periodograms show that there is a good match between the seasonal amplitudes and the HOI corrections, and that the observed variations in the coordinate time series are related to HOI effects. HOI delays partially explain the seasonal amplitudes in the coordinate time series, especially for the U component. The annual amplitudes for all components decrease for over one-half of the selected CMONOC sites, and the semi-annual amplitudes are affected even more strongly by the corrections. However, when the dipole model is used, the results are not as good as with the IGRF model: analysis of the dipole runs indicates that HOI corrections then increase the noise amplitudes and can generate false periodic signals. When a dipole model is used in modeling the HOI terms, larger residuals and noise are introduced rather than effective improvements.
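The seasonal-amplitude comparison described above can be reproduced with an ordinary least-squares fit of a trend plus annual and semi-annual harmonics to each coordinate component. A minimal sketch follows; the input time series is hypothetical.

```python
import numpy as np

def seasonal_amplitudes(t_years, series):
    """Least-squares fit of trend plus annual and semi-annual terms,
    y(t) = a + b*t + sum_k [c_k cos(2*pi*k*t) + s_k sin(2*pi*k*t)], k = 1, 2,
    as commonly used to characterize GPS coordinate time series."""
    w = 2 * np.pi * t_years
    A = np.column_stack([np.ones_like(t_years), t_years,
                         np.cos(w), np.sin(w), np.cos(2 * w), np.sin(2 * w)])
    x, *_ = np.linalg.lstsq(A, series, rcond=None)
    annual = np.hypot(x[2], x[3])
    semiannual = np.hypot(x[4], x[5])
    return annual, semiannual

# Fitting the 'NO' and 'IG' runs separately and differencing the amplitudes
# quantifies how much seasonal signal the HOI corrections remove.
```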
NASA Astrophysics Data System (ADS)
Filizola, Marta; Carteni-Farina, Maria; Perez, Juan J.
1999-07-01
3D models of the opioid receptors μ, δ and κ were constructed using BUNDLE, an in-house program to build de novo models of G-protein coupled receptors at the atomic level. Once the three opioid receptors were constructed, and before any energy refinement, the models were assessed for their compatibility with results available from point-site mutations carried out on these receptors. In a subsequent step, three selective antagonists of the three receptors (naltrindole, naltrexone and nor-binaltorphimine) were docked onto each of the three receptors and subsequently energy minimized. The nine resulting complexes were checked for their ability to explain known results of structure-activity studies. Once the models were validated, the distances between different residues of the receptors and the ligands were computed and analyzed. This analysis permitted us to identify key residues tentatively involved in direct interaction with the ligand.
Learjet Model 55 Wing Analysis with Landing Loads
NASA Technical Reports Server (NTRS)
Boroughs, R. R.
1985-01-01
A NASTRAN analysis was used to determine the impact of new landing loads on the Learjet Model 55 wing. These new landing loads were the result of a performance improvement effort to increase the landing weight of the aircraft from 17,000 lbs. to 18,000 lbs. and to extend the life of the tires and brakes by incorporating larger tires and heavy-duty brakes. Landing loads for the original 17,000 lb. landing configuration were applied to the full-airplane NASTRAN model, and the analytical results were correlated with strain gage data from the original landing-load static tests. The landing loads for the 18,000 lb. airplane were then applied to the full-airplane NASTRAN model, and a comparison was made with the original Model 55 data. The results of this comparison enabled Learjet to determine the difference in stress distribution in the wing due to the two different sets of landing loads.
Modeling of information diffusion in Twitter-like social networks under information overload.
Li, Pei; Li, Wei; Wang, Hui; Zhang, Xin
2014-01-01
Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. A view scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type of user is adopted to characterize the information diffusion efficiency; this quantity is calculated theoretically. To verify the accuracy of the theoretical analysis, we conduct simulations; the simulation results agree with the theoretical results very well. These results are important for understanding the diffusion dynamics in social networks, and the analysis framework can be extended to consider more realistic situations.
Modeling of Information Diffusion in Twitter-Like Social Networks under Information Overload
Li, Wei
2014-01-01
Due to the existence of information overload in social networks, it becomes increasingly difficult for users to find useful information according to their interests. This paper takes Twitter-like social networks into account and proposes models to characterize the process of information diffusion under information overload. Users are classified into different types according to their in-degrees and out-degrees, and user behaviors are generalized into two categories: generating and forwarding. A view scope is introduced to model the user information-processing capability under information overload, and the average number of times a message appears in view scopes after it is generated by a given type of user is adopted to characterize the information diffusion efficiency; this quantity is calculated theoretically. To verify the accuracy of the theoretical analysis, we conduct simulations; the simulation results agree with the theoretical results very well. These results are important for understanding the diffusion dynamics in social networks, and the analysis framework can be extended to consider more realistic situations. PMID:24795541
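A toy simulation of the generate/forward dynamics with a finite view scope is sketched below. The network, forwarding probability and view-scope size are hypothetical, and the code is a simplified stand-in for the paper's analytical model rather than a reimplementation of it.

```python
import random

def simulate_diffusion(followers, view_scope, p_forward, seed_user, steps=50):
    """Count how many times a single message lands inside a user's view
    scope (a FIFO of the most recent posts a user can actually process)."""
    views = {u: [] for u in followers}          # each user's view scope
    frontier, appearances = [seed_user], 0
    for _ in range(steps):
        next_frontier = []
        for poster in frontier:
            for u in followers.get(poster, []):
                views[u].append("msg")
                views[u] = views[u][-view_scope:]   # overload: old items drop out
                appearances += 1
                if random.random() < p_forward:     # user forwards the message
                    next_frontier.append(u)
        if not next_frontier:
            break
        frontier = next_frontier
    return appearances

net = {"a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": []}
print(simulate_diffusion(net, view_scope=10, p_forward=0.3, seed_user="a"))
```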
Finotello, Alice; Morganti, Simone; Auricchio, Ferdinando
2017-09-01
In the last few years, several studies, each with different aim and modeling detail, have been proposed to investigate transcatheter aortic valve implantation (TAVI) with finite elements. The present work focuses on the patient-specific finite element modeling of the aortic valve complex. In particular, we aim at investigating how different modeling strategies in terms of material models/properties and discretization procedures can impact analysis results. Four different choices both for the mesh size (from 20 k elements to 200 k elements) and for the material model (from rigid to hyperelastic anisotropic) are considered. Different approaches for modeling calcifications are also taken into account. Post-operative CT data of the real implant are used as reference solution with the aim of outlining a trade-off between computational model complexity and reliability of the results. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Underwater Stirling engine design with modified one-dimensional model
NASA Astrophysics Data System (ADS)
Li, Daijin; Qin, Kan; Luo, Kai
2015-09-01
Stirling engines are regarded as an efficient and promising power system for underwater devices. One-dimensional models are widely used to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still lack proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which sufficient test results are available from NASA, is used to validate the accuracy of this one-dimensional model. The maximum error in predicted output power is less than 18% relative to the test results, and the maximum error in input power is no more than 9%. Finally, a Stirling engine for UUVs is designed using the Schmidt analysis method and the modified one-dimensional model; the results indicate that the designed engine is capable of delivering the desired output power.
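The Schmidt analysis mentioned above admits a compact numerical form: isothermal spaces, sinusoidal volume variations, and indicated work as the closed integral of p dV over one cycle. A sketch under those assumptions follows (dead volume lumped at the compression temperature for simplicity; all inputs illustrative).

```python
import numpy as np

def schmidt_cycle_power(v_swept_e, v_swept_c, v_dead, t_exp, t_comp,
                        phase_rad, m_gas, r_gas, freq_hz, n=3600):
    """Isothermal Schmidt-type analysis: sinusoidal volume variations and an
    ideal gas held at fixed expansion/compression temperatures; the indicated
    work is the closed integral of p dV over one cycle."""
    theta = np.linspace(0.0, 2 * np.pi, n)
    v_e = 0.5 * v_swept_e * (1 + np.cos(theta))            # expansion space
    v_c = 0.5 * v_swept_c * (1 + np.cos(theta - phase_rad))  # compression space
    # pressure from conservation of the total gas mass (ideal gas law)
    p = m_gas * r_gas / (v_e / t_exp + v_c / t_comp + v_dead / t_comp)
    v_tot = v_e + v_c + v_dead
    # trapezoidal evaluation of the closed integral of p dV
    work = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(v_tot))
    return work * freq_hz                                  # indicated power (W)
```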
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems, which are frequently encountered in medical decision making, are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
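A minimal sketch of the multivariate idea: jointly resample the MDP parameters, re-solve for the optimal policy (here by standard value iteration), and report how often the base-case policy remains optimal. The samplers and dimensions are user-supplied placeholders, not the authors' case study.

```python
import numpy as np

def optimal_policy(P, R, gamma=0.97, iters=500):
    """Value iteration for a finite MDP. P has shape (A, S, S), R shape (S, A)."""
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def policy_confidence(sample_P, sample_R, base_policy, n=1000):
    """Multivariate probabilistic sensitivity analysis: resample the model
    parameters jointly and report how often the base-case policy remains
    optimal -- a scalar summary behind a policy acceptability curve."""
    hits = 0
    for _ in range(n):
        P, R = sample_P(), sample_R()          # user-supplied samplers
        if np.array_equal(optimal_policy(P, R), base_policy):
            hits += 1
    return hits / n
```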
Kircher, J.E.; Dinicola, Richard S.; Middelburg, R.F.
1984-01-01
Monthly values were computed for water-quality constituents at four streamflow gaging stations in the Upper Colorado River basin for the determination of trends. Seasonal regression and seasonal Kendall trend analysis techniques were applied to two monthly data sets at each station site for four different time periods. A recently developed method for determining optimal water-discharge data-collection frequency was also applied to the monthly water-quality data. Trend analysis results varied with each monthly load computational method, period of record, and trend detection model used. No conclusions could be reached regarding which computational method was best to use in trend analysis. Time-period selection for analysis was found to be important with regard to intended use of the results. Seasonal Kendall procedures were found to be applicable to most data sets. Seasonal regression models were more difficult to apply and were sometimes of questionable validity; however, those results were more informative than seasonal Kendall results. The best model to use depends upon the characteristics of the data and the amount of trend information needed. The measurement-frequency optimization method had potential for application to water-quality data, but refinements are needed. (USGS)
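For reference, the seasonal Kendall statistic used in such studies can be computed as sketched below: Mann-Kendall S within each season (calendar month), summed across seasons, with a normal approximation for significance. Tie corrections are omitted for brevity, and the input layout is an assumption.

```python
import numpy as np

def seasonal_kendall(monthly):
    """Seasonal Kendall trend test: Mann-Kendall S is computed within each
    season (here, calendar month) and summed, so seasonality cannot
    masquerade as trend. 'monthly' is a (years, 12) array; NaNs allowed."""
    s_total, var_total = 0.0, 0.0
    for m in range(monthly.shape[1]):
        x = monthly[:, m]
        x = x[~np.isnan(x)]
        n = len(x)
        for i in range(n - 1):
            s_total += np.sum(np.sign(x[i + 1:] - x[i]))
        var_total += n * (n - 1) * (2 * n + 5) / 18.0
    # continuity-corrected normal approximation
    if s_total > 0:
        z = (s_total - 1) / np.sqrt(var_total)
    elif s_total < 0:
        z = (s_total + 1) / np.sqrt(var_total)
    else:
        z = 0.0
    return s_total, z   # |z| > 1.96 suggests a trend at the 5% level
```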
Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel
2008-01-01
Background: Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results: In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions: Our study demonstrates how a new level of integration between high throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources. PMID:18808699
NASA Technical Reports Server (NTRS)
Bacmeister, Julio T.; Suarez, Max J.; Einaudi, Franco (Technical Monitor)
2001-01-01
This is the first of a two-part study examining the connection of the equatorial momentum budget in an AGCM (Atmospheric General Circulation Model) with simulated equatorial surface wind stresses over the Pacific. The AGCM used in this study forms part of a newly developed coupled forecasting system used at NASA's Seasonal-to-Interannual Prediction Project. Here we describe the model and present results from a 20-year (1979-1999) AMIP-type experiment forced with observed SSTs (Sea Surface Temperatures). Model results are compared with available observational data sets. The climatological pattern of extra-tropical planetary waves as well as their ENSO-related variability is found to agree quite well with re-analysis estimates. The model's surface wind stress is examined in detail and reveals a reasonable overall simulation of seasonal and interannual variability, as well as of seasonal mean distributions. However, an excessive annual oscillation in wind stress over the equatorial central Pacific is found. We examine the model's divergent circulation over the tropical Pacific and compare it with estimates based on re-analysis data. These comparisons are generally good, but reveal excessive upper-level convergence in the central Pacific. In Part II of this study a direct examination of individual terms in the AGCM's momentum budget is presented, and we relate the results of this analysis to the model's simulation of surface wind stress.
Child-Centered Play Therapy in the Schools: Review and Meta-Analysis
ERIC Educational Resources Information Center
Ray, Dee C.; Armstrong, Stephen A.; Balkin, Richard S.; Jayne, Kimberly M.
2015-01-01
The authors conducted a meta-analysis and systematic review that examined 23 studies evaluating the effectiveness of child centered play therapy (CCPT) conducted in elementary schools. Meta-analysis results were explored using a random effects model for mean difference and mean gain effect size estimates. Results revealed statistically significant…
Analysis of Compression Pad Cavities for the Orion Heatshield
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lessard, Victor R.; Jentink, Thomas N.; Zoby, Ernest V.
2009-01-01
Current results of a program for analysis of the compression pad cavities on the Orion heatshield are reviewed. The program was supported by experimental tests, engineering modeling, and applied computations with an emphasis on the latter presented in this paper. The computational tools and approach are described along with calculated results for wind tunnel and flight conditions. Correlations of the computed results are shown which can produce a credible prediction of heating augmentation due to cavity disturbances. The models developed for use in preliminary design of the Orion heatshield are presented.
Identification of human operator performance models utilizing time series analysis
NASA Technical Reports Server (NTRS)
Holden, F. M.; Shinners, S. M.
1973-01-01
The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
ERIC Educational Resources Information Center
Wang, Ning; Stahl, John
2012-01-01
This article discusses the use of the Many-Facets Rasch Model, via the FACETS computer program (Linacre, 2006a), to scale job/practice analysis survey data as well as to combine multiple rating scales into single composite weights representing the tasks' relative importance. Results from the Many-Facets Rasch Model are compared with those…
Operation, Modeling and Analysis of the Reverse Water Gas Shift Process
NASA Technical Reports Server (NTRS)
Whitlow, Jonathan E.
2001-01-01
The Reverse Water Gas Shift process is a candidate technology for water and oxygen production on Mars under the In-Situ Propellant Production project. This report focuses on the operation and analysis of the Reverse Water Gas Shift (RWGS) process, which has been constructed at Kennedy Space Center. A summary of results from the initial operation of the RWGS process, along with an analysis of these results, is included in this report. In addition, an evaluation of a material balance model developed from work performed previously under the summer program is included, along with recommendations for further experimental work.
Glacial isostatic adjustment on the Northern Hemisphere - new results from GRACE
NASA Astrophysics Data System (ADS)
Mueller, J.; Steffen, H.; Gitlein, O.; Denker, H.; Timmen, L.
2007-12-01
The Earth's gravity field mapped by the Gravity Recovery and Climate Experiment (GRACE) satellite mission shows variations due to the integral effect of mass variations in the atmosphere, hydrosphere and geosphere. The Earth's gravity field is provided in the form of monthly solutions by several institutions, e.g. GFZ Potsdam, CSR and JPL. During the GRACE standard processing of these analysis centers, oceanic and atmospheric contributions as well as tidal effects are reduced. The solutions of the analysis centers differ slightly, which is due to the application of different reduction models and center-specific processing schemes. We present our investigation of mass variations in the areas of glacial isostatic adjustment (GIA) in North America and Northern Europe from GRACE data. One key issue is the separation of GIA parts and the reduction of the observed quantities by applying dedicated filters (e.g. isotropic, non-isotropic, and destriping filters) and global models of hydrological variations (e.g. WGHM, LaDWorld, GLDAS). In a further step, we analyze the results of both regions regarding their reliability, and finally present a comparison to results of geodynamic modeling and to absolute gravity measurements. Our results clearly show that the quality of the GRACE-derived gravity-change signal benefits from improved reduction models and well-chosen analysis techniques. Nevertheless, the comparison to results of geodynamic models still reveals differences, and thus further studies are in progress.
Composite Failures: A Comparison of Experimental Test Results and Computational Analysis Using XFEM
2016-09-30
NUWC-NPT Technical Report 12,218, 30 September 2016. ...availability of measurement techniques, experimental testing of composite materials has largely outpaced the computational modeling ability, forcing...
Modelling tsunami inundation for risk analysis at the Andaman Sea Coast of Thailand
NASA Astrophysics Data System (ADS)
Kaiser, G.; Kortenhaus, A.
2009-04-01
The mega-tsunami of Dec. 26, 2004 strongly impacted the Andaman Sea coast of Thailand and devastated coastal ecosystems as well as towns, settlements and tourism resorts. In addition to the tragic loss of many lives, the destruction or damage of life-supporting infrastructure, such as buildings, roads, water & power supply etc., caused high economic losses in the region. To mitigate future tsunami impacts there is a need to assess the tsunami hazard and vulnerability in flood-prone areas of the Andaman Sea coast in order to determine the spatial distribution of risk and to develop risk management strategies. In the bilateral German-Thai project TRAIT, research is performed on integrated risk assessment for the provinces Phang Nga and Phuket in southern Thailand, including a hazard analysis, i.e. modelling tsunami propagation to the coast, tsunami wave breaking and inundation characteristics, as well as vulnerability analysis of the socio-economic and ecological systems, in order to determine the scenario-based, specific risk for the region. In this presentation, results of the hazard analysis and the inundation simulation are presented and discussed. Numerical modelling of tsunami propagation and inundation is an indispensable tool for risk analysis, risk management and evacuation planning. While numerous investigations have modelled tsunami wave generation and propagation in the Indian Ocean, there is still a gap in determining detailed inundation patterns, i.e. water depth and flow dynamics; for risk management and evacuation planning this knowledge is essential. As the accuracy of the inundation simulation depends strongly on the available bathymetric and topographic data, a multi-scale approach is chosen in this work. The ETOPO Global Relief Model as a bathymetric basis and the Shuttle Radar Topography Mission (SRTM90) have been widely applied in tsunami modelling, as these data are free and available almost worldwide. However, to model tsunami-induced inundation for risk analysis and management purposes, the accuracy of these data is not sufficient: the processes in the near-shore zone cannot be modelled accurately enough, and the spatial resolution of the topography is weak. Moreover, the SRTM data provide a digital surface model, which includes vegetation and buildings in the surface description. To improve the data basis, additional bathymetric data were used in the near-shore zone of the Phang Nga and Phuket coastlines, and various remote sensing techniques as well as additional GPS measurements were applied to derive a high-resolution topography from satellite and airborne data. Land use classifications and filter methods were developed to correct the digital surface models to digital elevation models. Simulations were then performed with a non-linear shallow water model to reproduce the 2004 Asian Tsunami and to simulate possible future events. Results for water elevation near the coast were compared with field measurements and observations, and the influence of the resolution of the topography on inundation patterns such as water depth, velocity, dispersion and duration of the flood was analysed. The inundation simulation provides detailed hazard maps and is considered a reliable basis for risk assessment and risk zone mapping, and the results are regarded as vital for the estimation of tsunami-induced damage and for evacuation planning. Results of the aforementioned simulations will be discussed during the conference. Differences in the numerical results obtained using topographic data of different scales, modified by different post-processing techniques, will be analysed and explained. Further use of the results with respect to tsunami risk analysis and management will also be demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dandini, Vincent John; Duran, Felicia Angelica; Wyss, Gregory Dane
2003-09-01
This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
Martinez-Fiestas, Myriam; Rodríguez-Garzón, Ignacio; Delgado-Padial, Antonio; Lucas-Ruiz, Valeriano
2017-09-01
This article presents a cross-cultural study on perceived risk in the construction industry. Worker samples from three different countries were studied: Spain, Peru and Nicaragua. The main goal was to explain how construction workers perceive their occupational hazards and to analyze how this perception is related to their national culture. The model used to measure perceived risk was the psychometric paradigm. The results show three very similar profiles, indicating that risk perception is independent of nationality. A cultural analysis was conducted using the Hofstede model. The results of this analysis and its relation to perceived risk showed that risk perception in construction is independent of national culture. Finally, a multiple linear regression analysis was conducted to determine which qualitative attributes could predict the global quantitative size of risk perception. All of the findings have important implications regarding the management of safety in the workplace.
Coexistence Analysis of Civil Unmanned Aircraft Systems at Low Altitudes
NASA Astrophysics Data System (ADS)
Zhou, Yuzhe
2016-11-01
The demand for unmanned aircraft systems in civil applications is growing. However, ensuring the flight efficiency and safety of unmanned aircraft places critical requirements on wireless communication spectrum resources, and current research mainly focuses on spectrum availability. In this paper, unmanned aircraft system communication models, including a coverage model and a data rate model, and two coexistence analysis procedures, i.e. the interference-to-noise ratio criterion and the frequency-distance-direction criterion, are proposed to analyze the spectrum requirements and interference behavior of civil unmanned aircraft systems at low altitudes, and explicit explanations are provided. The proposed coexistence analysis criteria are applied to assess the uplink and downlink interference performance of unmanned aircraft systems and to support corresponding spectrum planning. Numerical results demonstrate that the proposed assessments and analysis procedures satisfy the requirements of flexible spectrum access and safe coexistence among multiple unmanned aircraft systems.
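The interference-to-noise criterion reduces to a short link-budget calculation; a sketch with free-space path loss and purely hypothetical link parameters follows. The commonly cited protection threshold of I/N <= -6 dB is an assumption, not a value from the paper.

```python
import math

def interference_to_noise_db(eirp_dbm, gain_rx_dbi, dist_km, freq_mhz,
                             noise_floor_dbm):
    """Interference-to-noise (I/N) coexistence check with free-space loss:
    FSPL(dB) = 32.44 + 20 log10(d_km) + 20 log10(f_MHz)."""
    fspl = 32.44 + 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz)
    i_dbm = eirp_dbm + gain_rx_dbi - fspl   # interference at the victim receiver
    return i_dbm - noise_floor_dbm

# Hypothetical UAS downlink: 30 dBm EIRP, 3 dBi receive gain, 5 km separation
# at 2400 MHz, -104 dBm receiver noise floor; compare the result to -6 dB.
print(interference_to_noise_db(30, 3, 5, 2400, -104))
```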
From scenarios to domain models: processes and representations
NASA Astrophysics Data System (ADS)
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
Stability analysis of free piston Stirling engines
NASA Astrophysics Data System (ADS)
Bégot, Sylvie; Layes, Guillaume; Lanzetta, François; Nika, Philippe
2013-03-01
This paper presents a stability analysis of a free piston Stirling engine. The model and the detailed calculation of pressure losses are presented. Stability of the machine is studied by observation of the eigenvalues of the model matrix. Model validation, based on comparison with NASA experimental results, is described. The influence of operational and construction parameters on performance and stability is discussed. The results show that most parameters that are beneficial for machine power tend to induce irregular mechanical characteristics with load, suggesting that self-sustained oscillations could be difficult to maintain and control.
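The eigenvalue test described above is easy to reproduce for a linearized piston/displacer model x_dot = Ax: growth of oscillations corresponds to an eigenvalue with positive real part. The matrix entries below are hypothetical lumped stiffness/damping/coupling coefficients, not values from the paper or the NASA tests.

```python
import numpy as np

# State vector: [piston pos., piston vel., displacer pos., displacer vel.]
A = np.array([[0.0,    1.0,   0.0,   0.0],
              [-120.0, -0.8,  35.0,  0.2],
              [0.0,    0.0,   0.0,   1.0],
              [40.0,   0.3, -95.0,  -0.5]])

eig = np.linalg.eigvals(A)
print(eig)
# Positive real parts mean growing (self-sustained) oscillations.
print("self-sustained growth" if np.any(eig.real > 0) else "decaying modes")
```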
NASA Technical Reports Server (NTRS)
Gangwani, S. T.
1985-01-01
A rotor aeroelastic analysis that reliably predicts the vibration levels of a helicopter is used to test various unsteady aerodynamics models, with the objective of improving the correlation between test and theory. This analysis, the Rotor Aeroelastic Vibration (RAVIB) computer program, is based on a frequency-domain forced response analysis that uses transfer matrix techniques to model helicopter/rotor dynamic systems of varying degrees of complexity. The results for the AH-1G helicopter rotor were compared with flight test data during high-speed operation, and they indicated a reasonably good correlation for the beamwise and chordwise blade bending moments, but for torsional moments the correlation was poor. As a result, a new aerodynamics model based on unstalled synthesized data derived from large-amplitude oscillating airfoil experiments was developed and tested.
Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sawyer, Darren Charles
1994-01-01
The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, one not possible to model accurately with analytical approaches.
Information Security Analysis Using Game Theory and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlicher, Bob G; Abercrombie, Robert K
Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which allows us to address previous limitations of current stochastic game models. Such models consider only perfect information, assuming that the defender is always able to detect attacks; that the state transition probabilities are fixed before the game; and that the players' actions are always synchronous; moreover, most models are not scalable with the size and complexity of the systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.
ID201202961, DOE S-124,539, Information Security Analysis Using Game Theory and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G
Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which allows us to address previous limitations of current stochastic game models. Such models consider only perfect information, assuming that the defender is always able to detect attacks; that the state transition probabilities are fixed before the game; and that the players' actions are always synchronous; moreover, most models are not scalable with the size and complexity of the systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.
NASA Technical Reports Server (NTRS)
Bishop, James
1995-01-01
Our analysis of the Voyager UVS solar occultation data acquired during the Neptune encounter is essentially complete, as attested by the attached poster materials. The photochemical modeling addresses the recent revision in branching ratios for radical production in the photolysis of methane at H Lyman alpha implied by the laboratory measurements of Mordaunt et al. (1993). The software generated in this effort has been useful for checking the degree to which photochemical models addressing other datasets (mainly infrared) are consistent with the UVS data. This work complements the UVS modeling results in that the IR data refer to deeper pressure levels; as regards the modeling of UVS data, the most significant result is the convincing support for the presence of a stagnant lower stratosphere. Evidence for strong dynamical (mixing) transport of minor constituents at shallower pressures is provided by the UVS data analysis.
Analysis of the connection of the timber-fiber concrete composite structure
NASA Astrophysics Data System (ADS)
Holý, Milan; Vráblík, Lukáš; Petřík, Vojtěch
2017-09-01
This paper deals with the implementation of the material parameters of the connection in complex models for the analysis of timber-fiber concrete composite structures. The aim of this article is to present a possible way of idealizing the continuous contact model that approximates the actual behavior of timber-fiber reinforced concrete structures. The presented model of the connection was derived from push-out shear tests. Nonlinear numerical analysis confirmed that very good agreement between the results of numerical simulations and the results of the experiments can be achieved by a suitable choice of the material parameters of the continuous contact. Finally, an analytical calculation method for timber-fiber concrete composite structures is developed for practical use in engineering practice. The input material parameters for the analytical model were obtained using data from the experiments.
NASA Astrophysics Data System (ADS)
Kaźmierczak, Bartosz; Wartalska, Katarzyna; Wdowikowski, Marcin; Kotowski, Andrzej
2017-11-01
Modern research on heavy rainfall analysis for sewerage design indicates the need to develop and use probabilistic rain models. One of the issues that remains to be resolved is the shortest rainfall duration to be analyzed. It is commonly believed that the best shortest duration is 5 minutes, while the shortest rain duration measured by national services is often 10 or even 15 minutes. The main aim of this paper is to present the difference between probabilistic rainfall model results obtained from rainfall time series that include or exclude the 5-minute rainfall duration. Analyses were made for the long period 1961-2010 at the Polish meteorological station Legnica. Four probability distributions were used to develop the probabilistic model best fitted to the rainfall measurements. Results clearly indicate that models including the 5-minute rainfall duration are more appropriate to use.
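As a sketch of the model-fitting step, the code below fits one candidate distribution (Gumbel) to hypothetical annual maxima of 5-minute rainfall depth and reads off a 10-year return level; a comparison of several distributions, as in the paper, would fit each candidate the same way.

```python
import numpy as np
from scipy import stats

# Annual maxima of 5-minute rainfall depths (mm); values are hypothetical.
maxima = np.array([6.1, 8.4, 5.2, 9.7, 7.3, 6.8, 11.2, 5.9, 7.9, 8.8])

# Fit a Gumbel distribution by maximum likelihood and evaluate the depth
# exceeded on average once in 10 years (the 0.9 quantile).
loc, scale = stats.gumbel_r.fit(maxima)
print(stats.gumbel_r.ppf(1 - 1 / 10, loc=loc, scale=scale))
```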
Signal analysis of accelerometry data using gravity-based modeling
NASA Astrophysics Data System (ADS)
Davey, Neil P.; James, Daniel A.; Anderson, Megan E.
2004-03-01
Triaxial accelerometers have been used to measure human movement parameters in swimming. Interpretation of the data is difficult due to interference sources, including the interaction of external bodies. In this investigation the authors developed a model to simulate the physical movement of the lower back. Theoretical accelerometry outputs were derived, giving an ideal, or noiseless, dataset. An experimental data collection apparatus was developed by adapting a system to the aquatic environment for the investigation of swimming. Model data were compared against recorded data and showed strong correlation. Comparison of recorded and modeled data can be used to identify changes in body movement, which is especially useful when cyclic patterns are present in the activity. The strong correlation between data sets allowed the development of signal processing algorithms for swimming stroke analysis, developed first on the pure noiseless data set and then applied to performance data. Video analysis was also used to validate the study and has shown potential to provide acceptable results.
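Cyclic stroke patterns of the kind exploited here can be extracted with a simple autocorrelation; a sketch follows, assuming a uniformly sampled acceleration signal. All numbers are hypothetical.

```python
import numpy as np

def stroke_period(accel, fs):
    """Estimate the dominant cycle period (s) of a swimming acceleration
    trace from the first major peak of its autocorrelation."""
    x = accel - np.mean(accel)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    ac /= ac[0]                                         # normalize
    # first local maximum after the zero-lag peak
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] > ac[lag + 1] and ac[lag] > 0.3:
            return lag / fs
    return None

fs = 100.0                                   # Hz, hypothetical sample rate
t = np.arange(0, 10, 1 / fs)
accel = np.sin(2 * np.pi * 0.8 * t) + 0.2 * np.random.randn(t.size)
print(stroke_period(accel, fs))              # ~1.25 s for a 0.8 Hz stroke
```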
Monakhova, Yulia B; Mushtakova, Svetlana P
2017-05-01
A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries generally between 95% and 105% were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on analysis of vitamins and caffeine in energy drinks and aromatic hydrocarbons in motor fuel with 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
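A minimal sketch of an ICA-based calibration of this general kind is shown below, using scikit-learn's FastICA followed by a regression from ICA scores to concentration. The data are random placeholders, and the pipeline is a generic illustration rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

# X_cal: calibration spectra (mixtures x wavelengths), y_cal: known analyte
# concentrations; X_new: spectra of "unknown" mixtures. All data hypothetical.
rng = np.random.default_rng(0)
X_cal, y_cal = rng.random((20, 300)), rng.random(20)
X_new = rng.random((5, 300))

ica = FastICA(n_components=3, random_state=0)
S_cal = ica.fit_transform(X_cal)              # ICA scores of calibration mixtures
model = LinearRegression().fit(S_cal, y_cal)  # map scores -> concentration

S_new = ica.transform(X_new)                  # resolve the unknown spectra
print(model.predict(S_new))                   # estimated concentrations
```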
Lattice Boltzmann methods for global linear instability analysis
NASA Astrophysics Data System (ADS)
Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis
2017-12-01
Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time (SRT) and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of a circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of the appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement, to make the proposed methodology competitive with established approaches for global instability analysis, are discussed.
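To illustrate the idea at the smallest possible scale, the sketch below forms the linear operator of a 1D periodic lattice BGK model numerically (by finite-difference perturbation of one collide-and-stream step around a uniform base state) and examines its spectrum; the 2D/3D analyses discussed above follow the same principle with much larger operators and the analytic linearizations named in the abstract.

```python
import numpy as np

E = np.array([0, 1, -1])          # D1Q3 lattice velocities
W = np.array([2/3, 1/6, 1/6])     # lattice weights, cs^2 = 1/3
TAU, NX = 0.8, 32                 # relaxation time, grid size (illustrative)

def feq(rho, u):
    """Local equilibrium distribution for D1Q3."""
    cu = np.outer(E, u) / (1/3)
    return W[:, None] * rho * (1 + cu + 0.5 * cu**2 - 1.5 * u**2)

def step(f):
    """One collide-and-stream step of the lattice Boltzmann equation (BGK)."""
    rho = f.sum(axis=0)
    u = (E[:, None] * f).sum(axis=0) / rho
    f = f - (f - feq(rho, u)) / TAU                           # collision
    return np.stack([np.roll(f[i], E[i]) for i in range(3)])  # streaming

f0 = feq(np.ones(NX), np.zeros(NX))    # uniform base state
n, eps = f0.size, 1e-7
J = np.empty((n, n))
for k in range(n):                     # finite-difference Jacobian of one step
    df = np.zeros(n); df[k] = eps
    J[:, k] = (step(f0 + df.reshape(f0.shape)) - step(f0)).ravel() / eps

lam = np.linalg.eigvals(J)
print("max |lambda| =", np.abs(lam).max())   # <= 1 means linearly stable
```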
A Lumped Computational Model for Sodium Sulfur Battery Analysis
NASA Astrophysics Data System (ADS)
Wu, Fan
Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model designed to aid in the design process and control of sodium sulfur batteries, and to assess its capabilities. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy, and charge throughout the battery has been developed. The computations are coupled with Faraday's law, and solutions for the species concentrations, electrical potential, and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational and electrochemical models used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
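A minimal sketch of the time-marching idea, assuming one control volume, a constant-current discharge, an invented linear open-circuit-voltage curve, and illustrative parameter values (none of these are NaS data from the study):
```python
import numpy as np

F = 96485.0        # C/mol, Faraday constant
n_e = 2.0          # electrons transferred per reaction (assumed)
N_mol = 50.0       # initial moles of active species (assumed)
R_int = 0.01       # ohm, lumped internal resistance (assumed)
I = 30.0           # A, constant discharge current
dt, t_end = 1.0, 3600.0

def ocv(x):        # invented linear open-circuit voltage vs. depth of discharge
    return 2.08 - 0.3 * x

N0, t, hist = N_mol, 0.0, []
while t < t_end and N_mol > 0.0:
    N_mol -= I * dt / (n_e * F)            # Faraday's law: moles consumed per step
    x = 1.0 - N_mol / N0                   # depth of discharge
    hist.append((t, ocv(x) - I * R_int))   # terminal voltage, ohmic drop only
    t += dt

print(f"terminal voltage after {hist[-1][0]:.0f} s: {hist[-1][1]:.3f} V")
```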
Model Identification in Time-Series Analysis: Some Empirical Results.
ERIC Educational Resources Information Center
Padia, William L.
Model identification of time-series data is essential to valid statistical tests of intervention effects. Model identification is, at best, inexact in the social and behavioral sciences where one is often confronted with small numbers of observations. These problems are discussed, and the results of independent identifications of 130 social and…
A simplified solar cell array modelling program
NASA Technical Reports Server (NTRS)
Hughes, R. D.
1982-01-01
As part of the energy conversion/self-sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. Analysis of the incident solar radiation, the steady-state cell temperature, and the current-voltage characteristics of a cell array is discussed. A sample cell array was modelled and the results are presented.
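A minimal sketch of the current-voltage part of such a model, using the textbook single-diode equation with series resistance neglected so the current is explicit; the photocurrent, saturation current, ideality factor, and cell temperature below are illustrative assumptions:
```python
import numpy as np

q, kB = 1.602e-19, 1.381e-23
T = 318.0          # K, steady-state cell temperature (assumed)
Iph = 2.5          # A, photocurrent, proportional to incident irradiance
I0, n = 1e-9, 1.3  # A, diode saturation current; ideality factor (assumed)
Vt = n * kB * T / q

V = np.linspace(0.0, 0.75, 300)
I = Iph - I0 * (np.exp(V / Vt) - 1.0)   # cell current at each terminal voltage
P = V * I
i_max = P.argmax()
print(f"Pmax ~ {P[i_max]:.2f} W at V ~ {V[i_max]:.3f} V")
```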
Analysis of an electrohydraulic aircraft control surface servo and comparison with test results
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1972-01-01
An analysis of an electrohydraulic aircraft control-surface system is made in which the system is modeled as a lumped, two-mass, spring-coupled system controlled by a servo valve. Both linear and nonlinear models are developed, and the effects of hinge-moment loading are included. Transfer functions of the system and approximate literal factors of the transfer functions for several cases are presented. The damping action of dynamic pressure feedback is analyzed. Comparisons of the model responses with results from tests made on a highly resonant rudder control-surface servo indicate the adequacy of the model. The effects of variations in hinge-moment loading are illustrated.
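A minimal sketch of the linear version of such a plant, assuming two masses coupled by a spring with an added hinge-stiffness term so the surface has a static equilibrium; all numerical values are invented, not the identified servo parameters:
```python
import numpy as np
from scipy.signal import StateSpace, step

m1, m2 = 5.0, 20.0            # kg: actuator-side and surface-side inertia
k = 4.0e5                     # N/m, coupling (linkage) stiffness
k_h = 1.0e4                   # N/m, hinge-moment loading as an effective spring
c1, c2 = 50.0, 200.0          # N s/m, damping on each mass

# States [x1, v1, x2, v2]; input: actuator force; output: surface position x2.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-k / m1, -c1 / m1, k / m1, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [k / m2, 0.0, -(k + k_h) / m2, -c2 / m2]])
B = np.array([[0.0], [1.0 / m1], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0, 0.0]])
sys = StateSpace(A, B, C, np.zeros((1, 1)))

t, y = step(sys, T=np.linspace(0.0, 0.5, 2000))   # 1 N force step
print(f"x2 settles near {y[-1]:.2e} m (static value 1/k_h = {1/k_h:.0e} m)")
```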
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Reduced-order modeling approach for frictional stick-slip behaviors of joint interface
NASA Astrophysics Data System (ADS)
Wang, Dong; Xu, Chao; Fan, Xuanhua; Wan, Qiang
2018-03-01
The complex frictional stick-slip behaviors of a mechanical joint interface have a great effect on the dynamic properties of assembled structures. In this paper, a reduced-order modeling approach based on the constitutive Iwan model is proposed to describe the stick-slip behaviors of the joint interface. An improved Iwan model is developed to describe the non-zero residual stiffness in the macro-slip regime, the smooth transition of joint stiffness from the micro-slip to the macro-slip regime, and the power-law relationship of energy dissipation in the micro-slip regime. To characterize these nonlinear behaviors, the finite element method is used to calculate the restoring force under monotonic loading and the energy dissipation per cycle under oscillatory loading. The proposed model is then used to predict the nonlinear stick-slip behaviors of the joint interface by curve-fitting to the finite element results, and shows good agreement with the finite element analysis. A comparison with experimental results from the literature is also made; the proposed model agrees very well with the experiments.
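A minimal sketch of the parallel-series (Iwan) construction the model builds on: a bank of Jenkins elements (spring plus Coulomb slider) in parallel, with an extra spring standing in for the non-zero residual stiffness; parameters and the imposed displacement history are illustrative assumptions.
```python
import numpy as np

N = 50
k_el = np.full(N, 1.0e4 / N)               # Jenkins element stiffnesses
f_slip = np.linspace(0.2, 10.0, N) / N     # element slip-force limits
k_res = 500.0                              # residual stiffness after macro-slip

def iwan_force(u_hist):
    f_el = np.zeros(N)                     # element force states
    out, u_prev = [], 0.0
    for u in u_hist:
        trial = f_el + k_el * (u - u_prev)       # elastic predictor
        f_el = np.clip(trial, -f_slip, f_slip)   # Coulomb slider saturation
        out.append(f_el.sum() + k_res * u)
        u_prev = u
    return np.array(out)

u = 1e-3 * np.sin(np.linspace(0.0, 4.0 * np.pi, 801))  # two load cycles
F = iwan_force(u)

# Area of the hysteresis loop over the second cycle approximates the energy
# dissipated by micro-slip per cycle (trapezoid rule on the closed path).
s = slice(400, 801)
E_cyc = abs(np.sum(0.5 * (F[s][1:] + F[s][:-1]) * np.diff(u[s])))
print(f"energy dissipated per cycle ~ {E_cyc:.3e} J")
```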
Oxlade, Olivia; Pinto, Marcia; Trajman, Anete; Menzies, Dick
2013-01-01
Introduction Cost-effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEAs that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. Methods A systematic review of studies was conducted with particular focus on study quality and the variability of the inputs used in the cost-effectiveness models. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategies was developed and used to quantify the impact on predicted results of the observed differences in model inputs taken from the studies identified. Results Thirteen studies were ultimately included in the review. Several specific methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality Adjusted Life Year) as the effectiveness outcome, and how authors chose to present and interpret study results. When the IGRA and TST strategies were compared using our common decision analysis model, predicted effectiveness largely overlapped. Implications Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies assessing the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed to help authors standardize modelling approaches, inputs, assumptions, and how results are presented and interpreted. PMID:23505412
Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.
2001-01-01
A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface-mounted patch actuators and a composite box beam with surface-mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. The three analysis approaches provided comparable test/analysis frequency-response results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical-to-mechanical effectiveness of the actuators, producing anti-resonance errors.
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.; Zagidulin, Dmitri; Rauser, Richard W.
2000-01-01
Capabilities and expertise related to the development of links between nondestructive evaluation (NDE) and finite element analysis (FEA) at Glenn Research Center (GRC) are demonstrated. Current tools to analyze data produced by computed tomography (CT) scans are exercised to help assess the damage state in high-temperature structural composite materials. A utility translator was written to convert Velocity (image-processing software) STL data files to a suitable CAD/FEA file type. Finite element analyses are carried out with MARC, a commercial nonlinear finite element code, and the analytical results are discussed. Modeling was established by building a model in MSC/Patran (a finite element pre- and post-processing package) and comparing it to a model generated by Velocity in conjunction with MSC/Patran Graphics. Modeling issues and results are discussed in this paper. The entire process that outlines the tie between the data extracted via NDE and the finite element modeling and analysis is fully described.
Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data
Agnese, R.
2015-03-30
We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from 210Pb decay-chain events, while independent calibration data are used to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we also perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs but rather reflects the inadequacy of their background model.
Perotte, Adler; Ranganath, Rajesh; Hirsch, Jamie S; Blei, David; Elhadad, Noémie
2015-07-01
As adoption of electronic health records continues to increase, there is an opportunity to incorporate clinical documentation as well as laboratory values and demographics into risk prediction modeling. The authors develop a risk prediction model for chronic kidney disease (CKD) progression from stage III to stage IV that includes longitudinal data and features drawn from clinical documentation. The study cohort consisted of 2908 primary-care clinic patients who had at least three visits prior to January 1, 2013 and developed CKD stage III during their documented history. Development and validation cohorts were randomly selected from this cohort, and the study datasets included longitudinal inpatient and outpatient data from these populations. Time series analysis (Kalman filter) and survival analysis (Cox proportional hazards) were combined to produce a range of risk models. These models were evaluated using concordance, a discriminatory statistic. A risk model incorporating longitudinal data on clinical documentation and laboratory test results (concordance 0.849) predicts progression from stage III CKD to stage IV CKD more accurately than a similar model without laboratory test results (concordance 0.733, P < .001), a model that only considers the most recent laboratory test results (concordance 0.819, P < .031), and a model based on estimated glomerular filtration rate (concordance 0.779, P < .001). A risk prediction model that takes longitudinal laboratory test results and clinical documentation into consideration can predict CKD progression from stage III to stage IV more accurately than three models that do not take all of these variables into consideration. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
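A minimal two-stage sketch of the combination described above: a scalar Kalman filter smooths each patient's noisy longitudinal lab series, and the filtered value enters a Cox proportional hazards fit (here via the lifelines package). The cohort, noise levels, and column names are synthetic assumptions, not the study's EHR data.
```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def kalman_last(y, q=0.01, r=1.0):
    """Scalar random-walk Kalman filter; return the last filtered estimate."""
    x, p = y[0], 1.0
    for z in y[1:]:
        p += q                                 # predict
        g = p / (p + r)                        # gain
        x, p = x + g * (z - x), (1.0 - g) * p  # update
    return x

rng = np.random.default_rng(1)
n = 300
level = rng.normal(60.0, 10.0, n)                       # latent eGFR-like level
visits = level[:, None] + rng.normal(0.0, 5.0, (n, 8))  # 8 noisy visits each
feat = np.array([kalman_last(v) for v in visits])

T = rng.exponential(np.exp((feat - 60.0) / 20.0))       # lower level, earlier event
E = (T < np.quantile(T, 0.7)).astype(int)               # ~30% censored

df = pd.DataFrame({'duration': T, 'event': E, 'lab_filtered': feat})
cph = CoxPHFitter().fit(df, duration_col='duration', event_col='event')
print("concordance:", round(cph.concordance_index_, 3))
```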
Ambiguities in model-independent partial-wave analysis
NASA Astrophysics Data System (ADS)
Krinner, F.; Greenwald, D.; Ryabchikov, D.; Grube, B.; Paul, S.
2018-06-01
Partial-wave analysis is an important tool for analyzing large data sets in hadronic decays of light and heavy mesons. It commonly relies on the isobar model, which assumes multihadron final states originate from successive two-body decays of well-known undisturbed intermediate states. Recently, analyses of heavy-meson decays and diffractively produced states have attempted to overcome the strong model dependences of the isobar model. These analyses have overlooked that model-independent, or freed-isobar, partial-wave analysis can introduce mathematical ambiguities in results. We show how these ambiguities arise and present general techniques for identifying their presence and for correcting for them. We demonstrate these techniques with specific examples in both heavy-meson decay and pion-proton scattering.
Analysis of the Three-Dimensional Vector FAÇADE Model Created from Photogrammetric Data
NASA Astrophysics Data System (ADS)
Kamnev, I. S.; Seredovich, V. A.
2017-12-01
The results of an accuracy assessment for the creation of a three-dimensional vector model of a building façade are described. In the framework of the analysis, an analytical comparison of three-dimensional vector façade models created from photogrammetric and terrestrial laser scanning (TLS) data was performed. The three-dimensional model built from TLS point clouds was taken as the reference. In the course of the experiment, the three-dimensional model to be analyzed was superimposed on the reference one, coordinates were measured, and deviations between corresponding model points were determined. The accuracy of the three-dimensional model obtained from non-metric digital camera images was estimated, and the façade surface areas with the maximum deviations were identified.
MAP stability, design, and analysis
NASA Technical Reports Server (NTRS)
Ericsson-Jackson, A. J.; Andrews, S. F.; O'Donnell, J. R., Jr.; Markley, F. L.
1998-01-01
The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The design and analysis of the MAP attitude control system (ACS) have been refined since work previously reported. The full spacecraft and instrument flexible model was developed in NASTRAN, and the resulting flexible modes were plotted and reduced with the Modal Significance Analysis Package (MSAP). The reduced-order model was used to perform the linear stability analysis for each control mode, the results of which are presented in this paper. Although MAP is going to a relatively disturbance-free Lissajous orbit around the Earth-Sun L(2) Lagrange point, a detailed disturbance-torque analysis is required because there are only a small number of opportunities for momentum unloading each year. Environmental torques, including solar pressure at L(2), aerodynamic and gravity gradient during phasing-loop orbits, were calculated and simulated. Thruster plume impingement torques that could affect the performance of the thruster modes were estimated and simulated, and a simple model of fuel slosh was derived to model its effect on the motion of the spacecraft. In addition, a thruster mode linear impulse controller was developed to meet the accuracy requirements of the phasing loop burns. A dynamic attitude error limiter was added to improve the performance of the ACS during large attitude slews. The result of this analysis is a stable ACS subsystem that meets all of the mission's requirements.
Interpretable inference on the mixed effect model with the Box-Cox transformation.
Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M
2017-07-10
We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model that accounts for model misspecification. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at a specified occasion in the context of mixed effects models for repeated measures analysis in randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
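A small numerical illustration of why the model median is the natural interpretable quantity here: the Box-Cox transform is monotone, so a median estimated on the transformed scale back-transforms exactly. The lognormal toy data are an assumption; boxcox and inv_boxcox are SciPy functions.
```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
y = rng.lognormal(mean=3.0, sigma=0.6, size=500)   # positive, right-skewed data

z, lam = boxcox(y)                                 # ML estimate of lambda
print(f"lambda ~ {lam:.2f}")                       # near 0 for lognormal data

# Medians survive monotone transforms: the transformed-scale median
# back-transforms to the original-scale median (the mean would not).
print(np.median(y), inv_boxcox(np.median(z), lam))
```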
Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam
2017-01-01
The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images against plaster models for mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant differences were found on comparing data between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch was 21.2 mm and 21.1 mm for CBCT and 22.5 mm and 22.5 mm for the plaster models, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
NASA Technical Reports Server (NTRS)
Min, J. B.; Reddy, T. S. R.; Bakhle, M. A.; Coroneos, R. M.; Stefko, G. L.; Provenza, A. J.; Duffy, K. P.
2018-01-01
Accurate prediction of blade vibration stress is required to determine the overall durability of a fan blade design under Boundary Layer Ingestion (BLI) distorted flow environments. The traditional single-blade modeling technique cannot accurately represent an entire rotor blade system subject to the complex dynamic loading and vibration of distorted flow conditions. A particular objective of our work was to develop a high-fidelity full-rotor aeromechanics analysis capability for a system subjected to distorted inlet flow by applying a cyclic-symmetry finite element modeling methodology. This reduction method allows computationally very efficient analysis using a small periodic section of the full rotor blade system. Experimental testing in the 8-foot by 6-foot Supersonic Wind Tunnel facility at NASA Glenn Research Center was also carried out for the system designated as the Boundary Layer Ingesting Inlet/Distortion-Tolerant Fan (BLI2DTF) technology development. Results from the present numerical modeling technique were evaluated against those of the wind tunnel tests, toward establishing a computationally efficient aeromechanics modeling tool for analyzing full rotor blade systems subjected to distorted inlet flow conditions. Fairly good correlation was achieved, demonstrating the computational modeling technique. The analysis also showed that the safety margin set in the BLI2DTF fan blade design was sufficient over the operating speed range.
Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
2017-11-01
The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include micropollutant assessment (namely, sulfamethoxazole, SMX). The model also takes into account the interactions between the three components of the system: the sewer system (SS), the wastewater treatment plant (WWTP) and the receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by different uncertainty combinations of the sub-systems (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used to fix non-identifiable factors, thus sensibly modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS were found to be the most relevant factors affecting SMX modeling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are varied (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for S_SMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
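A minimal sketch of an Extended-FAST screening with the SALib package; the three factor names, their bounds, and the algebraic stand-in for the integrated SS-WWTP-RWB model are invented for illustration.
```python
import numpy as np
from SALib.sample.fast_sampler import sample
from SALib.analyze.fast import analyze

problem = {
    'num_vars': 3,
    'names': ['k_sorption_aer', 'k_decay_sewer', 'dispersion_rwb'],  # invented
    'bounds': [[0.1, 1.0], [0.01, 0.5], [1.0, 50.0]],
}

X = sample(problem, 1000)                  # eFAST design, 1000 runs per factor
# Algebraic stand-in for the integrated SS-WWTP-RWB model (max SMX conc.):
Y = 10.0 / (1.0 + 5.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.01 * X[:, 2]

Si = analyze(problem, Y, print_to_console=False)
for name, s1, st in zip(problem['names'], Si['S1'], Si['ST']):
    print(f"{name:15s} S1={s1:.2f} ST={st:.2f}")   # first-order and total effects
```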
Residual Stress Analysis in Welded Component.
NASA Astrophysics Data System (ADS)
Rouhi, Shahab; Yoshida, Sanichiro; Miura, Fumiya; Sasaki, Tomohiro
Due to local heating, thermal stresses occur during welding, and residual stress and distortion remain after welding. Welding distortion has negative effects on the accuracy of assembly, exterior appearance, and the strength of welded structures. To date, many experiments and numerical analyses have been performed to assess residual stress. However, quantitative estimation of residual stress based on experiment can involve large uncertainties and a complex measurement process. To understand this phenomenon comprehensively, further research by means of both experiment and numerical simulation is necessary. In this research, we conduct Finite Element Analysis (FEA) of a simple butt-welded metal plate specimen. Thermal input and the resultant expansion are modeled with a thermal-expansion FEA module, and the resultant constitutive response of the material is modeled with a continuum-mechanics FEA module. The residual stress is modeled based on the permanent deformation occurring during the heating phase of the material. Experiments have also been carried out for comparison with the FEA results. Numerical and experimental results show qualitative agreement. The present work was supported by the Louisiana Board of Regents (LEQSF(2016-17)-RD-C-13).
Analysis of transient state in HTS tapes under ripple DC load current
NASA Astrophysics Data System (ADS)
Stepien, M.; Grzesik, B.
2014-05-01
The paper concerns the analysis of the transient state (quench transition) in HTS tapes loaded with a current having a DC component together with a ripple component. Two ripple shapes were taken into account: sinusoidal and triangular. An HTS tape is very often connected to a power-electronic current supply (e.g., in a superconducting coil for SMES) that delivers DC current with ripples, and it needs to be examined under such conditions. Additionally, measurements of electrical (and thermal) parameters under such ripple excitation are useful for tape characterization over a broad range of load currents. The results presented in the paper were obtained using a test bench comprising a programmable DC supply and a National Instruments data acquisition system. Voltage drops and load currents were measured versus time. Analysis of the measured parameters as a function of the current was used to describe the tape with quench dynamics taken into account. The measurement results were also compared with the results of numerical modelling based on FEM. The provisional results presented show that transient-state measurements can be used to prepare inverse models of superconductors and to support their detailed numerical modelling.
2014-01-01
Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input-output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identifying model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation averaging 15% of the mean values over the resulting parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identifying redundant components of large biophysical models, and for increasing their predictive capacity. PMID:24886522
Park, Young Il
2016-01-01
BACKGROUND/OBJECTIVES This research analyzes the effects of industrial workers' food choices, according to their sugar intake pattern, on their job satisfaction through the construction of a model of the relationship between sugar intake pattern and job satisfaction. SUBJECTS/METHODS Surveys were collected from May to July 2015. A statistical analysis of the 775 surveys from Kyungsangnam-do was conducted using SPSS 13.0 for Windows, and SEM was performed using the AMOS 5.0 statistics package. RESULTS The reliability of the data was confirmed by an exploratory factor analysis with Cronbach's alpha coefficients, and the measurement model was proven appropriate by a confirmatory factor analysis in conjunction with AMOS. The factor analysis of food choice, sugar intake pattern, and job satisfaction yielded five categories. The reliability of these findings was supported by a Cronbach's alpha coefficient of 0.6 or higher for all factors except confection (0.516) and dairy products (0.570). The multicollinearity results did not indicate a problem between the variables, since the highest correlation coefficient was 0.494 (P < 0.01). To study the sugar intake pattern in accordance with the food choices and job satisfaction of industrial workers, a structural equation model was constructed and analyzed. CONCLUSIONS All tests confirmed that the model satisfied the recommended levels for the goodness-of-fit index, and thus the overall research model was proven appropriate. PMID:27478555
NASA Technical Reports Server (NTRS)
Reveley, Mary S.
2003-01-01
The goal of the NASA Aviation Safety Program (AvSP) is to develop and demonstrate technologies that contribute to a reduction in the aviation fatal accident rate by a factor of 5 by the year 2007 and by a factor of 10 by the year 2022. Integrated safety analysis of day-to-day operations and risks within those operations will provide an understanding of the Aviation Safety Program portfolio. Safety benefits analyses are currently being conducted. Preliminary results for the Synthetic Vision Systems (SVS) and Weather Accident Prevention (WxAP) projects of the AvSP have been completed by the Logistics Management Institute under a contract with the NASA Glenn Research Center. These analyses include both a reliability analysis and a computer simulation model. The integrated safety analysis method comprises two principal components: a reliability model and a simulation model. In the reliability model, the results indicate how different technologies and systems will perform in normal, degraded, and failed modes of operation. In the simulation, an operational scenario is modeled. The primary purpose of the SVS project is to improve safety by providing visual-flightlike situation awareness during instrument conditions. The current analyses are an estimate of the benefits of SVS in avoiding controlled flight into terrain. The scenario modeled has an aircraft flying directly toward a terrain feature. When the flight crew determines that the aircraft is headed toward an obstruction, the aircraft executes a level turn at speed. The simulation is ended when the aircraft completes the turn.
Application of structured analysis to a telerobotic system
NASA Technical Reports Server (NTRS)
Dashman, Eric; Mclin, David; Harrison, F. W.; Soloway, Donald; Young, Steven
1990-01-01
The analysis and evaluation of a multiple-arm telerobotic research and demonstration system developed by the NASA Intelligent Systems Research Laboratory (ISRL) are described. Structured analysis techniques were used to develop a detailed requirements model of an existing telerobotic testbed. Performance models generated during this process were used to further evaluate the total system. A commercial CASE tool called Teamwork was used to carry out the structured analysis and development of the functional requirements model. A structured analysis and design process using the ISRL telerobotic system as a model is described. Evaluation of this system focused on the identification of bottlenecks in this implementation. The results demonstrate that the use of structured methods and analysis tools can give useful performance information early in a design cycle. This information can be used to ensure that the proposed system meets its design requirements before it is built.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
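A minimal sketch of the bootstrap-based probabilistic sensitivity analysis, resampling synthetic patient-level cost/effect pairs instead of drawing from assumed theoretical distributions; all numbers are illustrative, not the H. pylori model's inputs.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 200   # patients per arm (synthetic)
cost_trt, eff_trt = rng.gamma(4.0, 250.0, n), rng.normal(0.82, 0.10, n)
cost_ctl, eff_ctl = rng.gamma(4.0, 150.0, n), rng.normal(0.78, 0.10, n)

B, wtp = 5000, 20000.0          # bootstrap replicates; willingness-to-pay
nb = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)   # resample each arm independently
    j = rng.integers(0, n, n)
    d_cost = cost_trt[i].mean() - cost_ctl[j].mean()
    d_eff = eff_trt[i].mean() - eff_ctl[j].mean()
    nb[b] = wtp * d_eff - d_cost            # incremental net monetary benefit

print(f"P(cost-effective at WTP={wtp:.0f}): {(nb > 0).mean():.2f}")
```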
Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians
NASA Astrophysics Data System (ADS)
Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von
2008-03-01
Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments, thereby addressing typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions, allowing the comparison and integration of different diagnostics' results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA's capabilities for nonlinear error propagation, the inclusion of systematic effects, and the comparison of different physics models. Applications range from outlier detection and background discrimination to model assessment and the design of diagnostics. In order to cope with next-step fusion device requirements, appropriate techniques are being explored for fast analysis applications.
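A minimal sketch of the central IDA step, expressing two diagnostics' results for the same quantity as likelihoods on a common parameter grid and combining them into one posterior; the Gaussian shapes and numbers are illustrative assumptions.
```python
import numpy as np

x = np.linspace(0.0, 5.0, 2001)     # shared parameter grid (e.g. line density)

def gauss(x, mu, sigma):            # unnormalized Gaussian likelihood
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

like_a = gauss(x, 2.4, 0.4)         # diagnostic A: broad uncertainty
like_b = gauss(x, 2.9, 0.2)         # diagnostic B: narrow uncertainty

w = like_a * like_b                 # flat prior: posterior proportional to product
mu = (x * w).sum() / w.sum()
sd = np.sqrt((((x - mu) ** 2) * w).sum() / w.sum())
print(f"combined estimate: {mu:.2f} +/- {sd:.2f}")   # precision-weighted fusion
```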
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model outperforms two other similar models in prediction. PMID:24971455
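A minimal sketch of the three-stage pipeline (ICA features, CCA fusion, SVM regression) using scikit-learn on a synthetic price series; window length, component counts, and the train/test split are illustrative choices, not the paper's settings.
```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
T = 500
close = np.cumsum(rng.normal(0.0, 1.0, T)) + 100.0       # synthetic closing prices
tech = close[:, None] * rng.uniform(0.9, 1.1, (T, 39))   # 39 technical variables

win = np.lib.stride_tricks.sliding_window_view(close, 10)[:-1]   # price windows
X1 = FastICA(n_components=5, random_state=0).fit_transform(win)
X2 = FastICA(n_components=5, random_state=0).fit_transform(tech[9:-1])
y = close[10:]                                            # next-day closing price

cca = CCA(n_components=3).fit(X1, X2)
Z = np.hstack(cca.transform(X1, X2))                      # fused intrinsic features

svr = SVR(C=10.0).fit(Z[:-50], y[:-50])
print("held-out R^2:", round(svr.score(Z[-50:], y[-50:]), 3))
```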
NASA Technical Reports Server (NTRS)
Patt, Frederick S.; Hoisington, Charles M.; Gregg, Watson W.; Coronado, Patrick L.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Indest, A. W. (Editor)
1993-01-01
An analysis of orbit propagation models was performed by the Mission Operations element of the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) Project, which has overall responsibility for the instrument scheduling. The orbit propagators selected for this analysis are widely available general perturbations models. The analysis includes both absolute accuracy determination and comparisons of different versions of the models. The results show that all of the models tested meet accuracy requirements for scheduling and data acquisition purposes. For internal Project use, the SGP4 propagator, developed by the North American Air Defense (NORAD) Command, has been selected. This model includes atmospheric drag effects and therefore provides better accuracy. For High Resolution Picture Transmission (HRPT) ground stations, which have less stringent accuracy requirements, the publicly available Brouwer-Lyddane models are recommended. The SeaWiFS Project will make available portable source code for a version of this model developed by the Data Capture Facility (DCF).
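A minimal sketch of calling a publicly available SGP4 implementation (the Python sgp4 package); the two-line element set below is a commonly circulated ISS example standing in for SeaWiFS elements.
```python
from sgp4.api import Satrec, jday

# Widely circulated ISS TLE (illustrative only; not SeaWiFS elements).
l1 = '1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997'
l2 = '2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482'
sat = Satrec.twoline2rv(l1, l2)

jd, fr = jday(2019, 12, 9, 12, 0, 0)   # epoch to propagate to
err, r, v = sat.sgp4(jd, fr)           # TEME position (km) and velocity (km/s)
print(err, r, v)                       # err == 0 indicates success
```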
NASA Astrophysics Data System (ADS)
Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik
2018-05-01
Time series data are observations taken or measured at regular time intervals. Time series analysis is used to analyze such data while accounting for the effect of time. The purpose of time series analysis is to characterize the patterns of a data set and to predict future values based on past data. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal Autoregressive (AR) order selection, followed by determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results show that modeling electric energy consumption with a state space model of order 4 yields a Mean Absolute Percentage Error (MAPE) of 3.655%, placing the model in the very good forecasting category.
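A minimal sketch of the first modeling step, optimal AR order selection, using statsmodels on a synthetic series standing in for the consumption data:
```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

rng = np.random.default_rng(0)
e = rng.normal(0.0, 1.0, 600)
y = np.zeros(600)
for t in range(4, 600):                  # simulate a stationary AR(4) series
    y[t] = 0.5*y[t-1] - 0.2*y[t-2] + 0.1*y[t-3] + 0.15*y[t-4] + e[t]

sel = ar_select_order(y, maxlag=10, ic='aic')   # optimal order by AIC
res = AutoReg(y, lags=sel.ar_lags).fit()
fc = res.predict(start=len(y), end=len(y) + 11) # 12 steps ahead
print("selected lags:", sel.ar_lags)
print("first forecasts:", np.round(fc[:3], 3))
```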
Identification of aerodynamic models for maneuvering aircraft
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Hu, C. C.
1992-01-01
The method based on Fourier functional analysis and indicial formulation for aerodynamic modeling, as proposed by Chin and Lan, is extensively examined and improved for the purpose of general application to realistic airplane configurations. Improvements are made to automate the calculation of model coefficients and to evaluate the indicial integral more accurately. Test data over large angle-of-attack ranges for two different models, a 70-deg delta wing and an F-18 model, are used to further verify the applicability of Fourier functional analysis and to validate the indicial formulation. The results show that the general expression for harmonic motions throughout a range of reduced frequency k is capable of accurately modeling the nonlinear responses with large phase lag, except in regions where the hysteresis behavior is inconsistent from one frequency to the other. The results from the indicial formulation indicate that more accurate results can be obtained when the motion starts from a low angle of attack, where the hysteresis effect is not important.
NASA Astrophysics Data System (ADS)
De Lucia, Frank C., Jr.; Gottfried, Jennifer L.
2011-02-01
Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
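A minimal sketch of PLS-DA built from PLS regression onto one-hot class labels, comparing a full variable set against a down-selected subset as in the study; the "spectra" are synthetic stand-ins, not LIBS data.
```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_per, n_cls, n_var = 30, 3, 400          # spectra per class; spectral channels
centers = rng.normal(0.0, 1.0, (n_cls, n_var))
X = np.vstack([c + 0.5 * rng.normal(0.0, 1.0, (n_per, n_var)) for c in centers])
y = np.repeat(np.arange(n_cls), n_per)
Y = np.eye(n_cls)[y]                      # one-hot class membership

def plsda_accuracy(cols):
    """Train on even samples with the given channels; score the odd samples."""
    pls = PLSRegression(n_components=5).fit(X[::2][:, cols], Y[::2])
    pred = pls.predict(X[1::2][:, cols]).argmax(axis=1)
    return (pred == y[1::2]).mean()

print("all channels :", plsda_accuracy(np.arange(n_var)))
print("first 50 only:", plsda_accuracy(np.arange(50)))
```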
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
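A minimal sketch of the core numerical step, the singular value decomposition of a sensitivity matrix, on a toy mass-action network with one artificially fast reaction; a sharp drop in the singular-value spectrum signals few locally active modes. The kinetics are invented, not the peroxidase-oxidase system.
```python
import numpy as np

def rhs(c, k):
    # Toy mass-action network: A <-> B plus a fast drain on B.
    a, b = c
    return np.array([-k[0]*a + k[1]*b, k[0]*a - k[1]*b - k[2]*b])

def sensitivity(c, k, h=1e-6):
    # Finite-difference d(rhs)/dk: one column per parameter.
    base = rhs(c, k)
    cols = [(rhs(c, k + h * np.eye(3)[i]) - base) / h for i in range(3)]
    return np.column_stack(cols)

k = np.array([1.0, 0.5, 50.0])            # k[2] >> others: one fast mode
S = sensitivity(np.array([1.0, 0.02]), k)
sv = np.linalg.svd(S, compute_uv=False)
print("singular values:", sv)             # a large gap = few dominant modes
```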
Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2008-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.
NASA Technical Reports Server (NTRS)
Andrews, E. H., Jr.; Mackley, E. A.
1976-01-01
An aerodynamic engine inlet analysis was performed on the experimental results obtained at nominal Mach numbers of 5, 6, and 7 from the NASA Hypersonic Research Engine (HRE) Aerothermodynamic Integration Model (AIM). Incorporation on the AIM of the mixed-compression inlet design represented the final phase of an inlet development program of the HRE Project. The purpose of this analysis was to compare the AIM inlet experimental results with theoretical results. Experimental performance was based on measured surface pressures used in a one-dimensional force-momentum theorem. Results of the analysis indicate that surface static-pressure measurements agree reasonably well with theoretical predictions except in the regions where the theory predicts large pressure discontinuities. Experimental and theoretical results both based on the one-dimensional force-momentum theorem yielded inlet performance parameters as functions of Mach number that exhibited reasonable agreement. Previous predictions of inlet unstart that resulted from pressure disturbances created by fuel injection and combustion appeared to be pessimistic.
NASA Astrophysics Data System (ADS)
Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.
2017-11-01
The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
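A minimal sketch of the macro-scale deviation analysis: query each point of the model under test against a k-d tree of the reference cloud and flag points beyond a cutoff; the clouds and the 5 cm cutoff are synthetic assumptions.
```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 10.0, (50000, 3))                  # stand-in TLS cloud, m
test = ref[:20000] + rng.normal(0.0, 0.01, (20000, 3))    # photogrammetric model

tree = cKDTree(ref)
d, _ = tree.query(test, k=1)              # distance to nearest reference point

max_dev = 0.05                            # 5 cm cutoff (illustrative)
print(f"RMS deviation: {np.sqrt((d ** 2).mean()):.4f} m")
print(f"beyond cutoff: {(d > max_dev).mean():.1%}")       # candidate occlusions
```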
NASA Astrophysics Data System (ADS)
Faruk, Alfensi
2018-03-01
Survival analysis is a branch of statistics focused on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular approach for analyzing the effects of several covariates on survival time. However, the assumption of constant hazards in the PH model is not always satisfied by the data. Violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. The accelerated failure time (AFT) models, on the other hand, do not assume constant hazards in the survival data, and can be used as an alternative to the PH model if the constant hazards assumption is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. In this work, the discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the models considered. Results of the best-fitting model (the log-normal AFT model) showed that covariates such as women's educational level, husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
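A minimal sketch of fitting and comparing AFT models by AIC with the lifelines package; the covariate, censoring scheme, and data are synthetic stand-ins for the FBI data set (lifelines provides Weibull and log-normal AFT fitters; the exponential case is Weibull with fixed shape).
```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter, LogNormalAFTFitter

rng = np.random.default_rng(0)
n = 1000
edu = rng.integers(0, 3, n)                      # e.g., education level 0-2
T = rng.lognormal(2.0 + 0.3 * edu, 0.5)          # first-birth interval, months
obs = (T < 60.0) & (rng.uniform(size=n) < 0.9)   # right-censor the remainder
df = pd.DataFrame({'T': np.minimum(T, 60.0), 'E': obs.astype(int), 'edu': edu})

for m in (WeibullAFTFitter(), LogNormalAFTFitter()):
    m.fit(df, duration_col='T', event_col='E')
    print(type(m).__name__, "AIC =", round(m.AIC_, 1))   # lower is better
```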
Finite Elements Analysis of a Composite Semi-Span Test Article With and Without Discrete Damage
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)
2000-01-01
AS&M Inc. performed finite element analysis, with and without discrete damage, of a composite semi-span test article that represents the Boeing 220-passenger transport aircraft composite semi-span. A NASTRAN bulk data file and drawings of the test mount fixtures and semi-span components were utilized to generate the baseline finite element model. In this model, the stringer blades are represented by shell elements, and the stringer flanges are combined with the skin. Numerous modeling modifications and discrete-source damage scenarios were applied to the test article model throughout the course of the study. This report details the analysis method and results obtained from the composite semi-span study. Analyses were carried out for three load cases: Braked Roll, 1.0G Down-Bending, and 2.5G Up-Bending. These analyses included linear and nonlinear static response, as well as linear and nonlinear buckling response. Results are presented in the form of stress and strain plots, factors of safety for failed elements, buckling loads and modes, deflection prediction tables and plots, and strain gage prediction tables and plots. The collected results are presented within this report for comparison to test results.
NASA Astrophysics Data System (ADS)
Trostyansky, S. N.; Kalach, A. V.; Lavlinsky, V. V.; Lankin, O. V.
2018-03-01
Based on the analysis of a dynamic panel-data model by region, including fire statistics for surveillance sites, a set of regional socio-economic indicators, and the rapid-response times of the state fire service, the probability of fires at surveillance sites and the risk of human death resulting from such fires are estimated from the values of the corresponding indicators for the previous year. The results obtained are consistent with applying the rational-offender model to fire risks. The estimate of the economic equivalent of human life, derived from surveillance-site data for Russia using the presented dynamic model of fire risks, agrees with known literature data. The results obtained from this econometric approach to fire risks make it possible to forecast fire risks at supervisory sites in the regions of Russia and to develop management solutions that minimize such risks.
pcr: an R package for quality assessment, analysis and testing of qPCR data
Ahmed, Mahmoud
2018-01-01
Background Real-time quantitative PCR (qPCR) is a broadly used technique in biomedical research. Currently, a few different analysis models are used to determine the quality of data and to quantify the mRNA level across experimental conditions. Methods We developed an R package to implement methods for quality assessment, analysis, and testing of qPCR data for statistical significance. Double Delta CT and standard curve models were implemented to quantify the relative expression of target genes from CT values in standard qPCR control-group experiments. In addition, calculation of amplification efficiency and curves from serial-dilution qPCR experiments is used to assess the quality of the data. Finally, two-group testing and linear models were used to test for significance of the difference in expression between control groups and conditions of interest. Results Using two datasets from qPCR experiments, we applied different quality assessment, analysis, and statistical testing in the pcr package and compared the results to the original published articles. The final relative expression values from the different models, as well as the intermediary outputs, were checked against the expected results in the original papers and were found to be accurate and reliable. Conclusion The pcr package provides an intuitive and unified interface for its main functions, allowing biologists to perform all necessary steps of qPCR analysis and produce graphs in a uniform way. PMID:29576953
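For readers outside R, the Double Delta CT quantity the package computes can be sketched in a few lines of plain numpy (the CT values below are invented):
```python
import numpy as np

# CT values: rows are replicates; columns are (target gene, reference gene).
control = np.array([[30.1, 17.2], [30.3, 17.1], [30.0, 17.3]])
treated = np.array([[26.4, 17.0], [26.7, 17.3], [26.5, 17.1]])

d_ct_ctl = control[:, 0] - control[:, 1]     # normalize to the reference gene
d_ct_trt = treated[:, 0] - treated[:, 1]
dd_ct = d_ct_trt.mean() - d_ct_ctl.mean()    # calibrate to the control group
print(f"relative expression: {2.0 ** -dd_ct:.1f}-fold")
```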
Closed Loop Software Control of the MIDEX Power System
NASA Technical Reports Server (NTRS)
Castell, Karen; Hernandez-Pellerano, Amri; Wismer, Margaret
1998-01-01
Identification of Historical Veziragasi Aqueduct Using the Operational Modal Analysis
Ercan, E.; Nuhoglu, A.
2014-01-01
This paper describes the results of a model updating study conducted on a historical aqueduct, called Veziragasi, in Turkey. The output-only modal identification results obtained from ambient vibration measurements of the structure were used to update a finite element model of the structure. For the purpose of developing a solid model of the structure, the dimensions, defects, and material degradations of the structure were determined in detail by a measurement survey. For evaluation of the material properties of the structure, nondestructive and destructive testing methods were applied. The modal analysis of the structure was performed by FEM. Then, a nondestructive dynamic test as well as operational modal analysis was carried out and the dynamic properties were extracted. The natural frequencies and corresponding mode shapes were determined from both theoretical and experimental modal analyses and compared with each other. Good agreement was attained between mode shapes, but there were some differences between natural frequencies. The sources of the differences were identified, and the FEM model was updated by changing material parameters and boundary conditions. Finally, the refined analytical model of the aqueduct was put forward and the results were discussed. PMID:24511287
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with lamination length. For more accurate predictions, however, a three-dimensional analysis is required.
2014-01-01
In adsorption studies, describing the sorption process and evaluating the best-fitting isotherm model are key analyses for investigating the theoretical hypothesis. Hence, numerous statistical analyses have been used extensively to compare experimental equilibrium adsorption values with predicted equilibrium values. In the present study, the following statistical analyses were carried out to evaluate adsorption isotherm model fitness: the Pearson correlation, the coefficient of determination, and the Chi-square test. An ANOVA test was carried out to evaluate the significance of the various error functions, and the coefficient of dispersion was evaluated for linearised and non-linearised models. The adsorption of phenol onto natural soil (local name: Kalathur soil) was carried out in batch mode at 30 ± 2°C. For estimating the isotherm parameters, and to get a holistic view of the analysis, linear and non-linear forms of the isotherm models were compared. The above-mentioned error and statistical functions were then used to determine the best-fitting isotherm. PMID:25018878
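A minimal sketch of a non-linear isotherm fit with the error statistics discussed above, using an assumed Langmuir form and invented equilibrium points rather than the Kalathur-soil measurements:
```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # mg/L at equilibrium
qe = np.array([0.95, 1.9, 3.0, 4.2, 5.1, 5.7])      # mg/g adsorbed

popt, _ = curve_fit(langmuir, Ce, qe, p0=[6.0, 0.05])
pred = langmuir(Ce, *popt)

r2 = 1.0 - ((qe - pred) ** 2).sum() / ((qe - qe.mean()) ** 2).sum()
chi2 = (((qe - pred) ** 2) / pred).sum()            # Chi-square statistic
print(f"qmax={popt[0]:.2f} mg/g  KL={popt[1]:.3f} L/mg  R2={r2:.3f}  chi2={chi2:.4f}")
```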
The GLOBE Contrail Protocol: Initial Analysis of Results
NASA Technical Reports Server (NTRS)
Chambers, Lin; Duda, David
2004-01-01
The GLOBE contrail protocol was launched in March 2003 to obtain surface observer reports of contrail occurrence to complement satellite and model studies underway at NASA Langley, among others. During the first year, more than 30,000 ground observations of contrails were submitted to GLOBE. An initial analysis comparing the GLOBE observations to weather prediction model results for relative humidity at flight altitudes is in progress. This paper reports on the findings to date from this effort.
Assessment of Managed Aquifer Recharge Site Suitability Using a GIS and Modeling.
Russo, Tess A; Fisher, Andrew T; Lockwood, Brian S
2015-01-01
We completed a two-step regional analysis of a coastal groundwater basin to (1) assess regional suitability for managed aquifer recharge (MAR), and (2) quantify the relative impact of MAR activities on groundwater levels and sea water intrusion. The first step comprised an analysis of surface and subsurface hydrologic properties and conditions, using a geographic information system (GIS). Surface and subsurface data coverages were compiled, georeferenced, reclassified, and integrated (including novel approaches for combining related datasets) to derive a spatial distribution of MAR suitability values. In the second step, results from the GIS analysis were used with a regional groundwater model to assess the hydrologic impact of potential MAR placement and operating scenarios. For the region evaluated in this study, the Pajaro Valley Groundwater Basin, California, GIS results suggest that about 7% (15 km²) of the basin may be highly suitable for MAR. Modeling suggests that simulated MAR projects placed near the coast help to reduce sea water intrusion more rapidly, but these projects also result in increased groundwater flows to the ocean. In contrast, projects placed farther inland result in more long-term reduction in sea water intrusion and less groundwater flowing to the ocean. This work shows how combined GIS analysis and modeling can assist with regional water supply planning, including evaluation of options for enhancing groundwater resources. © 2014, National Ground Water Association.
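A minimal sketch of the overlay step described above: reclassified raster layers are combined with weights to produce a suitability surface. The layer names, scores, and weights below are hypothetical placeholders, not the study's actual coverages.

```python
import numpy as np

# Hypothetical reclassified layers (scores 1-5) on a common grid:
# surface infiltration, vadose-zone transmission, aquifer storage.
rng = np.random.default_rng(0)
infiltration = rng.integers(1, 6, size=(100, 100))
vadose = rng.integers(1, 6, size=(100, 100))
storage = rng.integers(1, 6, size=(100, 100))

# Assumed weights; a real study would derive these from expert judgment.
weights = {"infiltration": 0.4, "vadose": 0.3, "storage": 0.3}

suitability = (weights["infiltration"] * infiltration
               + weights["vadose"] * vadose
               + weights["storage"] * storage)

# Flag the most suitable cells, e.g. the top 7% as in the Pajaro Valley result.
threshold = np.percentile(suitability, 93)
highly_suitable = suitability >= threshold
print(f"Highly suitable fraction: {highly_suitable.mean():.2%}")
```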
Holmquist-Johnson, C. L.
2009-01-01
River spanning rock structures are being constructed for water delivery as well as to enable fish passage at barriers and to provide or improve aquatic habitat for endangered fish species. Current design methods are based upon anecdotal information applicable to a narrow range of channel conditions. The complex flow patterns and performance of rock weirs are not well understood. Without an accurate understanding of their hydraulics, designers cannot address the failure mechanisms of these structures. Flow characteristics such as jets, near-bed velocities, recirculation, eddies, and plunging flow govern scour pool development. These detailed flow patterns can be replicated using a 3D numerical model. Numerical studies can inexpensively simulate a large number of cases, extending the range of applicability and supporting the development of design tools and predictive capability for analysis and design. The analysis and results of the numerical modeling, laboratory modeling, and field data provide a process-based method for understanding how structure geometry affects flow characteristics, scour development, fish passage, water delivery, and overall structure stability. Results of the numerical modeling allow designers to determine the appropriate structure geometry for generating desirable flow parameters. The end product of this research will be tools and guidelines for more robust structure design or retrofits based upon predictable engineering and hydraulic performance criteria. © 2009 ASCE.
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, research in this field has lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Static analysis of a sonar dome rubber window
NASA Technical Reports Server (NTRS)
Lai, J. L.
1978-01-01
The application of NASTRAN (level 16.0.1) to the static analysis of a sonar dome rubber window (SDRW) was demonstrated. The assessment of the conventional model (neglecting the enclosed fluid) for the stress analysis of the SDRW was made by comparing its results to those based on a sophisticated model (including the enclosed fluid). The fluid was modeled with isoparametric linear hexahedron elements with approximate material properties whose shear modulus was much smaller than its bulk modulus. The effect of the chosen material property for the fluid is discussed.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
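As a rough illustration of why analytic derivatives matter here (a generic sketch, not Pycycle's actual API): forward finite differences trade truncation error against round-off error as the step size shrinks, while an analytic derivative is exact to machine precision, which keeps gradient-based optimizers stable.

```python
import numpy as np

# Toy smooth function standing in for a cycle-analysis output, e.g. a
# performance metric as a function of one design variable.
f = lambda x: np.exp(-x) * np.sin(5 * x)
dfdx = lambda x: np.exp(-x) * (5 * np.cos(5 * x) - np.sin(5 * x))  # analytic

x0 = 0.7
exact = dfdx(x0)
for h in (1e-2, 1e-6, 1e-10, 1e-14):
    fd = (f(x0 + h) - f(x0)) / h          # forward finite difference
    print(f"h={h:.0e}  FD error={abs(fd - exact):.2e}")
# The error first shrinks (truncation dominates) then grows (round-off
# dominates); an analytic derivative has no such trade-off.
```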
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does an "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.
Study on the CO2 electric driven fixed swash plate type compressor for eco-friendly vehicles
NASA Astrophysics Data System (ADS)
Nam, Donglim; Kim, Kitae; Lee, Jehie; Kwon, Yunki; Lee, Geonho
2017-08-01
The purpose of this study is to test and analyze the performance of an electric-driven fixed swash plate compressor using the alternative refrigerant R744 (CO2). A comprehensive simulation model of an electric-driven CO2 compressor for eco-friendly vehicles is presented. The model consists of a compression model and a dynamic model. The compression model includes valve dynamics, leakage, and heat transfer; the dynamic model includes frictional losses between piston ring and cylinder wall, between shoe and swash plate, and in the bearings, as well as electric efficiency. In particular, because the efficiency of the electric parts (motor and inverter) affects the compressor losses, a dynamometer test was performed. The designed compressor was fabricated and its performance was tested under a variety of pressure conditions. The performance analysis results were then compared with the performance test results.
Carmichael, Marc G; Liu, Dikai
2015-01-01
Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with parameters previously insensitive becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.
NASA Astrophysics Data System (ADS)
Rostami, Ali Bakhshandeh; Fernandes, Antonio Carlos
2018-03-01
This paper develops a mathematical model that can simulate the nonlinear phenomena of a hinged plate placed in a fluid flow (1 DOF). These phenomena are fluttering (oscillatory motion), autorotation (continuous rotation), and chaotic motion (a combination of fluttering and autorotation). Two mathematical models are developed for the 1 DOF problem, starting from two well-known models previously proposed for falling plates (3 DOF). The procedures for developing these models are elaborated, and their results are compared with experimental data. The model that best simulates the phenomena is chosen for stability and bifurcation analysis. These analyses show that the model exhibits a transcritical bifurcation; accordingly, the stability diagram and threshold are presented. Moreover, an analytical expression is given for the bifurcation boundary between fluttering and autorotation.
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
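A minimal sketch of the core search metric, assuming ordinary least squares: PRESS (leave-one-out) residuals come cheaply from the hat matrix, and the candidate model with the smallest PRESS-residual standard deviation is preferred. The candidate list, data, and condition-number screen below are illustrative stand-ins, not the Ames algorithm itself.

```python
import numpy as np
from itertools import combinations

def press_std(X, y):
    """Standard deviation of PRESS residuals for an OLS fit y ~ X."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix
    resid = y - H @ y
    press = resid / (1.0 - np.diag(H))             # leave-one-out residuals
    return press.std(ddof=1)

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 50))
y = 1.0 + 2.0 * x1 + 0.5 * x1 * x2 + rng.normal(scale=0.1, size=50)

# Candidate regressor terms; the search keeps the subset minimizing the metric.
terms = {"x1": x1, "x2": x2, "x1*x2": x1 * x2, "x1^2": x1**2}
best = None
for k in range(1, len(terms) + 1):
    for combo in combinations(terms, k):
        X = np.column_stack([np.ones_like(y)] + [terms[t] for t in combo])
        # Reject near-singular candidates, mimicking the SVD screening step.
        if np.linalg.cond(X) > 1e8:
            continue
        score = press_std(X, y)
        if best is None or score < best[0]:
            best = (score, combo)
print("recommended terms:", best[1], "PRESS std:", f"{best[0]:.4f}")
```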
NASA Technical Reports Server (NTRS)
Levison, W. H.; Baron, S.
1984-01-01
Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.
Puttarajappa, Chethan; Wijkstrom, Martin; Ganoza, Armando; Lopez, Roberto; Tevar, Amit
2018-01-01
Background Recent studies have reported a significant decrease in wound problems and hospital stay in obese patients undergoing renal transplantation by robotic-assisted minimally invasive techniques, with no difference in graft function. Objective Due to the lack of cost-benefit studies on the use of robotic-assisted renal transplantation versus the open surgical procedure, the primary aim of our study is to develop a Markov model to analyze the cost-benefit of robotic surgery versus open traditional surgery in obese patients in need of a renal transplant. Methods Electronic searches will be conducted to identify studies comparing open renal transplantation versus robotic-assisted renal transplantation. Costs associated with the two surgical techniques will incorporate the expenses of the resources used for the operations. A decision analysis model will be developed to simulate a randomized controlled trial comparing three interventional arms: (1) continuation of renal replacement therapy for patients who are considered non-suitable candidates for renal transplantation due to obesity, (2) transplant recipients undergoing open transplant surgery, and (3) transplant patients undergoing robotic-assisted renal transplantation. TreeAge Pro 2017 R1 (TreeAge Software, Williamstown, MA, USA) will be used to create a Markov model, and microsimulation will be used to compare costs and benefits for the two competing surgical interventions. Results The model will simulate a randomized controlled trial of adult obese patients affected by end-stage renal disease undergoing renal transplantation. The absorbing state of the model will be patients' death from any cause. By choosing death as the absorbing state, we will be able to simulate the population of renal transplant recipients from the day of their randomization to transplant surgery or continuation on renal replacement therapy to their death, and to perform sensitivity analysis around patients' age at the time of randomization to determine if age is a critical variable for cost-benefit or cost-effectiveness analysis comparing renal replacement therapy, robotic-assisted surgery, or open renal transplant surgery. Conclusions After running the model, one of the three competing strategies will emerge as the most cost-beneficial or cost-effective under common circumstances. To assess the robustness of the results, a multivariable probabilistic sensitivity analysis will be performed by modifying the mean values and confidence intervals of key parameters, with the main intent of assessing whether the winning strategy is sensitive to rigorous and plausible variations of those values. PMID:29519780
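A minimal sketch of the kind of microsimulation described, with death as the absorbing state. The states, transition probabilities, costs, and utilities below are invented placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
states = ["functioning_graft", "graft_failure", "dead"]  # "dead" is absorbing
# Hypothetical yearly transition matrices, one per strategy.
strategies = {
    "open_surgery":    np.array([[0.90, 0.06, 0.04],
                                 [0.00, 0.92, 0.08],
                                 [0.00, 0.00, 1.00]]),
    "robotic_surgery": np.array([[0.92, 0.05, 0.03],
                                 [0.00, 0.92, 0.08],
                                 [0.00, 0.00, 1.00]]),
}
yearly_cost = np.array([12_000.0, 45_000.0, 0.0])   # placeholder costs (USD)
yearly_qaly = np.array([0.80, 0.55, 0.0])           # placeholder utilities

def simulate(P, n_patients=10_000, horizon=40):
    """Run individual patients until death or horizon; return mean cost/QALYs."""
    cost = qaly = 0.0
    for _ in range(n_patients):
        s = 0                                        # start with functioning graft
        for _ in range(horizon):
            if s == 2:                               # absorbed: patient has died
                break
            cost += yearly_cost[s]
            qaly += yearly_qaly[s]
            s = rng.choice(3, p=P[s])
    return cost / n_patients, qaly / n_patients

for name, P in strategies.items():
    c, q = simulate(P)
    print(f"{name}: mean cost ${c:,.0f}, mean QALYs {q:.2f}")
```

Incremental costs and QALYs between the arms would then feed the cost-benefit comparison, and re-running with perturbed parameters gives the probabilistic sensitivity analysis.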
Graph configuration model based evaluation of the education-occupation match.
Gadar, Laszlo; Abonyi, Janos
2018-01-01
To study education-occupation matchings we developed a bipartite network model of education to work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlighted the hierarchical and clustered structure of the career paths based on the multi-resolution analysis of the graph modularity. The results of the cluster analysis can support policymakers in fine-tuning the fragmented program structure of higher education. PMID:29509783
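To illustrate the baseline idea (a sketch, not the authors' metric): a bipartite configuration model preserves the degree sequences of education programs and occupations while randomizing the links, so observed program-occupation edge counts can be compared against this null model. The degree sequences below are toy values.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy degree sequences: graduates per program and hires per occupation.
program_degrees = [40, 25, 15, 20]       # 4 education programs
occupation_degrees = [30, 30, 20, 20]    # 4 occupations; sums must match

# Random bipartite multigraph preserving both degree sequences.
G = bipartite.configuration_model(program_degrees, occupation_degrees, seed=1)

# Expected edge count between program i and occupation j under this null
# model is approximately d_i * d_j / m; compare observed matchings to it.
m = sum(program_degrees)
i, j = 0, 1
expected = program_degrees[i] * occupation_degrees[j] / m
observed = G.number_of_edges(i, len(program_degrees) + j)
print(f"program {i} - occupation {j}: observed {observed}, expected {expected:.1f}")
```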
NASA Astrophysics Data System (ADS)
da Silva, Roberto; Vainstein, Mendeli H.; Gonçalves, Sebastián; Paula, Felipe S. F.
2013-08-01
Statistics of soccer tournament scores based on the double round robin system of several countries are studied. Exploring the dynamics of team scoring during tournament seasons from recent years, we find evidence of superdiffusion. A mean-field analysis results in a drift velocity equal to that of the real data but in a different diffusion coefficient. Along with the analysis of real data we present the results of simulations of soccer tournaments obtained by an agent-based model which successfully describes the final scoring distribution [da Silva et al., Comput. Phys. Commun. 184, 661 (2013), doi:10.1016/j.cpc.2012.10.030]. This model yields random walks of scores over time with the same anomalous diffusion as observed in the real data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuypers, Marshall A.; Lambert, Gregory Joseph; Moore, Thomas W.
Chronic infection with Hepatitis C virus (HCV) results in cirrhosis, liver cancer, and death. As the nation's largest provider of care for HCV, the US Veterans Health Administration (VHA) invests extensive resources in the diagnosis and treatment of the disease. This report documents modeling and analysis of HCV treatment dynamics performed for the VHA aimed at improving service delivery efficiency. System dynamics modeling of disease treatment demonstrated the benefits of early detection and the role of comorbidities in disease progression and patient mortality. Preliminary modeling showed that adherence to rigorous treatment protocols is a primary determinant of treatment success. In-depth meta-analysis revealed correlations between adherence and various psycho-social factors. This initial meta-analysis indicates areas where substantial improvement in patient outcomes can potentially result from VA programs which incorporate these factors into their design.
Swarming behaviors in multi-agent systems with nonlinear dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Wenwu, E-mail: wenwuyu@gmail.com; School of Electrical and Computer Engineering, RMIT University, Melbourne VIC 3001; Chen, Guanrong
2013-12-15
The dynamic analysis of a continuous-time multi-agent swarm model with nonlinear profiles is investigated in this paper. It is shown that, under mild conditions, all agents in a swarm can reach cohesion within a finite time, where the upper bounds of the cohesion are derived in terms of the parameters of the swarm model. The results are then generalized by considering stochastic noise and switching between nonlinear profiles. Furthermore, swarm models with limited sensing range inducing changing communication topologies and unbounded repulsive interactions between agents are studied by switching system and nonsmooth analysis. Here, the sensing range of each agent is limited and the possibility of collision among nearby agents is high. Finally, simulation results are presented to demonstrate the validity of the theoretical analysis.
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
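A bare-bones illustration of the multivariate idea (a one-way MANOVA sketch, not the functional linear model itself): several traits are tested jointly against a genotype grouping using Wilks's Lambda with Bartlett's chi-square approximation. The data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, p = 300, 3                                   # subjects, traits
genotype = rng.integers(0, 3, size=n)           # 0/1/2 copies of minor allele
# Simulate a small pleiotropic effect of genotype on all three traits.
Y = rng.normal(size=(n, p)) + 0.25 * genotype[:, None]

grand = Y.mean(axis=0)
W = np.zeros((p, p))                            # within-group scatter
B = np.zeros((p, p))                            # between-group scatter
for grp in np.unique(genotype):
    Yg = Y[genotype == grp]
    W += (Yg - Yg.mean(0)).T @ (Yg - Yg.mean(0))
    d = (Yg.mean(0) - grand)[:, None]
    B += len(Yg) * (d @ d.T)

wilks = np.linalg.det(W) / np.linalg.det(W + B)
g = len(np.unique(genotype))
# Bartlett's chi-square approximation to the Wilks's Lambda null distribution.
chi2_stat = -(n - 1 - (p + g) / 2) * np.log(wilks)
df = p * (g - 1)
pval = stats.chi2.sf(chi2_stat, df)
print(f"Wilks's Lambda = {wilks:.4f}, chi2({df}) = {chi2_stat:.2f}, p = {pval:.2e}")
```

Testing the three traits jointly pools the shared genetic signal, which is why a multivariate test can beat three separate univariate tests after multiple-comparison correction.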
Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.
Punys, Vytenis; Maknickas, Ramunas
2011-01-01
Big virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times smaller than the original), which represent the coefficients of the wavelet transform. An analysis of the possibility of edge detection without the inverse wavelet transform is presented in the paper. Two edge detection methods, suitable for JPEG2000 bi-orthogonal wavelets, are proposed. The methods are adjusted according to calculated parameters of a sigmoid edge model. The results of the model analysis indicate which method is more suitable for a given bi-orthogonal wavelet.
Concept analysis of moral courage in nursing: A hybrid model.
Sadooghiasl, Afsaneh; Parvizy, Soroor; Ebadi, Abbas
2018-02-01
Moral courage is one of the most fundamental virtues in the nursing profession; however, little attention has been paid to it. As a result, no exact and clear definition of moral courage has been available. This study was carried out to define and clarify the concept in the nursing profession. The study used a hybrid model of concept analysis comprising three phases: a theoretical phase, a field work phase, and a final analysis phase. To find relevant literature, electronic searches of established databases were performed using keywords related to the concept of courage. Field work data were collected over an 11-month period from 2013 to 2014. In the field work phase, in-depth interviews were performed with 10 nurses. Conventional content analysis following the Graneheim and Lundman stages was used in both the theoretical and field work phases, and the results were combined in the final analysis phase. Ethical consideration: Permission for this study was obtained from the ethics committee of Tehran University of Medical Sciences, and oral and written informed consent was obtained from the participants. Of the 750 titles retrieved in the theoretical phase, 26 texts were analyzed. The analysis yielded 494 codes from the text analysis and 226 codes from the interview analysis. The literature review in the theoretical phase characterized moral courage as an inherent-transcendental attribute with a difficult nature. The field work phase added moral self-actualization, rationalism, spiritual beliefs, and scientific-professional qualifications to the features of the concept. Moral courage is a pure and prominent characteristic of human beings. The antecedents of moral courage include model orientation, model acceptance, rationalism, individual excellence, acquiring academic and professional qualification, spiritual beliefs, organizational support, organizational repression, and internal and external personal barriers. Professional excellence resulting from moral courage can be crystallized in the provision of professional care, the creation of peace of mind, and the nurse's decision making and proper functioning.
Kuswandi, Bambang; Putri, Fitra Karima; Gani, Agus Abdul; Ahmad, Musa
2015-12-01
The use of chemometrics to analyse infrared spectra to predict pork adulteration in beef jerky (dendeng) was explored. In the first step, pork was introduced into the beef jerky formulation by blending the beef jerky with pork at 5-80% levels. The samples were then powdered and divided into a training set and a test set. In the second step, the spectra of the two sets were recorded by Fourier Transform Infrared (FTIR) spectroscopy using an attenuated total reflection (ATR) cell, on the basis of spectral data in the frequency region 4000-700 cm⁻¹. The spectra were categorised into four data sets: (a) spectra in the whole region as data set 1; (b) spectra in the fingerprint region (1500-600 cm⁻¹) as data set 2; (c) spectra in the whole region with treatment as data set 3; and (d) spectra in the fingerprint region with treatment as data set 4. In the third step, three class-modelling techniques (LDA, SIMCA, and SVM) were applied to the data sets. Finally, the model giving the best results on the adulteration analysis of the samples was selected and compared with the ELISA method. From the chemometric results, the LDA model on data set 1 was found to be the best model, since it classified and predicted the samples tested with 100% accuracy. The LDA model was applied to real samples of beef jerky marketed in Jember, and the results showed that the LDA model was in good agreement with the ELISA method.
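A schematic of the classification step under the same logic (synthetic spectra, not the paper's data): train an LDA class model on labelled spectra and check prediction accuracy on a held-out test set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_per_class, n_wavenumbers = 40, 200   # toy stand-in for a 4000-700 cm-1 grid

# Synthetic "pure beef" vs "pork-adulterated" spectra: a shared baseline plus
# a class-specific absorption bump and measurement noise.
grid = np.arange(n_wavenumbers)
base = np.exp(-((grid - 80) / 30.0) ** 2)
bump = np.exp(-((grid - 140) / 10.0) ** 2)
beef = base + 0.05 * rng.normal(size=(n_per_class, n_wavenumbers))
adulterated = (base + 0.3 * bump
               + 0.05 * rng.normal(size=(n_per_class, n_wavenumbers)))

X = np.vstack([beef, adulterated])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = beef, 1 = adulterated
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(f"test accuracy: {lda.score(X_te, y_te):.2%}")
```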
NDARC - NASA Design and Analysis of Rotorcraft Validation and Demonstration
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2010-01-01
Validation and demonstration results from the development of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are presented. The principal tasks of NDARC are to design a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft chosen as NDARC development test cases are the UH-60A single main-rotor and tail-rotor helicopter, the CH-47D tandem helicopter, the XH-59A coaxial lift-offset helicopter, and the XV-15 tiltrotor. These aircraft were selected because flight performance data, a weight statement, detailed geometry information, and a correlated comprehensive analysis model are available for each. Validation consists of developing the NDARC models for these aircraft by using geometry and weight information, airframe wind tunnel test data, engine decks, rotor performance tests, and comprehensive analysis results; and then comparing the NDARC results for aircraft and component performance with flight test data. Based on the calibrated models, the capability of the code to size rotorcraft is explored.
Simulation for analysis and control of superplastic forming. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacharia, T.; Aramayo, G.A.; Simunovic, S.
1996-08-01
A joint study was conducted by Oak Ridge National Laboratory (ORNL) and the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy-Lightweight Materials (DOE-LWM) Program. The purpose of the study was to assess and benchmark the current modeling capabilities with respect to accuracy of predictions and simulation time. Two simulation platforms were considered in this study: the LS-DYNA3D code installed on ORNL's high-performance computers and the finite element code MARC used at PNL. Both ORNL and PNL performed superplastic forming (SPF) analysis on a standard butter-tray geometry, defined by PNL, to better understand the capabilities of the respective models. The specific geometry was selected and formed at PNL, and the experimental results, such as forming time and thickness at specific locations, were provided for comparison with the numerical predictions. Furthermore, the ORNL simulation results, using elasto-plastic analysis, were compared with PNL's results, using rigid-plastic flow analysis.
Modeling and simulation of satellite subsystems for end-to-end spacecraft modeling
NASA Astrophysics Data System (ADS)
Schum, William K.; Doolittle, Christina M.; Boyarko, George A.
2006-05-01
During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems. Much of this research has occurred in the Distributed Architecture Simulation Laboratory (DASL). AFRL developers working in the DASL have effectively combined satellite power, attitude pointing, and communication link analysis subsystem models with robust satellite sensor models to create a first-order end-to-end satellite simulation capability. The merging of these two simulation areas has advanced the field of spacecraft simulation, design, and analysis, and enabled more in-depth mission and satellite utility analyses. A core capability of the DASL is the support of a variety of modeling and analysis efforts, ranging from physics and engineering-level modeling to mission and campaign-level analysis. The flexibility and agility of this simulation architecture will be used to support space mission analysis, military utility analysis, and various integrated exercises with other military and space organizations via direct integration, or through DOD standards such as Distributed Interactive Simulation. This paper discusses the results and lessons learned in modeling satellite communication link analysis, power, and attitude control subsystems for an end-to-end satellite simulation. It also discusses how these spacecraft subsystem simulations feed into and support military utility and space mission analyses.
3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.
1985-01-01
The objective is to develop analytical tools capable of economically evaluating the cyclic, time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures, with all material properties and constitutive models being temperature dependent.
NASTRAN analysis of Tokamak vacuum vessel using interactive graphics
NASA Technical Reports Server (NTRS)
Miller, A.; Badrian, M.
1978-01-01
Isoparametric quadrilateral and triangular elements were used to represent the vacuum vessel shell structure. For toroidally symmetric loadings, multipoint constraints (MPCs) were employed across model boundaries and rigid format 24 was invoked. Nonsymmetric loadings required the use of the cyclic symmetry analysis available with rigid format 49. NASTRAN served as an important analysis tool in the Tokamak design effort by providing a reliable means for assessing structural integrity. Interactive graphics were employed in the finite element model generation and in the post-processing of results. Model generation and checkout with interactive graphics were found to reduce the modelling effort and debugging man-hours significantly.
ISAC: A tool for aeroservoelastic modeling and analysis
NASA Technical Reports Server (NTRS)
Adams, William M., Jr.; Hoadley, Sherwood Tiffany
1993-01-01
The capabilities of the Interaction of Structures, Aerodynamics, and Controls (ISAC) system of program modules is discussed. The major modeling, analysis, and data management components of ISAC are identified. Equations of motion are displayed for a Laplace-domain representation of the unsteady aerodynamic forces. Options for approximating a frequency-domain representation of unsteady aerodynamic forces with rational functions of the Laplace variable are shown. Linear time invariant state-space equations of motion that result are discussed. Model generation and analyses of stability and dynamic response characteristics are shown for an aeroelastic vehicle which illustrates some of the capabilities of ISAC as a modeling and analysis tool for aeroelastic applications.
2012-01-01
Background We explore the benefits of applying a new proportional hazard model to analyze survival of breast cancer patients. As a parametric model, the hypertabastic survival model offers a closer fit to experimental data than Cox regression, and furthermore provides explicit survival and hazard functions which can be used as additional tools in the survival analysis. In addition, one of our main concerns is utilization of multiple gene expression variables. Our analysis treats the important issue of interaction of different gene signatures in the survival analysis. Methods The hypertabastic proportional hazards model was applied in survival analysis of breast cancer patients. This model was compared, using statistical measures of goodness of fit, with models based on the semi-parametric Cox proportional hazards model and the parametric log-logistic and Weibull models. The explicit functions for hazard and survival were then used to analyze the dynamic behavior of hazard and survival functions. Results The hypertabastic model provided the best fit among all the models considered. Use of multiple gene expression variables also provided a considerable improvement in the goodness of fit of the model, as compared to use of only one. By utilizing the explicit survival and hazard functions provided by the model, we were able to determine the magnitude of the maximum rate of increase in hazard, and the maximum rate of decrease in survival, as well as the times when these occurred. We explore the influence of each gene expression variable on these extrema. Furthermore, in the cases of continuous gene expression variables, represented by a measure of correlation, we were able to investigate the dynamics with respect to changes in gene expression. Conclusions We observed that use of three different gene signatures in the model provided a greater combined effect and allowed us to assess the relative importance of each in determination of outcome in this data set. These results point to the potential to combine gene signatures to a greater effect in cases where each gene signature represents some distinct aspect of the cancer biology. Furthermore we conclude that the hypertabastic survival models can be an effective survival analysis tool for breast cancer patients. PMID:23241496
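As a simplified illustration of comparing parametric survival fits by goodness of fit (using Weibull and log-logistic only, since the hypertabastic form is not in standard libraries; censoring is ignored for brevity and the data are synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Synthetic survival times (months); a real analysis would handle censoring.
times = stats.weibull_min.rvs(1.5, scale=60.0, size=300, random_state=rng)

candidates = {
    "Weibull": stats.weibull_min,
    "log-logistic": stats.fisk,        # scipy's name for the log-logistic
}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)   # fix the location parameter at zero
    loglik = np.sum(dist.logpdf(times, *params))
    k = len(params) - 1                # free parameters (location fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name}: log-likelihood {loglik:.1f}, AIC {aic:.1f}")

# A fitted parametric distribution also gives explicit survival and hazard
# functions, e.g. S(t) = dist.sf(t, *params) and
# h(t) = dist.pdf(t, *params) / dist.sf(t, *params),
# which can be scanned for hazard extrema as described in the abstract.
```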
NASA Astrophysics Data System (ADS)
Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.
2017-11-01
In order to select effective samples from the large volume of historical PV power generation data and improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on neural networks. Based on three different types of weather (sunny, cloudy, and rainy days), the historical data samples are screened by clustering analysis. After screening, BP neural network prediction models are established using the screened data as training data. The six types of photovoltaic power generation prediction models, before and after data screening, are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.
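A rough sketch of the screen-then-train idea with scikit-learn stand-ins (k-means for the clustering step, an MLP for the BP network); the features and data below are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
# Toy daily features: [irradiance, temperature, cloud cover]; target: PV output.
X = rng.uniform([0, 5, 0], [1000, 35, 1], size=(600, 3))
y = 0.8 * X[:, 0] * (1 - 0.6 * X[:, 2]) + rng.normal(scale=20, size=600)

# Cluster days into three weather types (stand-ins for sunny/cloudy/rainy).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Train one network per weather cluster on the screened samples.
models = {}
for k in range(3):
    mask = labels == k
    models[k] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X[mask], y[mask])
    print(f"cluster {k}: {mask.sum()} samples, R^2 = "
          f"{models[k].score(X[mask], y[mask]):.3f}")
```

At prediction time, a new day is assigned to its nearest cluster and routed to the corresponding network, mirroring the weather-type screening in the paper.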
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted on a model of the thermal-hydraulic test facility ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report, whose initial conditions and assumptions were used in performing the simulation and the analysis of the selected parameters. The objective of this work was to conduct benchmark activities by comparing the results of the CESEC-III code, a conservative-approach code, with those of RELAP5, a best-estimate code. Based on the simulation results, a general similarity in the behavior of the selected parameters was observed between the two codes. However, the degree of accuracy still requires further research and analysis through comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.
NASA Astrophysics Data System (ADS)
Li, Shaoxin; Zhang, Yanjiao; Xu, Junfa; Li, Linfang; Zeng, Qiuyao; Lin, Lin; Guo, Zhouyi; Liu, Zhiming; Xiong, Honglian; Liu, Songhao
2014-09-01
This study presents a noninvasive prostate cancer screening method using serum surface-enhanced Raman scattering (SERS) and support vector machine (SVM) techniques on peripheral blood samples. SERS measurements were performed with silver nanoparticles on serum samples from 93 prostate cancer patients and 68 healthy volunteers. Three types of kernel functions, linear, polynomial, and Gaussian radial basis function (RBF), were employed to build SVM diagnostic models for classifying the measured SERS spectra. For comparative evaluation of the SVM classification models, the standard multivariate statistical method of principal component analysis (PCA) was also applied to classify the same datasets. The results show that the RBF-kernel SVM diagnostic model achieves a diagnostic accuracy of 98.1%, superior to the 91.3% obtained with the PCA method. The receiver operating characteristic curves of the diagnostic models further confirm these results. This study demonstrates that label-free serum SERS analysis combined with an SVM diagnostic algorithm has great potential for noninvasive prostate cancer screening.
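A compact sketch of the comparison on synthetic spectra (not the clinical data): an RBF-kernel SVM on the raw spectra versus a simple PCA-based classifier standing in for the multivariate-statistics baseline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n, n_features = 160, 300                     # spectra x Raman-shift channels
X = rng.normal(size=(n, n_features))
y = rng.integers(0, 2, size=n)               # 0 = healthy, 1 = cancer
X[y == 1, 40:60] += 0.5                      # synthetic disease-related band

svm_rbf = SVC(kernel="rbf", C=1.0, gamma="scale")
pca_clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))

for name, clf in [("RBF-kernel SVM", svm_rbf),
                  ("PCA + linear classifier", pca_clf)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy {acc:.2%}")
```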
Fujarewicz, Krzysztof; Lakomiec, Krzysztof
2016-12-01
We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, like in intensity modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate results of this approach to the tumor proliferation, invasion and response to radiotherapy (PIRT) model and we compare the accuracy and the computational effort of the method to the simple forward finite difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during the gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.
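To make the adjoint idea concrete in a stripped-down setting (a time-only toy model, not the PIRT PDE): one backward pass yields the sensitivity of the final tumor burden to the dose at every time step, whereas forward finite differences need one perturbed run per step.

```python
import numpy as np

# Toy tumor model: logistic growth with a radiation kill term,
#   x_{k+1} = x_k + dt * (r * x_k * (1 - x_k) - d_k * x_k),
# where d_k is the dose at step k (the control we differentiate against).
r, dt, n_steps = 0.3, 0.1, 100
dose = np.full(n_steps, 0.2)

def forward(dose):
    x = np.empty(n_steps + 1)
    x[0] = 0.1
    for k in range(n_steps):
        x[k + 1] = x[k] + dt * (r * x[k] * (1 - x[k]) - dose[k] * x[k])
    return x

x = forward(dose)

# Adjoint (reverse) pass: one sweep gives the gradient of the final state
# with respect to every dose value.
lam = 1.0                                              # d x_N / d x_N
grad = np.empty(n_steps)
for k in reversed(range(n_steps)):
    grad[k] = lam * (-dt * x[k])                       # d x_{k+1} / d d_k
    lam *= 1 + dt * (r * (1 - 2 * x[k]) - dose[k])     # d x_{k+1} / d x_k

# Spot-check against forward finite differences (n_steps separate runs).
eps, k_check = 1e-6, 50
dose_p = dose.copy(); dose_p[k_check] += eps
fd = (forward(dose_p)[-1] - x[-1]) / eps
print(f"adjoint: {grad[k_check]:.6e}, finite difference: {fd:.6e}")
```

The same bookkeeping applied to a space-time finite difference scheme yields the spatiotemporal sensitivity field used to steer the IMRT protocol optimization.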
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts. Approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs. 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
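The k-out-of-n models referenced here have a simple closed form when machine failures are treated as independent with equal reliability R; a sketch with illustrative numbers, not the measured VAXcluster data (and note that the paper's correlated-failure finding is exactly what breaks this independence assumption):

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, R: float) -> float:
    """Probability that at least k of n machines are up,
    assuming independent machines each up with probability R."""
    return sum(comb(n, i) * R**i * (1 - R) ** (n - i) for i in range(k, n + 1))

R = 0.95  # hypothetical single-machine reliability over the mission time
for k in (7, 3):
    print(f"{k}-out-of-7 reliability: {k_out_of_n_reliability(k, 7, R):.4f}")
# The 7-out-of-7 system fails if any machine fails, so its reliability decays
# far faster than the 3-out-of-7 system -- consistent with the 18-hour vs
# 80-day reward-rate drop described above.
```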
Framing of Uncertainty in Scientific Publications: Towards Recommendations for Decision Support
NASA Astrophysics Data System (ADS)
Guillaume, J. H. A.; Helgeson, C.; Elsawah, S.; Jakeman, A. J.; Kummu, M.
2016-12-01
Uncertainty is recognised as an essential issue in environmental decision making and decision support. As modellers, we notably use a variety of tools and techniques within an analysis, for example related to uncertainty quantification and model validation. We also address uncertainty by how we present results. For example, experienced modellers are careful to distinguish robust conclusions from those that need further work, and the precision of quantitative results is tailored to their accuracy. In doing so, the modeller frames how uncertainty should be interpreted by their audience. This is an area which extends beyond modelling to fields such as philosophy of science, semantics, discourse analysis, intercultural communication and rhetoric. We propose that framing of uncertainty deserves greater attention in the context of decision support, and that there are opportunities in this area for fundamental research, synthesis and knowledge transfer, development of teaching curricula, and significant advances in managing uncertainty in decision making. This presentation reports preliminary results of a study of framing practices. Specifically, we analyse the framing of uncertainty that is visible in the abstracts from a corpus of scientific articles. We do this through textual analysis of the content and structure of those abstracts. Each finding that appears in an abstract is classified according to the uncertainty framing approach used, using a classification scheme that was iteratively revised based on reflection and comparison amongst three coders. This analysis indicates how frequently the different framing approaches are used, and provides initial insights into relationships between frames, how the frames relate to interpretation of uncertainty, and how rhetorical devices are used by modellers to communicate uncertainty in their work. We propose initial hypotheses for how the resulting insights might influence decision support, and help advance decision making to better address uncertainty.
Rautenberg, Tamlyn Anne; Zerwes, Ute; Lee, Way Seah
2018-01-01
Objective To perform cost utility (CU) and budget impact (BI) analyses augmented by scenario analyses of critical model structure components to evaluate racecadotril as adjuvant to oral rehydration solution (ORS) for children under 5 years with acute diarrhea in Malaysia. Methods A CU model was adapted to evaluate racecadotril plus ORS vs ORS alone for acute diarrhea in children younger than 5 years from a Malaysian public payer’s perspective. A bespoke BI analysis was undertaken in addition to detailed scenario analyses with respect to critical model structure components. Results According to the CU model, the intervention is less costly and more effective than comparator for the base case with a dominant incremental cost-effectiveness ratio of −RM 1,272,833/quality-adjusted life year (USD −312,726/quality-adjusted life year) in favor of the intervention. According to the BI analysis (assuming an increase of 5% market share per year for racecadotril+ORS for 5 years), the total cumulative incremental percentage reduction in health care expenditure for diarrhea in children is 0.136578%, resulting in a total potential cumulative cost savings of −RM 73,193,603 (USD −17,983,595) over a 5-year period. Results hold true across a range of plausible scenarios focused on critical model components. Conclusion Adjuvant racecadotril vs ORS alone is potentially cost-effective from a Malaysian public payer perspective subject to the assumptions and limitations of the model. BI analysis shows that this translates into potential cost savings for the Malaysian public health care system. Results hold true at evidence-based base case values and over a range of alternate scenarios. PMID:29588606
Bragdon, Charles R; Malchau, Henrik; Yuan, Xunhua; Perinchief, Rebecca; Kärrholm, Johan; Börlin, Niclas; Estok, Daniel M; Harris, William H
2002-07-01
The purpose of this study was to develop and test a phantom model based on actual total hip replacement (THR) components to simulate the true penetration of the femoral head resulting from polyethylene wear. This model was used to study both the accuracy and the precision of radiostereometric analysis (RSA) in measuring wear. We also used this model to evaluate the optimum tantalum bead configuration for this particular cup design when used in a clinical setting. A physical model of a total hip replacement (a phantom) was constructed which could simulate progressive, three-dimensional (3-D) penetration of the femoral head into the polyethylene component of a THR. Using a coordinate measuring machine (CMM), the positioning of the femoral head in the phantom was measured to be accurate to within 7 μm. The accuracy and precision of an RSA analysis system were determined from five repeat examinations of the phantom using various experimental set-ups. The accuracy of the radiostereometric analysis in the optimal experimental set-up studied was 33 μm in the medial direction, 22 μm in the superior direction, 86 μm in the posterior direction, and 55 μm for the resultant 3-D vector length. The corresponding precision, at the 95% confidence interval of the test results for repositioning the phantom five times, was 8.4 μm in the medial direction, 5.5 μm in the superior direction, 16.0 μm in the posterior direction, and 13.5 μm for the resultant 3-D vector length. This in vitro model is proposed as a useful tool for developing a standard for the evaluation of radiostereometric and other radiographic methods used to measure in vivo wear.
Simulation analysis of an integrated model for dynamic cellular manufacturing system
NASA Astrophysics Data System (ADS)
Hao, Chunfeng; Luan, Shichao; Kong, Jili
2017-05-01
Application of a dynamic cellular manufacturing system (DCMS) is a well-known strategy to improve manufacturing efficiency in production environments with high variety and low volume of production. Often, neither the trade-off between inter- and intra-cell material movements nor the trade-off between hiring and firing of operators is examined in detail. This paper presents simulation results for an integrated mixed-integer model, including sensitivity analysis for several numerical examples. The comprehensive model includes cell formation, inter- and intra-cell material handling, inventory and backorder holding, operator assignment (including resource adjustment), and flexible production routing. The model considers multi-period production planning with flexible resources (machines and operators), where each period has different demands. The results, obtained using a genetic algorithm, verify the validity and sensitivity of the proposed model.
A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.
Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon
2007-02-01
Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
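The underlying input-output mechanics, in a toy three-sector form (illustrative numbers only, not the 13-sector BEA matrix): total output x satisfying x = Ax + y comes from the Leontief inverse, and a materials-per-output vector then converts the economic response into material flows.

```python
import numpy as np

# Toy 3-sector direct requirements matrix A: column j gives the inputs
# (in dollars) each sector supplies per dollar of sector j's output.
A = np.array([[0.10, 0.02, 0.05],
              [0.20, 0.15, 0.10],
              [0.05, 0.30, 0.08]])
y = np.array([100.0, 50.0, 25.0])     # final demand (e.g., million $)

# Leontief inverse: total (direct + indirect) output to meet final demand.
x = np.linalg.solve(np.eye(3) - A, y)

# Hypothetical material intensity: kg of lead embodied per $ of output.
lead_per_dollar = np.array([0.001, 0.020, 0.005])
lead_flows = lead_per_dollar * x
print("total output by sector:", np.round(x, 1))
print("lead flow by sector (kg):", np.round(lead_flows, 1))
```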
Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2009-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
NASA Astrophysics Data System (ADS)
Castaldo, R.; Tizzani, P.; Lollino, P.; Calò, F.; Ardizzone, F.; Lanari, R.; Guzzetti, F.; Manunta, M.
2015-11-01
The aim of this paper is to propose a methodology to perform inverse numerical modelling of slow landslides that combines the potentialities of both numerical approaches and well-known remote-sensing satellite techniques. In particular, through an optimization procedure based on a genetic algorithm, we minimize, with respect to a proper penalty function, the difference between the modelled displacement field and differential synthetic aperture radar interferometry (DInSAR) deformation time series. The proposed methodology allows us to automatically search for the physical parameters that characterize the landslide behaviour. To validate the presented approach, we focus our analysis on the slow Ivancich landslide (Assisi, central Italy). The kinematical evolution of the unstable slope is investigated via long-term DInSAR analysis, by exploiting about 20 years of ERS-1/2 and ENVISAT satellite acquisitions. The landslide is driven by the presence of a shear band, whose behaviour is simulated through a two-dimensional time-dependent finite element model, in two different physical scenarios, i.e. Newtonian viscous flow and a deviatoric creep model. Comparison between the model results and DInSAR measurements reveals that the deviatoric creep model is more suitable to describe the kinematical evolution of the landslide. This finding is also confirmed by comparing the model results with the available independent inclinometer measurements. Our analysis emphasizes that integration of different data, within inverse numerical models, allows deep investigation of the kinematical behaviour of slow active landslides and discrimination of the driving forces that govern their deformation processes.
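A toy version of the inversion loop: SciPy's differential evolution (a genetic-style optimizer) fits the parameters of an assumed power-law creep displacement model to a synthetic DInSAR-like series. The creep law and the data are placeholders for the paper's finite element model and the ERS-1/2 and ENVISAT time series.

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 20, 80)   # years of observation
observed = 1.5 * t**0.8 + np.random.default_rng(1).normal(0, 0.5, t.size)  # cm, synthetic

def creep_displacement(params, t):
    """Assumed creep law d(t) = A * t^n, standing in for the FE model output."""
    A, n = params
    return A * t**n

def penalty(params):
    """Misfit between the modelled displacement field and the DInSAR series."""
    return np.sum((creep_displacement(params, t) - observed) ** 2)

result = differential_evolution(penalty, bounds=[(0.1, 10.0), (0.1, 2.0)], seed=0)
print("best-fit (A, n):", result.x, "misfit:", result.fun)
```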
Caballero, Julio; Fernández, Michael; Coll, Deysma
2010-12-01
Three-dimensional quantitative structure-activity relationship studies were carried out on a series of 28 organosulphur compounds as 15-lipoxygenase inhibitors using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). Quantitative information on structure-activity relationships is provided for further rational development and direction of selective synthesis. All models were built over a training set of 22 compounds. The best CoMFA model included only the steric field and had a good Q² = 0.789. CoMSIA outperformed the CoMFA results: the best CoMSIA model also included only the steric field and had a Q² = 0.894. In addition, this model adequately predicted the compounds in the test set. Furthermore, plots of the steric CoMSIA field allowed conclusions to be drawn for the choice of suitable inhibitors. Our model should therefore prove useful in future 15-lipoxygenase inhibitor design studies. © 2010 John Wiley & Sons A/S.
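The reported Q² values are leave-one-out cross-validation statistics, Q² = 1 - PRESS/SS. A generic sketch of the computation with a PLS model on an invented descriptor matrix (the actual CoMFA/CoMSIA field descriptors and alignment are not reproduced here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 50))                        # hypothetical field descriptors
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.3, 22)    # hypothetical activities

press = 0.0
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=3).fit(X[train], y[train])
    err = pls.predict(X[test]).ravel()[0] - y[test][0]
    press += err ** 2                                # predictive residual sum of squares

q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
print(f"leave-one-out Q2 = {q2:.3f}")
```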
Stability analysis and application of a mathematical cholera model.
Liao, Shu; Wang, Jin
2011-07-01
In this paper, we conduct a dynamical analysis of the deterministic cholera model proposed in [9]. We study the stability of both the disease-free and endemic equilibria so as to explore the complex epidemic and endemic dynamics of the disease. We demonstrate a real-world application of this model by investigating the recent cholera outbreak in Zimbabwe. Meanwhile, we present numerical simulation results to verify the analytical predictions.
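The specific model analysed is that of reference [9] and is not reproduced in the abstract; as a hedged illustration of the family it belongs to, here is a generic SIR-plus-pathogen cholera model integrated with SciPy, with all parameter values invented:

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters: birth/death rate, transmission, half-saturation,
# recovery, shedding, and pathogen decay, for a population of size N.
mu, beta, K, gamma, xi, delta, N = 1e-4, 0.3, 1e6, 0.1, 10.0, 0.05, 1e5

def cholera(t, y):
    S, I, R, B = y
    infection = beta * S * B / (K + B)   # saturating waterborne transmission
    dS = mu * (N - S) - infection
    dI = infection - (gamma + mu) * I
    dR = gamma * I - mu * R
    dB = xi * I - delta * B              # shedding into, and decay in, the water reservoir
    return [dS, dI, dR, dB]

sol = solve_ivp(cholera, (0, 365), [N - 10, 10, 0, 0])
print("infected after one year:", sol.y[1, -1])
```

The disease-free and endemic equilibria studied in the paper correspond to the fixed points of such a system.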
Integrated Optical Design Analysis (IODA): New Test Data and Modeling Features
NASA Technical Reports Server (NTRS)
Moore, Jim; Troy, Ed; Patrick, Brian
2003-01-01
A general overview of the capabilities of the IODA (Integrated Optical Design Analysis) software is presented. IODA promotes efficient exchange of data and modeling results between the thermal, structures, optical design, and testing engineering disciplines. This presentation focuses on new features added to the software that allow measured test data to be imported into the IODA environment for post-processing or comparison with pretest model predictions.
NASA Astrophysics Data System (ADS)
Budzisz, Joanna; Wróblewski, Zbigniew
2016-03-01
The article presents a method of modelling a vacuum circuit breaker in the ATP/EMTP package, together with the results of verifying that the developed digital circuit breaker model operates correctly and is practically useful for the analysis of overvoltages and overcurrents occurring in switched capacitive electrical circuits. Examples of digital simulations of overvoltages and overcurrents in selected circuits are also given.
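As a hedged illustration of why capacitor switching stresses a breaker model (this is not the ATP/EMTP model itself): energizing a capacitor bank through the source inductance drives the capacitor voltage toward roughly twice the source peak. All circuit values here are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, C, R = 5e-3, 50e-6, 0.5              # hypothetical source inductance, bank, damping
Vs = 230e3 * np.sqrt(2) / np.sqrt(3)    # closing at the phase-voltage peak (worst case)

def circuit(t, y):
    """Series RLC energization of an initially discharged capacitor bank."""
    i, v = y
    di = (Vs - R * i - v) / L
    dv = i / C
    return [di, dv]

sol = solve_ivp(circuit, (0, 0.02), [0.0, 0.0], max_step=1e-5)
print(f"peak capacitor voltage: {sol.y[1].max() / Vs:.2f} per unit of source peak")
```

With light damping the peak approaches 2 per unit, which is why breaker restrikes during capacitive switching are a classic overvoltage concern.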
System capacity and economic modeling computer tool for satellite mobile communications systems
NASA Technical Reports Server (NTRS)
Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.
1988-01-01
A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.
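A toy version of the engineering model's link calculation: carrier-to-noise density from EIRP, free-space loss, and terminal G/T, followed by a crude carrier-count capacity estimate. Every value is invented, not taken from the paper's Lotus-based model.

```python
import math

# Hypothetical L-band mobile link parameters.
eirp_dbw = 20.0                       # satellite EIRP per carrier
freq_hz, range_m = 1.5e9, 36_000e3    # L-band carrier, GEO slant range
gt_dbk = -12.0                        # mobile terminal G/T
k_dbw = -228.6                        # Boltzmann's constant in dBW/K/Hz

fsl_db = 20 * math.log10(4 * math.pi * range_m * freq_hz / 3e8)  # free-space loss
cn0_dbhz = eirp_dbw - fsl_db + gt_dbk - k_dbw

# Crude capacity: equal-power carriers within the transponder power budget.
total_power_dbw = 33.0                # assumed transponder EIRP budget
carriers = math.floor(10 ** ((total_power_dbw - eirp_dbw) / 10))
print(f"C/N0 = {cn0_dbhz:.1f} dB-Hz; capacity ~ {carriers} equal-power carriers")
```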
A generalized spatiotemporal covariance model for stationary background in analysis of MEG data.
Plis, S M; Schmidt, D M; Jun, S C; Ranken, D M
2006-01-01
Using a noise covariance model based on a single Kronecker product of spatial and temporal covariance in the spatiotemporal analysis of MEG data has been demonstrated to improve results over those of the commonly used diagonal noise covariance model. In this paper we present a model that is a generalization of all of the above models. It describes models based on a single Kronecker product of spatial and temporal covariance as well as more complicated multi-pair models, together with any intermediate form expressed as a sum of Kronecker products of spatial component matrices of reduced rank and their corresponding temporal covariance matrices. The model provides a framework for controlling the trade-off between the complexity used to describe the background and the computational demand of the analysis. Ways to estimate the value of the parameter controlling this trade-off are also discussed.
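In symbols, the generalized background covariance is C = Σ_k A_k ⊗ B_k, where the A_k are reduced-rank spatial components and the B_k their temporal covariances; a single pair recovers the Kronecker model. A small numpy sketch with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_space, n_time, n_pairs = 8, 20, 3   # hypothetical sensor count, samples, model order

def random_spd(n, rank):
    """Random symmetric positive semi-definite matrix of the given rank."""
    F = rng.normal(size=(n, rank))
    return F @ F.T

# Background covariance as a sum of Kronecker products of reduced-rank spatial
# components with their corresponding temporal covariance matrices.
C = sum(np.kron(random_spd(n_space, rank=2), random_spd(n_time, rank=n_time))
        for _ in range(n_pairs))
print("covariance shape:", C.shape)   # (n_space*n_time, n_space*n_time)

# n_pairs = 1 recovers the single-Kronecker model; larger n_pairs trades extra
# computational demand for a richer description of the background.
```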
Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David
2015-08-01
Uncertainty analysis is an important component of dietary exposure assessments, needed to understand correctly the strength and limits of their results. Often, standard screening procedures are applied in a first step, which results in conservative estimates. If those screening procedures indicate a potential exceedance of health-based guidance values, more refined models are applied within the tiered approach. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties, to better understand how to best use uncertainty analysis to generate a more realistic picture of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, the parameters, and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those that can be introduced and influenced by the specific model during the tiered approach. This analysis identifies general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Dunshea, Frank R
2018-06-03
Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect the results of sensory analysis of food, as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop machine learning (ML) models to predict the real HR and BP based on raw video analysis (RVA), with an example application in chocolate consumption. The RVA used a computer vision algorithm based on luminosity changes in the different RGB color channels over three face regions (forehead and both cheeks). To validate the proposed method and the ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using the three face regions: (i) Model 1, predicting HR and BP from the RVA outputs with R = 0.85, and (ii) Model 2, based on time-series prediction, mapping the HR, magnitude, and luminosity values from the RVA to HR values every second with R = 0.97. An application to the sensory analysis of chocolate showed significant correlations between changes in HR and BP and chocolate hardness and purchase intention.
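A stripped-down sketch of the RVA core: recover heart rate from the mean green-channel luminosity of a face region via a band-limited spectral peak. A synthetic trace stands in for the video, and the frame rate and frequency band are assumptions:

```python
import numpy as np

fps, duration, true_hr = 30.0, 30.0, 72.0   # assumed camera frame rate (Hz), s, bpm
t = np.arange(0, duration, 1 / fps)

# Synthetic mean G-channel luminosity of a face region: pulse plus drift and noise.
g_channel = (0.5 * np.sin(2 * np.pi * true_hr / 60 * t)
             + 0.2 * t + np.random.default_rng(0).normal(0, 0.3, t.size))

signal = g_channel - np.polyval(np.polyfit(t, g_channel, 1), t)  # remove linear drift
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

# Search only a physiologically plausible band, here assumed 0.7-3 Hz (42-180 bpm).
band = (freqs > 0.7) & (freqs < 3.0)
hr_bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated HR: {hr_bpm:.1f} bpm")
```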
Cau, Andrea
2017-01-01
Bayesian phylogenetic methods integrating morphological and stratigraphic information simultaneously have been applied increasingly among paleontologists. Most of these studies have used Bayesian methods as an alternative to the widely used parsimony analysis, to infer macroevolutionary patterns and relationships among species-level or higher taxa. Among recently introduced Bayesian methodologies, the Fossilized Birth-Death (FBD) model allows incorporation of hypotheses on ancestor-descendant relationships in phylogenetic analyses including fossil taxa. Here, the FBD model is used to infer the relationships among an ingroup formed exclusively of fossil individuals, i.e., dipnoan tooth plates from four localities in the Ain el Guettar Formation of Tunisia. Previous analyses of this sample compared the results of parsimony-based phylogenetic analysis with stratigraphic methods, inferred a high diversity (five or more genera) in the Ain el Guettar Formation, and interpreted it as an artifact inflated by depositional factors. In the analysis performed here, the uncertainty in the chronostratigraphic relationships among the specimens was included among the prior settings. The results of the analysis confirm the referral of most of the specimens to the taxa Asiatoceratodus, Equinoxiodus, Lavocatodus and Neoceratodus, but reject their referral to Ceratodus and Ferganoceratodus. The resulting phylogeny constrains the evolution of the Tunisian sample exclusively to the Early Cretaceous, contrasting with the scenario previously inferred from the stratigraphically calibrated topology resulting from parsimony analysis. The phylogenetic framework also suggests that (1) the sampled localities are laterally equivalent, but (2) three localities are restricted to the youngest part of the section; both results agree with previous stratigraphic analyses of these localities. The FBD model applied to specimen-level units provides a novel tool for phylogenetic inference among fossils, and also for independent tests of stratigraphic scenarios.
Joint modelling of longitudinal CEA tumour marker progression and survival data on breast cancer
NASA Astrophysics Data System (ADS)
Borges, Ana; Sousa, Inês; Castro, Luis
2017-06-01
This work proposes the use of biostatistical methods to study breast cancer in patients of Braga Hospital's Senology Unit, located in Portugal. The primary motivation is to contribute to the understanding of the progression of breast cancer within the Portuguese population, using statistical model assumptions more complex than the traditional analysis, which take into account the possible existence of a serial correlation structure within the observations of the same subject. We aim to infer which risk factors affect the survival of Braga Hospital patients diagnosed with breast tumour, while also analysing the risk factors that affect a tumour marker used in the surveillance of disease progression, the carcinoembryonic antigen (CEA). As the survival and longitudinal processes may be associated, it is important to model these two processes together; hence, a joint model of the two processes was used to infer on their association. A data set of 540 patients, with 50 variables, was collected from medical records of the Hospital. Two different joint models, with different parameterizations that give different interpretations to the model parameters, were applied to the same data set; these were chosen by convenience as the ones implemented in the R software, and the results of the two models were compared. Results from the joint models showed that the longitudinal CEA values were significantly associated with the survival probability of these patients. A comparison between the parameter estimates obtained in this analysis and previous independent survival [4] and longitudinal analyses [5][6] leads us to conclude that independent analysis yields biased parameter estimates. Hence, an assumption of association between the two processes in a joint model of breast cancer data is necessary.
A selection model for accounting for publication bias in a full network meta-analysis.
Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia
2014-12-30
Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.
Analysis of Hepatic Blood Flow Using Chaotic Models
Cohen, M. E.; Moazamipour, H.; Hudson, D. L.; Anderson, M. F.
1990-01-01
The study of chaos in physical systems is an important new theoretical development in modeling which has emerged in the last fifteen years. It is particularly useful in explaining phenomena which arise in nonlinear dynamic systems, for which previous mathematical models produced results with intractable solutions. Analysis of blood flow is such an application. In the work described here, chaotic models are used to analyze hepatic artery and portal vein blood flow obtained from a pulsed Doppler ultrasonic flowmeter implanted in dogs.
ACME Priority Metrics (A-PRIME)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Katherine J; Zender, Charlie; Van Roekel, Luke
A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The top level of the software consists of csh scripts through which the scientist provides the input parameters; these scripts call code that performs the postprocessing of the raw data and creates plots for visual assessment.
An alternative assessment of second-order closure models in turbulent shear flows
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Gatski, Thomas B.
1994-01-01
The performance of three recently proposed second-order closure models is tested in benchmark turbulent shear flows. Both homogeneous shear flow and the log-layer of an equilibrium turbulent boundary layer are considered for this purpose. An objective analysis of the results leads to an assessment of these models that stands in contrast to that recently published by other authors. A variety of pitfalls in the formulation and testing of second-order closure models are uncovered by this analysis.
The January 2015 Repressurization of ISS ATCS Loop B - Analysis Limitations and Concerns
NASA Technical Reports Server (NTRS)
Ungar, Eugene; Rankin, J. Gary; Schaff, Mary; Figueroa, Marcelino
2015-01-01
In January 2015 a false ammonia leak alarm resulted in the shutdown and partial depressurization of one of the two International Space Station (ISS) External Active Thermal Control System (EATCS) loops. The depressurization resulted in an 18 liter vapor bubble in warm parts of the stagnant loop. To repressurize the loop and regain system operation, liquid had to be moved from the Ammonia Tank Assembly (ATA) into the loop. This raised the possibility of moving cold (as low as -30 C) ammonia into the water-filled Internal Active Thermal Control System (IATCS) interface heat exchangers. Before moving forward, the freezing potential of the repressurization was evaluated through analysis, using both a Thermal Desktop SINDA/FLUINT model and hand calculations. The models yielded very different results, but both indicated that heat exchanger freezing was not an issue. Therefore, the repressurization proceeded. The presentation describes the physical situation of the EATCS prior to repressurization and discusses the potential limits and pitfalls of the repressurization. The pre-repressurization analytical models and their results are discussed. The successful repressurization is described, and the results of a post-event model assessment are detailed.
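The hand-calculation side of such a freezing assessment reduces to an energy balance: does the cold ammonia slug, mixed with the heat exchanger's resident thermal mass, stay above the water freezing point? A sketch with entirely invented masses and properties, not the values used in the ISS analysis:

```python
# Hypothetical lumped energy balance for cold ammonia entering a warm heat exchanger.
m_nh3, cp_nh3, t_nh3 = 1.0, 4744.0, -30.0   # kg, J/(kg K), degC of incoming ammonia
m_hx, cp_hx, t_hx = 15.0, 900.0, 15.0       # kg, J/(kg K), degC of HX metal mass

t_mix = ((m_nh3 * cp_nh3 * t_nh3 + m_hx * cp_hx * t_hx)
         / (m_nh3 * cp_nh3 + m_hx * cp_hx))
print(f"bounding mixed temperature: {t_mix:.1f} C "
      f"({'above' if t_mix > 0 else 'below'} the 0 C water freezing point)")
```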
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
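For a removal-sampling protocol, the multinomial N-mixture likelihood marginalizes the multinomial observation model over an abundance prior. A compact sketch (simulated three-pass removal counts, Poisson abundance, constant detection probability) that maximizes the marginal likelihood over (lambda, p):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, logsumexp
from scipy.stats import poisson

rng = np.random.default_rng(0)
# Simulated 3-pass removal counts at 50 sites: N_i ~ Poisson(30), detection p = 0.3.
p_true, lam_true = 0.3, 30
pi_true = [p_true, (1 - p_true) * p_true, (1 - p_true) ** 2 * p_true]
N = rng.poisson(lam_true, size=50)
Y = np.array([rng.multinomial(n, pi_true + [1 - sum(pi_true)])[:3] for n in N])

def nll(theta):
    """Negative log-likelihood, marginalizing abundance N over a Poisson prior."""
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    pi = np.array([p, (1 - p) * p, (1 - p) ** 2 * p])
    ll = 0.0
    for y in Y:
        n = np.arange(y.sum(), y.sum() + 150)        # truncated sum over possible N
        log_mult = (gammaln(n + 1) - gammaln(y + 1).sum() - gammaln(n - y.sum() + 1)
                    + (y * np.log(pi)).sum() + (n - y.sum()) * np.log(1 - pi.sum()))
        ll += logsumexp(log_mult + poisson.logpmf(n, lam))
    return -ll

fit = minimize(nll, x0=[np.log(20), 0.0], method="Nelder-Mead")
print("lambda-hat:", np.exp(fit.x[0]), " p-hat:", 1 / (1 + np.exp(-fit.x[1])))
```

The same structure extends to double-observer and capture-recapture protocols by changing the cell probabilities pi.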
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
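In the linear Gaussian case, reconciling uncertain measurements with model constraints has a closed form: minimize the weighted squared adjustment subject to A x = b. A generic numpy sketch, with a made-up flow balance standing in for the SSME turbopump model:

```python
import numpy as np

# Measured values m, their variances, and linear model constraints A x = b
# (all numbers invented; e.g., three flow measurements that should balance).
m = np.array([10.2, 4.9, 5.6])
var = np.array([0.10, 0.05, 0.08])
A = np.array([[1.0, -1.0, -1.0]])     # assumed balance: flow1 = flow2 + flow3
b = np.array([0.0])

# Minimum-variance adjustment: x = m - V A^T (A V A^T)^-1 (A m - b), V = diag(var).
V = np.diag(var)
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m - b)
x = m - correction
print("reconciled values:", x, " constraint residual:", A @ x - b)
```

Less-trusted measurements (larger variance) absorb more of the adjustment, which is the essence of weighting test data against physically limited predictions.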
Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients
Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil
2018-03-27
Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit to the available data. Methods: Cox regression and parametric models (exponential, Weibull, Gompertz, log-normal, log-logistic and generalized gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The log-normal, log-logistic and generalized gamma models provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes, and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log-normal), grade was found to be the significant variable. Conclusion: The results suggest that all parametric models outperform the Cox model. The log-normal model provides the best fit and is a good substitute for Cox regression. Creative Commons Attribution License
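A sketch of the model-comparison step in Python (the study itself used STATA and R): fit several parametric survival models to the same durations and rank them by AIC. The data are simulated, and lifelines' univariate fitters stand in for the adjusted regression models actually compared.

```python
import numpy as np
from lifelines import (ExponentialFitter, WeibullFitter, LogNormalFitter,
                       LogLogisticFitter)

rng = np.random.default_rng(0)
durations = rng.lognormal(mean=3.0, sigma=0.6, size=120)   # simulated survival times
observed = rng.random(120) < 0.7                           # roughly 30% right-censored

fitters = [ExponentialFitter(), WeibullFitter(), LogNormalFitter(), LogLogisticFitter()]
for f in fitters:
    f.fit(durations, event_observed=observed)

# Rank by the Akaike Information Criterion: lower AIC indicates the better fit.
for f in sorted(fitters, key=lambda f: f.AIC_):
    print(f"{f.__class__.__name__:20s} AIC = {f.AIC_:.1f}")
```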
The art of maturity modeling. Part 2. Alternative models and sensitivity analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waples, D.W.; Suizu, Masahiro; Kamata, Hiromi
1992-01-01
The sensitivity of exploration decisions to variations in several input parameters for maturity modeling was examined for the MITI Rumoi well, Hokkaido, Japan. Decisions were almost completely insensitive to uncertainties about formation age and erosional removal across some unconformities, but were more sensitive to changes in removal during unconformities that occurred near maximum paleotemperatures. Exploration decisions were not very sensitive to the choice of a particular kinetic model for hydrocarbon generation. Uncertainties in kerogen type and in the kinetics of different kerogen types are more serious than differences among the various kinetic models. Results of modeling using the TTI method were unsatisfactory. Thermal history and the timing and amount of hydrocarbon generation estimated or calculated using the TTI method were greatly different from those obtained using a purely kinetic model. The authors strongly recommend use of the kinetic R_o method instead of the TTI method. If they had lacked measured R_o data, subsurface temperature data, or both, their confidence in the modeling results would have been sharply reduced. Conceptual models for predicting heat flow and thermal conductivity are simply too weak at present to allow one to carry out highly meaningful modeling unless the input is constrained by measured data. Maturity modeling therefore requires the use of more, not fewer, measured temperature and maturity data. The use of sensitivity analysis in maturity modeling is very important for understanding the geologic system, for knowing what level of confidence to place on the results, and for determining what new types of data would be most necessary to improve confidence. Sensitivity analysis can be carried out easily using a rapid, interactive maturity-modeling program.
Solar Dynamics Observatory (SDO) HGAS Induced Jitter
NASA Technical Reports Server (NTRS)
Liu, Alice; Blaurock, Carl; Liu, Kuo-Chia; Mule, Peter
2008-01-01
This paper presents the results of a comprehensive assessment of High Gain Antenna System induced jitter on the Solar Dynamics Observatory. The jitter prediction is created using a coupled model of the structural dynamics, optical response, control systems, and stepper motor actuator electromechanical dynamics. The paper gives an overview of the model components, presents the verification processes used to evaluate the models, describes validation and calibration tests and model-to-measurement comparison results, and presents the jitter analysis methodology and results.
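A toy of the coupled-model idea: a stepper-motor torque pulse train drives a lightly damped structural mode, and the mode response is read as line-of-sight jitter. The mode frequency, damping, and pulse train are invented; the actual SDO model couples many modes with optics and control loops.

```python
import numpy as np
from scipy import signal

# Hypothetical structural mode: 12 Hz, 0.5% damping, unit static gain to LOS angle.
wn, zeta = 2 * np.pi * 12.0, 0.005
mode = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])

# Stepper actuator disturbance: a train of short torque pulses at a 4 Hz step rate,
# whose third harmonic lands near the mode and pumps it up.
t = np.arange(0, 5, 1e-4)
u = ((t % 0.25) < 2e-3).astype(float)   # 2 ms pulse every step

t_out, los, _ = signal.lsim(mode, u, t)
print(f"peak LOS jitter: {np.abs(los).max():.3e} (normalized units)")
```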
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
NASA Astrophysics Data System (ADS)
Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.
2017-04-01
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.
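A schematic of the extension described: a conventional diffusion-based release fraction (here the early-time Booth approximation) plus a burst term triggered by a micro-cracking criterion, with an invented temperature threshold. The real governing equations and parameters are those of the paper, not this sketch.

```python
import numpy as np

def booth_fraction(Dp, t):
    """Booth early-time release from an equivalent sphere: f = 4*sqrt(D't/pi) - 1.5*D't,
    with D' = D/a^2 (valid for small release fractions)."""
    tau = Dp * t
    return np.clip(4 * np.sqrt(tau / np.pi) - 1.5 * tau, 0.0, 1.0)

Dp = 1e-10                                  # assumed reduced diffusivity D/a^2, 1/s
dt, burst_fraction, T_crit = 3600.0, 0.15, 1800.0   # invented burst parameters

t_hist = np.arange(0, 200) * dt
T_hist = 1400 + 500 * (t_hist > 150 * dt)   # idealized temperature step during a transient

fgr, burst_done = [], False
for t, T in zip(t_hist, T_hist):
    f = booth_fraction(Dp, t)
    if T > T_crit:
        burst_done = True                   # micro-cracking criterion met: burst release
    if burst_done:
        f = min(f + burst_fraction, 1.0)
    fgr.append(f)
print(f"FGR before transient: {fgr[149]:.4f}, after burst: {fgr[-1]:.4f}")
```

The step change in released fraction at the trigger is the "burst release" signature that purely diffusion-based models cannot reproduce on transient time scales.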
Razi, Bahman; Anani Sarab, Gholamreza; Omidkhoda, Azadeh; Alizadeh, Shahab
2018-03-01
Several studies have evaluated the association between the multidrug resistance 1 (MDR1) polymorphism (rs1045642 C > T) and multiple myeloma (MM). However, the results have not been consistent. Therefore, to reach a comprehensive and reliable answer, we assessed the association between the MDR1 (rs1045642 C > T) polymorphism and MM in a meta-analysis. All eligible studies published in the EMBASE, PubMed, and Web of Science databases before July 2017 were reviewed. Subsequently, to assess the strength of association in the dominant model, recessive model, allelic model, homozygote contrast, and heterozygote contrast, pooled odds ratios (ORs) and 95% confidence intervals (CIs) were calculated with the fixed effects model. A total of four case-control studies with 395 MM cases and 418 healthy controls were included in the meta-analysis. The overall results showed no significant association between the MDR1 (rs1045642 C > T) polymorphism and the risk of MM in any genetic model (dominant model: OR = 1.04, 95% CI = 0.78-1.38; recessive model: OR = 0.74, 95% CI = 0.52-1.06; allelic model: OR = 0.90, 95% CI = 0.73-1.11; TT vs. CC: OR = 0.80, 95% CI = 0.51-1.25; and CT vs. CC: OR = 1.12, 95% CI = 0.77-1.62). No evidence of publication bias was detected except in the analysis of the recessive model. This meta-analysis suggests that the MDR1 C > T polymorphism is not associated with the risk of MM. To confirm these findings, further comprehensive and well-designed studies are needed.
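The pooling step is standard inverse-variance fixed-effect meta-analysis on log odds ratios. A minimal sketch with invented 2x2 counts (not the four included studies' data):

```python
import numpy as np

# Hypothetical (case_exposed, case_unexposed, control_exposed, control_unexposed)
# counts per study for one genetic contrast.
studies = np.array([[40, 60, 45, 55],
                    [30, 70, 28, 72],
                    [55, 45, 60, 40],
                    [20, 80, 25, 75]], dtype=float)

a, b, c, d = studies.T
log_or = np.log((a * d) / (b * c))      # per-study log odds ratio
var = 1 / a + 1 / b + 1 / c + 1 / d     # its approximate (Woolf) variance
w = 1 / var                             # inverse-variance weights

pooled = np.sum(w * log_or) / np.sum(w)  # fixed-effect pooled log OR
se = np.sqrt(1 / np.sum(w))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```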