Using Structural Equation Modeling To Fit Models Incorporating Principal Components.
ERIC Educational Resources Information Center
Dolan, Conor; Bechger, Timo; Molenaar, Peter
1999-01-01
Considers models incorporating principal components from the perspective of structural equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…
Brackney, Larry; Parker, Andrew; Long, Nicholas; Metzger, Ian; Dean, Jesse; Lisell, Lars
2016-04-12
A building energy analysis system includes: a building component library configured to store a plurality of building components; a modeling tool configured to access the building component library and create a building model of a building under analysis using building spatial data and selected building components from the library; a building analysis engine configured to operate the building model, generate a baseline energy model of the building under analysis, and apply one or more energy conservation measures to the baseline energy model in order to generate one or more corresponding optimized energy models; and a recommendation tool configured to assess the one or more optimized energy models against the baseline energy model and generate recommendations for substitute building components or modifications.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods based on higher-order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Simulation studies show that the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. The failure of the Akaike Information Criterion in model selection is also relevant to traditional independent components analysis, where all sources are assumed non-Gaussian. PMID:25811988
Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution
NASA Astrophysics Data System (ADS)
Zhao, X.; Suganuma, Y.; Fujii, M.
2017-12-01
Geological samples often contain several magnetic components with distinct origins. Because these components are often indicative of the underlying geological and environmental processes, it is desirable to identify individual components and extract the associated information. Such component analysis can be achieved using the so-called unmixing method, which fits a mixture of end-member model distributions to the measured remanent magnetization curve. Earlier studies used the lognormal, skew generalized Gaussian, and skewed Gaussian distributions as end-member model distributions, applied to the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, because differentiating the measured curve amplifies noise and can thereby deteriorate the component analysis. Although smoothing or filtering can reduce the noise before differentiation, the bias these operations introduce into the component analysis is rarely addressed. In this study, we investigated a new model function that can be applied directly to remanent magnetization curves, thereby avoiding differentiation. The new model function provides a more flexible shape than the lognormal distribution, a merit when modeling the coercivity distributions of complex magnetic components. We applied the unmixing method to both synthetic and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability, and limitations. The analyses of synthetic data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components exceeds two. It is therefore recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. For the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, freeing users from supplying initial parameter guesses, which also helps reduce the subjectivity of component analysis.
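The direct-fitting idea can be sketched in Python. The end-member shape below (a logistic CDF in log field) is a stand-in for the paper's new model distribution, which is not reproduced here, and all field values, dispersion parameters, and amplitudes are invented for illustration. With candidate coercivity parameters fixed, the component amplitudes follow from linear least squares on the undifferentiated curve:

```python
import numpy as np

def acquisition_component(B, B_half, dp):
    # Assumed end-member shape: a logistic CDF in log10(field). This is a
    # placeholder for the paper's model distribution, which is not given here.
    return 1.0 / (1.0 + np.exp(-(np.log10(B) - np.log10(B_half)) / dp))

# Synthetic two-component IRM acquisition curve with measurement noise.
rng = np.random.default_rng(0)
B = np.logspace(0, 3, 200)          # applied field, mT (hypothetical range)
measured = (0.7 * acquisition_component(B, 60.0, 0.15)
            + 0.3 * acquisition_component(B, 400.0, 0.10)
            + 0.005 * rng.standard_normal(B.size))

# With the coercivity parameters of the candidate components fixed, the
# amplitudes are linear in the model, so no differentiation is needed.
design = np.column_stack([acquisition_component(B, 60.0, 0.15),
                          acquisition_component(B, 400.0, 0.10)])
amps, *_ = np.linalg.lstsq(design, measured, rcond=None)
```

Because the fit operates on the measured curve itself rather than on its gradient, the measurement noise is not amplified by differentiation.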
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test higher-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
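For concreteness, parallel analysis can be sketched as a small simulation: retain a component when its observed correlation-matrix eigenvalue exceeds a chosen quantile of the corresponding eigenvalue from random normal data of the same size. The 95th-percentile variant and the two-block test data below are illustrative choices, not taken from the paper:

```python
import numpy as np

def parallel_analysis(X, n_sim=200, quantile=0.95, seed=0):
    # Horn's parallel analysis on a correlation matrix: compare each observed
    # eigenvalue to the same-rank eigenvalue quantile from random data.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sim_eig = np.empty((n_sim, p))
    for i in range(n_sim):
        R = rng.standard_normal((n, p))
        sim_eig[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    thresholds = np.quantile(sim_eig, quantile, axis=0)
    return int(np.sum(obs_eig > thresholds)), obs_eig, thresholds

# Two correlated blocks of three variables each: two components should survive.
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 500))
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.4 * rng.standard_normal((500, 6))
k, obs, thr = parallel_analysis(X)
```

The same machinery makes the paper's point concrete: the simulated thresholds play the role of a null eigenvalue distribution, which for the first component is what the Tracy-Widom test formalizes.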
Model reduction by weighted Component Cost Analysis
NASA Technical Reports Server (NTRS)
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of components and assigns to each a metric called the 'component cost.' These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting the components with the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when the actuators have dynamics of their own. Closed-form analytical expressions for the component costs are derived for a mechanical system described by its modal data, which is very useful for computing the modal costs of very high-order systems. A numerical example for the MINIMAST system is presented.
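The cost decomposition can be sketched for a discrete-time linear system driven by white noise. The two-mode modal model, forward-Euler discretization, and parameter values below are hypothetical, and an iterative Lyapunov solve stands in for the closed-form modal cost expressions derived in the paper:

```python
import numpy as np

def steady_state_cov(A, BWB, n_iter=20000):
    # Fixed-point iteration of the discrete Lyapunov equation X = A X A^T + BWB.
    X = np.zeros_like(A)
    for _ in range(n_iter):
        X = A @ X @ A.T + BWB
    return X

def component_costs(A, B, C, Q=None, W=None):
    # Cost V = E[y^T Q y] = tr(X C^T Q C), decomposed as V_i = [X C^T Q C]_ii.
    m = B.shape[1]
    Q = np.eye(C.shape[0]) if Q is None else Q
    W = np.eye(m) if W is None else W
    X = steady_state_cov(A, B @ W @ B.T)
    return np.diag(X @ C.T @ Q @ C)

def modal_block(freq, damp, dt=0.01):
    # Forward-Euler discretization of one second-order modal equation.
    Ac = np.array([[0.0, 1.0], [-freq**2, -2.0 * damp * freq]])
    return np.eye(2) + dt * Ac

# Hypothetical system: a lightly damped slow mode and a heavily damped fast
# mode; the slow mode should dominate the position-output cost.
A = np.block([[modal_block(1.0, 0.02), np.zeros((2, 2))],
              [np.zeros((2, 2)), modal_block(10.0, 0.7)]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 1.0, 0.0]])
costs = component_costs(A, B, C)
```

Deleting the states with the smallest entries of `costs` is the model-reduction step the abstract describes.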
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its ability to provide accurate sensitivity measurements. However, the conventional variance-based method considers only the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to additional uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric, with each layer capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed on this three-layer structure using a Bayesian network, in which the different uncertainty components are represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. The variance-based sensitivity analysis is thus extended to investigate the importance of a broader range of uncertainty sources: scenario, model, and other combinations of uncertainty components that represent key processes of the modeled system (e.g., the groundwater recharge process or the reactive flow and transport process). For testing and demonstration, the developed methodology was applied to a real-world groundwater reactive transport model with various uncertainty sources. The results demonstrate that the new sensitivity analysis method estimates accurate importance measures for any uncertainty source formed by a combination of uncertainty components. The new methodology can provide useful information for environmental managers and decision-makers when formulating policies and strategies.
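The idea of computing variance-based indices for a *group* of uncertainty components, rather than a single parameter, can be sketched with a pick-freeze Monte Carlo estimator. The toy linear "transport" model and the two groupings below (a dominant recharge-like group and a weak flow-like group) are invented for illustration and are unrelated to the paper's actual model:

```python
import numpy as np

def first_order_index(f, sample, group, n=20000, seed=0):
    # Pick-freeze estimator of the first-order Sobol index for a group of
    # inputs: freeze the group's columns, redraw the rest, correlate outputs.
    rng = np.random.default_rng(seed)
    A = sample(rng, n)
    B = sample(rng, n)
    AB = B.copy()
    AB[:, group] = A[:, group]      # keep the group's values, resample others
    yA, yAB = f(A), f(AB)
    return (np.mean(yA * yAB) - np.mean(yA) * np.mean(yAB)) / yA.var()

# Hypothetical toy model: the "recharge" inputs (x0, x1) dominate the output.
def model(X):
    return 4.0 * X[:, 0] + 3.0 * X[:, 1] + 0.5 * X[:, 2] + 0.5 * X[:, 3]

sample = lambda rng, n: rng.standard_normal((n, 4))
s_recharge = first_order_index(model, sample, [0, 1])
s_flow = first_order_index(model, sample, [2, 3])
```

For this linear model the exact group indices are 25/25.5 and 0.5/25.5, so the estimator should rank the recharge group far above the flow group, mirroring how grouped sources are compared in the framework.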
NASA Astrophysics Data System (ADS)
Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.
2018-03-01
A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying components, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation with a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The demodulation removes high-order frequency modulation (FM) so that the latent feature model can infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that share the same FM behavior. In addition, the results show that, in recovering the amplitude and phase of superimposed components, the proposed method is superior to a generalised-demodulation method based on singular value decomposition, a parametric time-frequency analysis method based on filtering, and an empirical mode decomposition-based method.
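The demodulation step alone (not the Gaussian latent feature model) can be illustrated in a few lines: multiplying by the conjugate of the estimated phase collapses a nonlinearly frequency-modulated component into a narrow band near 0 Hz, where it is easy to isolate. The chirp parameters are invented, and for simplicity the phase is assumed known exactly rather than estimated from a time-frequency representation:

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)

# A nonlinearly frequency-modulated component (cubic phase, quadratic IF law).
phase = 2 * np.pi * (50 * t + 40 * t**3)
x = np.cos(phase)

# Demodulation: multiply by the conjugate phase so the component collapses
# to a narrow band around 0 Hz; residual energy spreads over high bins.
demodulated = x * np.exp(-1j * phase)
spectrum = np.abs(np.fft.fft(demodulated))
peak_bin = spectrum.argmax()
```

After this step a simple low-pass operation recovers the component's amplitude; in the paper's framework the latent feature model plays that recovery role across multiple components at once.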
A 5-year (2002-2006) simulation of CMAQ covering the eastern United States is evaluated using principal component analysis in order to identify and characterize statistically significant patterns of model bias. Such analysis is useful in that it can identify areas of poor model ...
Multibody model reduction by component mode synthesis and component cost analysis
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1990-01-01
The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.
Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.
2015-01-01
In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483
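The (2D)2PCA projection can be sketched as follows: learn row and column eigenvector bases from a stack of sample matrices and compress each matrix in both directions. The data shapes echo the 36-variable sliding window, but the values are random, the window length and subspace sizes are invented, and the RBFNN forecasting stage is omitted:

```python
import numpy as np

def two_dir_2dpca(As, d, q):
    # Two-directional 2D-PCA: column-scatter and row-scatter matrices of the
    # centered samples give projections that compress m x n inputs to d x q.
    Ac = As - As.mean(axis=0)
    G_col = np.einsum('kij,kil->jl', Ac, Ac) / len(As)   # n x n scatter
    G_row = np.einsum('kij,klj->il', Ac, Ac) / len(As)   # m x m scatter
    X = np.linalg.eigh(G_col)[1][:, ::-1][:, :q]         # top-q column basis
    Z = np.linalg.eigh(G_row)[1][:, ::-1][:, :d]         # top-d row basis
    return Z, X

# Hypothetical sliding-window inputs: 200 windows of 20 days x 36 indicators.
rng = np.random.default_rng(0)
As = rng.standard_normal((200, 20, 36))
Z, X = two_dir_2dpca(As, d=5, q=8)
features = np.einsum('im,kmn,nj->kij', Z.T, As, X)       # 200 x 5 x 8
```

Each 20 x 36 window is reduced to a 5 x 8 feature matrix, which is the kind of compressed input a downstream predictor such as an RBF network would receive.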
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation about how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are ordered according to the following functional group: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.
Conceptual model of iCAL4LA: Proposing the components using comparative analysis
NASA Astrophysics Data System (ADS)
Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul
2016-08-01
This paper discusses an on-going study that begins the process of determining the common components for a conceptual model of interactive computer-assisted learning designed specifically for low-achieving children. This group of children needs specific learning support that can serve as alternative learning material in their learning environment. To develop the conceptual model, the study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a set of specific indication classifications to prioritize applicability. The extraction process reveals 17 common components for consideration; based on scientific justifications, 16 of them were selected as the proposed components for the model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nee, K.; Bryan, S.; Levitskaia, T.; ...
2017-12-28
The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance a process without interfering with its operation. Here we present hierarchical models, developed using both principal component analysis and partial least squares analysis, for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared and Raman spectral data, as well as conductivity under variable temperature conditions, were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while retaining the key principal components used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprising complementary techniques: NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
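Compressing spectra with PCA before regressing on concentrations can be sketched as principal component regression. The synthetic "spectra", channel count, and concentration relationship below are invented, and the paper's actual hierarchical PCA/PLS models are considerably more involved:

```python
import numpy as np

def pcr_fit(X, y, k):
    # Principal component regression: project centered spectra onto the
    # top-k right singular vectors, then regress y on the scores.
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                          # loadings: channels x k
    T = Xc @ V                            # scores:   samples  x k
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(T)), T]), y,
                               rcond=None)
    return mu, V, beta

def pcr_predict(X, mu, V, beta):
    T = (X - mu) @ V
    return np.column_stack([np.ones(len(T)), T]) @ beta

# Synthetic "spectra": 120 channels driven by two latent concentrations.
rng = np.random.default_rng(0)
conc = rng.uniform(0, 1, (380, 2))
bands = rng.standard_normal((2, 120))
X = conc @ bands + 0.01 * rng.standard_normal((380, 120))
y = 2.0 * conc[:, 0] - 1.0 * conc[:, 1]   # target analyte concentration
mu, V, beta = pcr_fit(X, y, k=2)
y_hat = pcr_predict(X, mu, V, beta)
```

Keeping only the dominant principal components reduces the rank of the spectral matrix before the regression step, which is the compression role PCA plays in the abstract.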
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, OpenVSP has no built-in method for defining high-lift components in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper instead utilizes OpenVSP's existing capabilities to establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model of the Energy-Efficient Transport (EET) AR12 wind-tunnel model was created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Takane, Yoshio
2004-01-01
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…
Veronica C. Lessard
2001-01-01
The Forest Inventory and Analysis (FIA) program of the North Central Research Station (NCRS), USDA Forest Service, has developed nonlinear, individual-tree, distance-independent annual diameter growth models. The models are calibrated for species groups and formulated as the product of an average diameter growth component and a modifier component. The regional models...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritsky, K.J.; Miller, D.L.; Cernansky, N.P.
1994-09-01
A methodology was introduced for modeling the devolatilization characteristics of refuse-derived fuel (RDF) in terms of temperature-dependent weight loss. The basic premise of the methodology is that RDF is modeled as a combination of select municipal solid waste (MSW) components. Kinetic parameters are derived for each component from thermogravimetric analyzer (TGA) data measured at a specific set of conditions. These experimentally derived parameters, along with user-defined parameters, are input to model equations to calculate thermograms for the components. The component thermograms are summed to create a composite thermogram that estimates the devolatilization of the as-modeled RDF. The methodology has several attractive features as a thermal analysis tool for waste fuels. 7 refs., 10 figs., 3 tabs.
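The summation idea can be sketched numerically: integrate a first-order Arrhenius rate law for each component at a constant heating rate, then form the composite thermogram as a mass-fraction-weighted sum. The two-component surrogate and all kinetic parameters below are invented placeholders, not the paper's fitted values:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def component_thermogram(T, A, E, beta):
    # Residual weight fraction for first-order Arrhenius devolatilization:
    # d(alpha)/dT = (k/beta)(1 - alpha)  =>  1 - alpha = exp(-Int k/beta dT),
    # integrated here with the trapezoidal rule over the temperature grid T (K).
    k = A * np.exp(-E / (R * T))
    integral = np.concatenate(
        [[0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(T))]) / beta
    return np.exp(-integral)

# Hypothetical two-component RDF surrogate at a 10 K/min heating rate.
T = np.linspace(400.0, 900.0, 1000)
beta = 10.0 / 60.0                          # K/s
paper_like = component_thermogram(T, A=1e13, E=1.8e5, beta=beta)
plastic_like = component_thermogram(T, A=1e15, E=2.4e5, beta=beta)
composite = 0.6 * paper_like + 0.4 * plastic_like   # mass-fraction weighted
```

The composite curve starts at full weight, loses the two fractions in their respective temperature windows, and levels off, which is the qualitative behavior a TGA-derived composite thermogram is meant to reproduce.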
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1988-01-01
This annual status report presents the results of work performed during the fourth year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes permitting more accurate and efficient 3-D analysis of selected hot section components, i.e., combustor liners, turbine blades and turbine vanes. The computer codes embody a progression of math models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. Volume 1 of this report discusses the special finite element models developed during the fourth year of the contract.
Mixture modelling for cluster analysis.
McLachlan, G J; Chang, S U
2004-10-01
Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
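A minimal EM sketch of this clustering approach for univariate data, with g = 2 normal components and the outright cluster assignment by highest posterior probability (the quantile-based initialization and the test data are illustrative choices):

```python
import numpy as np

def fit_gmm_1d(x, g=2, n_iter=200):
    # EM for a g-component univariate normal mixture; returns the mixture
    # parameters and the posterior probabilities used for clustering.
    mu = np.quantile(x, np.linspace(0.1, 0.9, g))   # spread-out initial means
    sigma = np.full(g, x.std())
    pi = np.full(g, 1.0 / g)
    for _ in range(n_iter):
        # E-step: posterior probability of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
             / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma, resp

# Two well-separated clusters; assign each point to its most probable component.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(8, 1, 200)])
pi, mu, sigma, resp = fit_gmm_1d(x)
labels = resp.argmax(axis=1)
```

The `argmax` over posteriors is exactly the "outright clustering" step the abstract describes: the ith cluster is the set of observations assigned to the ith component.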
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.
Personal Computer Transport Analysis Program
NASA Technical Reports Server (NTRS)
DiStefano, Frank, III; Wobick, Craig; Chapman, Kirt; McCloud, Peter
2012-01-01
The Personal Computer Transport Analysis Program (PCTAP) is C++ software used for analysis of thermal fluid systems. The program predicts thermal fluid system and component transients. The output consists of temperatures, flow rates, pressures, delta pressures, tank quantities, and gas quantities in the air, along with air scrubbing component performance. PCTAP's solution process assumes that the tubes in the system are well insulated so that only the heat transfer between fluid and tube wall and between adjacent tubes is modeled. The system described in the model file is broken down into its individual components; i.e., tubes, cold plates, heat exchangers, etc. A solution vector is built from the components and a flow is then simulated with fluid being transferred from one component to the next. The solution vector of components in the model file is built at the initiation of the run. This solution vector is simply a list of components in the order of their inlet dependency on other components. The component parameters are updated in the order in which they appear in the list at every time step. Once the solution vectors have been determined, PCTAP cycles through the components in the solution vector, executing their outlet function for each time-step increment.
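Ordering components by their inlet dependency amounts to a topological sort of the component graph. A pure-Python sketch using Kahn's algorithm is shown below; the component names are hypothetical, each component is assumed to have a single inlet for simplicity, and PCTAP's actual data structures are of course its own:

```python
from collections import deque

def build_solution_vector(components, inlet_of):
    # Order components so each appears after the component feeding its inlet.
    # `inlet_of` maps component -> upstream component (or None for a source).
    downstream = {c: [] for c in components}
    indegree = {c: 0 for c in components}
    for c, src in inlet_of.items():
        if src is not None:
            downstream[src].append(c)
            indegree[c] += 1
    queue = deque(c for c in components if indegree[c] == 0)
    order = []
    while queue:                      # Kahn's algorithm
        c = queue.popleft()
        order.append(c)
        for d in downstream[c]:
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return order

# Hypothetical loop: tank feeds a tube, then a cold plate, then a heat exchanger.
components = ["hx", "tank", "cold_plate", "tube"]
inlet_of = {"tank": None, "tube": "tank", "cold_plate": "tube", "hx": "cold_plate"}
order = build_solution_vector(components, inlet_of)
```

Updating component parameters in this order at each time step guarantees that every component sees its upstream neighbor's freshly computed outlet state.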
Space Shuttle critical function audit
NASA Technical Reports Server (NTRS)
Sacks, Ivan J.; Dipol, John; Su, Paul
1990-01-01
A large fault-tolerance model of the main propulsion system of the US space shuttle has been developed. This model is being used to identify single components and pairs of components that will cause loss of shuttle critical functions. In addition, this model is the basis for risk quantification of the shuttle. The process used to develop and analyze the model is digraph matrix analysis (DMA). The DMA modeling and analysis process is accessed via a graphics-based computer user interface. This interface provides coupled display of the integrated system schematics, the digraph models, the component database, and the results of the fault tolerance and risk analyses.
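The search for single components and pairs whose loss defeats a critical function can be sketched as reachability analysis on a directed graph. The propulsion-path graph below is invented for illustration, and digraph matrix analysis itself is considerably more sophisticated than this brute-force cut enumeration:

```python
from itertools import combinations

def reaches(graph, src, dst, removed):
    # Depth-first search from src to dst, skipping removed components.
    stack, seen = [src], {src}
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in removed and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def critical_sets(graph, src, dst):
    # Single components, then pairs (excluding known singles), whose loss
    # disconnects the critical function from its source.
    parts = [n for n in graph if n not in (src, dst)]
    singles = [c for c in parts if not reaches(graph, src, dst, {c})]
    pairs = [p for p in combinations(parts, 2)
             if not any(c in p for c in singles)
             and not reaches(graph, src, dst, set(p))]
    return singles, pairs

# Hypothetical propellant path: two redundant valves feed a common manifold.
graph = {"tank": ["valve_a", "valve_b"], "valve_a": ["manifold"],
         "valve_b": ["manifold"], "manifold": ["engine"], "engine": []}
singles, pairs = critical_sets(graph, "tank", "engine")
```

The single point of failure is the shared manifold, while the redundant valves only defeat the function when lost together, which is the kind of distinction the shuttle audit model was built to surface.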
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
Component-specific modeling. [jet engine hot section components
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.
1992-01-01
Accomplishments are described for a 3 year program to develop methodology for component-specific modeling of aircraft hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models, (2) geometry model generators, (3) remeshing, (4) specialty three-dimensional inelastic structural analysis, (5) computationally efficient solvers, (6) adaptive solution strategies, (7) engine performance parameters/component response variables decomposition and synthesis, (8) integrated software architecture and development, and (9) validation cases for software developed.
Stress analysis of 27% scale model of AH-64 main rotor hub
NASA Technical Reports Server (NTRS)
Hodges, R. V.
1985-01-01
Stress analysis of an AH-64 27% scale model rotor hub was performed. Component loads and stresses were calculated from blade root loads and motions. The static and fatigue analysis indicates positive margins of safety in all components checked. Using the format developed here, the hub can be stress-checked for future applications.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
Zadpoor, Amir A; Weinans, Harrie
2015-03-18
Patient-specific analysis of bones is considered an important tool for diagnosis and treatment of skeletal diseases and for clinical research aimed at understanding the etiology of skeletal diseases and the effects of different types of treatment on their progress. In this article, we discuss how integration of several important components enables accurate and cost-effective patient-specific bone analysis, focusing primarily on patient-specific finite element (FE) modeling of bones. First, the different components are briefly reviewed. Then, two important aspects of patient-specific FE modeling, namely integration of modeling components and automation of modeling approaches, are discussed. We conclude with a section on validation of patient-specific modeling results, possible applications of patient-specific modeling procedures, current limitations of the modeling approaches, and possible areas for future research. Copyright © 2014 Elsevier Ltd. All rights reserved.
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. T. Clark; M. J. Russell; R. E. Spears
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses.
This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
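The load-matching iteration described in the abstract is a fixed-point problem: the linear system analysis yields loads that depend on the component's flexibility factor, while the component FE analysis yields a flexibility factor that depends on the applied load. A minimal sketch of that loop, with `system_loads` and `fe_flexibility` as invented stand-ins for the actual analyses (not the paper's models), might look like:

```python
def system_loads(k_flex):
    """Hypothetical linear system analysis: moment at the nonstandard
    component as a function of its flexibility factor (a more flexible
    component attracts less load)."""
    return 1000.0 / (1.0 + 0.5 * k_flex)

def fe_flexibility(moment):
    """Hypothetical component FE result: the flexibility factor depends on
    the applied load magnitude (geometric/material nonlinearity)."""
    return 2.0 + 0.0005 * moment

def iterate_flexibility(k0=2.0, tol=1e-6, max_iter=100):
    """Iterate system analysis and component FE until the flexibility
    factor is consistent with the loads it produces."""
    k = k0
    for _ in range(max_iter):
        m = system_loads(k)
        k_new = fe_flexibility(m)
        if abs(k_new - k) < tol:
            return k_new, m
        k = k_new
    raise RuntimeError("flexibility factor iteration did not converge")

k, m = iterate_flexibility()
```

With these placeholder relations the iteration is a contraction and converges in a handful of steps; real analyses would replace both functions with calls to the system and component models.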
NASA Technical Reports Server (NTRS)
Chatterjee, Sharmista
1993-01-01
Our first goal in this project was to perform a systems analysis of a closed loop Environmental Control Life Support System (ECLSS). This pertains to the development of a model of an existing real system from which to assess the state or performance of the existing system. Systems analysis is applied to conceptual models obtained from a system design effort. For our modelling purposes we used a simulator tool called ASPEN (Advanced System for Process Engineering). Our second goal was to evaluate the thermodynamic efficiency of the different components comprising an ECLSS. Use is made of the second law of thermodynamics to determine the amount of irreversibility, or energy loss, of each component. This will aid design scientists in selecting the components generating the least entropy, as our ultimate goal is to keep the entropy generation of the whole system at a minimum.
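The second-law bookkeeping described above reduces, for a steady-flow component, to the entropy balance S_gen = m_dot(s_out - s_in) - Q_dot/T_b >= 0. A hedged sketch with invented property values (not the ECLSS study's data) shows how two candidate components could be ranked by entropy generation:

```python
def entropy_generation(m_dot, s_in, s_out, q_dot, t_boundary):
    """Rate of entropy generation [kW/K] for a steady-flow component:
    S_gen = m_dot*(s_out - s_in) - Q_dot/T_boundary (second law)."""
    return m_dot * (s_out - s_in) - q_dot / t_boundary

# Compare two hypothetical candidate components doing the same duty;
# the one with lower S_gen is thermodynamically preferred.
s_gen_a = entropy_generation(m_dot=0.1, s_in=1.00, s_out=1.25,
                             q_dot=5.0, t_boundary=300.0)
s_gen_b = entropy_generation(m_dot=0.1, s_in=1.00, s_out=1.40,
                             q_dot=5.0, t_boundary=300.0)
```

Both values are positive, as the second law requires, and component A generates less entropy, so it would be the preferred selection under this criterion.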
A Cost-Utility Model of Care for Peristomal Skin Complications
Inglese, Gary; Manson, Andrea; Townshend, Arden
2016-01-01
PURPOSE: The aim of this study was to evaluate the economic and humanistic implications of using ostomy components to prevent subsequent peristomal skin complications (PSCs) in individuals who experience an initial, leakage-related PSC event. DESIGN: Cost-utility analysis. METHODS: We developed a simple decision model to consider, from a payer's perspective, PSCs managed with and without the use of ostomy components over 1 year. The model evaluated the extent to which outcomes associated with the use of ostomy components (PSC events avoided; quality-adjusted life days gained) offset the costs associated with their use. RESULTS: Our base case analysis of 1000 hypothetical individuals over 1 year assumes that using ostomy components following a first PSC reduces recurrent events versus PSC management without components. In this analysis, component acquisition costs were largely offset by lower resource use for ostomy supplies (barriers; pouches) and lower clinical utilization to manage PSCs. The overall annual average resource use for individuals using components was about 6.3% ($139) higher versus individuals not using components. Each PSC event avoided yielded, on average, 8 additional quality-adjusted life days over 1 year. CONCLUSIONS: In our analysis, (1) acquisition costs for ostomy components were offset in whole or in part by the use of fewer ostomy supplies to manage PSCs and (2) use of ostomy components to prevent PSCs produced better outcomes (fewer repeat PSC events; more health-related quality-adjusted life days) over 1 year compared to not using components. PMID:26633166
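The decision model's arithmetic can be sketched as a two-arm expected-cost comparison. The per-patient supply costs below are invented placeholders back-solved so the toy model reproduces the abstract's headline figures (about $139, or 6.3%, higher annual resource use with components; 250 events avoided per 1000 patients; 8 quality-adjusted life days per event avoided); they are not the study's actual inputs:

```python
def arm(n, p_recurrence, cost_supplies, cost_per_psc_event, qald_loss_per_event):
    """One arm of the decision model over 1 year for n patients.
    Returns (recurrent PSC events, total cost, quality-adjusted life days lost)."""
    events = n * p_recurrence
    total_cost = n * cost_supplies + events * cost_per_psc_event
    qald_lost = events * qald_loss_per_event
    return events, total_cost, qald_lost

# With ostomy components: higher acquisition/supply cost, fewer recurrent events.
ev_c, cost_c, qald_c = arm(1000, 0.20, 2286.0, 300.0, 8)
# Without components: cheaper supplies, more recurrent PSC events.
ev_n, cost_n, qald_n = arm(1000, 0.45, 2072.0, 300.0, 8)

per_person_extra = (cost_c - cost_n) / 1000   # ~$139/year
events_avoided = ev_n - ev_c                  # 250 per 1000 patients
qald_gained = qald_n - qald_c                 # 8 per event avoided
```

This is only the accounting skeleton; the published analysis adds clinical utilization costs and utility weights that are omitted here.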
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
A series of interdisciplinary modeling and analysis techniques that were specialized to address three specific hot section components are presented. These techniques will incorporate data as well as theoretical methods from many diverse areas including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air cooled turbine blades, and air cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress and strain histories throughout a complete flight mission.
PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS
NASA Technical Reports Server (NTRS)
Savage, M.
1994-01-01
The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration specific subroutines, generic component property analysis subroutines, systems analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. 
The series of configuration specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call up the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled on a Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104k bytes of memory. The program was developed in 1988.
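The system model described above, component reliabilities combined along the main load path, can be sketched with the two-parameter Weibull survival function. The characteristic lives and slopes below are invented for illustration, not PSHFT data:

```python
import math

def weibull_reliability(t, theta, b):
    """Two-parameter Weibull survival at life t: R(t) = exp(-(t/theta)^b),
    with characteristic life theta and Weibull slope b."""
    return math.exp(-((t / theta) ** b))

def system_reliability(t, components):
    """Strict-series combination: every bearing and gear in the main
    load path must survive, so system reliability is the product."""
    r = 1.0
    for theta, b in components:
        r *= weibull_reliability(t, theta, b)
    return r

# (characteristic life [h], Weibull slope) per bearing/gear -- illustrative values
parts = [(4000.0, 1.2), (6000.0, 1.1), (9000.0, 2.5)]
r_sys = system_reliability(1000.0, parts)
```

As expected for a series model, the system reliability is lower than that of any single component at the same life.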
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
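For a balanced one-factor random-effects model (patients as the random factor, equal fractions per patient), the ANOVA point estimates take a standard form: the random variance is the within-patient mean square, and the systematic variance is (MS_between - MS_within)/n. A sketch with synthetic data follows; the note's confidence-interval formulas are not reproduced here:

```python
def variance_components(data):
    """ANOVA estimates for a balanced one-factor random-effects model.
    data: list of per-patient setup-error lists, each with the same
    number of fractions. Returns (population mean, systematic variance,
    random variance)."""
    k = len(data)        # number of patients
    n = len(data[0])     # fractions per patient
    grand = sum(sum(row) for row in data) / (k * n)
    means = [sum(row) / n for row in data]
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    ms_within = (sum((x - m) ** 2 for row, m in zip(data, means) for x in row)
                 / (k * (n - 1)))
    var_systematic = max((ms_between - ms_within) / n, 0.0)  # truncate at 0
    var_random = ms_within
    return grand, var_systematic, var_random

mean, var_sys, var_rand = variance_components([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

On this toy data the patient means are 2, 5, and 8, giving a large between-patient (systematic) component relative to the within-patient (random) component.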
Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.
ERIC Educational Resources Information Center
Olson, Jeffery E.
Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components has been proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating a sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite element modeling engineering practice.
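The Monte Carlo baseline that such methods are validated against is conceptually simple: sample the random inputs, evaluate a limit-state function g, and count the fraction of safe outcomes (g > 0). The limit state and distributions below are invented for illustration and are unrelated to the paper's CMC models:

```python
import random

def mc_reliability(limit_state, sample, n=100_000, seed=0):
    """Crude Monte Carlo reliability estimate: fraction of samples with
    limit_state(...) > 0 (safe). `sample(rng)` draws one input tuple."""
    rng = random.Random(seed)
    safe = sum(1 for _ in range(n) if limit_state(*sample(rng)) > 0)
    return safe / n

# Hypothetical stress-strength limit state g = strength - stress,
# both normally distributed (mean margin 150, combined sd 50 -> ~3 sigma).
reliability = mc_reliability(
    lambda strength, stress: strength - stress,
    lambda rng: (rng.gauss(500.0, 40.0), rng.gauss(350.0, 30.0)),
)
```

The estimate lands near the analytical value Phi(3) ≈ 0.9987; approximation methods like those in the paper aim to reach comparable accuracy at far lower computational cost.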
ERIC Educational Resources Information Center
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S.
2012-01-01
We propose a new method of structural equation modeling (SEM) for longitudinal and time series data, named Dynamic GSCA (Generalized Structured Component Analysis). The proposed method extends the original GSCA by incorporating a multivariate autoregressive model to account for the dynamic nature of data taken over time. Dynamic GSCA also…
Make or buy decision model with multi-stage manufacturing process and supplier imperfect quality
NASA Astrophysics Data System (ADS)
Pratama, Mega Aria; Rosyidi, Cucuk Nur
2017-11-01
This research develops a make-or-buy decision model that considers supplier imperfect quality. The model can help companies decide whether to make or buy a component at the best quality and the least cost in a multistage manufacturing process. Imperfect quality is one of the cost components that must be minimized in this model. A component of imperfect quality is not necessarily defective; it can still be reworked and used for assembly. This research also provides a numerical example and a sensitivity analysis to show how the model works. We use simulation, aided by Crystal Ball, to solve the numerical problem. The sensitivity analysis results show that the percentage of imperfect items generally does not affect the model significantly, and the model is not sensitive to changes in these parameters. This is because the imperfect-quality costs are smaller than the overall total cost components.
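The core comparison can be sketched as an expected unit cost for each option, where imperfect units are reworked rather than scrapped. All cost figures below are invented placeholders; the paper's model additionally spans a multi-stage process:

```python
def expected_unit_cost(base_cost, p_imperfect, rework_cost):
    """Expected cost per usable unit when a fraction p_imperfect of
    units needs rework (not scrap) before assembly."""
    return base_cost + p_imperfect * rework_cost

# Hypothetical numbers: in-house production is higher-quality but dearer.
make = expected_unit_cost(base_cost=10.0, p_imperfect=0.02, rework_cost=3.0)
buy = expected_unit_cost(base_cost=9.5, p_imperfect=0.08, rework_cost=3.0)
decision = "make" if make < buy else "buy"
```

With these placeholders the rework penalty is small relative to the base-cost gap, so buying wins, which mirrors the abstract's finding that the imperfect-quality cost is a minor share of total cost.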
Multiple Component Event-Related Potential (mcERP) Estimation
NASA Technical Reports Server (NTRS)
Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)
2002-01-01
We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. McERP also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.
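The signal model described above, a stereotyped waveshape whose amplitude and onset latency vary from trial to trial, can be sketched as a synthetic-data generator. The Gaussian bump, parameter ranges, and sampling rate are invented for illustration; the actual mcERP estimation algorithm is not shown:

```python
import math
import random

def waveshape(t):
    """Stereotyped component waveform: a Gaussian bump peaking at 100 ms."""
    return math.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))

def trial(t_axis, amplitude, latency, noise_sd, rng):
    """One single-trial recording: scaled, latency-shifted waveshape plus noise."""
    return [amplitude * waveshape(t - latency) + rng.gauss(0.0, noise_sd)
            for t in t_axis]

rng = random.Random(0)
t_axis = [i / 1000.0 for i in range(300)]  # 300 ms at 1 kHz
trials = [trial(t_axis,
                rng.uniform(0.8, 1.2),    # per-trial amplitude
                rng.uniform(-0.01, 0.01), # per-trial onset latency (s)
                0.05, rng)
          for _ in range(20)]
```

Averaging such trials blurs the latency-jittered bump, which is exactly why single-trial amplitude and latency estimation, as mcERP provides, recovers more information than the trial average alone.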
Sullivan, Karen A; Lurie, Janine K
2017-01-01
The study examined the component structure of the Neurobehavioral Symptom Inventory (NSI) under five different models. The evaluated models comprised the full NSI (NSI-22) and the NSI-20 (NSI minus two orphan items). A civilian nonclinical sample was used. The 575 volunteers were predominantly university students who screened negative for mild TBI. The study design was cross-sectional, with questionnaires administered online. The main measure was the Neurobehavioral Symptom Inventory. Subscale, total and embedded validity scores were derived (the Validity-10, the LOW6, and the NIM5). In both models, the principal components analysis yielded two intercorrelated components (psychological and somatic/sensory) with acceptable internal consistency (alphas > 0.80). In this civilian nonclinical sample, the NSI had two underlying components. These components represent psychological and somatic/sensory neurobehavioral symptoms.
Analysis of a Shock-Associated Noise Prediction Model Using Measured Jet Far-Field Noise Data
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sharpe, Jacob A.
2014-01-01
A code for predicting supersonic jet broadband shock-associated noise was assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. The jet was operated at 24 conditions covering six fully expanded Mach numbers with four total temperature ratios. To enable comparisons of the predicted shock-associated noise component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise component spectra. Comparisons between predicted and measured shock-associated noise component spectra were used to identify deficiencies in the prediction model. Proposed revisions to the model, based on a study of the overall sound pressure levels for the shock-associated noise component of the measured data, a sensitivity analysis of the model parameters with emphasis on the definition of the convection velocity parameter, and a least-squares fit of the predicted to the measured shock-associated noise component spectra, resulted in a new definition for the source strength spectrum in the model. An error analysis showed that the average error in the predicted spectra was reduced by as much as 3.5 dB for the revised model relative to the average error for the original model.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
Body composition analysis: Cellular level modeling of body component ratios.
Wang, Z; Heymsfield, S B; Pi-Sunyer, F X; Gallagher, D; Pierson, R N
2008-01-01
During the past two decades, a major outgrowth of efforts by our research group at St. Luke's-Roosevelt Hospital is the development of body composition models that include cellular level models, models based on body component ratios, total body potassium models, multi-component models, and resting energy expenditure-body composition models. This review summarizes these models with emphasis on component ratios that we believe are fundamental to understanding human body composition during growth and development and in response to disease and treatments. In-vivo measurements reveal that in healthy adults some component ratios show minimal variability and are relatively 'stable', for example total body water/fat-free mass and fat-free mass density. These ratios can be effectively applied for developing body composition methods. In contrast, other ratios, such as total body potassium/fat-free mass, are highly variable in vivo and therefore are less useful for developing body composition models. In order to understand the mechanisms governing the variability of these component ratios, we have developed eight cellular level ratio models and from them we derived simplified models that share as a major determining factor the ratio of extracellular to intracellular water (E/I). The E/I value varies widely among adults. Model analysis reveals that the magnitude and variability of each body component ratio can be predicted by correlating the cellular level model with the E/I value. Our approach thus provides new insights into and improved understanding of body composition ratios in adults.
NASA Astrophysics Data System (ADS)
Palaniswamy, Hariharasudhan; Kanthadai, Narayan; Roy, Subir; Beauchesne, Erwan
2011-08-01
Crash, NVH (Noise, Vibration, Harshness), and durability analysis are commonly deployed in structural CAE analysis for mechanical design of components especially in the automotive industry. Components manufactured by stamping constitute a major portion of the automotive structure. In CAE analysis they are modeled at a nominal state with uniform thickness and no residual stresses and strains. However, in reality the stamped components have non-uniformly distributed thickness and residual stresses and strains resulting from stamping. It is essential to consider the stamping information in CAE analysis to accurately model the behavior of the sheet metal structures under different loading conditions. Especially with the current emphasis on weight reduction by replacing conventional steels with aluminum and advanced high strength steels it is imperative to avoid over design. Considering this growing need in industry, a highly automated and robust method has been integrated within Altair Hyperworks® to initialize sheet metal components in CAE models with stamping data. This paper demonstrates this new feature and the influence of stamping data for a full car frontal crash analysis.
NDARC NASA Design and Analysis of Rotorcraft. Appendix 5; Theory
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2017-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
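The attribute-summation architecture described in the NDARC abstracts, where aircraft-level attributes are the sum of component attributes, can be sketched as follows. The component names, attributes, and values are illustrative stand-ins, not NDARC's actual component models:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One aircraft component with the attributes each model calculates."""
    name: str
    weight: float      # lb
    drag_area: float   # equivalent flat-plate area, ft^2

def aircraft_attributes(components):
    """Aircraft attributes obtained from the sum of component attributes."""
    total_weight = sum(c.weight for c in components)
    total_drag = sum(c.drag_area for c in components)
    return total_weight, total_drag

# Hypothetical component set (fuselage, rotor, tail); a real configuration
# would also include wings and propulsion, per the abstract.
parts = [Component("fuselage", 1200.0, 6.0),
         Component("rotor", 800.0, 2.5),
         Component("tail", 150.0, 0.8)]
total_weight, total_drag = aircraft_attributes(parts)
```

Because each component exposes the same attribute interface, new or higher-fidelity component models can be swapped in without changing the aircraft-level summation, which is the configuration flexibility the abstracts emphasize.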
NDARC: NASA Design and Analysis of Rotorcraft. Appendix 3; Theory
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft - Input, Appendix 2
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tilt-rotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft. Appendix 6; Input
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2017-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne R.
2009-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool intended to support both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility; a hierarchy of models; and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC - NASA Design and Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2015-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft Theory Appendix 1
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
Engine structures analysis software: Component Specific Modeling (COSMO)
NASA Astrophysics Data System (ADS)
McKnight, R. L.; Maffeo, R. J.; Schwartz, S.
1994-08-01
A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program, which provides the necessary base parameters and loadings. This report contains the user's manual for combustors, turbine blades, vanes, and disks.
Engine Structures Analysis Software: Component Specific Modeling (COSMO)
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.
1994-01-01
A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program, which provides the necessary base parameters and loadings. This report contains the user's manual for combustors, turbine blades, vanes, and disks.
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
Accomplishments are described for the second year effort of a 3-year program to develop methodology for component specific modeling of aircraft engine hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models; (2) geometry model generators; (3) remeshing; (4) specialty 3-D inelastic structural analysis; (5) computationally efficient solvers; (6) adaptive solution strategies; (7) engine performance parameters/component response variables decomposition and synthesis; (8) integrated software architecture and development; and (9) validation cases for software developed.
The Layer-Based, Pragmatic Model of the Communication Process.
ERIC Educational Resources Information Center
Targowski, Andrew S.; Bowman, Joel P.
1988-01-01
Presents the Targowski/Bowman model of the communication process, which introduces a new paradigm that isolates the various components for individual measurement and analysis, places these components into a unified whole, and places communication and its business component into a larger cultural context. (MM)
Modeling of power electronic systems with EMTP
NASA Technical Reports Server (NTRS)
Tam, Kwa-Sur; Dravid, Narayan V.
1989-01-01
In view of the potential impact of power electronics on power systems, there is a need for a computer modeling/analysis tool to perform simulation studies on power systems with power electronic components, as well as to educate engineering students about such systems. The modeling of the major power electronic components of the NASA Space Station Freedom Electric Power System with the ElectroMagnetic Transients Program (EMTP) is described, and it is demonstrated that EMTP can serve as a very useful tool for teaching, design, analysis, and research in the area of power systems with power electronic components. EMTP modeling of power electronic circuits is described and simulation results are presented.
[New method of mixed gas infrared spectrum analysis based on SVM].
Bai, Peng; Xie, Wen-Jun; Liu, Jun-Hua
2007-07-01
A new method of infrared spectrum analysis based on the support vector machine (SVM) was proposed for gas mixtures. The kernel function in SVM maps the seriously overlapping absorption spectra into a high-dimensional space while allowing the computation to be carried out in the original space. A regression calibration model was established on this basis and applied to analyze the concentration of each component gas. It was also shown that the SVM regression calibration model can be used for component recognition in gas mixtures. The method was applied to the analysis of different data samples, and factors that affect the model, such as scan interval, wavelength range, kernel function, and penalty coefficient C, were discussed. Experimental results show that the maximum mean absolute error of component concentration is 0.132% and that the component recognition accuracy is higher than 94%. The method addresses the problems of overlapping absorption spectra, of using the same model for both qualitative and quantitative analysis, and of a limited number of training samples, and it could be used in other infrared spectrum analyses of gas mixtures, showing promise in both theory and application.
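The kernel idea summarized above can be sketched in a few lines of code. The example below is not the authors' method: it is a pure-Python radial-basis-function kernel ridge regression (a close relative of SVM regression) fitted to a hypothetical one-dimensional calibration curve, just to illustrate how a kernel lets a nonlinear calibration be computed entirely in the original input space.

```python
import math

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel: an implicit map to a high-dimensional space."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, y):
    """Solve A x = y by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_kernel_ridge(xs, ys, gamma=1.0, lam=1e-6):
    """Fit alpha in (K + lam*I) alpha = y; predictions are kernel sums."""
    n = len(xs)
    K = [[rbf_kernel(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def predict(xs, alpha, x, gamma=1.0):
    return sum(a * rbf_kernel(xi, x, gamma) for a, xi in zip(alpha, xs))

# Hypothetical "calibration": concentration as a nonlinear function of one
# absorbance value (a stand-in for a full multi-wavelength spectrum).
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [math.sin(2 * x) for x in xs]          # stand-in for measured concentrations
alpha = fit_kernel_ridge(xs, ys, gamma=10.0)
err = abs(predict(xs, alpha, 0.5, gamma=10.0) - math.sin(1.0))
```

In practice one would use a dedicated SVM library and real multi-wavelength absorbance spectra; the single scalar input here is purely illustrative.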
Appliance of Independent Component Analysis to System Intrusion Analysis
NASA Astrophysics Data System (ADS)
Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji
In order to analyze the output of the intrusion detection system and the firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, a service model and a vulnerability model for each server, and an action model performed by clients and the intruder. We applied ICA to analyze the audit trail of the simulated information system and report the evaluation results of ICA for intrusion analysis. In the simulated case, ICA separated two attacks correctly, and related an attack to the abnormalities of a normal application produced under the influence of that attack.
Cost decomposition of linear systems with application to model reduction
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1980-01-01
A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
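When the total cost is the quadratic V = xᵀQx, one natural decomposition (a sketch of the general idea, not necessarily Skelton's exact formulation) charges component i with Vᵢ = xᵢ(Qx)ᵢ, so the Vᵢ sum exactly to V and low-cost components become candidates for truncation:

```python
def component_costs(Q, x):
    """Split a quadratic cost V = x^T Q x into per-component contributions.

    Component i is charged V_i = x_i * (Q x)_i; the V_i sum exactly to V,
    so components with small cost are natural candidates for truncation.
    """
    n = len(x)
    Qx = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
    return [x[i] * Qx[i] for i in range(n)]

# Hypothetical 3-component system state and cost-weighting matrix.
Q = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 0.01]]
x = [1.0, 2.0, 0.5]

costs = component_costs(Q, x)
total = sum(costs)
direct = sum(x[i] * Q[i][j] * x[j] for i in range(3) for j in range(3))
```

Here the third component carries a negligible share of the total cost, so a truncation criterion based on component cost would drop it first.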
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
Factor Analysis via Components Analysis
ERIC Educational Resources Information Center
Bentler, Peter M.; de Leeuw, Jan
2011-01-01
When the factor analysis model holds, component loadings are linear combinations of factor loadings, and vice versa. This interrelation permits us to define new optimization criteria and estimation methods for exploratory factor analysis. Although this article is primarily conceptual in nature, an illustrative example and a small simulation show…
Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš
2016-01-01
Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. 
We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis-related components and tumor-suppressor genes, suggesting that this combinatorial perturbation may lead to a better target for decreasing cell proliferation and inducing apoptosis. Finally, our approach shows a potential to identify and prioritize therapeutic targets through systemic perturbation analysis of large-scale computational models of signal transduction. Although some components of the presented computational results have been validated against independent gene expression data sets, more laboratory experiments are warranted to more comprehensively validate the presented results. PMID:26904540
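The systemic perturbation idea, simulating the network with a component forced off and measuring how much the rest of the system changes, can be illustrated on a toy Boolean network. The three-node rules below are purely hypothetical stand-ins, not the published signal transduction model:

```python
def step(state, knockout=None):
    """One synchronous update of a hypothetical 3-node Boolean network.

    Rules (illustrative only):
      growth    = signal and not inhibitor
      inhibitor = apoptosis
      apoptosis = not growth
    """
    signal = True  # constant extracellular input
    growth, inhibitor, apoptosis = state
    new = (signal and not inhibitor, apoptosis, not growth)
    if knockout is not None:
        # Loss-of-function perturbation: force the node off every step.
        new = tuple(False if i == knockout else v for i, v in enumerate(new))
    return new

def activity(knockout=None, steps=50):
    """Average on-fraction of each node along a trajectory."""
    state = (False, False, False)
    counts = [0, 0, 0]
    for _ in range(steps):
        state = step(state, knockout)
        for i, v in enumerate(state):
            counts[i] += int(v)
    return [c / steps for c in counts]

base = activity()
perturbed = activity(knockout=0)          # loss-of-function of "growth"
# Influence score: total change in the activity of all nodes.
influence = sum(abs(b - p) for b, p in zip(base, perturbed))
```

In this toy, knocking out the "growth" node raises the steady-state activity of the "apoptosis" node, loosely echoing the qualitative pattern described in the abstract; a real analysis would rank all nodes by such influence scores under both loss- and gain-of-function perturbations.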
Generalized Structured Component Analysis with Latent Interactions
ERIC Educational Resources Information Center
Hwang, Heungsun; Ho, Moon-Ho Ringo; Lee, Jonathan
2010-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA…
Regularized Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun
2009-01-01
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…
Vibration signature analysis of multistage gear transmission
NASA Technical Reports Server (NTRS)
Choy, F. K.; Tu, Y. K.; Savage, M.; Townsend, D. P.
1989-01-01
An analysis is presented for multistage multimesh gear transmission systems. The analysis predicts the overall system dynamics and the transmissibility to the gear box or the enclosed structure. The modal synthesis approach of the analysis treats the uncoupled lateral/torsional modal characteristics of each stage or component independently. The vibration signature analysis evaluates the global dynamic coupling in the system. The method synthesizes the interaction of each modal component or stage with the nonlinear gear mesh dynamics and the modal support geometry characteristics. The analysis simulates transient and steady state vibration events to determine the resulting torque variations, speed changes, rotor imbalances, and support gear box motion excitations. A vibration signature analysis examines the overall dynamic characteristics of the system and the individual modal component responses. The gear box vibration analysis also examines the spectral characteristics of the support system.
Machine learning of frustrated classical spin models. I. Principal component analysis
NASA Astrophysics Data System (ADS)
Wang, Ce; Zhai, Hui
2017-10-01
This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
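Feeding simulated configurations to principal component analysis, as described above, reduces to diagonalizing the sample covariance. A minimal, library-free sketch, using deterministic toy samples rather than actual Monte Carlo spin configurations, with power iteration recovering the leading component:

```python
import math

def covariance(samples):
    """Sample covariance matrix of a list of equal-length tuples."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    return [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / n
             for j in range(d)] for i in range(d)]

def leading_component(C, iters=200):
    """Power iteration: dominant eigenvector of a covariance matrix."""
    d = len(C)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Deterministic toy "configurations": two strongly correlated coordinates
# plus one nearly independent, low-variance one, standing in for ordered
# versus disordered degrees of freedom.
samples = [(math.sin(k),
            math.sin(k) + 0.1 * math.cos(3 * k),
            0.1 * math.cos(7 * k))
           for k in range(200)]
v = leading_component(covariance(samples))
```

The leading component aligns with the correlated pair of coordinates and largely ignores the third, which is how PCA picks out an order parameter direction from configuration data.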
ERIC Educational Resources Information Center
Chou, Yeh-Tai; Wang, Wen-Chung
2010-01-01
Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that an eigenvalue greater than 1.5 for the first eigenvalue signifies a violation of unidimensionality when there…
ECOPASS - a multivariate model used as an index of growth performance of poplar clones
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ceulemans, R.; Impens, I.
The model (ECOlogical PASSport) reported was constructed by principal component analysis from a combination of biochemical, anatomical/morphological, and ecophysiological gas exchange parameters measured on 5 fast-growing poplar clones. Productivity data were obtained from 10 selected trees in 3 plantations in Belgium and given as m.a.i.(b.a.). The model is shown to be able to reflect not only the genetic origin and the relative effects of the different parameters of the clones, but also their production potential. Multiple regression analysis of the 4 principal components showed a high cumulative correlation (96%) between productivity and the 3 components related to ecophysiological, biochemical and morphological parameters; the ecophysiological component alone correlated 85% with productivity.
Spain, Seth M; Miner, Andrew G; Kroonenberg, Pieter M; Drasgow, Fritz
2010-08-06
Questions about the dynamic processes that drive behavior at work have been the focus of increasing attention in recent years. Models describing behavior at work and research on momentary behavior indicate that substantial variation exists within individuals. This article examines the rationale behind this body of work and explores a method of analyzing momentary work behavior using experience sampling methods. The article also examines a previously unused set of methods for analyzing data produced by experience sampling. These methods are known collectively as multiway component analysis. Two archetypal techniques of multiway component analysis, the parallel factor analysis (PARAFAC) and Tucker3 models, are used to analyze data from Miner, Glomb, and Hulin's (2010) experience sampling study of work behavior. The efficacy of these techniques for analyzing experience sampling data is discussed, as are the substantive multiway component models obtained.
SaaS Platform for Time Series Data Handling
NASA Astrophysics Data System (ADS)
Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail
2018-02-01
The paper is devoted to the description of MathBrain, a cloud-based resource, which works as a "Software as a Service" model. It is designed to maximize the efficiency of the current technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, principal component analysis and independent component analysis decompositions, quantitative analysis, and magnetoencephalography inverse problem solution in a single-dipole model based on multichannel spectral data.
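Of the listed methods, the direct and inverse Fourier transforms are easy to sketch straight from their definitions. The naive O(N²) version below round-trips a toy signal; a production service like the one described would use an FFT:

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform (naive O(N^2) definition)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse transform; recovers the original samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Toy time series: a pure sinusoid at frequency bin 2 of an 8-point record.
signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
spectrum = dft(signal)
recovered = idft(spectrum)
```

The spectrum is concentrated at bin 2 (and its conjugate bin 6), and the inverse transform reproduces the input to machine precision.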
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...
EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES
An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...
Jesse, Stephen; Kalinin, Sergei V
2009-02-25
An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
NASA Astrophysics Data System (ADS)
Durigon, Angelica; Lier, Quirijn de Jong van; Metselaar, Klaas
2016-10-01
To date, measuring plant transpiration at the canopy scale is laborious, and numerical modelling can be used to estimate it at high temporal frequency. When the model by Jacobs (1994) is used to simulate transpiration of water-stressed plants, it needs to be reparameterized. We compare the importance of model variables affecting simulated transpiration of water-stressed plants. A systematic literature review was performed to recover existing parameterizations to be tested in the model. Data from a field experiment with common bean under full and deficit irrigation were used to correlate estimates with forcing variables by applying principal component analysis. New parameterizations resulted in a moderate reduction of prediction errors and an increase in model performance. The Ags model was sensitive to changes in the mesophyll conductance and leaf angle distribution parameterizations, allowing model improvement. Simulated transpiration could be separated into temporal components. The daily, afternoon-depression and long-term components for the fully irrigated treatment were more related to atmospheric forcing variables (specific humidity deficit between stomata and air, relative air humidity and canopy temperature). The daily and afternoon-depression components for the deficit-irrigated treatment were related to both atmospheric and soil dryness, and the long-term component was related to soil dryness.
Nguyen, Quoc Dinh; Fernandez, Nicolas; Karsenti, Thierry; Charlin, Bernard
2014-12-01
Although reflection is considered a significant component of medical education and practice, the literature does not provide a consensual definition or model for it. Because reflection has taken on multiple meanings, it remains difficult to operationalise. A standard definition and model are needed to improve the development of practical applications of reflection. This study was conducted in order to identify, explore and analyse the most influential conceptualisations of reflection, and to develop a new theory-informed and unified definition and model of reflection. A systematic review was conducted to identify the 15 most cited authors in papers on reflection published during the period from 2008 to 2012. The authors' definitions and models were extracted. An exploratory thematic analysis was carried out and identified seven initial categories. Categories were clustered and reworded to develop an integrative definition and model of reflection, which feature core components that define reflection and extrinsic elements that influence instances of reflection. Following our review and analysis, five core components of reflection and two extrinsic elements were identified as characteristics of the reflective thinking process. Reflection is defined as the process of engaging the self (S) in attentive, critical, exploratory and iterative (ACEI) interactions with one's thoughts and actions (TA), and their underlying conceptual frame (CF), with a view to changing them and a view on the change itself (VC). Our conceptual model consists of the defining core components, supplemented with the extrinsic elements that influence reflection. This article presents a new theory-informed, five-component definition and model of reflection. We believe these have advantages over previous models in terms of helping to guide the further study, learning, assessment and teaching of reflection. © 2014 John Wiley & Sons Ltd.
Probabilistic Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Strack, William C.; Nagpal, Vinod K.
2010-01-01
PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine the effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using developing software packages like System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M
2007-01-01
We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model, based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and a model, and scatter plots. The performance of our model and previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
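The computational advantage of a Kronecker-structured covariance comes from the identity (S ⊗ T)⁻¹ = S⁻¹ ⊗ T⁻¹: only the small spatial and temporal factors ever need to be inverted, not the full spatiotemporal matrix. A toy check with hypothetical 2×2 factors:

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

S = [[3.0, 1.0], [1.0, 2.0]]    # toy spatial covariance factor
T = [[2.0, 0.5], [0.5, 1.0]]    # toy temporal covariance factor

C = kron(S, T)                   # 4x4 spatiotemporal covariance
C_inv = kron(inv2(S), inv2(T))   # invert the small factors, not the 4x4 matrix
I = matmul(C, C_inv)             # should be the 4x4 identity
```

With realistic dimensions (hundreds of sensors, hundreds of time samples) this turns an intractable dense inversion into two small ones, which is the kind of "computationally manageable inverse" the abstract refers to.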
Effect of noise in principal component analysis with an application to ozone pollution
NASA Astrophysics Data System (ADS)
Tsakiri, Katerina G.
This thesis analyzes the effect of independent noise on the principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can be essentially different from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results about the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field for the decomposition of components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor of ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed, and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a long-term component, which describes the long-term trend and the seasonal variations, and a synoptic-scale component, which describes the short-term variations. Using canonical correlation analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the long-term and synoptic-scale components of ozone. The long-term components are modeled by a linear regression model, while the synoptic-scale components are modeled by a vector autoregressive model and the Kalman filter.
The coefficient of determination, R², for the prediction of the synoptic-scale ozone component was found to be highest when we consider the synoptic-scale components of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
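The thesis's central claim about eigenvalue separation can be illustrated with a small simulation (toy covariances, not the ozone data): when the leading eigenvalue is well separated relative to the noise level, the sample principal component direction is stable under added independent noise; when the eigenvalues are nearly equal, the same noise can rotate it badly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # sample size

def leading_pc(X):
    Xc = X - X.mean(0)
    _, V = np.linalg.eigh(np.cov(Xc.T))  # eigenvalues sorted ascending
    return V[:, -1]

def alignment(eigvals, noise_sd):
    X = rng.standard_normal((n, 2)) * np.sqrt(eigvals)
    noisy = X + noise_sd * rng.standard_normal((n, 2))
    return abs(leading_pc(X) @ leading_pc(noisy))  # 1.0 = same direction

# Large eigenvalue gap relative to the noise: the leading PC barely moves
stable = alignment(np.array([25.0, 1.0]), noise_sd=1.0)
# Near-equal eigenvalues: the same noise can rotate the leading PC badly
fragile = alignment(np.array([1.05, 1.0]), noise_sd=1.0)

assert stable > 0.99
assert stable > fragile
```

The eigenvalue pairs and noise level are arbitrary illustrative choices; the contrast between the two cases is the point.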
Ekdahl, Anja; Johansson, Maria C; Ahnoff, Martin
2013-04-01
Matrix effects on electrospray ionization were investigated for plasma samples analysed by hydrophilic interaction chromatography (HILIC) in gradient elution mode, and HILIC columns of different chemistries were tested for separation of plasma components and model analytes. By combining mass spectral data with post-column infusion traces, the following components of protein-precipitated plasma were identified and found to have a significant effect on ionization: urea, creatinine, phosphocholine, lysophosphocholine, sphingomyelin, sodium ion, chloride ion, choline and proline betaine. The observed effect on ionization was both matrix-component and analyte dependent. The separation of the identified plasma components and model analytes on eight columns was compared using pair-wise linear correlation analysis and principal component analysis (PCA). Large changes in selectivity could be obtained by a change of column, while smaller changes were seen when the mobile phase buffer was changed from ammonium formate pH 3.0 to ammonium acetate pH 4.5. While the results from PCA and linear correlation analysis were largely in accord, linear correlation analysis was judged to be more straightforward to conduct and interpret.
Finite element analysis of helicopter structures
NASA Technical Reports Server (NTRS)
Rich, M. J.
1978-01-01
Application of the finite element analysis is now being expanded to three dimensional analysis of mechanical components. Examples are presented for airframe, mechanical components, and composite structure calculations. Data are detailed on the increase of model size, computer usage, and the effect on reducing stress analysis costs. Future applications for use of finite element analysis for helicopter structures are projected.
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Banerjee, P. K.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.
Convection equation modeling: A non-iterative direct matrix solution algorithm for use with SINDA
NASA Technical Reports Server (NTRS)
Schrage, Dean S.
1993-01-01
The determination of boundary conditions for a component-level analysis using discrete finite element and finite difference modeling techniques often requires analysis of complex coupled phenomena that cannot be described algebraically. For example, an analysis of the temperature field of a coldplate surface with an integral fluid loop requires a solution of the parabolic heat equation, and also requires boundary conditions that describe the local fluid temperature. However, the local fluid temperature is described by a convection equation that can only be solved with knowledge of the locally coupled coldplate temperatures. Generally speaking, it is not computationally efficient, and sometimes not even possible, to perform a direct, coupled-phenomenon analysis of the component-level and boundary condition models within a single analysis code. An alternative is to perform a disjoint analysis but transmit the necessary information between models during the simulation to provide an indirect coupling. For this approach to be effective, the component-level model retains full detail while the boundary condition model is simplified to provide a fast, first-order prediction of the phenomenon in question. Specifically, for the present study, the coldplate structure is analyzed with a discrete numerical model (SINDA) while the fluid-loop convection equation is analyzed with a discrete analytical model (direct matrix solution). This indirect coupling allows a satisfactory prediction of the boundary condition without compromising the overall computational efficiency of the component-level analysis. The present study presents the derivation of the convection equation and its direct matrix solution algorithm. The discretization is analyzed and discussed with respect to solution accuracy, stability, and computation speed.
Case studies considering a pulsed and harmonic inlet disturbance to the fluid loop are analyzed to assist in the discussion of numerical dissipation and accuracy. In addition, the issues of code melding or integration with standard class solvers such as SINDA are discussed to advise the user of the potential problems to be encountered.
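A minimal sketch of the non-iterative idea (invented parameters, not the SINDA implementation): upwind-differencing the steady 1-D fluid-loop energy equation produces a lower-bidiagonal system that a single forward sweep solves exactly, given the locally coupled wall temperatures as boundary data.

```python
import numpy as np

# Steady 1-D fluid-loop energy equation  m_dot*cp*dT/dx = h*P*(T_wall - T),
# upwind-differenced. Each node depends only on its upstream neighbor, so the
# matrix is lower bidiagonal and forward substitution is a direct solution.
n = 50
dx = 0.02                    # node spacing [m]           (assumed)
mdot_cp = 10.0               # capacity rate m_dot*cp [W/K] (assumed)
hP = 40.0                    # film coefficient * perimeter [W/m-K] (assumed)
T_wall = np.full(n, 330.0)   # locally coupled coldplate temperatures [K]
T_inlet = 300.0              # inlet boundary condition [K]

a = mdot_cp / dx
T = np.empty(n)
T_prev = T_inlet
for i in range(n):           # single non-iterative forward sweep
    T[i] = (a * T_prev + hP * T_wall[i]) / (a + hP)
    T_prev = T[i]

# The fluid relaxes monotonically from the inlet toward the wall temperature
assert T_inlet < T[0] < T[-1] < T_wall[0]
```

The same sweep supplies the convective boundary temperatures to the component-level thermal model at negligible cost per time step.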
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics provide useful information only about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back to time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool…
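As a concrete instance of the goodness-of-fit measures named above, the Nash-Sutcliffe efficiency can be computed in a few lines (the observed/simulated series here are toy values, not HSPF output):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance-sum; 1 is a perfect fit, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([2.0, 4.0, 6.0, 8.0])

assert nash_sutcliffe(obs, obs) == 1.0                       # perfect simulation
assert nash_sutcliffe(obs, np.full(4, obs.mean())) == 0.0    # no better than the mean
```

Negative values indicate a simulation worse than simply predicting the observed mean, which is why NSE is a stricter screen than the coefficient of determination alone.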
3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.
1985-01-01
The objective is to develop analytical tools capable of economically evaluating the cyclic, time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power-law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures, with all material properties and constitutive models being temperature dependent.
Designers workbench: toward real-time immersive modeling
NASA Astrophysics Data System (ADS)
Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu
2000-05-01
This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting, and analysis tasks. The paper outlines the fundamental tools, design metaphors, and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. Traditionally, however, a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology or 'digital' gap experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric, or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.
Novel Framework for Reduced Order Modeling of Aero-engine Components
NASA Astrophysics Data System (ADS)
Safi, Ali
The present study focuses on the popular dynamic reduction methods used in the design of complex assemblies (millions of degrees of freedom), where numerous iterations are involved in reaching the final design. Aerospace manufacturers such as Rolls-Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining model accuracy. This involves modal analysis of components with complex geometries to determine their dynamic behavior under non-linearity and complicated loading conditions. In such cases, sub-structuring and dynamic reduction techniques prove to be efficient tools for reducing design cycle time. Components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes of the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during analysis runs for design modification of other components. This thesis presents a novel framework for the modeling and meshing of any complex structure, in this case an aero-engine casing. The study highlights the effect of meshing techniques on run time. The modal analysis is carried out using an extremely fine mesh to ensure that all minor details of the structure are captured correctly in the finite element (FE) model; this serves as the reference model against which the results of the reduced model are compared. The study also establishes the conditions under which dynamic reduction can be implemented effectively, demonstrating the accuracy of the Craig-Bampton (C.B.) method and the limitations of static condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared with the overall runtime of the complete unreduced model; once the components are reduced, however, the assembly run is significantly faster. Hence the decision to use Component Mode Synthesis (CMS) should be taken judiciously, considering the number of iterations that may be required during the design cycle.
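Static condensation, the baseline that Craig-Bampton extends with fixed-interface modes, can be sketched on a toy stiffness matrix: the reduction to boundary degrees of freedom is a Schur complement, and it is exact for static loads applied at the boundary. The matrix here is randomly generated for illustration, not an aero-engine model.

```python
import numpy as np

rng = np.random.default_rng(2)

n, nb = 6, 2                   # total DOFs; boundary DOFs are the first nb
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)    # symmetric positive-definite toy stiffness

Kbb, Kbi = K[:nb, :nb], K[:nb, nb:]
Kib, Kii = K[nb:, :nb], K[nb:, nb:]

# Guyan/static condensation: eliminate interior DOFs via the Schur complement
K_red = Kbb - Kbi @ np.linalg.inv(Kii) @ Kib

f_b = rng.standard_normal(nb)  # load applied at the boundary only
u_full = np.linalg.solve(K, np.concatenate([f_b, np.zeros(n - nb)]))
u_red = np.linalg.solve(K_red, f_b)

# For static boundary loads the condensed model reproduces the full solution
assert np.allclose(u_full[:nb], u_red)
```

The accuracy loss that the abstract attributes to static condensation appears only in dynamics, where the neglected interior inertia matters; that is the gap the Craig-Bampton interior modes close.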
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
Specialized data analysis of SSME and advanced propulsion system vibration measurements
NASA Technical Reports Server (NTRS)
Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi
1993-01-01
The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.
NASA Technical Reports Server (NTRS)
Parker, K. C.; Torian, J. G.
1980-01-01
A sample environmental control and life support model performance analysis using the environmental analysis routines library is presented. An example of a complete model set up and execution is provided. The particular model was synthesized to utilize all of the component performance routines and most of the program options.
Multivariate Analysis of Seismic Field Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen
1999-06-01
This report includes the details of the model building procedure and the prediction of seismic field data. Principal Components Regression, a multivariate analysis technique, was used to model seismic data collected as two pieces of equipment were cycled on and off. Models built that included only the two pieces of equipment of interest had trouble predicting data containing signals not included in the model. Evidence for the poor predictions came from the prediction curves as well as spectral F-ratio plots. Once the extraneous signals were included in the model, predictions improved dramatically. While Principal Components Regression performed well for the present data sets, the present data analysis suggests further work will be needed to develop more robust modeling methods as the data become more complex.
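A minimal sketch of Principal Components Regression on synthetic data (not the seismic measurements): center the predictors, project onto the leading principal components, and regress the response on the component scores.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "spectra": 10 correlated channels driven by 2 latent factors
n, p, k = 40, 10, 2
latent = rng.standard_normal((n, k))
X = latent @ rng.standard_normal((k, p)) + 0.01 * rng.standard_normal((n, p))
y = latent @ np.array([2.0, -1.0])      # response driven by the same factors

Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:k].T                  # project onto k principal components

beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
y_hat = scores @ beta + y.mean()

# With the true factors inside the PC subspace, the fit is near-exact
assert np.mean((y - y_hat) ** 2) < 1e-2
```

Regressing on a few scores instead of all correlated channels is what makes PCR stable, and also why signals absent from the training data (the report's "extraneous signals") fall outside the model's subspace and predict poorly.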
Computing Reliabilities Of Ceramic Components Subject To Fracture
NASA Technical Reports Server (NTRS)
Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.
1992-01-01
CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials in sense model made function of statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
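The flavor of the volume-flaw Weibull calculation that CARES performs can be sketched with a uniaxial two-parameter model on a handful of invented element stresses and volumes (the real code handles multiaxial stresses via the principle of independent action or Batdorf theory):

```python
import math

# Illustrative Weibull parameters, not material data
m, sigma0 = 10.0, 400.0        # Weibull modulus, scale parameter

elements = [                   # (max principal stress [MPa], volume [mm^3])
    (250.0, 2.0),
    (310.0, 1.5),
    (180.0, 4.0),
]

# Risk of rupture: sum element volume-weighted Weibull terms
risk = sum(vol * (stress / sigma0) ** m for stress, vol in elements)
P_failure = 1.0 - math.exp(-risk)

assert 0.0 < P_failure < 1.0

# Scaling every stress up can only increase the risk of rupture
risk_hot = sum(vol * (1.2 * stress / sigma0) ** m for stress, vol in elements)
assert risk_hot > risk
```

The high Weibull modulus makes the failure probability extremely sensitive to the peak stresses, which is why the element-level stress output of the finite element run is the essential input.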
Integrating Cognitive Task Analysis into Instructional Systems Development.
ERIC Educational Resources Information Center
Ryder, Joan M.; Redding, Richard E.
1993-01-01
Discussion of instructional systems development (ISD) focuses on recent developments in cognitive task analysis and describes the Integrated Task Analysis Model, a framework for integrating cognitive and behavioral task analysis methods within the ISD model. Three components of expertise are analyzed: skills, knowledge, and mental models. (96…
USDA-ARS?s Scientific Manuscript database
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
A single factor underlies the metabolic syndrome: a confirmatory factor analysis.
Pladevall, Manel; Singal, Bonita; Williams, L Keoki; Brotons, Carlos; Guyer, Heidi; Sadurni, Josep; Falces, Carles; Serrano-Rios, Manuel; Gabriel, Rafael; Shaw, Jonathan E; Zimmet, Paul Z; Haffner, Steven
2006-01-01
Confirmatory factor analysis (CFA) was used to test the hypothesis that the components of the metabolic syndrome are manifestations of a single common factor. Three different datasets were used to test and validate the model. The Spanish and Mauritian studies included 207 men and 203 women and 1,411 men and 1,650 women, respectively. A third analytical dataset including 847 men was obtained from a previously published CFA of a U.S. population. The one-factor model included the metabolic syndrome core components (central obesity, insulin resistance, blood pressure, and lipid measurements). We also tested an expanded one-factor model that included uric acid and leptin levels. Finally, we used CFA to compare the goodness of fit of one-factor models with the fit of two previously published four-factor models. The simplest one-factor model showed the best goodness-of-fit indexes (comparative fit index 1, root mean-square error of approximation 0.00). Comparisons of one-factor with four-factor models in the three datasets favored the one-factor model structure. The selection of variables to represent the different metabolic syndrome components and model specification explained why previous exploratory and confirmatory factor analysis, respectively, failed to identify a single factor for the metabolic syndrome. These analyses support the current clinical definition of the metabolic syndrome, as well as the existence of a single factor that links all of the core components.
Dynamic analysis of Space Shuttle/RMS configuration using continuum approach
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Taylor, Lawrence W., Jr.
1994-01-01
The initial assembly of Space Station Freedom involves the Space Shuttle, its Remote Manipulator System (RMS), and the evolving Space Station Freedom. The dynamics of this coupled system involves both the structural and the control system dynamics of each of these components. The modeling and analysis of such an assembly is made even more formidable by kinematic and joint nonlinearities. The current practice in modeling such flexible structures is to use finite element modeling, in which the mass and interior dynamics between thousands of nodes are ignored for each major component. The modal characteristics of only tens of modes are kept out of the thousands which are calculated. The components are then connected by approximating the boundary conditions and inserting the control system dynamics. In this paper continuum models are used instead of finite element models because of their improved accuracy, reduced number of model parameters, avoidance of model order reduction, and ability to represent the structural and control system dynamics in the same system of equations. Dynamic analysis of linear versions of the model is performed and compared with finite element model results. Additionally, the transfer matrix approach to continuum modeling is presented.
Principal Component Clustering Approach to Teaching Quality Discriminant Analysis
ERIC Educational Resources Information Center
Xian, Sidong; Xia, Haibo; Yin, Yubo; Zhai, Zhansheng; Shang, Yan
2016-01-01
Teaching quality is the lifeline of the higher education. Many universities have made some effective achievement about evaluating the teaching quality. In this paper, we establish the Students' evaluation of teaching (SET) discriminant analysis model and algorithm based on principal component clustering analysis. Additionally, we classify the SET…
Modeling vertebrate diversity in Oregon using satellite imagery
NASA Astrophysics Data System (ADS)
Cablk, Mary Elizabeth
Vertebrate diversity was modeled for the state of Oregon using a parametric approach to regression tree analysis. This exploratory data analysis effectively modeled the non-linear relationships between vertebrate richness and phenology, terrain, and climate. Phenology was derived from time-series NOAA-AVHRR satellite imagery for the year 1992 using two methods: principal component analysis and derivation of EROS Data Center greenness metrics. These two measures of spatial and temporal vegetation condition incorporated the critical temporal element in this analysis. The first three principal components were shown to contain spatial and temporal information about the landscape and discriminated phenologically distinct regions in Oregon. Principal components 2 and 3, 6 greenness metrics, elevation, slope, aspect, annual precipitation, and annual seasonal temperature difference were investigated as correlates to amphibians, birds, all vertebrates, reptiles, and mammals. The variation explained by each regression tree, by taxon, was: amphibians (91%), birds (67%), all vertebrates (66%), reptiles (57%), and mammals (55%). Spatial statistics were used to quantify the pattern of each taxon and assess the validity of the resulting predictions from the regression tree models. Regression tree analysis was relatively robust against spatial autocorrelation in the response data, and graphical results indicated the models were well fit to the data.
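The core step of regression tree analysis, exhaustively searching one predictor for the split that most reduces squared error in the response, can be sketched directly (the elevation/richness numbers are invented toy values, not the Oregon data):

```python
import numpy as np

def best_split(x, y):
    """Return (threshold, sse) for the single split of x minimizing the
    summed squared error of y around the two resulting group means."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(x)):
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = ((x[i - 1] + x[i]) / 2, sse)
    return best

# Hypothetical richness that drops sharply above a terrain threshold
elevation = np.array([100.0, 200.0, 300.0, 1500.0, 1600.0, 1700.0])
richness = np.array([40.0, 42.0, 41.0, 12.0, 10.0, 11.0])

threshold, sse = best_split(elevation, richness)
assert 300.0 < threshold < 1500.0   # the split separates low from high elevation
```

A full tree repeats this search over every candidate predictor and recurses on each resulting group, which is how the method captures the non-linear richness relationships without assuming a functional form.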
Modeling and Analysis of Mixed Synchronous/Asynchronous Systems
NASA Technical Reports Server (NTRS)
Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan
2012-01-01
Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components. Many of these systems also support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract but representative test specimen system was created as the system to be modeled.
A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run.
Armeanu, Daniel; Andrei, Jean Vasile; Lache, Leonard; Panait, Mirela
2017-01-01
The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets. PMID:28742100
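The shared idea behind the GDFM and Stock-Watson approaches, extracting a few principal-component factors from a large predictor panel and regressing future output on them, can be sketched on synthetic data (all series are invented; the real models add richer dynamic structure):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic panel: 86 series driven by 3 latent factors, as in the abstract
T, N, k = 80, 86, 3
F = rng.standard_normal((T, k))
X = F @ rng.standard_normal((k, N)) + 0.1 * rng.standard_normal((T, N))

# Toy "GDP growth" that the lagged factors lead by one quarter
w = np.array([0.5, -0.3, 0.2])
gdp = np.concatenate([[0.0], F[:-1] @ w]) + 0.05 * rng.standard_normal(T)

# Estimate the common factors by principal components of the centered panel
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
factors = Xc @ Vt[:k].T

# One-step-ahead regression: factors at t-1 predict GDP at t
beta, *_ = np.linalg.lstsq(factors[:-1], gdp[1:], rcond=None)
resid = gdp[1:] - factors[:-1] @ beta
next_q = factors[-1] @ beta          # forecast for the next quarter

assert resid.var() < 0.5 * gdp[1:].var()
```

With a large cross-section, the principal components recover the factor space accurately even under idiosyncratic noise, which is what makes forecasting from a handful of common components feasible for an 86-variable dataset.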
Deineko, Viktor
2006-01-01
Protein p43, an auxiliary component of the human multisynthetase complex, is the precursor of endothelial monocyte-activating polypeptide II. In this study, a comprehensive sequence analysis of the N-terminus was performed to identify structural domains, motifs, sites of post-translational modification, and other functionally important parameters. A spatial structure model of the full-chain protein p43 was obtained.
User's manual for the Composite HTGR Analysis Program (CHAP-1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, J.S.; Secker, P.A. Jr.; Vigil, J.C.
1977-03-01
CHAP-1 is the first release version of an HTGR overall plant simulation program with both steady-state and transient solution capabilities. It consists of a model-independent systems analysis program and a collection of linked modules, each representing one or more components of the HTGR plant. Detailed instructions on the operation of the code and detailed descriptions of the HTGR model are provided. Information is also provided to allow the user to easily incorporate additional component modules, to modify or replace existing modules, or to incorporate a completely new simulation model into the CHAP systems analysis framework.
Dynamic competitive probabilistic principal components analysis.
López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel
2009-04-01
We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.
Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.
2005-01-01
An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/ Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif
2014-11-01
Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor.
Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
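The kernel-density plus functional-PCA pipeline described in the Methods can be sketched on synthetic data; the dose samples, grid, and bandwidth below are illustrative, and functional PCA is carried out as ordinary PCA on the discretized density curves:

```python
import numpy as np

# Sketch of the functional-PCA step: estimate each patient's dose density by
# kernel density estimation on a common grid, then extract principal modes of
# variation. The synthetic "dose samples" below are illustrative only.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 80.0, 200)   # dose axis, Gy
n_patients, bw = 30, 3.0             # bandwidth is an assumption

densities = np.empty((n_patients, grid.size))
for i in range(n_patients):
    doses = rng.normal(45.0 + 10.0 * rng.random(), 8.0, size=500)  # fake DVH samples
    # Gaussian kernel density estimate evaluated on the grid
    k = np.exp(-0.5 * ((grid[None, :] - doses[:, None]) / bw) ** 2)
    densities[i] = k.mean(axis=0) / (bw * np.sqrt(2 * np.pi))

# Functional PCA == ordinary PCA on the discretized density curves
centered = densities - densities.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u * s                      # per-patient component scores (FLR covariates)
explained = s ** 2 / np.sum(s ** 2)
```

The `scores` columns would then enter a logistic regression as the functional covariates.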
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
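The surrogate-based sensitivity step (a response surface standing in for the finite element solution, with ANOVA-style variance decomposition) can be sketched as follows; the two-parameter toy model and uniform parameter ranges are assumptions for illustration:

```python
import numpy as np

# Sketch of surrogate-based sensitivity analysis: fit a quadratic response
# surface to a (notional) expensive model, then estimate each parameter's
# variance contribution on the cheap surrogate. Model and bounds are illustrative.
rng = np.random.default_rng(1)

def expensive_model(x1, x2):          # stand-in for a finite element solve
    return 3.0 * x1 ** 2 + 0.5 * x2 + 0.1 * x1 * x2

# Fit a quadratic response surface from a small design of experiments
X = rng.uniform(-1, 1, size=(50, 2))
y = expensive_model(X[:, 0], X[:, 1])
basis = np.column_stack([np.ones(50), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

def surrogate(x1, x2):
    b = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    return b @ coef

# First-order sensitivity: Var(E[y|xi]) / Var(y), estimated on a dense grid
g = np.linspace(-1, 1, 201)
x1m, x2m = np.meshgrid(g, g, indexing="ij")
ys = surrogate(x1m.ravel(), x2m.ravel()).reshape(201, 201)
total_var = ys.var()
s1 = ys.mean(axis=1).var() / total_var   # importance of parameter 1
s2 = ys.mean(axis=0).var() / total_var   # importance of parameter 2
```

Here the quadratic term makes parameter 1 dominant, which the indices recover without further calls to the expensive model.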
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
In traditional reliability evaluation of machine center components, failure propagation is overlooked, so the component reliability model deviates and the evaluation result is underestimated. To rectify these problems, a new reliability evaluation method based on cascading failure analysis and failure influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
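The failure-influence ranking step (adjacency matrix of the cascading-failure digraph combined with a PageRank-style iteration) can be sketched as follows; the four-component failure graph and damping factor are illustrative assumptions:

```python
import numpy as np

# Sketch: rank components by failure influence using the adjacency matrix of a
# cascading-failure digraph and a PageRank-style power iteration.
# A[i, j] = 1 means a failure of component i propagates to component j.
# The 4-component graph below is illustrative only.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

n, d = A.shape[0], 0.85   # damping factor d is the usual PageRank assumption
# Row-normalize outgoing links; components with no outgoing edges spread uniformly
out = A.sum(axis=1, keepdims=True)
M = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)

# Power iteration on the transposed transition matrix: a high score means a
# component that failure cascades flow INTO; running it on A's transpose instead
# would score components whose failures propagate outward.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * M.T @ r
influence = r / r.sum()
```

In this toy graph the most downstream component (index 3) accumulates the highest score.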
CRAX/Cassandra Reliability Analysis Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.
1999-02-10
Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf (COTS) components. In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CRAX (Cassandra Exoskeleton) reliability analysis software.
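The kind of analysis described (propagating input uncertainties through a deterministic physics model to obtain a statistical reliability estimate) can be sketched with a Monte Carlo loop; the cantilever-stress model and the distributions below are illustrative assumptions, not Sandia's models:

```python
import numpy as np

# Sketch: propagate material/environment uncertainties through a deterministic
# physics model by Monte Carlo to get a statistical reliability estimate.
# The cantilever model and all distributions are illustrative only.
rng = np.random.default_rng(7)
n = 100_000

# Uncertain inputs: tip load (N), beam width/height (m), yield strength (Pa)
load = rng.normal(1000.0, 100.0, n)
width = rng.normal(0.05, 0.002, n)
height = rng.normal(0.10, 0.002, n)
strength = rng.normal(2.0e7, 2.0e6, n)

# Deterministic model: max bending stress of a 1 m cantilever, sigma = 6*F*L/(b*h^2)
stress = 6.0 * load * 1.0 / (width * height ** 2)

# Reliability = P(strength > stress)
reliability = np.mean(strength > stress)
```

The same samples also show which input's scatter drives the failure probability, pointing to where additional testing pays off.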
NASA Technical Reports Server (NTRS)
Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.
1986-01-01
An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code which is based upon SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code which is based upon SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and progress towards future hybrid and 3D versions are discussed in this paper.
Modeling hardwood crown radii using circular data analysis
Paul F. Doruska; Hal O. Liechty; Douglas J. Marshall
2003-01-01
Cylindrical data are bivariate data composed of a linear and an angular component. One can use uniform, first-order (one maximum and one minimum) or second-order (two maxima and two minima) models to relate the linear component to the angular component. Crown radii can be treated as cylindrical data when the azimuths at which the radii are measured are also recorded....
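A first-order cylindrical model (one maximum, one minimum) reduces to linear least squares in a cosine/sine basis; the synthetic crown-radius data below are illustrative:

```python
import numpy as np

# Sketch: fit a first-order model relating a crown radius (linear component)
# to its azimuth (angular component),
#   r(theta) = a0 + a1*cos(theta) + b1*sin(theta),
# which has exactly one maximum and one minimum. Synthetic data only.
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 40)                           # azimuths, radians
r = 4.0 + 1.2 * np.cos(theta - 0.6) + rng.normal(0, 0.1, 40)    # radii, m

# Ordinary linear least squares in the cos/sin basis
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
a0, a1, b1 = coef

amplitude = np.hypot(a1, b1)   # size of the directional effect
phase = np.arctan2(b1, a1)     # azimuth of the maximum radius
```

A second-order model would simply add cos(2θ) and sin(2θ) columns to the same design matrix.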
Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis
NASA Astrophysics Data System (ADS)
Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei
2018-01-01
In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, a hierarchical analysis method is utilized to construct an evaluation index model of the low-voltage distribution network. Based on principal component analysis and the roughly logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The improved algorithm decorrelates and reduces the dimensions of the evaluation model, and the comprehensive score shows a better degree of dispersion. Because the comprehensive scores of the districts are concentrated, a clustering method is adopted to analyse them, realizing a stratified evaluation of the districts. An example is given to verify the objectivity and scientificity of the evaluation method.
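The log-centralization improvement to PCA can be sketched as follows; the synthetic index matrix and the variance-weighted composite score are illustrative assumptions:

```python
import numpy as np

# Sketch of the improved PCA step: index data with a roughly log-normal
# distribution are log-transformed and centred before the eigen-decomposition,
# which decorrelates the indices and spreads the composite scores.
# The synthetic index matrix is illustrative only.
rng = np.random.default_rng(5)
raw = np.exp(rng.normal(0.0, 1.0, size=(100, 6)))   # 100 districts x 6 indices

logged = np.log(raw)                  # logarithmic transform
centred = logged - logged.mean(axis=0)

cov = np.cov(centred, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

scores = centred @ eigvec                        # decorrelated component scores
composite = scores @ (eigval / eigval.sum())     # variance-weighted composite score
```

The composite scores could then be clustered to stratify the districts, as the abstract describes.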
Deflection Analysis of the Space Shuttle External Tank Door Drive Mechanism
NASA Technical Reports Server (NTRS)
Tosto, Michael A.; Trieu, Bo C.; Evernden, Brent A.; Hope, Drew J.; Wong, Kenneth A.; Lindberg, Robert E.
2008-01-01
Upon observing an abnormal closure of the Space Shuttle's External Tank Doors (ETD), a dynamic model was created in MSC/ADAMS to conduct deflection analyses of the Door Drive Mechanism (DDM). For a similar analysis, the traditional approach would be to construct a full finite element model of the mechanism. The purpose of this paper is to describe an alternative approach that models the flexibility of the DDM using a lumped parameter approximation to capture the compliance of individual parts within the drive linkage. This approach allows for rapid construction of a dynamic model in a time-critical setting, while still retaining the appropriate equivalent stiffness of each linkage component. As a validation of these equivalent stiffnesses, finite element analysis (FEA) was used to iteratively update the model towards convergence. Following this analysis, deflections recovered from the dynamic model can be used to calculate stress and classify each component's deformation as either elastic or plastic. Based on the modeling assumptions used in this analysis and the maximum input forcing condition, two components in the DDM show a factor of safety less than or equal to 0.5. However, to accurately evaluate the induced stresses, additional mechanism rigging information would be necessary to characterize the input forcing conditions. This information would also allow for the classification of stresses as either elastic or plastic.
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
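The compression idea (fitting a few principal component scores instead of thousands of channel radiances) can be sketched as follows; the smooth synthetic spectra and the choice of 10 components are illustrative:

```python
import numpy as np

# Sketch of the compression idea: train principal components on a set of
# channel-radiance spectra, then work with a handful of PC scores instead of
# thousands of channels. The smooth synthetic spectra are illustrative only.
rng = np.random.default_rng(2)
n_channels, n_train = 2000, 300

# Smooth spectra built from a few broad basis shapes plus small noise
x = np.linspace(0, 1, n_channels)
basis = np.stack([np.ones_like(x), x, np.sin(np.pi * x), np.cos(2 * np.pi * x)])
radiances = rng.normal(size=(n_train, 4)) @ basis + rng.normal(0, 0.01, (n_train, n_channels))

mean = radiances.mean(axis=0)
u, s, vt = np.linalg.svd(radiances - mean, full_matrices=False)
n_pc = 10
pcs = vt[:n_pc]                        # leading eigen-spectra

# Compress a new spectrum to 10 scores, then reconstruct it
spectrum = np.array([0.5, -1.0, 2.0, 0.3]) @ basis
scores = (spectrum - mean) @ pcs.T     # 2000 channels -> 10 numbers
reconstructed = mean + scores @ pcs
rel_error = np.linalg.norm(reconstructed - spectrum) / np.linalg.norm(spectrum)
```

A forward model trained to predict `scores` directly never has to evaluate all 2000 channels, which is where the efficiency gain comes from.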
Designers Workbench: Towards Real-Time Immersive Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuester, F; Duchaineau, M A; Hamann, B
2001-10-03
This paper introduces the DesignersWorkbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The DesignersWorkbench aims at closing this technology or "digital gap" experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.
Identifying fluorescent pulp mill effluent in the Gulf of Maine and its watershed
Cawley, Kaelin M.; Butler, Kenna D.; Aiken, George R.; Larsen, Laurel G.; Huntington, Thomas G.; McKnight, Diane M.
2012-01-01
Using fluorescence spectroscopy and parallel factor analysis (PARAFAC), we characterized and modeled the fluorescence properties of dissolved organic matter (DOM) in samples from the Penobscot River, Androscoggin River, Penobscot Bay, and the Gulf of Maine (GoM). We analyzed excitation-emission matrices (EEMs) using an existing PARAFAC model (Cory and McKnight, 2005) and created a system-specific model with seven components (GoM PARAFAC). The GoM PARAFAC model contained six components similar to those in other PARAFAC models and one unique component with a spectrum similar to a residual found using the Cory and McKnight (2005) model. The unique component was abundant in samples from the Androscoggin River immediately downstream of a pulp mill effluent release site. The detection of a PARAFAC component associated with an anthropogenic source of DOM, such as pulp mill effluent, demonstrates the importance of rigorously analyzing PARAFAC residuals and developing system-specific models.
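A PARAFAC (CP) decomposition of a three-way array such as sample × excitation × emission EEM data can be sketched with a plain alternating-least-squares loop; the rank-2 synthetic tensor below stands in for real fluorescence data, and real EEM work would add non-negativity constraints:

```python
import numpy as np

# Sketch of a PARAFAC (CP) decomposition by alternating least squares.
# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]; rank-2 synthetic data only.
rng = np.random.default_rng(4)
I, J, K, R = 20, 15, 12, 2

# Ground-truth factors (e.g., concentrations and spectra), all positive
A = rng.random((I, R)); B = rng.random((J, R)); C = rng.random((K, R))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

def khatri_rao(U, V):
    # Column-wise Kronecker product: row (j*K + k) holds U[j, :] * V[k, :]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

# ALS: update each factor in turn from the matricized tensor
Ah = rng.random((I, R)); Bh = rng.random((J, R)); Ch = rng.random((K, R))
for _ in range(200):
    Ah = np.reshape(T, (I, J * K)) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
    Bh = np.transpose(T, (1, 0, 2)).reshape(J, I * K) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
    Ch = np.transpose(T, (2, 0, 1)).reshape(K, I * J) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

fit = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
rel_error = np.linalg.norm(fit - T) / np.linalg.norm(T)
```

Examining what this fit leaves behind, per sample, is exactly the residual analysis the abstract argues for.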
Semi-blind Bayesian inference of CMB map and power spectrum
NASA Astrophysics Data System (ADS)
Vansyngel, Flavien; Wandelt, Benjamin D.; Cardoso, Jean-François; Benabed, Karim
2016-04-01
We present a new blind formulation of the cosmic microwave background (CMB) inference problem. The approach relies on a phenomenological model of the multifrequency microwave sky without the need for physical models of the individual components. For all-sky and high resolution data, it unifies parts of the analysis that had previously been treated separately such as component separation and power spectrum inference. We describe an efficient sampling scheme that fully explores the component separation uncertainties on the inferred CMB products such as maps and/or power spectra. External information about individual components can be incorporated as a prior giving a flexible way to progressively and continuously introduce physical component separation from a maximally blind approach. We connect our Bayesian formalism to existing approaches such as Commander, spectral mismatch independent component analysis (SMICA), and internal linear combination (ILC), and discuss possible future extensions.
Meyer, Karin; Kirkpatrick, Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
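The parameter saving quoted above (k(k + 1)/2 versus m(2k - m + 1)/2) and the reduced-rank approximation itself can be illustrated directly; the covariance matrix below is synthetic, not a genetic covariance estimate:

```python
import numpy as np

# Sketch of the parameter saving from reduced-rank estimation: a k x k
# covariance matrix needs k(k+1)/2 parameters in full, but only m(2k - m + 1)/2
# when modelled through its m leading principal components.
k, m = 8, 3
full_params = k * (k + 1) // 2             # 36 for k = 8
reduced_params = m * (2 * k - m + 1) // 2  # 21 for k = 8, m = 3

# Rank-m approximation of an (illustrative) covariance matrix
rng = np.random.default_rng(6)
L = rng.normal(size=(k, k))
G = L @ L.T                                # a valid covariance matrix
eigval, eigvec = np.linalg.eigh(G)
idx = np.argsort(eigval)[::-1][:m]
G_reduced = eigvec[:, idx] @ np.diag(eigval[idx]) @ eigvec[:, idx].T

captured = eigval[idx].sum() / eigval.sum()   # share of total variance kept
```

In the paper the eigenvalues and eigenvectors are estimated directly by REML rather than obtained from a full-rank estimate, but the rank-m structure is the same.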
JAMS - a software platform for modular hydrological modelling
NASA Astrophysics Data System (ADS)
Kralisch, Sven; Fischer, Christian
2015-04-01
Current challenges of understanding and assessing the impacts of climate and land use changes on environmental systems demand an ever-increasing integration of data and process knowledge in corresponding simulation models. Software frameworks that allow for a seamless creation of integrated models based on less complex components (domain models, process simulation routines) have therefore gained increasing attention during the last decade. JAMS is an Open-Source software framework that has been especially designed to cope with the challenges of eco-hydrological modelling. This is reflected by (i) its flexible approach for representing time and space, (ii) a strong separation of process simulation components from the declarative description of more complex models using domain-specific XML, (iii) powerful analysis and visualization functions for spatial and temporal input and output data, and (iv) parameter optimization and uncertainty analysis functions commonly used in environmental modelling. Based on JAMS, different hydrological and nutrient-transport simulation models were implemented and successfully applied during the last years. We will present the JAMS core concepts and give an overview of models, simulation components and support tools available for that framework. Sample applications will be used to underline the advantages of component-based model designs and to show how JAMS can be used to address the challenges of integrated hydrological modelling.
On 3-D inelastic analysis methods for hot section components (base program)
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.
1986-01-01
A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.
A Feature Fusion Based Forecasting Model for Financial Time Series
Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie
2014-01-01
Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than the other two similar models. PMID:24971455
Architecting a Simulation Framework for Model Rehosting
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2004-01-01
The utility of vehicle math models extends beyond human-in-the-loop simulation. It is desirable to deploy a given model across a multitude of applications that target design, analysis, and research. However, the vehicle model alone represents an incomplete simulation. One must also replicate the environment models (e.g., atmosphere, gravity, terrain) to achieve identical vehicle behavior across all applications. Environment models are increasing in complexity and represent a substantial investment to re-engineer for a new application. A software component that can be rehosted in each application is one solution to the deployment problem. The component must encapsulate both the vehicle and environment models. The component must have a well-defined interface that abstracts the bulk of the logic to operate the models. This paper examines the characteristics of a rehostable modeling component from the perspective of a human-in-the-loop simulation framework. The Langley Standard Real-Time Simulation in C++ (LaSRS++) is used as an example. LaSRS++ was recently redesigned to transform its modeling package into a rehostable component.
Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission
NASA Technical Reports Server (NTRS)
Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan
2010-01-01
The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints.
The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
Beautemps, D; Badin, P; Bailly, G
2001-05-01
The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition where some of the loading factors were selected such as jaw and larynx position while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.
Study on nondestructive discrimination of genuine and counterfeit wild ginsengs using NIRS
NASA Astrophysics Data System (ADS)
Lu, Q.; Fan, Y.; Peng, Z.; Ding, H.; Gao, H.
2012-07-01
A new approach for the nondestructive discrimination between genuine wild ginsengs and counterfeit ones by near infrared spectroscopy (NIRS) was developed. Both discriminant analysis and back propagation artificial neural network (BP-ANN) were applied to the model establishment for discrimination. Optimal modeling wavelengths were determined based on the anomalous spectral information of counterfeit samples. Through principal component analysis (PCA) of various wild ginseng samples, genuine and counterfeit, the cumulative percentages of variance of the principal components were obtained, serving as a reference for principal component (PC) factor determination. Discriminant analysis achieved an identification ratio of 88.46%. With samples' truth values as its outputs, a three-layer BP-ANN model was built, which yielded a higher discrimination accuracy of 100%. The overall results sufficiently demonstrate that NIRS combined with a BP-ANN classification algorithm performs better on ginseng discrimination than discriminant analysis, and can be used as a rapid and nondestructive method for the detection of counterfeit wild ginsengs in the food and pharmaceutical industries.
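A three-layer BP-ANN of the kind described (PC scores in, truth values out, trained by backpropagation) can be sketched in a few lines; the two-dimensional toy features below stand in for real principal component scores of NIR spectra:

```python
import numpy as np

# Sketch of a three-layer BP-ANN binary classifier trained by plain
# backpropagation on a squared-error loss. Toy features and labels only.
rng = np.random.default_rng(9)
n = 200
X = rng.normal(size=(n, 2))                                # stand-in PC scores
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy truth values

# One hidden layer with sigmoid activations
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backprop through squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / n; b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / n;  b1 -= lr * d_h.mean(axis=0)

accuracy = np.mean((out > 0.5) == (y > 0.5))
```

In the paper the inputs would be the retained PC factors of the NIR spectra rather than these toy features.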
Computational model for the analysis of cartilage and cartilage tissue constructs
Smith, David W.; Gardiner, Bruce S.; Davidson, John B.; Grodzinsky, Alan J.
2013-01-01
We propose a new non-linear poroelastic model that is suited to the analysis of soft tissues. In this paper the model is tailored to the analysis of cartilage and the engineering design of cartilage constructs. The proposed continuum formulation of the governing equations enables the strain of the individual material components within the extracellular matrix (ECM) to be followed over time, as the individual material components are synthesized, assembled and incorporated within the ECM or lost through passive transport or degradation. The material component analysis developed here naturally captures the effect of time-dependent changes of ECM composition on the deformation and internal stress states of the ECM. For example, it is shown that increased synthesis of aggrecan by chondrocytes embedded within a decellularized cartilage matrix initially devoid of aggrecan results in osmotic expansion of the newly synthesized proteoglycan matrix and tension within the structural collagen network. Specifically, we predict that the collagen network experiences a tensile strain, with a maximum of ~2% at the fixed base of the cartilage. The analysis of an example problem demonstrates the temporal and spatial evolution of the stresses and strains in each component of a self-equilibrating composite tissue construct, and the role played by the flux of water through the tissue. PMID:23784936
NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2010-01-01
The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.
A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON
King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix
2008-01-01
As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only one part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language, but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run alongside the simulation. PMID:19430597
NASA Astrophysics Data System (ADS)
Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi
2012-07-01
The present study deals with daily total ozone concentration time series over four metro cities of India, namely Kolkata, Mumbai, Chennai, and New Delhi, in the multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration is suitable for principal component analysis. Subsequently, by introducing the rotated component matrix for the principal components, the predictors suitable for generating an artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. ANN models in the form of multilayer perceptrons trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics such as Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
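The sampling-adequacy check mentioned above, the Kaiser-Meyer-Olkin (KMO) measure, can be sketched in a few lines of numpy. The synthetic one-factor data and the adequacy threshold below are illustrative assumptions, not the study's ozone data:

```python
import numpy as np

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (between 0 and 1)."""
    R = np.corrcoef(X, rowvar=False)              # correlation matrix
    R_inv = np.linalg.inv(R)
    # partial correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    P = -R_inv / d
    off = ~np.eye(R.shape[0], dtype=bool)         # off-diagonal mask
    r2 = (R[off] ** 2).sum()                      # squared correlations
    p2 = (P[off] ** 2).sum()                      # squared partial correlations
    return r2 / (r2 + p2)

# illustrative data: four variables driven by one common factor
rng = np.random.default_rng(0)
f = rng.normal(size=500)
X = f[:, None] + 0.3 * rng.normal(size=(500, 4))
k = kmo(X)                                        # high KMO -> PCA is warranted
```

With strongly correlated variables the measure approaches 1; values above roughly 0.5-0.6 are conventionally taken to justify running PCA on the data.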
Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen
2013-01-01
Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development. Currently there is no fully coupled computational tool to analyze this fluid/structure interaction process. The objective of this study was to develop a fully coupled aeroelastic modeling capability to describe the fluid/structure interaction process during transient nozzle operations. The aeroelastic model is composed of three components: a computational fluid dynamics component based on an unstructured-grid, pressure-based formulation; a computational structural dynamics component developed in the framework of modal analysis; and a fluid-structure interface component. The developed aeroelastic model was applied to the transient nozzle startup process of the Space Shuttle Main Engine at sea level. The computed nozzle side loads and axial nozzle wall pressure profiles from the aeroelastic nozzle are compared with the published rigid-nozzle results, and the impact of the fluid/structure interaction on nozzle side loads is interrogated and presented.
The solvent component of macromolecular crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weichenberger, Christian X.; Afonine, Pavel V.; Kantardjieff, Katherine
2015-04-30
On average, the mother liquor or solvent and its constituents occupy about 50% of a macromolecular crystal. Ordered as well as disordered solvent components need to be accurately accounted for in modelling and refinement, often with considerable complexity. The mother liquor from which a biomolecular crystal is grown will contain water, buffer molecules, native ligands and cofactors, crystallization precipitants and additives, various metal ions, and often small-molecule ligands or inhibitors. On average, about half the volume of a biomolecular crystal consists of this mother liquor, whose components form the disordered bulk solvent. Its scattering contributions can be exploited in initial phasing and must be included in crystal structure refinement as a bulk-solvent model. Concomitantly, distinct electron density originating from ordered solvent components must be correctly identified and represented as part of the atomic crystal structure model. Herein are reviewed (i) probabilistic bulk-solvent content estimates, (ii) the use of bulk-solvent density modification in phase improvement, (iii) bulk-solvent models and refinement of bulk-solvent contributions and (iv) modelling and validation of ordered solvent constituents. A brief summary is provided of current tools for bulk-solvent analysis and refinement, as well as of modelling, refinement and analysis of ordered solvent components, including small-molecule ligands.
On the Extraction of Components and the Applicability of the Factor Model.
ERIC Educational Resources Information Center
Dziuban, Charles D.; Harris, Chester W.
A reanalysis of Shaycroft's matrix of intercorrelations of 10 test variables plus 4 random variables is discussed. Three different procedures were used in the reanalysis: (1) Image Component Analysis, (2) Uniqueness Rescaling Factor Analysis, and (3) Alpha Factor Analysis. The results of these analyses are presented in tables. It is concluded from…
Nonlinear seismic analysis of a reactor structure impact between core components
NASA Technical Reports Server (NTRS)
Hill, R. G.
1975-01-01
The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF, which includes small clearances between core components, is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.
Hay, L.; Knapp, L.
1996-01-01
Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and an SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.
Orbital transfer rocket engine technology 7.5K-LB thrust rocket engine preliminary design
NASA Technical Reports Server (NTRS)
Harmon, T. J.; Roschak, E.
1993-01-01
A preliminary design of an advanced LOX/LH2 expander-cycle rocket engine producing 7,500 lbf thrust for Orbital Transfer Vehicle missions was completed. Engine system, component, and turbomachinery analyses at both on-design and off-design conditions were completed. The preliminary design analysis results showed that engine requirements and performance goals were met. Computer models are described and model outputs are presented. Engine system assembly layouts, component layouts, and valve and control system analyses are presented. Major design technologies were identified, and remaining issues and concerns were listed.
Space-time latent component modeling of geo-referenced health data.
Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun
2010-08-30
Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.
Independent component analysis decomposition of hospital emergency department throughput measures
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Henry
2016-05-01
We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spent before they were admitted as inpatients, before they were sent home, or before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site into a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
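As a rough illustration of the decomposition idea, a minimal symmetric FastICA (tanh contrast, implemented directly in numpy) can unmix linearly combined non-Gaussian signals that an orthogonal PCA rotation alone would not separate. The two synthetic sources and the mixing matrix below are illustrative assumptions, not the hospital throughput data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# two non-Gaussian sources: a square wave and a uniform signal
t = np.linspace(0, 8 * np.pi, n)
S = np.c_[np.sign(np.sin(t)), rng.uniform(-1, 1, n)]
A = np.array([[1.0, 0.5], [0.6, 1.0]])            # hypothetical mixing matrix
X = S @ A.T                                       # observed mixtures

# centre and whiten via SVD
Xc = X - X.mean(0)
U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = U * np.sqrt(n)                                # whitened, unit-variance data

# symmetric FastICA fixed-point iteration with tanh contrast
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(Z @ W.T)
    W_new = (G.T @ Z) / n - np.diag((1 - G ** 2).mean(0)) @ W
    u, s, vt = np.linalg.svd(W_new)               # symmetric decorrelation
    W = u @ vt
Y = Z @ W.T                                       # estimated sources
```

Each column of `Y` should match one true source up to sign and permutation, whereas the principal components of `X` are merely uncorrelated variance-ordered mixtures.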
Advanced Modeling Strategies for the Analysis of Tile-Reinforced Composite Armor
NASA Technical Reports Server (NTRS)
Davila, Carlos G.; Chen, Tzi-Kang
1999-01-01
A detailed investigation of the deformation mechanisms in tile-reinforced armored components was conducted to develop the most efficient modeling strategies for the structural analysis of large components of the Composite Armored Vehicle. The limitations of conventional finite elements with respect to the analysis of tile-reinforced structures were examined, and two complementary optimal modeling strategies were developed. These strategies are element layering and the use of a tile-adhesive superelement. Element layering is a technique that uses stacks of shear deformable shell elements to obtain the proper transverse shear distributions through the thickness of the laminate. The tile-adhesive superelement consists of a statically condensed substructure model designed to take advantage of periodicity in tile placement patterns to eliminate numerical redundancies in the analysis. Both approaches can be used simultaneously to create unusually efficient models that accurately predict the global response by incorporating the correct local deformation mechanisms.
From scenarios to domain models: processes and representations
NASA Astrophysics Data System (ADS)
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanty, Subhasish; Soppet, William; Majumdar, Saurin
This report provides an update on an assessment of environmentally assisted fatigue for light water reactor components under extended service conditions. This report is a deliverable in April 2015 under the work package for environmentally assisted fatigue under DOE's Light Water Reactor Sustainability program. In this report, updates are discussed related to a system-level preliminary finite element model of a two-loop pressurized water reactor (PWR). Based on this model, system-level heat transfer analysis and subsequent thermal-mechanical stress analysis were performed for typical design-basis thermal-mechanical fatigue cycles. The in-air fatigue lives of components, such as the hot and cold legs, were estimated on the basis of stress analysis results, ASME in-air fatigue life estimation criteria, and fatigue design curves. Furthermore, environmental correction factors and associated PWR-environment fatigue lives for the hot and cold legs were estimated by using the estimated stress and strain histories and the approach described in NUREG-6909. The discussed models and results are very preliminary. Further advancement of the discussed model is required for more accurate life prediction of reactor components. This report only presents the work related to finite element modelling activities; however, in the interim, multiple tensile and fatigue tests were conducted. The related experimental results will be presented in the year-end report.
Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates
NASA Astrophysics Data System (ADS)
Liu, Bin; King, Matt; Dai, Wujiao
2018-05-01
Spatially correlated common mode error (CME) always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are CME. An average reduction of ~40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. The ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power-law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement of the GPS-observed velocities with four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of various GIA model predictions.
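The spatiotemporal filtering step, removing a component with a common spatial response from many station time series, can be sketched with a PCA variant on synthetic data. The network size, the random-walk common signal, and the noise level are assumptions for illustration, not the Antarctic data:

```python
import numpy as np

rng = np.random.default_rng(4)
n_epochs, n_sta = 500, 8
# a shared random-walk signal standing in for the common mode error (CME)
cme = np.cumsum(rng.normal(size=n_epochs)) * 0.1
series = cme[:, None] + 0.5 * rng.normal(size=(n_epochs, n_sta))

# PCA via SVD of the centred data matrix (epochs x stations)
Xc = series - series.mean(0)
U, d, Vt = np.linalg.svd(Xc, full_matrices=False)

# treat the leading component (roughly uniform spatial response) as CME
common = np.outer(U[:, 0] * d[0], Vt[0])
filtered = Xc - common

rms_before = np.sqrt((Xc ** 2).mean())
rms_after = np.sqrt((filtered ** 2).mean())
```

Removing the leading component substantially reduces the network-wide RMS when a strong common signal is present, mirroring the RMS reductions reported in the abstract.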
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful for obtaining unbiased estimates of true effects or for predicting future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
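The two-component view above, a systematic part summarized by coefficients and an error part that drives precision, can be made concrete with an ordinary least-squares sketch. The simulated exposure, covariate, and effect sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                  # exposure of interest
z = rng.normal(size=n)                  # another factor adjusted for
y = 2.0 + 1.5 * x - 0.8 * z + rng.normal(size=n)

X = np.c_[np.ones(n), x, z]             # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # systematic component
resid = y - X @ beta                    # error component
s2 = resid @ resid / (n - X.shape[1])   # residual variance
cov = s2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
ci = np.c_[beta - 1.96 * se, beta + 1.96 * se]  # ~95% confidence intervals
```

The coefficients recover the simulated effects, and the residual variance feeds directly into the width of the confidence intervals, exactly the division of labour the abstract describes.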
Scenario Analysis: An Integrative Study and Guide to Implementation in the United States Air Force
1994-09-01
[Extraction residue: table-of-contents fragments listing sections on Environmental Analysis, Classifications of Environments, Characteristics of Environments, Components of the Environmental Analysis Process, and Forecasting, plus the figures "Model of the Industry Environment" and "Model of the Macroenvironment".]
Evaluation of RCAS Inflow Models for Wind Turbine Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tangler, J.; Bir, G.
The finite element structural modeling in the Rotorcraft Comprehensive Analysis System (RCAS) provides a state-of-the-art approach to aeroelastic analysis. This, coupled with its ability to model all turbine components, results in a methodology that can simulate the complex system interactions characteristic of large wind turbines. In addition, RCAS is uniquely capable of modeling advanced control algorithms and the resulting dynamic responses.
Assessing factorial invariance of two-way rating designs using three-way methods
Kroonenberg, Pieter M.
2015-01-01
Assessing the factorial invariance of two-way rating designs such as ratings of concepts on several scales by different groups can be carried out with three-way models such as the Parafac and Tucker models. By their definitions these models are double-metric factorially invariant. The differences between these models lie in their handling of the links between the concept and scale spaces. These links may consist of unrestricted linking (Tucker2 model), invariant component covariances but variable variances per group and per component (Parafac model), zero covariances and variances different per group but not per component (Replicated Tucker3 model) and strict invariance (Component analysis on the average matrix). This hierarchy of invariant models, and the procedures by which to evaluate the models against each other, is illustrated in some detail with an international data set from attachment theory. PMID:25620936
NASA Astrophysics Data System (ADS)
Orlando, Elena
2016-04-01
Galactic synchrotron radiation observed from radio to microwaves is produced by cosmic-ray (CR) electrons propagating in magnetic fields (B-fields). The low-frequency foreground component maps separated by WMAP and Planck depend on the assumed synchrotron spectrum. The synchrotron spectrum varies for different lines of sight as a result of changes in the CR spectrum due to propagation effects and source distributions. Our present knowledge of the CR spectrum at different locations in the Galaxy is not sufficient to distinguish various possibilities in the modeling. As a consequence, uncertainties in synchrotron emission models complicate the foreground component separation analysis with Planck and future microwave telescopes. Hence, any advancement in synchrotron modeling is important for separating the different foreground components. The first step towards a more comprehensive understanding of degeneracy and correlation among the synchrotron model parameters is outlined in our Strong et al. 2011 and Orlando et al. 2013 papers. In the latter, the conclusion was that the CR spectrum, propagation models, B-fields, and foreground component separation analysis need to be studied simultaneously in order to properly obtain and interpret the synchrotron foreground. Indeed, for the officially released Planck maps, we use only the best spectral model from our above paper for the component separation analysis. Here we present a collection of our latest results on synchrotron emission, CRs, and B-fields in the context of CR propagation, also showing our recent work on B-fields within the Planck Collaboration. We also underline the importance of using the constraints on CRs that we obtain from gamma-ray observations. Methods and perspectives for further studies on the synchrotron foreground will be addressed.
Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model
NASA Astrophysics Data System (ADS)
Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.
2017-12-01
This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June through September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian land, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula, as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data, and it was found that at 0.25° resolution, there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.
Thermal Analysis of Iodine Satellite (iSAT)
NASA Technical Reports Server (NTRS)
Mauro, Stephanie
2015-01-01
This paper presents the progress of the thermal analysis and design of the Iodine Satellite (iSAT). The purpose of the iSAT spacecraft (SC) is to demonstrate the ability of the iodine Hall thruster propulsion system throughout a one-year mission in an effort to mature the system for use on future satellites. The benefit of this propulsion system is that it uses a propellant, iodine, that is easy to store and provides a high thrust-to-mass ratio. The spacecraft will also act as a bus for an earth observation payload, the Long Wave Infrared (LWIR) Camera. Four phases of the mission, determined to be either critical to achieving requirements or phases of thermal concern, are modeled. The phases are the Right Ascension of the Ascending Node (RAAN) Change, Altitude Reduction, De-Orbit, and Science Phases. Each phase was modeled in a worst-case hot environment, and the coldest phase, the Science Phase, was also modeled in a worst-case cold environment. The thermal environments of the spacecraft are especially important to model because iSAT has a very high power density. The satellite is the size of a 12-unit cubesat and dissipates slightly more than 75 Watts of power as heat at times. The maximum temperatures for several components are above their maximum operational limits for one or more cases. The analysis done for the first Design and Analysis Cycle (DAC1) showed that many components were above or within 5 degrees Celsius of their maximum operational limit. The battery is a component of concern because, although it is not over its operational temperature limit, its efficiency decreases greatly if it operates at the currently predicted temperatures. In the second Design and Analysis Cycle (DAC2), many steps were taken to mitigate the overheating of components, including isolating several high-temperature components, removal of components, and rearrangement of systems. These changes have greatly increased the thermal margin available.
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
Psychometric Measurement Models and Artificial Neural Networks
ERIC Educational Resources Information Center
Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.
2004-01-01
The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks capable of computing a principal component extraction has been observed. Despite this interest, the…
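One classical link between the two fields is that a single linear neuron trained with Oja's rule converges to the first principal component of its input. A minimal sketch, with synthetic data and an illustrative learning rate:

```python
import numpy as np

rng = np.random.default_rng(2)
# 2-D data with one dominant variance direction
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])
X -= X.mean(0)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for _ in range(20):                      # training epochs
    for x in X:
        y = w @ x                        # linear neuron output
        w += eta * y * (x - y * w)       # Oja's rule: Hebbian term + decay
    w /= np.linalg.norm(w)               # guard against numerical drift

# compare with the leading right singular vector (first PC) from SVD
v = np.linalg.svd(X, full_matrices=False)[2][0]
alignment = abs(w @ v)
```

After training, the weight vector is aligned (up to sign) with the leading eigenvector of the data covariance, which is exactly the PCA direction the neural approach is meant to recover.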
Analyzing the Impact of a Data Analysis Process to Improve Instruction Using a Collaborative Model
ERIC Educational Resources Information Center
Good, Rebecca B.
2006-01-01
The Data Collaborative Model (DCM) assembles assessment literacy, reflective practices, and professional development into a four-component process. The sub-components include assessing students, reflecting over data, professional dialogue, professional development for the teachers, interventions for students based on data results, and re-assessing…
Function modeling: improved raster analysis through delayed reading and function raster datasets
John S. Hogland; Nathaniel M. Anderson; J .Greg Jones
2013-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space, often limiting the types of analyses that can be performed. To address this issue, we have developed Function Modeling. Function Modeling is a new modeling framework that streamlines the...
User's guide for GSMP, a General System Modeling Program. [In PL/I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, J. M.
1979-10-01
GSMP is designed for use by systems analysis teams. Given compiled subroutines that model the behavior of components, plus instructions as to how they are to be interconnected, this program links them together to model a complete system. GSMP offers a fast response to management requests for reconfigurations of old systems and even initial configurations of new systems. Standard system-analytic services are provided: parameter sweeps, graphics, free-form input and formatted output, file storage and recovery, user-tested error diagnostics, component model and integration checkout and debugging facilities, sensitivity analysis, and a multimethod optimizer with nonlinear constraint handling capability. Steady-state or cyclic time-dependence is simulated directly, initial-value problems only indirectly. The code is written in PL/I but interfaces well with FORTRAN component models. Over the last five years GSMP has been used to model theta-pinch, tokamak, and heavy-ion fusion power plants, open- and closed-cycle magnetohydrodynamic power plants, and total community energy systems.
Impact of Measurement Uncertainties on Receptor Modeling of Speciated Atmospheric Mercury.
Cheng, I; Zhang, L; Xu, X
2016-02-09
Gaseous oxidized mercury (GOM) and particle-bound mercury (PBM) measurement uncertainties could potentially affect the analysis and modeling of atmospheric mercury. This study investigated the impact of GOM measurement uncertainties on Principal Components Analysis (PCA), Absolute Principal Component Scores (APCS), and Concentration-Weighted Trajectory (CWT) receptor modeling results. The atmospheric mercury data input into these receptor models were modified by combining GOM and PBM into a single reactive mercury (RM) parameter and excluding low GOM measurements to improve the data quality. PCA and APCS results derived from RM or excluding low GOM measurements were similar to those in previous studies, except for a non-unique component and an additional component extracted from the RM dataset. The percent variance explained by the major components from a previous study differed slightly compared to RM and excluding low GOM measurements. CWT results were more sensitive to the input of RM than GOM excluding low measurements. Larger discrepancies were found between RM and GOM source regions than those between RM and PBM. Depending on the season, CWT source regions of RM differed by 40-61% compared to GOM from a previous study. No improvement in correlations between CWT results and anthropogenic mercury emissions were found.
Analysis and improvement measures of flight delay in China
NASA Astrophysics Data System (ADS)
Zang, Yuhang
2017-03-01
First, this paper establishes a principal component regression model to analyze the data quantitatively, using principal component analysis to obtain three principal component factors of flight delays. The least squares method is then used to analyze these factors, and the regression equation is obtained by substitution; the analysis shows that the main cause of flight delays is the airlines themselves, followed by weather and traffic. To address these problems, this paper improves the controllable aspect of traffic flow control. For reasons of traffic flow control, an adaptive genetic queuing model is established for the runway terminal area. An optimization method is established for fifteen planes landing simultaneously on three runways, based on Beijing Capital International Airport; comparing the results with the existing FCFS (first-come, first-served) algorithm demonstrates the superiority of the model.
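A principal component regression of the kind described above can be sketched as follows. The delay-factor matrix and its coefficients are synthetic assumptions (not the paper's data), and all three components are retained here for illustration; in practice one would keep only the leading ones.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical delay-factor matrix: columns might stand for airline
# operations, weather and traffic indicators (names assumed).
X = rng.normal(size=(150, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=150)

# Centre, take principal components, keep the leading k of them.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                 # number of retained components
T = Xc @ Vt[:k].T                     # component scores

# Ordinary least squares on the scores, then map back to the factors.
gamma, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
beta = Vt[:k].T @ gamma               # regression weights per factor

print(np.round(beta, 2))
```

With all components kept, the recovered weights coincide with ordinary least squares on the original factors.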
Wind modeling and lateral control for automatic landing
NASA Technical Reports Server (NTRS)
Holley, W. E.; Bryson, A. E., Jr.
1975-01-01
For the purposes of aircraft control system design and analysis, the wind can be characterized by a mean component which varies with height and by turbulent components which are described by the von Karman correlation model. The aircraft aerodynamic forces and moments depend linearly on uniform and gradient gust components obtained by averaging over the aircraft's length and span. The correlations of the averaged components are then approximated by the outputs of linear shaping filters forced by white noise. The resulting model of the crosswind shear and turbulence effects is used in the design of a lateral control system for the automatic landing of a DC-8 aircraft.
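A minimal sketch of the shaping-filter idea: a first-order Gauss-Markov filter driven by white noise, used here as a simplified stand-in for the von Karman correlation (a Dryden-like form). The airspeed, turbulence length scale and intensity values below are assumed, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
V, L, sigma, dt = 75.0, 300.0, 1.5, 0.02   # m/s, m, m/s, s (assumed)
a = np.exp(-V * dt / L)                     # discrete-time filter pole
b = sigma * np.sqrt(1.0 - a**2)             # keeps stationary variance sigma^2

n = 200_000
gust = np.empty(n)
gust[0] = 0.0
w = rng.standard_normal(n)                  # white-noise forcing
for k in range(n - 1):
    gust[k + 1] = a * gust[k] + b * w[k]

# Sample standard deviation should be close to sigma (sampling error aside).
print(round(gust.std(), 2))
```

The filter output has an exponential autocorrelation with time constant L/V, which is the first-order approximation used in gust models of this kind.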
McSherry, Wilfred
2006-07-01
The aim of this study was to generate a deeper understanding of the factors and forces that may inhibit or advance the concepts of spirituality and spiritual care within both nursing and health care. This manuscript presents a model that emerged from a qualitative study using grounded theory. Implementation and use of this model may assist all health care practitioners and organizations to advance the concepts of spirituality and spiritual care within their own sphere of practice. The model has been termed the principal components model because participants identified six components as being crucial to the advancement of spiritual health care. Grounded theory was used, meaning that data collection and analysis were concurrent. Theoretical sampling was used to develop the emerging theory. These processes, together with open, axial and theoretical coding of the data, led to the identification of a core category and the construction of the principal components model. Fifty-three participants (24 men and 29 women) were recruited and all consented to be interviewed. The sample included nurses (n=24), chaplains (n=7), a social worker (n=1), an occupational therapist (n=1), physiotherapists (n=2), patients (n=14) and the public (n=4). The investigation was conducted in three phases to substantiate the emerging theory and the development of the model. The principal components model contained six components: individuality, inclusivity, integrated, inter/intra-disciplinary, innate and institution. A great deal has been written on the concepts of spirituality and spiritual care. However, rhetoric alone will not remove some of the intrinsic and extrinsic barriers that are inhibiting the advancement of the spiritual dimension in terms of theory and practice.
An awareness of and adherence to the principal components model may assist nurses and health care professionals to engage with and overcome some of the structural, organizational, political and social variables that are impacting upon spiritual care.
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations of metabolic activities under altered experimental conditions, utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task, and we discuss the application of ICA to multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
An oilspill trajectory analysis model with a variable wind deflection angle
Samuels, W.B.; Huang, N.E.; Amstutz, D.E.
1982-01-01
The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
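The constant-deflection drift construction stated above (3.5% of wind speed, rotated 20 degrees clockwise) can be sketched directly; the wind vector used in the example is arbitrary, and the function name is ours.

```python
import math

def surface_drift(wind_u, wind_v, factor=0.035, deflection_deg=20.0):
    """Surface drift: a fixed fraction of the wind vector, rotated
    clockwise to account for Northern Hemisphere Coriolis deflection."""
    theta = math.radians(-deflection_deg)      # negative angle = clockwise
    u = factor * (wind_u * math.cos(theta) - wind_v * math.sin(theta))
    v = factor * (wind_u * math.sin(theta) + wind_v * math.cos(theta))
    return u, v

# Example: a 10 m/s wind blowing toward the east.
du, dv = surface_drift(10.0, 0.0)
print(round(du, 3), round(dv, 3))   # drift deflected slightly south of east
```

In the variable-angle version described in the abstract, `deflection_deg` would be recomputed from wind speed at each time step instead of being held at 20 degrees.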
Fire flame detection based on GICA and target tracking
NASA Astrophysics Data System (ADS)
Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian
2013-04-01
To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets was proposed, which achieved a satisfactory fire detection rate for different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed based on analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.
Maximum flow-based resilience analysis: From component to system
Jin, Chong; Li, Ruiying; Kang, Rui
2017-01-01
Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to evaluate quantitatively and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with an increasing number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without. However, not all redundant component capacities improve system resilience; the effectiveness of capacity redundancy depends on where the redundant capacity is located. PMID:28545135
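The series/parallel contrast can be sketched with Zobel's triangular resilience measure, R = 1 - X*T/(2*T*), where X is the normalized performance loss, T the recovery time and T* the planning horizon. The component capacities, recovery time and horizon below are illustrative assumptions, not the paper's case study.

```python
# Flow through a series layout is limited by its weakest component;
# flow through a parallel layout is the sum of component capacities.
def system_flow(capacities, layout):
    return min(capacities) if layout == "series" else sum(capacities)

def zobel_resilience(caps, layout, hit, T=2.0, T_star=10.0):
    """Resilience after a total failure of component `hit`, using
    Zobel's measure R = 1 - X*T/(2*T_star)."""
    base = system_flow(caps, layout)
    degraded = list(caps)
    degraded[hit] = 0.0
    X = (base - system_flow(degraded, layout)) / base   # normalized loss
    return 1.0 - X * T / (2.0 * T_star)

caps = [5.0, 5.0, 5.0]
print(zobel_resilience(caps, "series", hit=0))    # total loss: X = 1
print(zobel_resilience(caps, "parallel", hit=0))  # partial loss: X = 1/3
```

The parallel layout absorbs the same failure with a smaller normalized loss, which is the redundancy effect the abstract describes.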
NASA Astrophysics Data System (ADS)
Qi, Le; Zheng, Zhongyi; Gang, Longhui
2017-10-01
Ships' velocity changes, which are driven by weather and sea conditions (wind, waves, currents, tides, etc.), are significant and must be considered in a marine traffic model. Therefore, a new marine traffic model based on a cellular automaton (CA) is proposed in this paper that takes the characteristics of ships' velocity change into account. First, the acceleration of a ship was divided into two components: a regular component and a random component. Second, the mathematical functions and statistical distribution parameters of the two components were determined by spectral analysis, curve fitting and auto-correlation analysis. Third, by combining the two components, the acceleration was regenerated in the update rules for ships' movement. To test the performance of the model, the ship traffic flows in the Dover Strait, the Changshan Channel and the Qiongzhou Strait were studied and simulated. The results show that the characteristics of ships' velocities in the simulations are consistent with data measured by the Automatic Identification System (AIS). Although the characteristics of the traffic flow differ between areas, the velocities of ships are simulated correctly in each, demonstrating that ship velocities under the influence of weather and sea can be simulated successfully using the proposed model.
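The two-component acceleration idea can be sketched as follows. The sinusoidal regular component, the Gaussian noise level and the time step are illustrative assumptions; the paper fits these functions and distributions from AIS data rather than assuming them.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 1.0, 3600                     # 1 s update steps over one hour
t = np.arange(n) * dt

# Regular component: slow deterministic variation (here a sinusoid).
a_regular = 0.002 * np.sin(2 * np.pi * t / 600.0)   # m/s^2 (assumed)
# Random component: zero-mean Gaussian perturbation.
a_random = rng.normal(0.0, 0.001, size=n)           # m/s^2 (assumed)

v = np.empty(n)
v[0] = 7.0                                           # initial speed, m/s
for k in range(n - 1):
    # CA-style update rule: speed driven by the combined acceleration.
    v[k + 1] = max(v[k] + (a_regular[k] + a_random[k]) * dt, 0.0)

print(round(v.mean(), 1))
```

In the full model this velocity update would sit inside the cellular-automaton movement rules, with parameters taken from the fitted spectra and distributions.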
Ground resonance analysis using a substructure modeling approach
NASA Technical Reports Server (NTRS)
Chen, S.-Y.; Berman, A.; Austin, E. E.
1984-01-01
A convenient and versatile procedure for modeling and analyzing ground resonance phenomena is described and illustrated. A computer program is used which dynamically couples differential equations with nonlinear and time dependent coefficients. Each set of differential equations may represent a component such as a rotor, fuselage, landing gear, or a failed damper. Arbitrary combinations of such components may be formulated into a model of a system. When the coupled equations are formed, a procedure is executed which uses a Floquet analysis to determine the stability of the system. Illustrations of the use of the procedures along with the numerical examples are presented.
Ground resonance analysis using a substructure modeling approach
NASA Technical Reports Server (NTRS)
Chen, S. Y.; Austin, E. E.; Berman, A.
1985-01-01
A convenient and versatile procedure for modeling and analyzing ground resonance phenomena is described and illustrated. A computer program is used which dynamically couples differential equations with nonlinear and time dependent coefficients. Each set of differential equations may represent a component such as a rotor, fuselage, landing gear, or a failed damper. Arbitrary combinations of such components may be formulated into a model of a system. When the coupled equations are formed, a procedure is executed which uses a Floquet analysis to determine the stability of the system. Illustrations of the use of the procedures along with the numerical examples are presented.
Nesakumar, Noel; Baskar, Chanthini; Kesavan, Srinivasan; Rayappan, John Bosco Balaguru; Alwarappan, Subbiah
2018-05-22
The moisture content of beetroot varies during long-term cold storage. In this work, we propose a strategy to identify the moisture content and age of beetroot using principal component analysis coupled with Fourier transform infrared spectroscopy (FTIR). Frequent FTIR measurements were recorded directly from the beetroot sample surface over a period of 34 days to analyse its moisture content, employing attenuated total reflectance in the spectral ranges of 2614-4000 and 1465-1853 cm⁻¹ with a spectral resolution of 8 cm⁻¹. In order to estimate the transmittance peak height (Tp) and the area under the transmittance curve [Formula: see text] over these spectral ranges, a Gaussian curve-fitting algorithm was applied to the FTIR data. Principal component and nonlinear regression analyses were utilized for FTIR data analysis. Score plots over the ranges of 2614-4000 and 1465-1853 cm⁻¹ allowed beetroot quality discrimination. Beetroot quality predictive models were developed by employing a biphasic dose-response function. Validation experiments confirmed that the accuracy of the beetroot quality predictive model reached 97.5%. This work shows that FTIR spectroscopy, in combination with principal component analysis and beetroot quality predictive models, could serve as an effective tool for discriminating the moisture content of beetroot samples in fresh, half-spoiled and completely spoiled stages and for providing status alerts.
Beyond Principal Component Analysis: A Trilinear Decomposition Model and Least Squares Estimation.
ERIC Educational Resources Information Center
Pham, Tuan Dinh; Mocks, Joachim
1992-01-01
Sufficient conditions are derived for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis. The limiting covariance matrix is computed. (Author/SLD)
2011-01-01
Background: Bioinformatics data analysis often uses a linear mixture model representing samples as additive mixtures of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. As opposed to this, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing the control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary, yet features will still be allocated to the related disease- and/or control-specific component. Since label information is not used in the selection process, case- and control-specific components can be used for classification, which is not the case with standard factorization methods.
Moreover, the component selected by proposed method as disease specific can be interpreted as a sub-mode and retained for further analysis to identify potential biomarkers. As opposed to standard matrix factorization methods this can be achieved on a sample (experiment)-by-sample basis. Postulating one or more components with indifferent features enables their removal from disease and control specific components on a sample-by-sample basis. This yields selected components with reduced complexity and generally, it increases prediction accuracy. PMID:22208882
Walsh, James C.; Angstmann, Christopher N.; Duggin, Iain G.
2017-01-01
The Min protein system creates a dynamic spatial pattern in Escherichia coli cells where the proteins MinD and MinE oscillate from pole to pole. MinD positions MinC, an inhibitor of FtsZ ring formation, contributing to the mid-cell localization of cell division. In this paper, Fourier analysis is used to decompose experimental and model MinD spatial distributions into time-dependent harmonic components. In both experiment and model, the second harmonic component is responsible for producing a mid-cell minimum in MinD concentration. The features of this harmonic are robust in both experiment and model. Fourier analysis reveals a close correspondence between the time-dependent behaviour of the harmonic components in the experimental data and model. Given this, each molecular species in the model was analysed individually. This analysis revealed that membrane-bound MinD dimer shows the mid-cell minimum with the highest contrast when averaged over time, carrying the strongest signal for positioning the cell division ring. This concurs with previous data showing that the MinD dimer binds to MinC inhibiting FtsZ ring formation. These results show that non-linear interactions of Min proteins are essential for producing the mid-cell positioning signal via the generation of second-order harmonic components in the time-dependent spatial protein distribution. PMID:29040283
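The harmonic-decomposition step can be sketched as below. The pole-to-pole profile is a synthetic stand-in for the measured MinD distribution, and a cosine basis over normalized cell length is assumed; the point is that the second harmonic component carries the mid-cell minimum.

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)          # normalized position along the cell
# Synthetic pole-accumulated profile with a mid-cell dip (illustrative).
profile = 1.0 + 0.8 * np.cos(np.pi * x) ** 2 + 0.3 * np.cos(2 * np.pi * x)

def harmonic(profile, k):
    """Project the profile onto the k-th spatial cosine harmonic."""
    basis = np.cos(k * np.pi * x)
    coef = (2.0 if k else 1.0) * np.mean(profile * basis)
    return coef * basis

h2 = harmonic(profile, 2)
# The k = 2 component is minimal at mid-cell (x = 0.5):
print(int(np.argmin(h2)) in (n // 2 - 1, n // 2))
```

Applied to time-resolved distributions, the same projection per frame yields the time-dependent harmonic amplitudes discussed in the abstract.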
Research on criticality analysis method of CNC machine tools components under fault rate correlation
NASA Astrophysics Data System (ADS)
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. The fault structure relations are then organized hierarchically using the interpretive structural model (ISM). Assuming that the impact of a fault obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rate under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a correct basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
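The adjacency-matrix-plus-PageRank step can be sketched as follows. The four-component fault-propagation matrix is a hypothetical illustration (the subsystem names in the comment are assumed, not the paper's data); PageRank is computed by plain power iteration.

```python
import numpy as np

# Hypothetical fault-propagation adjacency matrix for four subsystems
# (say: spindle, feed axis, tool magazine, electrical cabinet).
# A[i, j] = 1 means a fault in component i tends to propagate to j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

# Row-normalize into a Markov transition matrix, then run the PageRank
# power iteration (damping d = 0.85) to get relative influence values.
n = A.shape[0]
P = A / A.sum(axis=1, keepdims=True)
d = 0.85
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * (P.T @ r)

print(np.round(r, 3))
```

The resulting vector ranks components by how much fault influence flows into them; the paper then combines this with time-dependent fault rates to score criticality.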
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
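The basis-set-expansion step, reducing an ensemble of time-series outputs to a few principal components, can be sketched as below. The ensemble of runs is synthetic (two underlying modes plus noise), and the subsequent meta-model/Sobol' stage is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)
# Synthetic ensemble: rows = simulation runs, columns = time steps.
runs = np.array([a * t + b * np.sin(2 * np.pi * t) + rng.normal(0, 0.01, t.size)
                 for a, b in rng.normal(1.0, 0.3, size=(40, 2))])

# PCA of the centred ensemble via SVD; keep enough components for 99%
# of the variance across runs.
Y = runs - runs.mean(axis=0)
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
var_frac = (S**2) / (S**2).sum()
k = int(np.searchsorted(np.cumsum(var_frac), 0.99) + 1)

# Each run is now summarized by k scalar scores; Sobol' indices would
# then be estimated per score with a cheap meta-model instead of the
# full landslide model.
scores = Y @ Vt[:k].T
print(k, scores.shape)
```

With two generating modes, the expansion compresses each 200-point series to a couple of scores, which is what makes the per-component Sobol' analysis tractable.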
ERIC Educational Resources Information Center
Marquardt, Lloyd D.; McCormick, Ernest J.
The study involved the use of a structured job analysis instrument called the Position Analysis Questionnaire (PAQ) as the direct basis for the establishment of the job component validity of aptitude tests (that is, a procedure for estimating the aptitude requirements for jobs strictly on the basis of job analysis data). The sample of jobs used…
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.
2015-11-01
Comparison results are presented for different mother wavelets used to de-noise model and experimental data consisting of absorption-spectrum profiles of exhaled air. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.
Raut, Savita V; Yadav, Dinkar M
2018-03-28
This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses the frequency component, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency component is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and voxels are selected from the EMCD signal using GMCD components rather than the raw fMRI signal. The proposed methodologies are adopted for predicting the neural response. Experiments are conducted on the openly available fMRI data of six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effect of the degree of selected voxels and the selection constraints is analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large: the shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found, with low growth rates in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling of historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong
2015-08-07
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining the definite variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
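The error-bounded PCA compression step can be sketched as follows. The correlated sensor readings and the variance bound are illustrative assumptions; the idea is that the cluster head keeps only as many components as the bound requires.

```python
import numpy as np

rng = np.random.default_rng(5)
# 12 sensors in one cluster observing the same underlying signal with
# different gains plus measurement noise (synthetic stand-in data).
base = np.sin(np.linspace(0, 8 * np.pi, 500))
X = np.array([base * g + rng.normal(0, 0.01, base.size)
              for g in rng.uniform(0.8, 1.2, size=12)])

mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Error bound: discard at most 5% of the variance, keeping the
# smallest number of components that satisfies the bound.
bound = 0.05
frac = (S**2) / (S**2).sum()
k = int(np.searchsorted(np.cumsum(frac), 1.0 - bound) + 1)

# The cluster head transmits k score vectors instead of 12 raw series.
Xhat = mu + (Xc @ Vt[:k].T) @ Vt[:k]   # reconstruction from k components
mse = np.mean((X - Xhat)**2)
print(k)
```

Because the sensors are strongly correlated, a single retained component typically meets the bound, which is where the communication savings come from.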
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
Designing and encoding models for synthetic biology.
Endler, Lukas; Rodriguez, Nicolas; Juty, Nick; Chelliah, Vijayalakshmi; Laibe, Camille; Li, Chen; Le Novère, Nicolas
2009-08-06
A key component of any synthetic biology effort is the use of quantitative models. These models and their corresponding simulations allow optimization of a system design, as well as guiding their subsequent analysis. Once a domain mostly reserved for experts, dynamical modelling of gene regulatory and reaction networks has been an area of growth over the last decade. There has been a concomitant increase in the number of software tools and standards, thereby facilitating model exchange and reuse. We give here an overview of the model creation and analysis processes as well as some software tools in common use. Using markup languages to encode the model and associated annotation, we describe the mining of components, their integration into relational models, and their formulation and parametrization. Evaluation of simulation results and validation of the model close the systems biology 'loop'.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, C.H.; Ready, A.B.; Rea, J.
1995-06-01
Versions of the computer program PROATES (PROcess Analysis for Thermal Energy Systems) have been used since 1979 to analyse plant performance improvement proposals relating to existing plant and also to evaluate new plant designs. Several plant modifications have been made to improve performance based on the model predictions, and the predicted performance has been realised in practice. The program was born out of a need to model the overall steady-state performance of complex plant to enable proposals to change plant component items or operating strategy to be evaluated. To do this with confidence it is necessary to model the multiple thermodynamic interactions between the plant components. The modelling system is modular in concept, allowing the configuration of individual plant components to represent any particular power plant design. A library exists of physics-based modules which have been extensively validated and which provide representations of a wide range of boiler, turbine and CW system components. Changes to model data and construction are achieved via a user-friendly graphical model editing/analysis front-end, with results being presented via the computer screen or hard copy. The paper describes briefly the modelling system but concentrates mainly on the application of the modelling system to assess design re-optimisation, firing with different fuels and the re-powering of an existing plant.
A Nonlinear Model for Gene-Based Gene-Environment Interaction.
Sa, Jian; Liu, Xu; He, Tao; Liu, Guifen; Cui, Yuehua
2016-06-04
A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single-variant approaches, in this work we proposed a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extracted the sparse principal components for the SNPs in a gene; then the effect of each principal component was modeled by a varying-coefficient (VC) model. The model can jointly handle variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has a nice interpretation property, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset from a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systematic approach to evaluate gene-based G×E interaction.
Energy Efficient Engine Low Pressure Subsystem Flow Analysis
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Lynn, Sean R.; Heidegger, Nathan J.; Delaney, Robert A.
1998-01-01
The objective of this project is to provide the capability to analyze the aerodynamic performance of the complete low pressure subsystem (LPS) of the Energy Efficient Engine (EEE). The analyses were performed using three-dimensional Navier-Stokes numerical models employing advanced clustered processor computing platforms. The analysis evaluates the impact of steady aerodynamic interaction effects between the components of the LPS at design and off-design operating conditions. Mechanical coupling is provided by adjusting the rotational speed of common shaft-mounted components until a power balance is achieved. The Navier-Stokes modeling of the complete low pressure subsystem provides critical knowledge of component aero/mechanical interactions that previously were unknown to the designer until after hardware testing.
Energy Efficient Engine Low Pressure Subsystem Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Lynn, Sean R.; Veres, Joseph P.
1998-01-01
The objective of this study was to demonstrate the capability to analyze the aerodynamic performance of the complete low pressure subsystem (LPS) of the Energy Efficient Engine (EEE). Detailed analyses were performed using three-dimensional Navier-Stokes numerical models employing advanced clustered processor computing platforms. The analysis evaluates the impact of steady aerodynamic interaction effects between the components of the LPS at design and off-design operating conditions. Mechanical coupling is provided by adjusting the rotational speed of common shaft-mounted components until a power balance is achieved. The Navier-Stokes modeling of the complete low pressure subsystem provides critical knowledge of component aero/mechanical interactions that previously were unknown to the designer until after hardware testing.
Analysis of Free Modeling Predictions by RBO Aleph in CASP11
Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver
2015-01-01
The CASP experiment is a biennial benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue-residue contact prediction by EPC-map and contact-guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: improvements in components of a method do not necessarily lead to improvements of the method as a whole, which suggests that these components interact in ways that are poorly understood. This problem, if real, represents a significant obstacle to community-wide progress. PMID:26492194
Wang, Jinjia; Zhang, Yanna
2015-02-01
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of autoregressive-model feature extraction and of traditional principal component analysis in handling multichannel signals, this paper presents a multichannel feature extraction method that combines a multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. First, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality using MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is the extension of the traditional single-channel feature extraction method to the multichannel case. We carried out experiments using the data sets IV-III and IV-I. The experimental results showed that the proposed method is feasible.
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
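Independent of SRFYDO's Bayesian machinery, the series-system identity it builds on (the system works only if every component works, so, assuming independence, component reliabilities multiply) can be sketched directly. The Weibull aging parameters below are purely hypothetical:

```python
import numpy as np

def component_reliability(age, scale, shape):
    """Weibull survival function: probability the component still works at `age`."""
    return np.exp(-(age / scale) ** shape)

def series_system_reliability(age, params):
    """A series system works only if every component works, so
    (assuming independent failures) component reliabilities multiply."""
    return float(np.prod([component_reliability(age, s, k) for s, k in params]))

# Hypothetical components: (scale, shape) Weibull parameters in years
params = [(30.0, 1.2), (45.0, 0.9), (60.0, 1.5)]
print(series_system_reliability(0.0, params))   # 1.0 at age zero
print(series_system_reliability(10.0, params))
```

SRFYDO additionally treats the component parameters as uncertain (Bayesian posteriors), so its system estimate carries uncertainty bands rather than the point values shown here.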
Sensitivity analysis of key components in large-scale hydroeconomic models
NASA Astrophysics Data System (ADS)
Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.
2008-12-01
This paper explores the likely impact of different estimation methods in key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate and its effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.
Isotropic vs. anisotropic components of BAO data: a tool for model selection
NASA Astrophysics Data System (ADS)
Haridasu, Balakrishna S.; Luković, Vladimir V.; Vittorio, Nicola
2018-05-01
We conduct a selective analysis of the isotropic (DV) and anisotropic (AP) components of the most recent Baryon Acoustic Oscillations (BAO) data. We find that these components provide significantly different constraints and could provide strong diagnostics for model selection, especially in view of more precise data to come. For instance, in the ΛCDM model we find a mild tension of ~ 2 σ for the Ωm estimates obtained using DV and AP separately. Considering both Ωk and w as free parameters, we find that the concordance model is in tension with the best-fit values provided by the BAO data alone at 2.2σ. We complemented the BAO data with the Supernovae Ia (SNIa) and Observational Hubble datasets to perform a joint analysis on the ΛCDM model and its standard extensions. Assuming the ΛCDM scenario, we find that these data provide H0 = 69.4 ± 1.7 km/s Mpc‑1 as the best-fit value for the present expansion rate. In the kΛCDM scenario we find that the evidence for acceleration using the BAO data alone is more than ~ 5.8σ, which increases to 8.4 σ in our joint analysis.
NASA Astrophysics Data System (ADS)
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of crude oil prices, but traditional models often fail to predict them accurately. Motivated by this, a hybrid method is proposed in this paper which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA) model, called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. First, the VMD model is applied to the original signal (the crude oil price) to decompose it adaptively into mode functions. Second, independent components are separated by ICA, and their influence on the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
Viscoplastic analysis of an experimental cylindrical thrust chamber liner
NASA Technical Reports Server (NTRS)
Arya, Vinod K.; Arnold, Steven M.
1991-01-01
A viscoplastic stress-strain analysis of an experimental cylindrical thrust chamber is presented. A viscoplastic constitutive model incorporating a single internal state variable that represents kinematic hardening was employed to investigate whether such a model could predict the experimentally observed behavior of the thrust chamber. Two types of loading cycles were considered: a short cycle of 3.5 sec duration that corresponded to the experiments, and an extended loading cycle of 485.1 sec duration that is typical of the Space Shuttle Main Engine (SSME) operating cycle. The analysis qualitatively replicated the deformation behavior of the component as observed in experiments designed to simulate SSME operating conditions. The analysis also showed that the deformation mode and critical location in the component may depend on the loading cycle. The results indicate that using viscoplastic models for structural analysis can lead to a more realistic life assessment of thrust chambers.
OCSEGen: Open Components and Systems Environment Generator
NASA Technical Reports Server (NTRS)
Tkachuk, Oksana
2014-01-01
To analyze a large system, one often needs to break it into smaller components. To analyze a component or unit under analysis, one needs to model its context of execution, called the environment, which represents the components with which the unit interacts. Environment generation is a challenging problem because the environment needs to be general enough to uncover unit errors, yet precise enough to make the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current support and discuss possible future extensions.
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.
1984-01-01
A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.
Isolating the anthropogenic component of Arctic warming
Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...
2014-05-28
Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variability from the observed temperature. We also find that anthropogenic greenhouse gas and aerosol radiative forcing and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.
Stirling engine - Approach for long-term durability assessment
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Bartolotta, Paul A.; Halford, Gary R.; Freed, Alan D.
1992-01-01
The approach employed by NASA Lewis for the long-term durability assessment of the Stirling engine hot-section components is summarized. The approach consists of: preliminary structural assessment; development of a viscoplastic constitutive model to accurately determine material behavior under high-temperature thermomechanical loads; an experimental program to characterize material constants for the viscoplastic constitutive model; finite-element thermal analysis and structural analysis using a viscoplastic constitutive model to obtain stress/strain/temperature at the critical location of the hot-section components for life assessment; and development of a life prediction model applicable for long-term durability assessment at high temperatures. The approach should aid in the provision of long-term structural durability and reliability of Stirling engines.
Dynamics of Rotating Multi-component Turbomachinery Systems
NASA Technical Reports Server (NTRS)
Lawrence, Charles
1993-01-01
The ultimate objective of turbomachinery vibration analysis is to predict both the overall, as well as component dynamic response. To accomplish this objective requires complete engine structural models, including multistages of bladed disk assemblies, flexible rotor shafts and bearings, and engine support structures and casings. In the present approach each component is analyzed as a separate structure and boundary information is exchanged at the inter-component connections. The advantage of this tactic is that even though readily available detailed component models are utilized, accurate and comprehensive system response information may be obtained. Sample problems, which include a fixed base rotating blade and a blade on a flexible rotor, are presented.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches that rely on failure rates derived from similar equipment or simply expert judgment.
Systems Engineering Models and Tools | Wind | NREL
A systems engineering tool that provides wind turbine and plant engineering and cost models for holistic system analysis, integrating the turbine/component models and wind plant analysis models that the systems engineering team produces. It provides guidance for overall integrated modeling of wind turbines and plants.
NASA Astrophysics Data System (ADS)
Hristian, L.; Ostafe, M. M.; Manea, L. R.; Apostol, L. L.
2017-06-01
This work examined the classification of combed wool fabrics intended for the manufacture of outerwear in terms of durability and physiological comfort indices, using Principal Component Analysis (PCA). PCA, as applied in this study, is a descriptive method for multivariate analysis of multidimensional data that aims to reduce, in a controlled way, the number of variables (columns) of the data matrix to two or three. Thus, based on the information about each group/assortment of fabrics, the goal is to replace nine inter-correlated variables with only two or three new variables, called components. The aim of PCA is to extract the smallest number of components that recover most of the total information contained in the initial data.
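The reduction described, compressing nine inter-correlated indices into two or three components, is standard PCA and can be sketched with scikit-learn. The fabric data below are simulated stand-ins driven by two hidden factors, not the paper's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Simulated stand-in: 40 fabric assortments x 9 correlated quality indices,
# driven by two hidden factors plus a little measurement noise
latent = rng.standard_normal((40, 2))
X = latent @ rng.standard_normal((2, 9)) + 0.1 * rng.standard_normal((40, 9))

Xs = StandardScaler().fit_transform(X)   # PCA on standardized variables
pca = PCA().fit(Xs)
explained = np.cumsum(pca.explained_variance_ratio_)

# Keep the fewest components that recover, say, 90% of the total variance
k = int(np.searchsorted(explained, 0.90) + 1)
print(k)
```

For this two-factor toy data the answer is typically 2, matching the paper's goal of replacing nine variables with two or three components.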
Receiver function analysis applied to refraction survey data
NASA Astrophysics Data System (ADS)
Subaru, T.; Kyosuke, O.; Hitoshi, M.
2008-12-01
For estimating the thickness of oceanic crust or the petrophysical investigation of subsurface material, refraction or reflection seismic exploration is one of the most frequently practiced methods. These surveys use four-component seismometers (x, y, z components of acceleration, plus pressure), but typically only the compressional wave or the vertical component is used in the analysis. Hence, the shear wave or the horizontal components of the seismograms are needed for a more precise estimate of the thickness of the oceanic crust. A receiver function can be used to estimate the depth of velocity interfaces from incoming teleseismic waves, including shear waves. Receiver function analysis uses both the vertical and horizontal components of seismograms and deconvolves the horizontal with the vertical to estimate the spectral difference of P-to-S converted waves arriving after the direct P wave. Once the phase information of the receiver function is obtained, one can estimate the depth of the velocity interface. This analysis has the advantage of estimating the depth of velocity interfaces, including the Mohorovicic discontinuity, from two components of seismograms whenever P-to-S converted waves are generated at an interface. Our study presents results of a preliminary investigation using synthetic seismograms. First, we use three geological models, composed of a single sediment layer, a crust layer, and a sloped Moho, respectively, with underground sources. The receiver function estimates the depth and shape of the Moho interface precisely for all three models. Second, we applied the method to synthetic refraction survey data generated not by earthquakes but by artificial sources on the ground or sea surface. Compressional seismic waves propagate beneath the velocity interface and radiate converted shear waves there, as well as at the other deep underground layer interfaces.
However, receiver function analysis applied to the second model cannot clearly resolve the velocity interface behind the S-P converted wave or the multiply reflected waves in the sediment layer. One cause is that the incidence angles of the upcoming waves are too large compared to the underground-source model, due to the slanted interface. As a result, incident converted shear waves have non-negligible energy contaminating the vertical component of the seismometers. Therefore, the recorded refraction waves need to be transformed from the depth-lateral coordinate system into the radial-tangential system, after which the Ps converted waves can be observed clearly. Finally, we applied receiver function analysis to a more realistic model. This model has a sloping Mohorovicic discontinuity and surface source locations similar to the second model, plus a surface water layer, with receivers aligned on the sea bottom (the OBS, Ocean Bottom Seismometer, survey case). Owing to intricately bounced reflections, the simulated seismic section becomes more complex than in the other previously mentioned models. In spite of the complexity of the seismic records, we could pick out the refraction waves from the Moho interface after stacking more than 20 receiver functions independently produced from each shot gather. After this processing, receiver function analysis is justified as a method to estimate the depths of velocity interfaces and should be applicable to refraction wave analysis. Further study will be conducted for more realistic models that contain, for example, inhomogeneous sediment, and the method will finally be used in the inversion of the depths of velocity interfaces such as the Moho.
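The deconvolution at the heart of receiver function analysis can be sketched as water-level spectral division of the radial by the vertical component, low-passed with a Gaussian filter. The synthetic pulses and parameter choices below are illustrative, not those of the study:

```python
import numpy as np

def receiver_function(radial, vertical, water=0.01, dt=0.05, gauss=2.0):
    """Water-level frequency-domain deconvolution of the radial by the
    vertical component, smoothed with a Gaussian low-pass filter."""
    n = len(radial)
    R, V = np.fft.rfft(radial), np.fft.rfft(vertical)
    # Water level: floor the denominator to avoid dividing by near-zero spectra
    denom = np.maximum(np.abs(V) ** 2, water * np.max(np.abs(V) ** 2))
    f = np.fft.rfftfreq(n, dt)
    G = np.exp(-(2 * np.pi * f) ** 2 / (4 * gauss ** 2))
    return np.fft.irfft(R * np.conj(V) / denom * G, n)

# Synthetic test: the radial trace repeats the vertical pulse 1 s later,
# mimicking a P-to-S conversion at an interface
dt, n = 0.05, 512
t = np.arange(n) * dt
vertical = np.exp(-((t - 5.0) / 0.2) ** 2)        # direct P pulse
radial = 0.4 * np.exp(-((t - 6.0) / 0.2) ** 2)    # converted phase, 1 s later
rf = receiver_function(radial, vertical, dt=dt)
print(t[np.argmax(rf)])   # peak near 1.0 s, the P-to-S delay
```

The lag of the receiver-function peak times the conversion delay, which, given a velocity model, converts to interface depth.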
A finite element model of the human head for auditory bone conduction simulation.
Taschke, Henning; Hudde, Herbert
2006-01-01
In order to investigate the mechanisms of bone conduction, a finite element model of the human head was developed. The most important steps of the modelling process are described. The model was excited by means of percutaneously applied forces in order to get a deeper insight into the way the parts of the peripheral hearing organ and the surrounding tissue vibrate. The analysis is done based on the division of the bone conduction mechanisms into components. The frequency-dependent patterns of vibration of the components are analyzed. Furthermore, the model allows for the calculation of the contribution of each component to the overall bone-conducted sound. The components interact in a complicated way, which strongly depends on the nature of the excitation and the spatial region to which it is applied.
NASA Astrophysics Data System (ADS)
Davis, D. D., Jr.; Krishnamurthy, T.; Stroud, W. J.; McCleary, S. L.
1991-05-01
State-of-the-art nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which there is experimental data for comparison of results. From blueprints and drawings, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroh, K.R.
1980-01-01
The Composite HTGR Analysis Program (CHAP) consists of a model-independent systems analysis mainframe named LASAN and model-dependent linked code modules, each representing a component, subsystem, or phenomenon of an HTGR plant. The Fort St. Vrain (FSV) version (CHAP-2) includes 21 coded modules that model the neutron kinetics and thermal response of the core; the thermal-hydraulics of the reactor primary coolant system, secondary steam supply system, and balance-of-plant; the actions of the control system and plant protection system; the response of the reactor building; and the relative hazard resulting from fuel particle failure. FSV steady-state and transient plant data are being used to partially verify the component modeling and dynamic simulation techniques used to predict plant response to postulated accident sequences.
Strategic analysis for safeguards systems: a feasibility study. Volume 2. Appendix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldman, A J
1984-12-01
This appendix provides detailed information regarding game theory (strategic analysis) and its potential role in safeguards to supplement the main body of this report. In particular, it includes an extensive, though not comprehensive, review of literature on game theory and on other topics that relate to the formulation of a game-theoretic model (e.g., the payoff functions). The appendix describes the basic form and components of game theory models, and the solvability of various models. It then discusses three basic issues related to the use of strategic analysis in material accounting: (1) its understandability; (2) its viability in regulatory settings; and (3) difficulties in the use of mixed strategies. Each of the components of a game-theoretic model is then discussed and related to the present context.
Modeling Hydraulic Components for Automated FMEA of a Braking System
2014-12-23
Struss, Peter; Fraracci, Alessandro
Tech. Univ. of Munich, 85748 Garching, Germany; struss@in.tum.de
This paper presents work on model-based automation of failure-modes-and-effects analysis (FMEA) applied to the hydraulic part of a vehicle braking system. We describe the FMEA task and the application problem and outline the foundations for automating the analysis.
2015-01-01
Cell membrane chromatography (CMC) derived from pathological tissues is ideal for screening specific components acting on specific diseases from complex medicines owing to the maximum simulation of in vivo drug-receptor interactions. However, no pathological tissue-derived CMC model has ever been developed, nor has there been a visualized affinity comparison of potential active components between normal and pathological CMC columns. In this study, a novel comparative normal/failing rat myocardium CMC analysis system based on online column selection and comprehensive two-dimensional (2D) chromatography/monolithic column/time-of-flight mass spectrometry was developed for parallel comparison of the chromatographic behaviors on both normal and pathological CMC columns, as well as rapid screening of the specific therapeutic agents that counteract doxorubicin (DOX)-induced heart failure from Aconitum carmichaelii (Fuzi). In total, 16 potential active alkaloid components with similar structures in Fuzi were retained on both normal and failing myocardium CMC models. Most showed clear decreases in affinity on the failing myocardium CMC compared with the normal CMC model, except for four components: talatizamine (TALA), 14-acetyl-TALA, hetisine, and 14-benzoylneoline. One compound, TALA, with the highest affinity was isolated for further in vitro pharmacodynamic validation and target identification to validate the screening results. The voltage-dependent K+ channel was confirmed as a binding target of TALA and 14-acetyl-TALA, both with high affinities. The online high-throughput comparative CMC analysis method is suitable for screening specific active components from herbal medicines by increasing the specificity of screened results and can also be applied to other biological chromatography models. PMID:24731167
Kourgialas, Nektarios N; Dokou, Zoi; Karatzas, George P
2015-05-01
The purpose of this study was to create a modeling management tool for the simulation of extreme flow events under current and future climatic conditions. This tool is a combination of different components and can be applied in complex hydrogeological river basins, where frequent flood and drought phenomena occur. The first component is the statistical analysis of the available hydro-meteorological data. Specifically, principal components analysis was performed in order to quantify the importance of the hydro-meteorological parameters that affect the generation of extreme events. The second component is a prediction-forecasting artificial neural network (ANN) model that simulates, accurately and efficiently, river flow on an hourly basis. This model is based on a methodology that attempts to resolve a very difficult problem related to the accurate estimation of extreme flows. For this purpose, the available measurements (5 years of hourly data) were divided into two subsets: one for the dry and one for the wet periods of the hydrological year. This way, two ANNs were created, trained, tested and validated for a complex Mediterranean river basin in Crete, Greece. As part of the second management component, a statistical downscaling tool was used for the creation of meteorological data according to the higher- and lower-emission climate change scenarios A2 and B1. These data are used as input in the ANN for the forecasting of river flow for the next two decades. The final component is the application of a meteorological index on the measured and forecasted precipitation and flow data, in order to assess the severity and duration of extreme events.
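The dry/wet two-model strategy can be sketched with scikit-learn's `MLPRegressor`. The input features, wet-season definition, and network size below are invented for illustration, not the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Toy hourly records: features [precipitation, upstream level] -> river flow
X = rng.random((2000, 2))
y = 3.0 * X[:, 0] + X[:, 1] + 0.05 * rng.standard_normal(2000)
month = rng.integers(1, 13, size=2000)          # month of each record

# Assumed wet-season months for a Mediterranean basin (hypothetical choice)
wet = np.isin(month, (11, 12, 1, 2, 3, 4))

# Train one network per season on that season's records only
models = {}
for label, mask in (("wet", wet), ("dry", ~wet)):
    models[label] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                 random_state=0).fit(X[mask], y[mask])

# At prediction time, route each record to its season's network
sample = np.array([[0.5, 0.5]])
print(models["wet"].predict(sample), models["dry"].predict(sample))
```

Splitting by season lets each network specialize on its own flow regime, which is the rationale the abstract gives for training two ANNs instead of one.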
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed failure prediction for analog circuits, and the few existing methods extract and compute features without reference to circuit analysis, so fault indicator (FI) calculations often lack a rational basis, degrading prognostic performance. To solve this problem, this paper proposes a novel method for predicting the degradation of single components of analog circuits based on complex-field modeling. Because faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and builds a complex-field model. Then, using an established parameter-scanning model in the complex field, it analyzes the relationship between parameter variation and the degradation of single components in order to obtain a more reasonable FI feature set. From this feature set, it establishes a novel model of the degradation trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Because the FI feature set is calculated more reasonably, prediction accuracy is improved to some extent. These conclusions are verified by experiments.
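The parameter-updating step can be illustrated with a minimal bootstrap particle filter tracking a slowly drifting component parameter (for example, a resistance degrading over time). The degradation model, the noise levels, and all numbers below are invented for illustration and are not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal bootstrap particle filter tracking a drifting component parameter.
T, N = 50, 500                        # time steps, particles
true = 1.0 + 0.01 * np.arange(T)      # assumed slow linear degradation
obs = true + 0.05 * rng.standard_normal(T)

particles = rng.normal(1.0, 0.1, N)
weights = np.full(N, 1.0 / N)
estimates = []
for z in obs:
    particles += rng.normal(0.01, 0.02, N)                    # process (degradation) model
    weights *= np.exp(-0.5 * ((z - particles) / 0.05) ** 2)   # Gaussian likelihood
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))             # posterior mean estimate
    idx = rng.choice(N, N, p=weights)                         # multinomial resampling
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print(round(estimates[-1], 2))
```

Extrapolating the filtered degradation trend to a failure threshold is what turns this kind of tracking into a remaining-useful-performance prediction.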
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
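The core operation, blind recovery of independent non-Gaussian sources from a linear mixture, can be sketched with scikit-learn's FastICA. The square-wave and sawtooth sources below are toy signals, not the paper's filter outputs.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two independent, non-Gaussian sources mixed linearly (a toy stand-in for
# multi-scale, multi-orientation filter outputs).
s1 = np.sign(np.sin(3 * t))          # square wave
s2 = (t % 1.0) - 0.5                 # sawtooth
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.4, 1.0]])   # mixing matrix
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Each recovered component should correlate strongly with one true source
# (up to sign and permutation, which ICA cannot resolve).
corr = np.abs(np.corrcoef(S_hat.T, S.T)[:2, 2:])
print(corr.max(axis=1))
```

PCA applied to the same mixture would return orthogonal directions of maximal variance, which generally remain mixtures of both sources; ICA instead maximizes statistical independence of the outputs.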
Understanding software faults and their role in software reliability modeling
NASA Technical Reports Server (NTRS)
Munson, John C.
1994-01-01
This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied to model the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced into these models to control for differences between programs and also between sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality, because many of the metrics are highly correlated. Consider the two attributes lines of code, LOC, and number of program statements, Stmts. It is quite obvious that a program with a high value of LOC will probably also have a relatively high value of Stmts. In the case of low-level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for some statistical analyses, such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation.
The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis. There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. 
It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics, each of which represents a distinct software attribute domain.
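The mapping of correlated raw metrics into a small number of orthogonal domains can be sketched as follows. The synthetic LOC/Stmts data are an assumption made for illustration; the point is that principal component scores are uncorrelated by construction and the collinear pair collapses into essentially one dimension.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic "software metrics": Stmts is nearly a linear function of LOC,
# so the raw predictors are strongly collinear (illustrative data only).
loc = rng.integers(50, 2000, size=200).astype(float)
stmts = 0.8 * loc + rng.normal(0, 10, size=200)
cyclomatic = rng.normal(20, 5, size=200)
X = np.c_[loc, stmts, cyclomatic]

pca = PCA()
Z = pca.fit_transform((X - X.mean(0)) / X.std(0))   # scores on orthogonal dimensions

# Scores are uncorrelated, removing the multicollinearity problem ...
print(np.round(np.corrcoef(Z.T), 2))
# ... and most of the variation lives in far fewer dimensions than metrics.
print(np.round(pca.explained_variance_ratio_, 2))
```

Regressing quality measures on the uncorrelated scores, rather than on the raw metrics, gives coefficient estimates that are stable under small perturbations of the data.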
Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.
2003-04-01
A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. NLPCA allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. The method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. NLPCA is first applied to a data set sampled from the Lorenz (1963) attractor. It is found that the NLPCA approximations are more representative of the data than the corresponding PCA approximations. The same methodology was applied to the less well known Lorenz (1984) attractor. However, the results obtained were not as good as those attained with the famous 'butterfly' attractor. Further work with this model is under way to assess whether NLPCA techniques can be more representative of the data characteristics than the corresponding PCA approximations. The application of NLPCA to relatively 'simple' dynamical systems, such as those proposed by Lorenz, is well understood. However, the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the explained variance. Finally, directions for future work are presented.
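Kramer-style NLPCA is an autoassociative (bottleneck) network trained to reproduce its own input. A sketch using a generic MLP as the 5-layer network, on hypothetical data lying near a 1-D curve in 2-D, a situation where a single linear principal component reconstructs the data poorly:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical data near a 1-D non-linear manifold (a noisy semicircle).
theta = rng.uniform(0, np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.standard_normal((500, 2))

# Kramer's architecture: input - mapping - bottleneck - demapping - output.
ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation="tanh",
                  solver="lbfgs", max_iter=5000, random_state=0)
ae.fit(X, X)                                   # autoassociative: target == input
nlpca_err = np.mean((X - ae.predict(X)) ** 2)

# One-component linear PCA reconstruction, for comparison.
Xc = X - X.mean(0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pca_err = np.mean((Xc - Xc @ vt[:1].T @ vt[:1]) ** 2)

print(round(float(nlpca_err), 3), round(float(pca_err), 3))
```

On curved data like this, the single non-linear component can typically achieve a lower reconstruction error than the single linear component, which is the sense in which NLPCA approximations are "more representative" of the data.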
Respiratory protective device design using control system techniques
NASA Technical Reports Server (NTRS)
Burgess, W. A.; Yankovich, D.
1972-01-01
The feasibility of a control system analysis approach to provide a design base for respiratory protective devices is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notations describe the operation of the components as closely as possible. The individual component mathematical descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical notation by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.
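The final step, testing the combined component model for stable and predictable operation with control system theory, can be sketched for a hypothetical linear state-space model x' = Ax. The matrix below is invented for illustration and is not taken from the actual respiratory protective device model.

```python
import numpy as np

# Hypothetical combined two-component model in state-space form x' = A x,
# where the states might represent facepiece pressure and valve flow
# (values invented for illustration).
A = np.array([[-2.0, 1.0],
              [0.5, -1.5]])

# Control-theory stability test: the combined system operates in a stable,
# predictable manner iff every eigenvalue of A has a negative real part.
eigvals = np.linalg.eigvals(A)
print(np.all(eigvals.real < 0))
```

If some eigenvalue had a non-negative real part, the compensating component values mentioned above would be chosen precisely to move it into the left half-plane.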
New methodologies for multi-scale time-variant reliability analysis of complex lifeline networks
NASA Astrophysics Data System (ADS)
Kurtz, Nolan Scot
The cost of maintaining existing civil infrastructure is enormous. Since the livelihood of the public depends on such infrastructure, its state must be managed appropriately using quantitative approaches. Practitioners must consider not only which components are most fragile to hazards, e.g. seismicity, storm surge, hurricane winds, etc., but also how those components participate at the network level, using network analysis. Focusing on particularly damaged components does not necessarily increase network functionality, which is what matters most to the people who depend on such infrastructure. Several network analyses, e.g. S-RDA, LP-bounds, and crude-MCS, and performance metrics, e.g. disconnection bounds and component importance, are available for such purposes. Because these networks already exist, their state over time is also important. If networks are close to chloride sources, deterioration may be a major issue. Information from field inspections may also have large impacts on quantitative models. To address such issues, hazard risk analysis methodologies for deteriorating networks subjected to seismicity, i.e. earthquakes, have been developed analytically. A bridge component model has been constructed for these methodologies. The bridge fragilities, which were constructed from data, required a deeper level of analysis because they are relevant to specific structures. Furthermore, the network effects of chloride-induced deterioration were investigated. Depending on how mathematical models incorporate new information, many approaches are available, such as Bayesian model updating. To make such procedures more flexible, an adaptive importance sampling scheme was created for structural reliability problems. This method handles many kinds of system and component problems with single or multiple important regions of the limit state function. These and previously developed analysis methodologies were found to be strongly sensitive to network size.
Special network topologies may be more or less computationally difficult, and the resolution of the network also has large effects. To take advantage of some types of topologies, network hierarchical structures with super-link representation have been used in the literature to increase computational efficiency by analyzing smaller, densely connected networks; however, such structures were based on user input and at times subjective. To address this, the algorithms must be automated and reliable. These hierarchical structures may reflect the structure of the network itself. This risk analysis methodology has been expanded to larger networks using such automated hierarchical structures. Component importance is the most important output of such network analysis; however, it may only indicate which bridges to inspect or repair earliest and little else. High correlations influence such component importance measures in a negative manner, and a regional approach is not appropriately modelled. To take a more regional view, group importance measures based on hierarchical structures have been created. Such structures may also be used to create regional inspection and repair approaches. Using these analytical, quantitative risk approaches, the next generation of decision makers may make optimal decisions at both the component and regional levels, using information from both network function and the further effects of infrastructure deterioration.
Multi-Body Analysis of a Tiltrotor Configuration
NASA Technical Reports Server (NTRS)
Ghiringhelli, G. L.; Masarati, P.; Mantegazza, P.; Nixon, M. W.
1997-01-01
The paper describes the aeroelastic analysis of a tiltrotor configuration. The 1/5-scale wind tunnel semispan model of the V-22 tiltrotor aircraft is considered. The analysis is performed by means of a multi-body code based on an original formulation. The differential equilibrium problem is stated in terms of first-order differential equations. The equilibrium equations of every rigid body are written, together with the definitions of the momenta. The bodies are connected by kinematic constraints, applied in the form of Lagrange multipliers. Deformable components are mainly modelled by means of beam elements, based on an original finite volume formulation. Multi-disciplinary problems can be solved by adding user-defined differential equations; in the presented analysis, the equations related to the control of the swash-plate of the model are considered. Advantages of a multi-body aeroelastic code over existing comprehensive rotorcraft codes include the exact modelling of the kinematics of the hub, the detailed modelling of the flexibility of critical hub components, and the possibility of simulating steady flight conditions as well as wind-up and maneuvers. The simulations described in the paper include: 1) the analysis of aeroelastic stability, with particular regard to the proprotor/pylon instability that is peculiar to tiltrotors; 2) the determination of the dynamic behavior of the system and of the loads due to typical maneuvers, with particular regard to the conversion from helicopter to airplane mode; and 3) the stress evaluation in critical components, such as the pitch links and the conversion downstop spring.
Stereo-tomography in triangulated models
NASA Astrophysics Data System (ADS)
Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai
2018-04-01
Stereo-tomography is a distinctive tomographic method capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Unlike previous work, which incorporated various regularization techniques into the cost function of stereo-tomography, we consider extending stereo-tomography to a triangulated model to be the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of the stereo-tomographic data components with respect to the model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on the ray perturbation theory for interfaces. A sloth model representation is sparser than the conventional B-spline model representation. A sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving the linear equations, a faster convergence rate and a lower requirement on the quantity of data. Moreover, a quantitative representation of the interface strengthens the relationships among different model components, which makes cross regularizations among these model components, such as node coordinates, scatterer coordinates and scattering angles, more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in a triangulated model. This provides a solid theoretical foundation for real applications in the future.
Tan, Peng; Zhang, Hai-Zhu; Zhang, Ding-Kun; Wu, Shan-Na; Niu, Ming; Wang, Jia-Bo; Xiao, Xiao-He
2017-07-01
This study attempts to evaluate the quality of Chinese formula granules by the combined use of multi-component simultaneous quantitative analysis and bioassay. Rhubarb dispensing granules were used as the model drug for this demonstrative study. The ultra-high performance liquid chromatography (UPLC) method was adopted for the simultaneous quantitative determination of 10 anthraquinone derivatives (such as aloe emodin-8-O-β-D-glucoside) in rhubarb dispensing granules; the purgative biopotency of different batches of rhubarb dispensing granules was determined based on a compound diphenoxylate tablets-induced mouse constipation model; the blood-activating biopotency of different batches of rhubarb dispensing granules was determined based on an in vitro rat antiplatelet aggregation model; SPSS 22.0 statistical software was used for correlation analysis between the 10 anthraquinone derivatives and the purgative and blood-activating biopotencies. The results of the multi-component simultaneous quantitative analysis showed that there were great differences in chemical characterization and certain differences in purgative and blood-activating biopotency among the 10 batches of rhubarb dispensing granules. The correlation analysis showed that the intensity of purgative biopotency was significantly correlated with the content of conjugated anthraquinone glycosides (P<0.01), and the intensity of blood-activating biopotency was significantly correlated with the content of free anthraquinone (P<0.01). In summary, the combined use of multi-component simultaneous quantitative analysis and bioassay can achieve objective quantification and a more comprehensive reflection of the overall quality differences among different batches of rhubarb dispensing granules. Copyright© by the Chinese Pharmaceutical Association.
Probabilistic/Fracture-Mechanics Model For Service Life
NASA Technical Reports Server (NTRS)
Watkins, T., Jr.; Annis, C. G., Jr.
1991-01-01
Computer program makes probabilistic estimates of lifetime of engine and components thereof. Developed to fill need for more accurate life-assessment technique that avoids errors in estimated lives and provides for statistical assessment of levels of risk created by engineering decisions in designing system. Implements mathematical model combining techniques of statistics, fatigue, fracture mechanics, nondestructive analysis, life-cycle cost analysis, and management of engine parts. Used to investigate effects of such engine-component life-controlling parameters as return-to-service intervals, stresses, capabilities for nondestructive evaluation, and qualities of materials.
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, i.e., the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects. Worked examples are given for the estimation of variance and covariance components and for the prediction of genetic merits.
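The jack-knife idea suggested above for estimating sampling variances can be sketched on a toy estimator, the sample mean, for which the jack-knife variance reduces exactly to the familiar s²/n. The data are illustrative; in the paper the same recipe is applied to estimated variance components and predicted genetic effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Jack-knife estimate of the sampling variance of an estimator.
x = rng.normal(10.0, 2.0, 40)
n = len(x)

theta_loo = np.array([np.delete(x, i).mean() for i in range(n)])  # leave-one-out estimates
jack_var = (n - 1) / n * np.sum((theta_loo - theta_loo.mean()) ** 2)

# For the sample mean, the jack-knife variance equals the usual s^2 / n exactly.
print(np.isclose(jack_var, x.var(ddof=1) / n))
```

For a non-linear estimator such as a MINQUE variance component, no closed-form sampling variance is available, and the same leave-one-out recomputation provides the estimate.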
NASA Technical Reports Server (NTRS)
Gao, Shou-Ting; Ping, Fan; Li, Xiao-Fan; Tao, Wei-Kuo
2004-01-01
Although dry/moist potential vorticity is a useful physical quantity for meteorological analysis, it cannot be applied to the analysis of 2D simulations. A convective vorticity vector (CVV) is introduced in this study to analyze 2D cloud-resolving simulation data associated with 2D tropical convection. The cloud model is forced by the vertical velocity, zonal wind, horizontal advection, and sea surface temperature obtained from the TOGA COARE, and is integrated for a selected 10-day period. The CVV has zonal and vertical components in the 2D x-z frame. Analysis of zonally-averaged and mass-integrated quantities shows that the correlation coefficient between the vertical component of the CVV and the sum of the cloud hydrometeor mixing ratios is 0.81, whereas the correlation coefficient between the zonal component and the sum of the mixing ratios is only 0.18. This indicates that the vertical component of the CVV is closely associated with tropical convection. The tendency equation for the vertical component of the CVV is derived and the zonally-averaged and mass-integrated tendency budgets are analyzed. The tendency of the vertical component of the CVV is determined by the interaction between the vorticity and the zonal gradient of cloud heating. The results demonstrate that the vertical component of the CVV is a cloud-linked parameter and can be used to study tropical convection.
Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit
Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin; ...
2017-04-26
Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several, coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
NASA Astrophysics Data System (ADS)
Li, Hui; Hong, Lu-Yao; Zhou, Qing; Yu, Hai-Jie
2015-08-01
The business failure of numerous companies results in financial crises. The high social costs associated with such crises have led people to search for effective tools for business risk prediction, among which the support vector machine is very effective. Several modelling means, including single-technique modelling, hybrid modelling, and ensemble modelling, have been suggested for forecasting business risk with support vector machines. However, the existing literature seldom focuses on a general modelling frame for business risk prediction, and seldom investigates performance differences among different modelling means. We reviewed research on forecasting business risk with support vector machines, proposed the general assisted prediction modelling frame with hybridisation and ensemble (APMF-WHAE), and finally investigated the use of principal components analysis, support vector machines, random sampling, and group decision under the general frame in forecasting business risk. Under the APMF-WHAE frame with the support vector machine as the base predictive model, four specific predictive models were produced: a pure support vector machine, a hybrid support vector machine involving principal components analysis, a support vector machine ensemble involving random sampling and group decision, and an ensemble of hybrid support vector machines using group decision to integrate various hybrid support vector machines on variables produced from principal components analysis and samples from random sampling. The experimental results indicate that the hybrid support vector machine and the ensemble of hybrid support vector machines produced better performance than the pure support vector machine and the support vector machine ensemble.
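The contrast between a pure support vector machine and a hybrid (PCA followed by SVM) model can be sketched with scikit-learn pipelines. The data set, the number of retained components, and all other hyperparameters below are illustrative choices, not those of the study.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in binary classification data (the study used business-risk data).
X, y = load_breast_cancer(return_X_y=True)

# Pure SVM vs. hybrid PCA+SVM, in the spirit of the modelling means compared above.
pure_svm = make_pipeline(StandardScaler(), SVC())
hybrid = make_pipeline(StandardScaler(), PCA(n_components=5), SVC())

pure_score = cross_val_score(pure_svm, X, y, cv=5).mean()
hybrid_score = cross_val_score(hybrid, X, y, cv=5).mean()
print(round(pure_score, 3), round(hybrid_score, 3))
```

An ensemble of such hybrid models, each trained on a random sample and combined by majority vote, would complete the APMF-WHAE picture; the pipeline above covers only the hybridisation half.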
NASA Astrophysics Data System (ADS)
Perera, Indika U.; Narendran, Nadarajah; Terentyeva, Valeria
2018-04-01
This study investigated the thermal properties of three-dimensional (3-D) printed components with the potential to be used for thermal management in light-emitting diode (LED) applications. Commercially available filament materials with and without a metal filler were characterized with changes to the print orientation. 3-D printed components with an in-plane orientation had >30% better effective thermal conductivity compared with components printed with a cross-plane orientation. A finite-element model was developed to understand the effective thermal conductivity changes in the 3-D printed components. A simple thermal resistance model was used to estimate the effective thermal conductivity required for the 3-D printed components to be a viable alternative in LED thermal management applications.
Merritt, J S; Burvill, C R; Pandy, M G; Davies, H M S
2006-08-01
The mechanical environment of the distal limb is thought to be involved in the pathogenesis of many injuries, but has not yet been thoroughly described. The objectives were to determine the forces and moments experienced by the metacarpus in vivo during walking and to assess the effect of some simplifying assumptions used in the analysis. Strains from 8 gauges adhered to the left metacarpus of one horse were recorded in vivo during walking. Two different models, one based upon the mechanical theory of beams and shafts and the other based upon a finite element analysis (FEA), were used to determine the external loads applied at the ends of the bone. Five orthogonal force and moment components were resolved by the analysis. In addition, 2 orthogonal bending moments were calculated near mid-shaft. Axial force was found to be the major loading component and displayed a bi-modal pattern during the stance phase of the stride. The shaft model of the bone showed good agreement with the FEA model, despite making many simplifying assumptions. A 3-dimensional loading scenario was observed in the metacarpus, with axial force being the major component. These results provide an opportunity to validate mathematical (computer) models of the limb. The data may also assist in the formulation of hypotheses regarding the pathogenesis of injuries to the distal limb.
Mirnaghi, Fatemeh S; Soucy, Nicholas; Hollebone, Bruce P; Brown, Carl E
2018-05-19
The characterization of spilled petroleum products in an oil spill is necessary for identifying the spill source, selecting clean-up strategies, and evaluating potential environmental and ecological impacts. Existing standard methods for the chemical characterization of spilled oils are time-consuming due to the lengthy sample preparation required for analysis. The main objective of this study is the development of a rapid screening method for the fingerprinting of spilled petroleum products using excitation/emission matrix (EEM) fluorescence spectroscopy, thereby delivering a preliminary evaluation of the petroleum products within hours of a spill. In addition, the developed model can be used to monitor changes in the aromatic composition of known spilled oils over time. This study involves establishing a fingerprinting model based on the composition of polycyclic and heterocyclic aromatic hydrocarbons (PAHs and HAHs, respectively) of 130 petroleum products at different states of evaporative weathering. The screening model was developed using parallel factor analysis (PARAFAC) of a large EEM dataset. The significant fluorescing components for each sample class were determined, after which principal component analysis (PCA) was used to discriminate the different classes of petroleum products based on the variation in the scores of the modeled factors. The model was then validated using gas chromatography-mass spectrometry (GC-MS) analysis. Rapid fingerprinting and identification of unknown and new spilled oils occur through matching the spilled product with the products of the developed model. Finally, it was shown that HAH compounds in asphaltenes and resins contribute to the ≥4-ring PAH compounds in petroleum products. Copyright © 2018. Published by Elsevier Ltd.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis that is robust against component information loss. Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built from the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an analyte concentration determination experiment using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was used to evaluate the predictive power of the proposed method, and one-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method effectively increases the robustness of the traditional CLS model against component information loss, and that its predictive power is comparable to that of PLS or PCR.
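The augmentation step can be sketched with basic linear algebra. The following is a minimal, hypothetical numpy illustration on synthetic data, not the authors' RMSECV-driven signal selection: a surrogate column for the "lost" component is recovered from the plain-CLS residual and appended to the concentration matrix before refitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 3 true components, but only 2 have known concentrations.
n_samples, n_channels = 40, 60
K_true = rng.random((3, n_channels))          # pure-component "spectra" (invented)
C_true = rng.random((n_samples, 3))           # true concentrations (invented)
S = C_true @ K_true                           # measured spectra (noise-free)

C_known = C_true[:, :2]                       # third component is "lost"

# Plain CLS with incomplete concentration information.
K_cls = np.linalg.lstsq(C_known, S, rcond=None)[0]

# ACLS idea: augment the concentration matrix with a surrogate column that
# stands in for the unknown component during calibration.  Here the surrogate
# is the first left singular vector of what plain CLS could not explain.
residual = S - C_known @ K_cls
u, s_vals, vt = np.linalg.svd(residual, full_matrices=False)
C_aug = np.hstack([C_known, u[:, :1]])

K_acls = np.linalg.lstsq(C_aug, S, rcond=None)[0]

# Prediction for a new spectrum: solve for the augmented concentrations.
s_new = C_true[:1] @ K_true
c_pred = np.linalg.lstsq(K_acls.T, s_new.T, rcond=None)[0].T

err_cls = np.linalg.norm(S - C_known @ K_cls)
err_acls = np.linalg.norm(S - C_aug @ K_acls)
print(err_acls < err_cls)                     # augmentation explains more signal
```

With noise-free rank-3 data the augmented model reproduces the spectra exactly, and the first two entries of `c_pred` recover the known concentrations of the new sample.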
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clegg, Samuel M; Barefield, James E; Wiens, Roger C
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
Dynamic analysis for shuttle design verification
NASA Technical Reports Server (NTRS)
Fralich, R. W.; Green, C. E.; Rheinfurth, M. H.
1972-01-01
Two approaches used for determining the modes and frequencies of space shuttle structures are discussed. The first method, direct numerical analysis, involves finite element mathematical modeling of the space shuttle structure so that computer programs for dynamic structural analysis can be used. The second method utilizes modal-coupling techniques for experimental verification: only spacecraft components are vibrated, and the modes and frequencies of the complete vehicle are deduced from the results obtained in the component tests.
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
NASA Astrophysics Data System (ADS)
Valsala, Renu; Govindarajan, Suresh Kumar
2018-06-01
Interaction of various physical, chemical and biological transport processes plays an important role in deciding the fate and migration of contaminants in groundwater systems. In this study, a numerical investigation on the interaction of various transport processes of BTEX in a saturated groundwater system is carried out. In addition, the multi-component dissolution from a residual BTEX source under unsteady flow conditions is incorporated in the modeling framework. The model considers Benzene, Toluene, Ethyl Benzene and Xylene dissolving from the residual BTEX source zone to undergo sorption and aerobic biodegradation within the groundwater aquifer. Spatial concentration profiles of dissolved BTEX components under the interaction of various sorption and biodegradation conditions have been studied. Subsequently, a spatial moment analysis is carried out to analyze the effect of interaction of various transport processes on the total dissolved mass and the mobility of dissolved BTEX components. Results from the present numerical study suggest that the interaction of dissolution, sorption and biodegradation significantly influence the spatial distribution of dissolved BTEX components within the saturated groundwater system. Mobility of dissolved BTEX components is also found to be affected by the interaction of these transport processes.
Toward improved durability in advanced aircraft engine hot sections
NASA Technical Reports Server (NTRS)
Sokolowski, Daniel E. (Editor)
1989-01-01
The conference on durability improvement methods for advanced aircraft gas turbine hot-section components discussed NASA's Hot Section Technology (HOST) project, advanced high-temperature instrumentation for hot-section research, the development and application of combustor aerothermal models, and the evaluation of a data base and numerical model for turbine heat transfer. Also discussed are structural analysis methods for gas turbine hot section components, fatigue life-prediction modeling for turbine hot section materials, and the service life modeling of thermal barrier coatings for aircraft gas turbine engines.
Model reconstruction using POD method for gray-box fault detection
NASA Technical Reports Server (NTRS)
Park, H. G.; Zak, M.
2003-01-01
This paper describes using Proper Orthogonal Decomposition (POD) method to create low-order dynamical models for the Model Filter component of Beacon-based Exception Analysis for Multi-missions (BEAM).
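POD reduction of this kind amounts to a truncated singular value decomposition of a snapshot matrix. Below is a minimal numpy sketch, with a synthetic two-mode field standing in for the BEAM Model Filter states; all data are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Snapshot matrix of a hypothetical system state over time (rows = time steps).
t = np.linspace(0, 2 * np.pi, 200)
x = np.linspace(0, 1, 80)
snapshots = (np.outer(np.sin(t), np.sin(np.pi * x))
             + 0.3 * np.outer(np.cos(3 * t), np.sin(2 * np.pi * x))
             + 0.001 * rng.normal(size=(200, 80)))          # small measurement noise

# POD = SVD of the snapshot matrix; keep only the dominant modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2
reduced = U[:, :r] * s[:r]                 # low-order temporal coefficients
recon = reduced @ Vt[:r]                   # reconstruction from r modes

rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(rel_err < 0.01)
```

Two modes suffice here because the synthetic field is built from two coherent structures; in practice the truncation rank is chosen from the singular value decay.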
Simulation of a multispectral multisource device for the analysis of consumer and medical products
NASA Astrophysics Data System (ADS)
Korolev, Timofey K.; Peretyagin, Vladimir S.
2017-06-01
One result of the intensive development of LED technology has been the creation of multi-component, controllable illumination/irradiation devices used in various fields of production (e.g., food industry analysis and food quality control). The use of LEDs has become possible because their structure determines their spatial, energy, electrical, thermal and other characteristics. However, the development of illumination/irradiation devices requires closer attention when precise illumination must be delivered to an analysis area located at a specified distance from the radiation source. The present work is devoted to the development and modelling of a specialized radiation source intended for the analysis of food products, medicines, and the suitability of water for drinking. We provide a mathematical model of the spatial and spectral distribution of irradiation from an infrared radiation source with a ring structure. Creating this kind of source requires addressing factors such as the spectral composition, the power settings, and the spatial and energy characteristics of the diodes.
Intermediate Fidelity Closed Brayton Cycle Power Conversion Model
NASA Technical Reports Server (NTRS)
Lavelle, Thomas M.; Khandelwal, Suresh; Owen, Albert K.
2006-01-01
This paper describes the implementation of an intermediate fidelity model of a closed Brayton cycle power conversion system (Closed Cycle System Simulation). The simulation is developed within the Numerical Propulsion System Simulation architecture using component elements from earlier models. Of particular interest is the ability of this new simulation to initiate a more detailed analysis of the compressor and turbine components automatically and to incorporate the overall results into the general system simulation.
Distress modeling for DARWin-ME : final report.
DOT National Transportation Integrated Search
2013-12-01
Distress prediction models, or transfer functions, are key components of the Pavement M-E Design and relevant analysis. The accuracy of such models depends on a successful process of calibration and subsequent validation of model coefficients in the ...
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity into regression models. One approach to this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PCs) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
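The tutorial itself works in R; purely as a language-neutral illustration of the idea, under the assumption of two nearly identical predictors, a principal component regression can be sketched in numpy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two highly correlated predictors plus noise: classic multicollinearity.
n = 200
z = rng.normal(size=n)
x1 = z + 0.01 * rng.normal(size=n)
x2 = z + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = 3.0 * z + 0.1 * rng.normal(size=n)

# PCA via SVD of the centered design matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)            # variance share of each PC

# Keep only PC1 (it carries nearly all the variance) and regress on it.
scores = Xc @ Vt[0]                        # PC1 scores: an uncorrelated basis
beta = np.sum(scores * (y - y.mean())) / np.sum(scores**2)
y_hat = y.mean() + beta * scores

r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(explained[0] > 0.99, r2 > 0.99)
```

Regressing on the single PC sidesteps the near-singular normal equations that a two-predictor ordinary least squares fit would face here.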
Park, Gi-Pyo
2014-08-01
This study examined the latent constructs of the Foreign Language Classroom Anxiety Scale (FLCAS) using two different groups of Korean English as a foreign language (EFL) university students. Maximum likelihood exploratory factor analysis with direct oblimin rotation was performed among the first group of 217 participants and produced two meaningful latent components in the FLCAS. The two components of the FLCAS were closely examined among the second group of 244 participants to find the extent to which the two components of the FLCAS fit the data. The model fit indexes showed that the two-factor model in general adequately fit the data. Findings of this study were discussed with the focus on the two components of the FLCAS, followed by future study areas to be undertaken to shed further light on the role of foreign language anxiety in L2 acquisition.
Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.
1997-01-01
The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.
Unsupervised Neural Network Quantifies the Cost of Visual Information Processing.
Orbán, Levente L; Chartier, Sylvain
2015-01-01
Untrained, "flower-naïve" bumblebees display behavioural preferences when presented with visual properties such as colour, symmetry, spatial frequency and others. Two unsupervised neural networks were implemented to understand the extent to which these models capture elements of bumblebees' unlearned visual preferences towards flower-like visual properties. The computational models, which are variants of Independent Component Analysis and Feature-Extracting Bidirectional Associative Memory, use images of test-patterns that are identical to ones used in behavioural studies. Each model works by decomposing images of floral patterns into meaningful underlying factors. We reconstruct the original floral image using the components and compare the quality of the reconstructed image to the original image. Independent Component Analysis matches behavioural results substantially better across several visual properties. These results are interpreted to support a hypothesis that the temporal and energetic costs of information processing by pollinators served as a selective pressure on floral displays: flowers adapted to pollinators' cognitive constraints.
Quantitative interpretations of Visible-NIR reflectance spectra of blood.
Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H
2008-10-27
This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. The model is based on photon diffusion theory and Mie scattering theory, formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of a thorough solution of the scattering and absorption problem in order to accurately resolve the optically relevant parameters of blood culture components. With the advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, wavelength-dependent refractive indices of the basic chemical constituents of the blood culture components are needed. Second, multi-wavelength measurements are required, or at least measurements at a number of characteristic wavelengths equal to the degrees of freedom (i.e., the number of optically relevant parameters) of the blood culture system. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples characterized by an extensive range of the relevant parameters.
Development of an Aeroelastic Modeling Capability for Transient Nozzle Side Load Analysis
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Zhao, Xiang; Zhang, Sijun; Chen, Yen-Sen
2013-01-01
Lateral nozzle forces are known to cause severe structural damage to any new rocket engine in development during test. While three-dimensional, transient, turbulent, chemically reacting computational fluid dynamics methodology has been demonstrated to capture major side load physics with rigid nozzles, hot-fire tests often show nozzle structure deformation during major side load events, leading to structural damages if structural strengthening measures were not taken. The modeling picture is incomplete without the capability to address the two-way responses between the structure and fluid. The objective of this study is to develop a coupled aeroelastic modeling capability by implementing the necessary structural dynamics component into an anchored computational fluid dynamics methodology. The computational fluid dynamics component is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, while the computational structural dynamics component is developed in the framework of modal analysis. Transient aeroelastic nozzle startup analyses of the Block I Space Shuttle Main Engine at sea level were performed. The computed results from the aeroelastic nozzle modeling are presented.
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem of the calibration of hydrological models is the equifinality of different parameter sets derived from the calibration of models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation of the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation, and tackling equifinality. We developed a model-dependent approach (MDA), calibrating the model runoff components against the FSD components, and a model-independent approach (MIA), comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest for MDA and shows only minor reductions for MIA.
Besides further validation of FSD, the next steps include an extension of the study to different catchments and other hydrological models with a similar structure.
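The component-conditioned calibration idea can be illustrated with a deliberately tiny stand-in: a two-parameter toy runoff model in place of SWIM, and known synthetic flow components in place of the FSD output. This is only a sketch of the GLUE conditioning logic, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-component runoff model: total flow = a*rain (fast) + b*recession (slow).
n = 100
rain = rng.random(n)
recession = np.exp(-np.arange(n) / 50.0)

def simulate(a, b):
    return a * rain, b * recession          # (fast component, slow component)

a_true, b_true = 0.6, 0.3
obs_fast, obs_slow = simulate(a_true, b_true)
obs_total = obs_fast + obs_slow

def nse(sim, obs):
    # Nash-Sutcliffe efficiency, a common GLUE likelihood measure
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.random((5000, 2))
b_total, b_comp = [], []
for a, b in samples:
    fast, slow = simulate(a, b)
    if nse(fast + slow, obs_total) > 0.9:   # behavioural on total runoff only
        b_total.append(b)
        if nse(slow, obs_slow) > 0.9:       # also behavioural on the slow component
            b_comp.append(b)

# Conditioning on the disaggregated component narrows the accepted range of b.
print(np.ptp(b_total) > np.ptp(b_comp))
```

Parameter sets that reproduce total runoff by trading the fast component against the slow one survive the first test but fail the component test, which is the mechanism behind the range reduction reported in the abstract.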
Coupled structural/thermal/electromagnetic analysis/tailoring of graded composite structures
NASA Technical Reports Server (NTRS)
Hartle, M. S.; Mcknight, R. L.; Huang, H.; Holt, R.
1992-01-01
Described here are the accomplishments of a 5-year program to develop a methodology for coupled structural/thermal/electromagnetic analysis/tailoring of graded component structures. The capabilities developed over the course of the program are the analyzer module and the tailoring module for the modeling of graded materials. Highlighted accomplishments of the past year include the addition of a buckling analysis capability, the addition of mode-shape slope calculation for flutter analysis, verification of the analysis modules using simulated components, and verification of the tailoring module.
ERIC Educational Resources Information Center
Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad
2017-01-01
This study aims to apply a principal component analysis model to reduce multicollinearity among the exchange rates against the US Dollar of the currencies of eight Asian countries: the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia), and Dollar (Singapore). It looks at yield…
A distributed finite-element modeling and control approach for large flexible structures
NASA Technical Reports Server (NTRS)
Young, K. D.
1989-01-01
An unconventional framework is described for the design of decentralized controllers for large flexible structures. In contrast to conventional control system design practice which begins with a model of the open loop plant, the controlled plant is assembled from controlled components in which the modeling phase and the control design phase are integrated at the component level. The developed framework is called controlled component synthesis (CCS) to reflect that it is motivated by the well developed Component Mode Synthesis (CMS) methods which were demonstrated to be effective for solving large complex structural analysis problems for almost three decades. The design philosophy behind CCS is also closely related to that of the subsystem decomposition approach in decentralized control.
NASA Technical Reports Server (NTRS)
Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.
1990-01-01
An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has improved engineering productivity during ECLSS design activities. A component verification program was performed to assure component modeling validity, based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program, allowing the operator to analyze on-screen data trends or obtain hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing, have been incorporated to enhance the engineer's productivity during a modeling program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huff, Kathryn D.
Component level and system level abstraction of detailed computational geologic repository models have resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geologic repository concepts. A proof-of-principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogeneous, generic geology. This base case demonstrates integration of the Cyder open source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)
Overview of NASA GRC Electrified Aircraft Propulsion Systems Analysis Methods
NASA Technical Reports Server (NTRS)
Schnulo, Sydney
2017-01-01
The accurate modeling and analysis of electrified aircraft propulsion concepts requires intricate coupling of subsystem and component models. The major challenge in electrified aircraft propulsion concept modeling lies in understanding how the subsystems "talk" to each other and the dependencies they have on one another.
Fernee, Christianne; Browne, Martin; Zakrzewski, Sonia
2017-01-01
This paper introduces statistical shape modelling (SSM) for use in osteoarchaeology research. SSM is a full field, multi-material analytical technique, and is presented as a supplementary geometric morphometric (GM) tool. Lower mandibular canines from two archaeological populations and one modern population were sampled, digitised using micro-CT, aligned, registered to a baseline and statistically modelled using principal component analysis (PCA). Sample material properties were incorporated as a binary enamel/dentin parameter. Results were assessed qualitatively and quantitatively using anatomical landmarks. Finally, the technique’s application was demonstrated for inter-sample comparison through analysis of the principal component (PC) weights. It was found that SSM could provide high detail qualitative and quantitative insight with respect to archaeological inter- and intra-sample variability. This technique has value for archaeological, biomechanical and forensic applications including identification, finite element analysis (FEA) and reconstruction from partial datasets. PMID:29216199
Study on fast discrimination of varieties of yogurt using Vis/NIR-spectroscopy
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Deng, Xunfei; Li, Xiaoli
2006-09-01
A new approach for discriminating varieties of yogurt by means of Vis/NIR spectroscopy is presented in this paper. First, principal component analysis (PCA) of the spectroscopy curves of 5 typical kinds of yogurt was used to cluster the yogurt varieties. The analysis showed that the cumulative reliability of PC1 and PC2 (the first two principal components) was more than 98.956%, and the cumulative reliability of PC1 through PC7 (the first seven principal components) was 99.97%. Second, an artificial neural network (ANN-BP) discrimination model was set up. The first seven principal components of the samples were used as ANN-BP inputs and the yogurt variety labels as outputs to build a three-layer ANN-BP model. Each yogurt variety comprised 27 samples (135 samples in total), and 25 samples were reserved as the prediction set. The results showed that the discrimination rate for the five yogurt varieties was 100%, indicating that the model is reliable and practicable. A new approach for the rapid and nondestructive discrimination of yogurt varieties is thus put forward.
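As a rough illustration of the pipeline (PCA compression of spectra followed by classification on the first few component scores), here is a numpy sketch on synthetic spectra. A nearest-centroid rule stands in for the paper's three-layer ANN-BP, and all class shapes and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Five synthetic "yogurt" classes, each a distinct spectral shape plus noise.
wavelengths = np.linspace(400, 1000, 120)
n_per_class = 27
X, labels = [], []
for k in range(5):
    centre = 500 + 90 * k
    shape = np.exp(-((wavelengths - centre) / 60.0) ** 2)
    X.append(shape + 0.02 * rng.normal(size=(n_per_class, wavelengths.size)))
    labels += [k] * n_per_class
X = np.vstack(X)
labels = np.array(labels)

# PCA: the first few components carry almost all of the spectral variance.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
cum = np.cumsum(s**2) / np.sum(s**2)       # cumulative explained variance
scores = Xc @ Vt[:7].T                     # first 7 PC scores as features

# Nearest-centroid on PC scores stands in for the paper's ANN-BP classifier.
centroids = np.array([scores[labels == k].mean(axis=0) for k in range(5)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == labels).mean()
print(cum[6] > 0.99, acc > 0.95)
```

The point mirrored from the abstract is the compression ratio: 120 spectral channels reduce to 7 scores with almost no loss of class-discriminating variance.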
Bearing-Load Modeling and Analysis Study for Mechanically Connected Structures
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.
2006-01-01
Bearing-load response for a pin-loaded hole is studied within the context of two-dimensional finite element analyses. Pin-loaded-hole configurations are representative of mechanically connected structures, such as a stiffener fastened to a rib of an isogrid panel, that are idealized as part of a larger structural component. Within this context, the larger structural component may be idealized as a two-dimensional shell finite element model to identify load paths and high stress regions. Finite element modeling and analysis aspects of a pin-loaded hole are considered in the present paper including the use of linear and nonlinear springs to simulate the pin-bearing contact condition. Simulating pin-connected structures within a two-dimensional finite element analysis model using nonlinear spring or gap elements provides an effective way for accurate prediction of the local effective stress state and peak forces.
Han, Lide; Yang, Jian; Zhu, Jun
2007-06-01
A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.
Planning Models for Tuberculosis Control Programs
Chorba, Ronald W.; Sanders, J. L.
1971-01-01
A discrete-state, discrete-time simulation model of tuberculosis is presented, with submodels of preventive interventions. The model allows prediction of the prevalence of the disease over the simulation period. Preventive and control programs and their optimal budgets may be planned by using the model for cost-benefit analysis: costs are assigned to the program components and disease outcomes to determine the ratio of program expenditures to future savings on medical and socioeconomic costs of tuberculosis. Optimization is achieved by allocating funds in successive increments to alternative program components in simulation and identifying those components that lead to the greatest reduction in prevalence for the given level of expenditure. The method is applied to four hypothetical disease prevalence situations. PMID:4999448
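The incremental allocation procedure described above can be sketched as a greedy marginal-benefit loop. The program names, response curves, and budget below are entirely hypothetical stand-ins, not the paper's epidemiological submodels:

```python
import numpy as np

# Hypothetical prevalence-reduction curves per program component as a function
# of allocated funds, with diminishing returns (illustrative values only).
def reduction(component, funds):
    ceiling, rate = {"screening": (40.0, 0.08),
                     "bcg": (25.0, 0.05),
                     "treatment": (60.0, 0.03)}[component]
    return ceiling * (1.0 - np.exp(-rate * funds))

budget, step = 100, 5
alloc = {c: 0 for c in ("screening", "bcg", "treatment")}
for _ in range(budget // step):
    # Give the next funding increment to the component with the largest
    # marginal reduction in simulated prevalence.
    best = max(alloc,
               key=lambda c: reduction(c, alloc[c] + step) - reduction(c, alloc[c]))
    alloc[best] += step

total = sum(reduction(c, f) for c, f in alloc.items())
print(sum(alloc.values()) == budget, total > 0)
```

With concave response curves this greedy increment-by-increment rule matches the paper's idea of allocating successive funding increments to whichever component yields the greatest prevalence reduction at the current expenditure level.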
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
Sources of hydrocarbons in urban road dust: Identification, quantification and prediction.
Mummullage, Sandya; Egodawatta, Prasanna; Ayoko, Godwin A; Goonetilleke, Ashantha
2016-09-01
Among urban stormwater pollutants, hydrocarbons are a significant environmental concern due to their toxicity and relatively stable chemical structure. This study focused on the identification of hydrocarbon contributing sources to urban road dust and approaches for the quantification of pollutant loads to enhance the design of source control measures. The study confirmed the validity of the use of mathematical techniques of principal component analysis (PCA) and hierarchical cluster analysis (HCA) for source identification and principal component analysis/absolute principal component scores (PCA/APCS) receptor model for pollutant load quantification. Study outcomes identified non-combusted lubrication oils, non-combusted diesel fuels and tyre and asphalt wear as the three most critical urban hydrocarbon sources. The site specific variabilities of contributions from sources were replicated using three mathematical models. The models employed predictor variables of daily traffic volume (DTV), road surface texture depth (TD), slope of the road section (SLP), effective population (EPOP) and effective impervious fraction (EIF), which can be considered as the five governing parameters of pollutant generation, deposition and redistribution. Models were developed such that they can be applicable in determining hydrocarbon contributions from urban sites enabling effective design of source control measures. Copyright © 2016 Elsevier Ltd. All rights reserved.
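The PCA/APCS receptor-model mechanics referred to above can be sketched on synthetic data: standardize the species matrix, extract principal components, convert scores to absolute scores by subtracting the score of an artificial true-zero sample, then regress concentrations on the APCS to apportion source contributions. The source profiles below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic road-dust data: two hypothetical sources with fixed
# hydrocarbon profiles and random contributions per sample.
profiles = np.array([[5.0, 1.0, 0.2],    # e.g. lubrication-oil-like
                     [0.5, 2.0, 4.0]])   # e.g. tyre/asphalt-wear-like
contrib = rng.uniform(1, 10, size=(200, 2))
X = contrib @ profiles + rng.normal(0, 0.1, size=(200, 3))

# 1. Standardize and extract principal components.
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)

# 2. Absolute principal component scores: subtract the score of an
#    artificial sample with zero concentration for every species.
z0 = (np.zeros(3) - mu) / sd
apcs = scores - pca.transform(z0.reshape(1, -1))

# 3. Regress each pollutant on the APCS; coefficient * APCS gives the
#    per-source contribution, the intercept the unexplained part.
reg = LinearRegression().fit(apcs, X)
r2 = reg.score(apcs, X)
print(f"variance explained by source model: R^2 = {r2:.3f}")
```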
Zhu, Long-Ji; Zhao, Yue; Chen, Yan-Ni; Cui, Hong-Yang; Wei, Yu-Quan; Liu, Hai-Long; Chen, Xiao-Meng; Wei, Zi-Min
2018-01-01
Atrazine is widely used in agriculture. In this study, dissolved organic matter (DOM) from soils under four types of land use (forest (F), meadow (M), cropland (C) and wetland (W)) was used to investigate the binding characteristics of atrazine. Fluorescence excitation-emission matrix-parallel factor (EEM-PARAFAC) analysis, two-dimensional correlation spectroscopy (2D-COS) and Stern-Volmer model were combined to explore the complexation between DOM and atrazine. The EEM-PARAFAC indicated that DOM from different sources had different structures, and humic-like components had more obvious quenching effects than protein-like components. The Stern-Volmer model combined with correlation analysis showed that log K values of PARAFAC components had a significant correlation with the humification of DOM, especially for C3 component, and they were all in the same order as follows: meadow soil (5.68)>wetland soil (5.44)>cropland soil (5.35)>forest soil (5.04). The 2D-COS further confirmed that humic-like components firstly combined with atrazine followed by protein-like components. These findings suggest that DOM components can significantly influence the bioavailability, mobility and migration of atrazine in different land uses. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.
2016-04-01
We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. 
We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.
NASA Astrophysics Data System (ADS)
Fatichi, S.; Burlando, P.; Anagnostopoulos, G.
2014-12-01
Sub-surface hydrology has a dominant role in the initiation of rainfall-induced landslides, since changes in the soil water potential affect soil shear strength and thus apparent cohesion. Especially on steep slopes and shallow soils, loss of shear strength can lead to failure even in unsaturated conditions. A process-based model, HYDROlisthisis, characterized by high resolution in space and time, is developed to investigate the interactions between surface and subsurface hydrology and shallow landslide initiation. Specifically, 3D variably saturated flow conditions, including soil hydraulic hysteresis and preferential flow, are simulated for the subsurface flow, coupled with a surface runoff routine. Evapotranspiration and specific root water uptake are taken into account for continuous simulations of soil water content during storm and inter-storm periods. The geotechnical component of the model is based on a multidimensional limit equilibrium analysis, which takes into account the basic principles of unsaturated soil mechanics. The model is applied to a small catchment in Switzerland historically prone to rainfall-triggered landslides. A series of numerical simulations were carried out with various boundary conditions (soil depths) and using hydrological and geotechnical components of different complexity. Specifically, the sensitivity to the inclusion of preferential flow and soil hydraulic hysteresis was tested together with the replacement of the infinite slope assumption with a multi-dimensional limit equilibrium analysis. The effect of the different model components on model performance was assessed using accuracy statistics and Receiver Operating Characteristic (ROC) curves. 

The results show that boundary conditions play a crucial role in the model performance and that the introduced hydrological (preferential flow and soil hydraulic hysteresis) and geotechnical components (multidimensional limit equilibrium analysis) considerably improve predictive capabilities in the presented case study.
Fitzgerald, Michael G.; Karlinger, Michael R.
1983-01-01
Time-series models were constructed for analysis of daily runoff and sediment discharge data from selected rivers of the Eastern United States. Logarithmic transformation and first-order differencing of the data sets were necessary to produce second-order, stationary time series and remove seasonal trends. Cyclic models accounted for less than 42 percent of the variance in the water series and 31 percent in the sediment series. Analysis of the apparent oscillations of given frequencies occurring in the data indicates that frequently occurring storms can account for as much as 50 percent of the variation in sediment discharge. Components of the frequency analysis indicate that a linear representation is reasonable for the water-sediment system. Models that incorporate lagged water discharge as input prove superior to univariate techniques in modeling and prediction of sediment discharges. The random component of the models includes errors in measurement and model hypothesis and indicates no serial correlation. An index of sediment production within or between drainage basins can be calculated from model parameters.
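The modeling recipe in this abstract, logarithmic transformation, first-order differencing, and lagged water discharge as input to the sediment series, can be sketched on synthetic data (the coefficients and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Synthetic daily series: log water discharge as an AR(1) process, and
# log sediment discharge responding linearly to same-day water discharge.
lw = np.zeros(n)
for t in range(1, n):
    lw[t] = 0.9 * lw[t - 1] + rng.normal(0, 0.3)
ls = 2.0 + 1.5 * lw + rng.normal(0, 0.1, n)

# Log transform is built in above; first-difference to remove trend.
dlw, dls = np.diff(lw), np.diff(ls)

# Fit sediment differences on concurrent and lag-1 water differences.
Xmat = np.column_stack([dlw[1:], dlw[:-1]])
beta, *_ = np.linalg.lstsq(Xmat, dls[1:], rcond=None)
print("concurrent, lag-1 coefficients:", np.round(beta, 2))
```

On this synthetic series the regression recovers the concurrent response and finds essentially no lag-1 effect; with real rating-curve data the lagged terms carry the hysteresis the abstract alludes to.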
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
Application of Steinberg vibration fatigue model for structural verification of space instruments
NASA Astrophysics Data System (ADS)
García, Andrés; Sorribes-Palmer, Félix; Alonso, Gustavo
2018-01-01
Electronic components in spaceships are subjected to vibration loads during the ascent phase of the launcher. It is important to verify by tests and analysis that all parts can survive in the most severe load cases. The purpose of this paper is to present the methodology and results of the application of the Steinberg's fatigue model to estimate the life of electronic components of the EPT-HET instrument for the Solar Orbiter space mission. A Nastran finite element model (FEM) of the EPT-HET instrument was created and used for the structural analysis. The methodology is based on the use of the FEM of the entire instrument to calculate the relative displacement RDSD and RMS values of the PCBs from random vibration analysis. These values are used to estimate the fatigue life of the most susceptible electronic components with the Steinberg's fatigue damage equation and the Miner's cumulative fatigue index. The estimations are calculated for two different configurations of the instrument and three different inputs in order to support the redesign process. Finally, these analytical results are contrasted with the inspections and the functional tests made after the vibration tests, concluding that this methodology can adequately predict the fatigue damage or survival of the electronic components.
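Steinberg's approach combines a Gaussian three-band approximation of the random vibration response with Miner's cumulative damage rule. A minimal sketch, with illustrative numbers rather than EPT-HET values (the S-N anchor point, exponent, and board parameters are assumptions):

```python
# Minimal sketch of Steinberg's three-band method with Miner's rule.
# All numbers are illustrative, not from the EPT-HET analysis.

def miner_damage(z_rms, duration_s, fn_hz, z_ref, n_ref, b=6.4):
    """Cumulative fatigue damage index for a board under random vibration.

    z_rms : RMS relative displacement of the PCB (same units as z_ref)
    fn_hz : natural frequency, taken as the cycle rate of the response
    z_ref : displacement amplitude giving failure at n_ref cycles (S-N anchor)
    b     : fatigue exponent (Steinberg suggests roughly 6.4 for electronics)
    """
    total_cycles = fn_hz * duration_s
    damage = 0.0
    # Gaussian response: fraction of cycles near 1, 2 and 3 sigma.
    for k, fraction in ((1, 0.683), (2, 0.271), (3, 0.0433)):
        n_k = fraction * total_cycles
        # Basquin-type S-N curve: N(z) = n_ref * (z_ref / z)**b
        cap = n_ref * (z_ref / (k * z_rms)) ** b
        damage += n_k / cap
    return damage

d = miner_damage(z_rms=0.05, duration_s=120.0, fn_hz=200.0,
                 z_ref=0.6, n_ref=2e7)
print(f"Miner index: {d:.3e}  ({'fails' if d > 1 else 'survives'})")
```

A Miner index below 1 indicates predicted survival of the load case; in the paper the RMS displacements come from the Nastran random vibration analysis rather than being assumed.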
A modeling framework for exposing risks in complex systems.
Sharit, J
2000-08-01
This article introduces and develops a modeling framework for exposing risks in the form of human errors and adverse consequences in high-risk systems. The modeling framework is based on two components: a two-dimensional theory of accidents in systems developed by Perrow in 1984, and the concept of multiple system perspectives. The theory of accidents differentiates systems on the basis of two sets of attributes. One set characterizes the degree to which systems are interactively complex; the other emphasizes the extent to which systems are tightly coupled. The concept of multiple perspectives provides alternative descriptions of the entire system that serve to enhance insight into system processes. The usefulness of these two model components derives from a modeling framework that cross-links them, enabling a variety of work contexts to be exposed and understood that would otherwise be very difficult or impossible to identify. The model components and the modeling framework are illustrated in the case of a large and comprehensive trauma care system. In addition to its general utility in the area of risk analysis, this methodology may be valuable in applications of current methods of human and system reliability analysis in complex and continually evolving high-risk systems.
Assessing School Work Culture: A Higher-Order Analysis and Strategy.
ERIC Educational Resources Information Center
Johnson, William L.; Johnson, Annabel M.; Zimmerman, Kurt J.
This paper reviews a work culture productivity model and reports the development of a work culture instrument based on the culture productivity model. Higher order principal components analysis was used to assess work culture, and a third-order factor analysis shows how the first-order factors group into higher-order factors. The school work…
Construct Validation of the Louisiana School Analysis Model (SAM) Instructional Staff Questionnaire
ERIC Educational Resources Information Center
Bray-Clark, Nikki; Bates, Reid
2005-01-01
The purpose of this study was to validate the Louisiana SAM Instructional Staff Questionnaire, a key component of the Louisiana School Analysis Model. The model was designed as a comprehensive evaluation tool for schools. Principal axis factoring with oblique rotation was used to uncover the underlying structure of the SISQ. (Contains 1 table.)
Revealing the underlying drivers of disaster risk: a global analysis
NASA Astrophysics Data System (ADS)
Peduzzi, Pascal
2017-04-01
Disaster events are perfect examples of compound events. Disaster risk lies at the intersection of several independent components such as hazard, exposure and vulnerability. Understanding the weight of each component requires extensive standardisation. Here, I show how footprints of past disastrous events were generated using GIS modelling techniques and used for extracting population and economic exposures based on distribution models. Using past event losses, it was possible to identify and quantify a wide range of socio-politico-economic drivers associated with human vulnerability. The analysis was applied to about nine thousand individual past disastrous events covering earthquakes, floods and tropical cyclones. Using a multiple regression analysis on these individual events it was possible to quantify each risk component and assess how vulnerability is influenced by various hazard intensities. The results show that hazard intensity, exposure, poverty, governance as well as other underlying factors (e.g. remoteness) can explain the magnitude of past disasters. Analysis was also performed to highlight the role of future trends in population and climate change and how these may impact exposure to tropical cyclones in the future. GIS models combined with statistical multiple regression analysis provided a powerful methodology to identify, quantify and model disaster risk taking into account its various components. The same methodology can be applied to various types of risk at local to global scales. This method was applied and developed for the Global Risk Analysis of the Global Assessment Report on Disaster Risk Reduction (GAR). It was first applied to mortality risk in GAR 2009 and GAR 2011. 
New models, ranging from global asset exposure to global flood hazard models, were also recently developed to improve the resolution of the risk analysis and were applied through the CAPRA software to provide probabilistic economic risk assessments such as Average Annual Losses (AAL) and Probable Maximum Losses (PML) in GAR 2013 and GAR 2015. In parallel, similar methodologies were developed to highlight the role of ecosystems for Climate Change Adaptation (CCA) and Disaster Risk Reduction (DRR). New developments may include slow hazards (e.g., soil degradation and droughts) and natech hazards (by intersecting with georeferenced critical infrastructures). The various global hazard, exposure and risk models can be visualized and downloaded through the PREVIEW Global Risk Data Platform.
Dynamical modeling and analysis of large cellular regulatory networks
NASA Astrophysics Data System (ADS)
Bérenguier, D.; Chaouiya, C.; Monteiro, P. T.; Naldi, A.; Remy, E.; Thieffry, D.; Tichit, L.
2013-06-01
The dynamical analysis of large biological regulatory networks requires the development of scalable methods for mathematical modeling. Following the approach initially introduced by Thomas, we formalize the interactions between the components of a network in terms of discrete variables, functions, and parameters. Model simulations result in directed graphs, called state transition graphs. We are particularly interested in reachability properties and asymptotic behaviors, which correspond to terminal strongly connected components (or "attractors") in the state transition graph. A well-known problem is the exponential increase of the size of state transition graphs with the number of network components, in particular when using the biologically realistic asynchronous updating assumption. To address this problem, we have developed several complementary methods enabling the analysis of the behavior of large and complex logical models: (i) the definition of transition priority classes to simplify the dynamics; (ii) a model reduction method preserving essential dynamical properties, (iii) a novel algorithm to compact state transition graphs and directly generate compressed representations, emphasizing relevant transient and asymptotic dynamical properties. The power of an approach combining these different methods is demonstrated by applying them to a recent multilevel logical model for the network controlling CD4+ T helper cell response to antigen presentation and to a dozen cytokines. This model accounts for the differentiation of canonical Th1 and Th2 lymphocytes, as well as of inflammatory Th17 and regulatory T cells, along with many hybrid subtypes. All these methods have been implemented into the software GINsim, which enables the definition, the analysis, and the simulation of logical regulatory graphs.
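The core objects described here, asynchronous state transition graphs and their attractors, can be illustrated on a toy two-gene model. This is a deliberately tiny sketch (far smaller than the Th-cell network, and detecting only fixed-point attractors; cyclic attractors require the full terminal-SCC analysis that tools like GINsim perform):

```python
from itertools import product

# Toy logical model: two mutually inhibiting genes (hypothetical).
rules = [
    lambda s: int(not s[1]),   # gene 0 is ON iff gene 1 is OFF
    lambda s: int(not s[0]),   # gene 1 is ON iff gene 0 is OFF
]

def async_successors(state):
    """Asynchronous updating: change one discordant component at a time."""
    succ = []
    for i, rule in enumerate(rules):
        v = rule(state)
        if v != state[i]:
            succ.append(state[:i] + (v,) + state[i + 1:])
    return succ

# Build the state transition graph over all 2^n states.
states = list(product((0, 1), repeat=len(rules)))
graph = {s: async_successors(s) for s in states}

# Fixed-point attractors are states with no outgoing transition.
attractors = [s for s, succ in graph.items() if not succ]
print("fixed-point attractors:", attractors)
```

Even this four-state graph shows the combinatorics at work: the state count doubles with every added component, which is precisely the explosion the priority-class, reduction, and compression methods of the paper are designed to tame.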
Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard
2002-12-30
Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
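Standard multicollinearity diagnostics of the kind mentioned above include variance inflation factors (VIFs). A minimal sketch on synthetic subscale data, where one "global" variable is nearly a blend of two others, mimicking the redundancy of global QL (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Synthetic QL-like subscales: 'glob' is nearly a blend of the others.
phys = rng.normal(size=n)
emot = rng.normal(size=n)
glob = 0.6 * phys + 0.4 * emot + rng.normal(0, 0.15, n)
X = np.column_stack([phys, emot, glob])

def vif(X):
    """Variance inflation factor 1/(1 - R_j^2) for each column of X."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

print(np.round(vif(X), 1))  # the redundant variable inflates the VIFs
```

VIFs well above the conventional thresholds (5 or 10) flag exactly the kind of instability the forward/backward selection exhibited, and dropping the redundant variable brings them back down.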
NASA Astrophysics Data System (ADS)
Tsirkas, S. A.
2018-03-01
The present investigation focuses on the modelling of the temperature field in aluminium aircraft components welded by a CO2 laser. A three-dimensional finite element model has been developed to simulate the laser welding process and predict the temperature distribution in T-joint laser welded plates with fillet material. The simulation of the laser beam welding process was performed using a nonlinear heat transfer analysis, based on a keyhole formation model analysis. The model employs the technique of element "birth and death" in order to simulate the weld fillet. Various phenomena associated with welding, such as temperature-dependent material properties and heat losses through convection and radiation, were accounted for in the model. The materials considered were 6056-T78 and 6013-T4 aluminium alloys, commonly used for aircraft components. The temperature distribution during the laser welding process has been calculated numerically and validated by experimental measurements at different locations on the welded structure. The numerical results are in good agreement with the experimental measurements.
Analysis of free modeling predictions by RBO aleph in CASP11.
Mabrouk, Mahmoud; Werner, Tim; Schneider, Michael; Putz, Ines; Brock, Oliver
2016-09-01
The CASP experiment is a biannual benchmark for assessing protein structure prediction methods. In CASP11, RBO Aleph ranked as one of the top-performing automated servers in the free modeling category. This category consists of targets for which structural templates are not easily retrievable. We analyze the performance of RBO Aleph and show that its success in CASP was a result of its ab initio structure prediction protocol. A detailed analysis of this protocol demonstrates that two components unique to our method greatly contributed to prediction quality: residue-residue contact prediction by EPC-map and contact-guided conformational space search by model-based search (MBS). Interestingly, our analysis also points to a possible fundamental problem in evaluating the performance of protein structure prediction methods: Improvements in components of the method do not necessarily lead to improvements of the entire method. This points to the fact that these components interact in ways that are poorly understood. This problem, if indeed true, represents a significant obstacle to community-wide progress. Proteins 2016; 84(Suppl 1):87-104. © 2015 Wiley Periodicals, Inc.
Salvatore, Stefania; Røislien, Jo; Baz-Lomba, Jose A; Bramness, Jørgen G
2017-03-01
Wastewater-based epidemiology is an alternative method for estimating the collective drug use in a community. We applied functional data analysis, a statistical framework developed for analysing curve data, to investigate weekly temporal patterns in wastewater measurements of three prescription drugs with known abuse potential: methadone, oxazepam and methylphenidate, comparing them to positive and negative control drugs. Sewage samples were collected in February 2014 from a wastewater treatment plant in Oslo, Norway. The weekly pattern of each drug was extracted by fitting of generalized additive models, using trigonometric functions to model the cyclic behaviour. From the weekly component, the main temporal features were then extracted using functional principal component analysis. Results are presented through the functional principal components (FPCs) and corresponding FPC scores. Clinically, the most important weekly feature of the wastewater-based epidemiology data was the second FPC, representing the difference between average midweek level and a peak during the weekend, representing possible recreational use of a drug in the weekend. Estimated scores on this FPC indicated recreational use of methylphenidate, with a high weekend peak, but not for methadone and oxazepam. The functional principal component analysis uncovered clinically important temporal features of the weekly patterns of the use of prescription drugs detected from wastewater analysis. This may be used as a post-marketing surveillance method to monitor prescription drugs with abuse potential. Copyright © 2016 John Wiley & Sons, Ltd.
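On a discrete daily grid, the functional PCA step reduces to an SVD of the centered curve matrix; FPC scores are projections onto the leading right singular vectors. A minimal sketch on synthetic weekly curves with an invented weekend-peaking mode of variation (not the Oslo data):

```python
import numpy as np

rng = np.random.default_rng(4)
days = np.arange(7)  # Mon..Sun grid for the weekly component

# Synthetic weekly curves for 60 'weeks': a common mean pattern plus
# one dominant mode of variation peaking at the weekend (illustrative).
mean_curve = 10 + np.sin(2 * np.pi * days / 7)
weekend_mode = np.exp(-0.5 * ((days - 5.5) / 1.0) ** 2)
weekend_mode /= np.linalg.norm(weekend_mode)
scores_true = rng.normal(0, 3, size=60)
curves = (mean_curve + np.outer(scores_true, weekend_mode)
          + rng.normal(0, 0.1, size=(60, 7)))

# Functional PCA on a discrete grid reduces to PCA of the curve matrix.
centered = curves - curves.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
fpc1 = Vt[0]                      # first functional principal component
fpc_scores = centered @ fpc1      # per-week scores on that component
explained = S[0] ** 2 / np.sum(S ** 2)
print(f"FPC1 explains {explained:.1%} of weekly variation")
```

A weekend-peaked FPC with high scores for one drug and near-zero scores for another is exactly the kind of pattern the study reads as recreational versus non-recreational use.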
DAKOTA Design Analysis Kit for Optimization and Terascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing
NASA Technical Reports Server (NTRS)
Ordaz, Irian
2011-01-01
Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capability of VSP is demonstrated for component-based point definition geometries in a conceptual analysis and design framework.
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have on existing components (interactions, non-linear responses). 
Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points) in the face of environmental and anthropogenic change (Perz, Muñoz-Carpena, Kiker and Holt, 2013), and, through Monte Carlo mapping, to target potential management activities at the most important factors or processes so as to steer the system towards behavioral (desirable) outcomes (Chu-Agor, Muñoz-Carpena et al., 2012).
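A first-order Sobol index, the workhorse of the variance-based GSA/UA approach cited above, can be estimated by brute-force double-loop Monte Carlo: S_i = Var(E[Y|x_i]) / Var(Y). A minimal sketch with a stand-in linear model (real applications would wrap the coupled environmental model; there are far more sample-efficient estimators such as Saltelli's):

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x1, x2):
    # Stand-in for a coupled environmental model (illustrative).
    return 4.0 * x1 + x2

def first_order_sobol(which, m=500, k=500):
    """Brute-force S_i = Var(E[Y|x_i]) / Var(Y), with inputs ~ U(0,1)."""
    cond_means = []
    for _ in range(m):
        fixed = rng.random()          # freeze x_i at a sampled value
        x_other = rng.random(k)       # average the model over the rest
        y = model(fixed, x_other) if which == 0 else model(x_other, fixed)
        cond_means.append(y.mean())
    x1, x2 = rng.random(100_000), rng.random(100_000)
    var_y = model(x1, x2).var()
    return np.var(cond_means) / var_y

s1, s2 = first_order_sobol(0), first_order_sobol(1)
print(f"S1 ~ {s1:.2f}, S2 ~ {s2:.2f}")  # analytically 16/17 and 1/17 here
```

For this additive model the two indices sum to one; in the integrated models discussed above, a shortfall of the sum below one would signal the component interactions and non-linear responses the trilemma analysis is after.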
Mangalgiri, Kiranmayi P; Timko, Stephen A; Gonsior, Michael; Blaney, Lee
2017-07-18
Parallel factor analysis (PARAFAC) applied to fluorescence excitation emission matrices (EEMs) allows quantitative assessment of the composition of fluorescent dissolved organic matter (DOM). In this study, we fit a four-component EEM-PARAFAC model to characterize DOM extracted from poultry litter. The data set included fluorescence EEMs from 291 untreated, irradiated (253.7 nm, 310-410 nm), and oxidized (UV-H 2 O 2 , ozone) poultry litter extracts. The four components were identified as microbial humic-, terrestrial humic-, tyrosine-, and tryptophan-like fluorescent signatures. The Tucker's congruence coefficients for components from the global (i.e., aggregated sample set) model and local (i.e., single poultry litter source) models were greater than 0.99, suggesting that the global EEM-PARAFAC model may be suitable to study poultry litter DOM from individual sources. In general, the transformation trends of the four fluorescence components were comparable for all poultry litter sources tested. For irradiation at 253.7 nm, ozonation, and UV-H 2 O 2 advanced oxidation, transformation of the humic-like components was slower than that of the tryptophan-like component. The opposite trend was observed for irradiation at 310-410 nm, due to differences in UV absorbance properties of components. Compared to the other EEM-PARAFAC components, the tyrosine-like component was fairly recalcitrant in irradiation and oxidation processes. This novel application of EEM-PARAFAC modeling provides insight into the composition and fate of agricultural DOM in natural and engineered systems.
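Tucker's congruence coefficient used above to compare global and local PARAFAC components is simply a normalized inner product of the two loading vectors. A minimal sketch with made-up emission-mode loadings (not values from the poultry-litter models):

```python
import numpy as np

def tucker_congruence(a, b):
    """Tucker's congruence coefficient between two loading vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Hypothetical emission-mode loadings of a humic-like component as
# recovered by a 'global' and a 'local' PARAFAC model (made-up numbers).
global_comp = np.array([0.05, 0.20, 0.55, 0.80, 0.60, 0.25])
local_comp = np.array([0.06, 0.22, 0.53, 0.79, 0.62, 0.24])
phi = tucker_congruence(global_comp, local_comp)
print(f"congruence = {phi:.4f}")
```

Coefficients above about 0.95 are conventionally read as equal components; the study's values above 0.99 justify using the single global model for all litter sources.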
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prediction for analog circuits. The few existing methods do not tie feature extraction and calculation to circuit analysis, so FI (fault indicator) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. It then uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. Finally, the foregoing conclusions are verified by experiments. PMID:25147853
Probabilistic evaluation of SSME structural components
NASA Astrophysics Data System (ADS)
Rajagopal, K. R.; Newell, J. F.; Ho, H.
1991-05-01
The application of the Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) families of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components is described. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics, consistent with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have typically been used in past deterministic analysis studies.
NASA Astrophysics Data System (ADS)
Kovács, G.
2009-09-01
The current status of (the lack of) understanding of the Blazhko effect is reviewed. We focus mostly on the various ways in which the models fail, and touch upon observational issues only to the degree needed for the theoretical background. Particular attention is paid to models based on radial-mode resonances, since they do not seem to be fully explored yet, especially if we consider possible non-standard effects (e.g., heavy-element enhancement). To aid further modeling efforts, we stress the need for accurate time-series spectral-line analysis to reveal any possible non-radial component(s) and thereby allow non-radial modes to be included in (or excluded from) explanations of the Blazhko phenomenon.
Thermal Modeling of the Mars Reconnaissance Orbiter's Solar Panel and Instruments during Aerobraking
NASA Technical Reports Server (NTRS)
Dec, John A.; Gasbarre, Joseph F.; Amundsen, Ruth M.
2007-01-01
The Mars Reconnaissance Orbiter (MRO) launched on August 12, 2005 and started aerobraking at Mars in March 2006. During the spacecraft's design phase, thermal models of the solar panels and instruments were developed to determine which components would be the most limiting thermally during aerobraking. Having determined the most limiting components, thermal limits in terms of heat rate were established. Advanced thermal modeling techniques were developed utilizing Thermal Desktop and Patran Thermal. Heat transfer coefficients were calculated using a Direct Simulation Monte Carlo technique. Analysis established that the solar panels were the most limiting components during the aerobraking phase of the mission.
Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C
2017-01-01
Background Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, –0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
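The selection step can be sketched with scikit-learn's `LassoLarsIC`, which combines LASSO with BIC-based scoring on a plain linear model (the study's linear mixed-effects structure is omitted here; the data, coefficients, and 16-predictor layout below are synthetic stand-ins):

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.default_rng(1)
n, p = 200, 16                       # observations x candidate "metals"
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 3]] = [-0.58, 0.9]          # only two predictors truly matter
y = X @ beta + rng.normal(0.0, 0.5, size=n)

# LASSO path scored by the Bayesian Information Criterion.
model = LassoLarsIC(criterion="bic").fit(X, y)
selected = np.flatnonzero(model.coef_)
print(selected)                      # indices of retained predictors
```

BIC-guided LASSO tends to keep the truly informative predictors while zeroing out most of the noise variables.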
2014-01-01
Background The occurrence of response shift (RS) in longitudinal health-related quality of life (HRQoL) studies, reflecting patient adaptation to disease, has already been demonstrated. Several methods have been developed to detect the three different types of response shift (RS): 1) recalibration RS, 2) reprioritization RS, and 3) reconceptualization RS. We investigated two complementary approaches that characterize the occurrence of RS: factor analysis, comprising Principal Component Analysis (PCA) and Multiple Correspondence Analysis (MCA), and an Item Response Theory (IRT) method. Methods Breast cancer patients (n = 381) completed the EORTC QLQ-C30 and EORTC QLQ-BR23 questionnaires at baseline, immediately following surgery, and three and six months after surgery, according to the “then-test/post-test” design. Recalibration was explored using MCA and an IRT model, the Linear Logistic Model with Relaxed Assumptions (LLRA), with the then-test method. Principal Component Analysis (PCA) was used to explore reconceptualization and reprioritization. Results MCA highlighted the main profiles of recalibration: patients with a high HRQoL level report a slightly worse HRQoL level retrospectively, and vice versa. The LLRA model indicated a downward or upward recalibration for each dimension. At six months, the recalibration effect was statistically significant for 11/22 dimensions of the QLQ-C30 and BR23 according to the LLRA model (p ≤ 0.001). Regarding the QLQ-C30, PCA indicated a reprioritization of symptom scales and reconceptualization via an increased correlation between functional scales. Conclusions Our findings demonstrate the usefulness of these analyses in characterizing the occurrence of RS. The MCA and IRT models gave convergent results with the then-test method in characterizing the recalibration component of RS. PCA is an indirect method for investigating the reprioritization and reconceptualization components of RS. PMID:24606836
Quantification of frequency-components contributions to the discharge of a karst spring
NASA Astrophysics Data System (ADS)
Taver, V.; Johannet, A.; Vinches, M.; Borrell, V.; Pistre, S.; Bertin, D.
2013-12-01
Karst aquifers represent important underground water resources, supplying water to 25% of the population. Nevertheless, such systems are currently underexploited because of their heterogeneity and complexity, which make field work and physical measurements expensive and frequently unrepresentative of the whole aquifer. The systemic paradigm thus appears as a complementary approach for studying and modeling karst aquifers within the framework of non-linear system analysis. The input and output signals, namely rainfall and discharge, contain information about the function performed by the physical process. Knowledge about the karst system can therefore be improved using time-series analysis, for example Fourier analysis or orthogonal decomposition [1]. Another level of analysis consists in building non-linear models to identify the rainfall/discharge relation, component by component [2]. In this context, this communication proposes to use neural networks, first to model the rainfall-runoff relation using frequency components, and second to analyze the models using the KnoX method [3] in order to quantify the importance of each component. Two different neural models were designed: (i) a recurrent model, which implements a non-linear recurrent model fed by rainfall, ETP and previous estimated discharge, and (ii) a feed-forward model, which implements a non-linear static model fed by rainfall, ETP and previous observed discharges. The first model is known to better represent the rainfall-runoff relation; the second, to better predict the discharge based on previous discharge observations. The KnoX method is based on a variable selection method that simply considers the values of the parameters after training, without taking into account the non-linear behavior of the model during operation. An improvement of the KnoX method is thus proposed to overcome this inadequacy.
The proposed method thus leads to both a hierarchization and a quantification of the contributions of the input variables, here the frequency components, to the output signal. Applied to the Lez karst aquifer, the combination of frequency decomposition and knowledge extraction improves understanding of the hydrological behavior. Both models and both extraction methods were applied and assessed using a fictitious reference model. A discussion analyzes the efficiency of the methods compared with in situ measurements and tracing. [1] D. Labat et al. 'Rainfall-runoff relations for karst springs. Part II: continuous wavelet and discrete orthogonal multiresolution'. Journal of Hydrology, Vol. 238, 2000, pp. 149-178. [2] A. Johannet et al. 'Prediction of Lez Spring Discharge (Southern France) by Neural Networks using Orthogonal Wavelet Decomposition'. IJCNN Proceedings, Brisbane, 2012. [3] L. Kong A Siou et al. 'Modélisation hydrodynamique des karsts par réseaux de neurones : Comment dépasser la boîte noire. (Karst hydrodynamic modelling using artificial neural networks: how to surpass the black box?)'. Proceedings of the 9th conference on limestone hydrogeology, 2011, Besançon, France.
Analysis of zenith tropospheric delay in tropical latitudes
NASA Astrophysics Data System (ADS)
Zablotskyj, Fedir; Zablotska, Alexandra
2010-05-01
The paper studies some peculiarities of the zenith tropospheric delay in tropical latitudes. The values of the dry and wet components of the zenith tropospheric delay, obtained by integration of radiosonde data, are shown for 9 stations: Guam, Seychelles, Singapore, Pago Pago, Hilo, Koror, San Cristobal, San Juan and Belem. A total of 350 atmospheric models were constructed for the period from the 11th to the 20th of January, April, July and October 2008 at 0h and 12h UT (Universal Time). The dry dd(aer) and wet dw(aer) components of the zenith tropospheric delay were determined by integration for each atmospheric model. Then the dry dd(SA), dd(HO) and wet dw(SA), dw(HO) components of the zenith tropospheric delay (Saastamoinen and Hopfield analytical models) were calculated from the surface values of the pressure P0, temperature t0 and relative air humidity U0 at the height H0, and from the geographic latitude φ. The following must be pointed out from the analysis of the averaged quantities and the differences δdd(SA), δdd(HO), δdw(SA), δdw(HO) between the corresponding components of the zenith tropospheric delay obtained from the radiosonde data and from the analytical models: the zenith tropospheric delay obtained from the radiosonde data is considerably larger in the equatorial zone than at high and middle latitudes, mainly at the expense of the wet component. The dry component of the zenith tropospheric delay averages 2290 mm and the wet component 290 mm. According to the analysis of the Saastamoinen and Hopfield models, the dry component differences δdd(SA) and δdd(HO) are negative in all cases and average -20 mm, which is typical neither of high nor of middle latitudes. The differences between the wet components obtained from the radiosonde data and from the Saastamoinen and Hopfield models are generally positive.
Moreover, the δdw(HO) values are larger than the corresponding δdw(SA) ones by 20-30 mm. This is because the tropospheric height assumed in the determination of the wet component by the Hopfield model does not correspond to the mean real tropospheric height typical of tropical latitudes. There are also considerable differences in the average values of the zenith tropospheric delay between stations of the equatorial zone; from the radiosonde data they can amount to 100 mm and more. These differences are caused by the differing character of the air-humidity distribution with height. For example, in the lower half of the troposphere the mean partial pressure of water vapour is about 2 to 2.5 times larger at the Singapore station than at Hilo. Recommendations concerning the modification of the Saastamoinen and Hopfield models for tropical latitudes are given in the conclusion of the paper.
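For reference, the dry (hydrostatic) component discussed above is commonly computed from surface pressure with the Saastamoinen model. A sketch using the commonly quoted constants (the paper's exact coefficients may differ slightly):

```python
import math

def saastamoinen_zhd_mm(p_hpa: float, lat_deg: float, h_km: float) -> float:
    """Saastamoinen zenith hydrostatic (dry) delay, in millimetres."""
    phi = math.radians(lat_deg)
    # Gravity correction for latitude and station height (h in km).
    f = 1.0 - 0.00266 * math.cos(2.0 * phi) - 0.00028 * h_km
    return 1000.0 * 0.0022768 * p_hpa / f

# An equatorial sea-level station at standard pressure gives a dry delay of
# roughly 2.3 m, in line with the ~2290 mm average quoted above.
zhd = saastamoinen_zhd_mm(1013.25, 0.0, 0.0)
print(round(zhd), "mm")
```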
Sparse modeling of spatial environmental variables associated with asthma
Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.
2014-01-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437
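The first stage of SASEA (dimension reduction with sparse PCA, then regression on the sparse components) can be sketched as follows. This uses scikit-learn's `SparsePCA` and a plain logistic regression in place of the paper's logistic thin plate regression spline, on synthetic block-group data; the latent "themes" and outcome model are assumptions:

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_blocks, n_vars = 500, 40
# Latent "themes" (e.g. income, housing) each driving a block of 10 variables.
F = rng.normal(size=(n_blocks, 4))
W = np.zeros((4, n_vars))
for k in range(4):
    W[k, 10 * k:10 * (k + 1)] = 1.0
X = F @ W + 0.3 * rng.normal(size=(n_blocks, n_vars))

# Binary outcome (e.g. asthma diagnosis) driven by two of the latent themes.
logits = 2.0 * F[:, 0] - 1.5 * F[:, 1]
y = (rng.random(n_blocks) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
Z = spca.fit_transform(X)            # sparse components as candidate predictors
clf = LogisticRegression().fit(Z, y)
print(Z.shape, clf.score(Z, y))
```

Because the loadings are sparse, each component can be read as a small named bundle of variables (the paper's "food at home", "dog ownership", etc.), which is what makes the downstream associations interpretable.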
SensA: web-based sensitivity analysis of SBML models.
Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W
2014-10-01
SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license) © The Author 2014. Published by Oxford University Press.
Characterization of Strombolian events by using independent component analysis
NASA Astrophysics Data System (ADS)
Ciaramella, A.; de Lauro, E.; de Martino, S.; di Lieto, B.; Falanga, M.; Tagliaferri, R.
2004-10-01
We apply Independent Component Analysis (ICA) to seismic signals recorded at Stromboli volcano. First, we show how ICA works on synthetic signals generated by dynamical systems. We then show that Strombolian signals, both tremor and explosions, are similar in the time domain in the high-frequency band (>0.5 Hz). This lends some support to the organ-pipe model for the source generation of these events. Moreover, we are able to recognize in the tremor signals a low-frequency component (<0.5 Hz) with a well-defined peak corresponding to a period of 30 s.
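The blind source separation step can be reproduced in miniature with scikit-learn's `FastICA`. Two synthetic sources (a harmonic "tremor-like" signal and a non-Gaussian square wave) are mixed as if recorded at two stations and then unmixed; the signals and mixing matrix are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0.0, 8.0, 2000)
s1 = np.sin(2 * np.pi * 1.0 * t)              # harmonic source
s2 = np.sign(np.sin(2 * np.pi * 0.25 * t))    # non-Gaussian, lower-frequency source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.4, 1.0]])        # unknown mixing ("two stations")
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                  # recovered up to order, sign, scale

# Each recovered component should correlate strongly with one true source.
corr = np.abs(np.corrcoef(np.c_[S, S_hat].T)[:2, 2:])
print(corr.max(axis=1))
```

ICA only needs the mixtures, not the mixing matrix, which is why it suits signals recorded simultaneously at several seismic stations.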
Semiparametric Thurstonian Models for Recurrent Choices: A Bayesian Analysis
ERIC Educational Resources Information Center
Ansari, Asim; Iyengar, Raghuram
2006-01-01
We develop semiparametric Bayesian Thurstonian models for analyzing repeated choice decisions involving multinomial, multivariate binary or multivariate ordinal data. Our modeling framework has multiple components that together yield considerable flexibility in modeling preference utilities, cross-sectional heterogeneity and parameter-driven…
ERIC Educational Resources Information Center
Khlaisang, Jintavee
2010-01-01
The purpose of this study was to investigate proper website and courseware for e-learning in higher education. Methods used in this study included data collection, analysis of surveys, in-depth interviews with experts, and an expert focus group. Results indicated that there were 16 components for the website, as well as 16 components for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick
We analyze component evolution in general random intersection graphs (RIGs) and give conditions for the existence and uniqueness of the giant component. Our techniques generalize the existing methods for the analysis of component evolution in RIGs: we analyze the survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdos-Renyi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step of the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we view RIGs as an important random structure that has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
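The sub/supercritical behavior underlying the giant-component question is easy to demonstrate in the Erdos-Renyi special case the analysis builds on: in G(n, c/n), a giant component emerges exactly when the branching parameter c exceeds 1. A union-find sketch (plain Erdos-Renyi, not a general RIG):

```python
import random

def largest_component_fraction(n: int, c: float, seed: int = 0) -> float:
    """Fraction of nodes in the largest component of G(n, c/n)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    p = c / n
    for i in range(n):                      # sample each possible edge once
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

sub = largest_component_fraction(1000, 0.5)   # c < 1: only small components
sup = largest_component_fraction(1000, 2.0)   # c > 1: a giant component appears
print(sub, sup)
```

For c = 2 the giant-component fraction should land near the root of 1 - exp(-2x) = x, about 0.80; for c = 0.5 the largest component stays logarithmically small.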
Systems Analysis Initiated for All-Electric Aircraft Propulsion
NASA Technical Reports Server (NTRS)
Kohout, Lisa L.
2003-01-01
A multidisciplinary effort is underway at the NASA Glenn Research Center to develop concepts for revolutionary, nontraditional fuel cell power and propulsion systems for aircraft applications. There is a growing interest in the use of fuel cells as a power source for electric propulsion as well as an auxiliary power unit to substantially reduce or eliminate environmentally harmful emissions. A systems analysis effort was initiated to assess potential concepts in an effort to identify those configurations with the highest payoff potential. Among the technologies under consideration are advanced proton exchange membrane (PEM) and solid oxide fuel cells, alternative fuels and fuel processing, and fuel storage. Prior to this effort, most fuel cell analysis at Glenn had been done for space applications. Because of this, a new suite of models was developed. These models include the hydrogen-air PEM fuel cell; internal reforming solid oxide fuel cell; balance-of-plant components (compressor, humidifier, separator, and heat exchangers); compressed gas, cryogenic, and liquid fuel storage tanks; and gas turbine/generator models for hybrid system applications. Initial mass, volume, and performance estimates of a variety of PEM systems operating on hydrogen and reformate have been completed for a baseline general aviation aircraft. Solid oxide/turbine hybrid systems are being analyzed. In conjunction with the analysis efforts, a joint effort has been initiated with Glenn's Computer Services Division to integrate fuel cell stack and component models with the visualization environment that supports the GRUVE lab, Glenn's virtual reality facility. The objective of this work is to provide an environment to assist engineers in the integration of fuel cell propulsion systems into aircraft and provide a better understanding of the interaction between system components and the resulting effect on the overall design and performance of the aircraft.
Initially, three-dimensional computer-aided design (CAD) models of a representative PEM fuel cell stack and components were developed and integrated into the virtual reality environment along with an Excel-based model used to calculate fuel cell electrical performance on the basis of cell dimensions (see the figure). CAD models of a representative general aviation aircraft were also developed and added to the environment. With the use of special headgear, users will be able to virtually manipulate the fuel cell's physical characteristics and its placement within the aircraft while receiving information on the resultant fuel cell output power and performance. As the systems analysis effort progresses, we will add more component models to the GRUVE environment to help us more fully understand the effect of various system configurations on the aircraft.
NASA Astrophysics Data System (ADS)
Sasmita, Yoga; Darmawan, Gumgum
2017-08-01
This research evaluates the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more explorative and do not require parametric assumptions. The methods are applied to predicting the monthly volume of motorcycle sales in Indonesia from January 2005 to December 2016. Both models are suitable for data with seasonal and trend components. Technically, FSA represents the time domain as the sum of trend and seasonal components at different frequencies, which are difficult to identify in time-domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. SSA, meanwhile, has two main stages, decomposition and reconstruction: SSA decomposes the time-series data into different components, and the reconstruction process starts by grouping the decomposition results based on the similar periods of each component in the trajectory matrix. With the optimal window length (L = 53) and grouping effect (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated using the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that for the next 12 months SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, while FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, predicting the volume of motorcycle sales in the next period should use the SSA method, which performs better in terms of accuracy.
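The SSA decomposition-reconstruction stage described above can be sketched in a few lines: embed the series into a trajectory matrix with window length L, take the SVD, and Hankelize each rank-one term back into a series. This is basic SSA only; the paper's grouping step (r = 4) and the sales data are not reproduced, so the toy trend-plus-seasonality series below is an assumption:

```python
import numpy as np

def ssa_decompose(series: np.ndarray, L: int) -> np.ndarray:
    """Basic SSA: embed -> SVD -> one reconstructed series per eigentriple."""
    N, K = len(series), len(series) - L + 1
    X = np.column_stack([series[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # Diagonal averaging (Hankelization) back to a length-N series.
        comps.append(np.array([Xk[::-1].diagonal(i - (L - 1)).mean()
                               for i in range(N)]))
    return np.array(comps)

t = np.arange(144)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)   # trend + "monthly" seasonality
comps = ssa_decompose(series, L=53)
print(np.allclose(comps.sum(axis=0), series))    # additive decomposition is exact
```

Grouping then amounts to summing subsets of the rows of `comps` (e.g., the trend-like and the paired oscillatory eigentriples) before forecasting.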
Structural analysis of gluten-free doughs by fractional rheological model
NASA Astrophysics Data System (ADS)
Orczykowska, Magdalena; Dziubiński, Marek; Owczarz, Piotr
2015-02-01
This study examines the effects of the various components of the tested gluten-free doughs, such as corn starch, amaranth flour, pea protein isolate, and cellulose in the form of plantain fibers, on the rheological properties of the doughs. The rheological properties of the gluten-free doughs were assessed using the rheological fractional standard linear solid model (FSLSM). Parameter analysis of the Maxwell-Wiechert fractional-derivative rheological model shows that gluten-free doughs behave as typical viscoelastic quasi-solid bodies. We determined the contribution of each component used in the preparation of the gluten-free doughs to either a hard-gel or a soft-gel structure. A detailed analysis of the mechanical structure of the gluten-free dough was carried out by applying the FSLSM, which explains quite precisely the effects of the individual ingredients on the dough's rheological properties.
Training Plan. Central Archive for Reusable Defense Software (CARDS)
1994-01-29
Modeling Software Reuse Technology: Feature Oriented Domain Analysis (FODA). SEI, Carnegie Mellon University, May 1992. 8. Component Provider’s… events to the services of the domain. 4. Feature Oriented Domain Analysis (FODA) [COHEN92]: the FODA method produces feature models. Feature models provide… Architecture; FODA: Feature-Oriented Domain Analysis; GOTS: Government-Off-The-Shelf; MS: Master of Science
Feng, Ssj; Sechopoulos, I
2012-06-01
To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Until now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively obtained model for the MLO-view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
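The modeling pipeline (PCA on edge vectors, then new shapes from the mean plus weighted eigenvectors) can be sketched on synthetic edge data; the sinusoidal "edges" and two latent factors below are invented stand-ins for the segmented mammogram contours:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_images, n_pts = 300, 100
theta = np.linspace(0.0, np.pi, n_pts)
size = rng.normal(1.0, 0.15, size=(n_images, 1))    # latent factor 1: overall size
skew = rng.normal(0.0, 0.10, size=(n_images, 1))    # latent factor 2: asymmetry
edges = (size * np.sin(theta) + skew * np.cos(theta)
         + 0.01 * rng.normal(size=(n_images, n_pts)))

pca = PCA(n_components=2).fit(edges)
print(pca.explained_variance_ratio_.sum())          # two modes dominate

# Generate a new plausible edge: mean shape plus weighted eigenvectors.
new_edge = pca.mean_ + 1.0 * pca.components_[0] - 0.5 * pca.components_[1]
```

Sampling the component weights from the distribution fitted to the training scores yields a family of realistic simulated shapes rather than a single one.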
Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.
ERIC Educational Resources Information Center
Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki
2000-01-01
Describes a new component called "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The WEAM can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)
NWTC Helps Guide U.S. Offshore R&D; NREL (National Renewable Energy Laboratory)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-07-01
The National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL) is helping guide our nation's research-and-development effort in offshore renewable energy, which includes: Design, modeling, and analysis tools; Device and component testing; Resource characterization; Economic modeling and analysis; Grid integration.
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper compares different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models for segmenting acute leukemia images. First, partial contrast stretching is applied to the leukemia images to increase the visibility of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models are analyzed to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region-growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images, compared with the other colour components of the RGB and HSI colour models.
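The winning configuration (clustering on the HSI saturation component) can be sketched with a standard k-means in place of the paper's moving k-means; the synthetic "background" and "blast cell" pixel colours below are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def rgb_to_saturation(rgb: np.ndarray) -> np.ndarray:
    """HSI saturation: S = 1 - 3*min(R,G,B)/(R+G+B)."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    safe = np.where(total > 0, total, 1.0)          # avoid division by zero
    return np.where(total > 0, 1.0 - 3.0 * rgb.min(axis=-1) / safe, 0.0)

rng = np.random.default_rng(4)
background = rng.integers(150, 200, size=(500, 3))  # near-grey: low saturation
cells = np.stack([rng.integers(150, 255, 500),      # purple-ish: high saturation
                  rng.integers(0, 60, 500),
                  rng.integers(150, 255, 500)], axis=1)
pixels = np.vstack([background, cells])

sat = rgb_to_saturation(pixels).reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sat)
print(np.bincount(labels[:500], minlength=2), np.bincount(labels[500:], minlength=2))
```

Because strongly stained nuclei have much higher saturation than near-grey background, the two clusters separate them almost perfectly in this toy setting.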
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National Laboratory programs.
Watershed Modeling Recommendation Report for Lake Champlain TMDL
This report describes the recommended modeling approach for the watershed modeling component of the Lake Champlain TMDL project. The report was prepared by Tetra Tech with input from the Lake Champlain watershed analysis workgroup. (Tetra Tech, 2012a)
NASA Astrophysics Data System (ADS)
Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang
2014-05-01
Higher-order ionospheric (HOI) corrections are proposed to become a standard part of precise GPS data analysis. In this study, we investigate the impact of HOI corrections on coordinate time series by reprocessing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in three processing runs: (a) run NO, without HOI corrections; (b) run IG, with both second- and third-order corrections modeled using the International Geomagnetic Reference Field 11 (IGRF11) for the magnetic field; and (c) run ID, the same as IG but with a dipole magnetic model. Both spectral analysis and noise analysis are used to investigate these effects. Results show that HOI corrections bring an overall improvement for CMONOC stations. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing the more remarkable change; low-latitude sites are the most affected, and the impact varies among coordinate components. Stacked periodograms show a good match between the seasonal amplitudes and the HOI corrections, and the observed variations in the coordinate time series are related to HOI effects. HOI delays partially explain the seasonal amplitudes in the coordinate time series, especially for the up (U) component: the annual amplitudes of all components decrease for over one-half of the selected CMONOC sites, and the semi-annual amplitudes are affected even more strongly. When the dipole model is used, however, the results are less favourable than with the IGRF model: HOI corrections based on the dipole model increase the noise amplitudes and can generate spurious periodic signals, introducing larger residuals and noise rather than effective improvements.
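The seasonal-amplitude analysis described above can be sketched by fitting annual and semi-annual harmonics to a coordinate time series by least squares; the synthetic series and amplitudes below are invented for illustration, not CMONOC data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 365 * 4) / 365.25          # time in years, ~4 yr of daily positions
annual_amp, semiannual_amp = 3.0, 1.0       # mm, assumed for the toy series
series = (annual_amp * np.sin(2 * np.pi * t + 0.4)
          + semiannual_amp * np.sin(4 * np.pi * t - 1.1)
          + rng.normal(0, 0.5, t.size))     # white noise floor

# Least-squares design matrix: offset, trend, annual and semi-annual harmonics
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, series, rcond=None)

# Amplitude of each seasonal term from its sine/cosine coefficients
est_annual = np.hypot(coef[2], coef[3])
est_semiannual = np.hypot(coef[4], coef[5])
```

Comparing such fitted amplitudes before and after applying a correction is one simple way to quantify the kind of annual/semi-annual changes the study reports.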
Jacobson, Robert B.; Parsley, Michael J.; Annis, Mandy L.; Colvin, Michael E.; Welker, Timothy L.; James, Daniel A.
2015-01-01
This report documents the process of developing and refining conceptual ecological models (CEMs) for linking river management to pallid sturgeon (Scaphirhynchus albus) population dynamics in the Missouri River. The refined CEMs are being used in the Missouri River Pallid Sturgeon Effects Analysis to organize, document, and formalize an understanding of pallid sturgeon population responses to past and future management alternatives. The general form of the CEMs, represented by a population-level model and component life-stage models, was determined in workshops held in the summer of 2013. Subsequently, the Missouri River Pallid Sturgeon Effects Analysis team designed a general hierarchical structure for the component models, refined the graphical structure, and reconciled variation among the components and between models developed for the upper river (Upper Missouri & Yellowstone Rivers) and the lower river (Missouri River downstream from Gavins Point Dam). Importance scores attributed to the relations between primary biotic characteristics and survival were used to define a candidate set of working dominant hypotheses about pallid sturgeon population dynamics. These CEMs are intended to guide research and adaptive-management actions to benefit pallid sturgeon populations in the Missouri River.
NASA Astrophysics Data System (ADS)
Raffray, A. René; Federici, Gianfranco
1997-04-01
RACLETTE (Rate Analysis Code for pLasma Energy Transfer Transient Evaluation), a comprehensive but relatively simple and versatile model, was developed to support the design analysis of plasma facing components (PFCs) under 'slow' high-power transients, such as those associated with plasma vertical displacement events. The model includes all the key surface heat transfer processes, such as evaporation, melting, and radiation, and their interaction with the thermal response of the PFC block and the coolant behaviour. This paper is part I of two complementary papers. It covers the model description, calibration, and validation, and presents a number of parametric analyses that shed light on, and identify trends in, the response of the PFC armour block to high plasma energy deposition transients. Parameters investigated include the plasma energy density and deposition time, the armour thickness, and the presence of vapour shielding effects. Part II focuses on specific design analyses of ITER plasma facing components (divertor, limiter, primary first wall and baffle), including improvements in the thermal-hydraulic modeling required to better understand the consequences of high energy deposition transients, in particular for the ITER limiter case.
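The 'slow transient' surface heating regime can be illustrated with the classical semi-infinite-solid solution for a constant surface heat flux, neglecting the evaporation, melting, and vapour shielding effects that RACLETTE models; the material numbers below are illustrative, not ITER design values:

```python
import math

def surface_temp_rise(q_flux, t, k, rho, c):
    """Semi-infinite solid under constant heat flux q'' (W/m^2):
    dT_surface = 2*q''*sqrt(t / (pi*k*rho*c)).
    Evaporation, melting and vapour shielding are neglected here."""
    return 2.0 * q_flux * math.sqrt(t / (math.pi * k * rho * c))

# Illustrative numbers for a tungsten-like armour (assumed, not design data)
q = 50e6                              # W/m^2, slow transient heat flux
k, rho, c = 120.0, 19000.0, 140.0     # W/m/K, kg/m^3, J/kg/K

dT_100ms = surface_temp_rise(q, 0.1, k, rho, c)   # ~1000 K after 100 ms
```

This conduction-only bound shows why surface processes matter: once the rise approaches melting or strong evaporation, the extra terms RACLETTE includes dominate the response.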
Preliminary study of soil permeability properties using principal component analysis
NASA Astrophysics Data System (ADS)
Yulianti, M.; Sudriani, Y.; Rustini, H. A.
2018-02-01
Soil permeability measurement is undoubtedly important in soil-water research such as rainfall-runoff modelling and irrigation water distribution systems. It is also known that acquiring reliable soil permeability data is laborious, time-consuming, and costly, so it is desirable to develop a prediction model. Several empirical equations for predicting permeability have been proposed by many researchers, but these models were derived from areas whose soil characteristics differ from Indonesian soils, suggesting that the permeability models may be site-specific. The purpose of this study is to identify which soil parameters correspond most strongly to soil permeability and to propose a preliminary model for permeability prediction. Principal component analysis (PCA) was applied to 16 parameters analysed from 37 sites comprising 91 samples obtained from the Batanghari Watershed. The findings indicated five variables that correlate strongly with soil permeability, and we recommend a preliminary permeability model with potential for further development.
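A minimal PCA sketch of the kind of analysis described above, using invented soil variables (clay, sand, bulk density) rather than the study's 16 measured parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 91                                  # samples, mirroring the study's sample count
clay = rng.normal(40, 10, n)            # hypothetical soil variables
sand = 100 - clay + rng.normal(0, 3, n)
bulk_density = rng.normal(1.4, 0.1, n)
# Toy permeability tied to texture and density (assumed relation, not the study's)
permeability = 0.05 * sand - 2.0 * bulk_density + rng.normal(0, 0.3, n)

X = np.column_stack([clay, sand, bulk_density, permeability])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize before PCA

# PCA via SVD of the standardized data matrix
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance ratio per component
loadings = Vt                                    # rows = components, cols = variables
```

Inspecting the loadings of the leading components shows which variables move together with permeability, which is the essence of the variable-screening step the study performs.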
Wagner, J A; Schnoll, R A; Gipson, M T
1998-07-01
Adherence to self-monitoring of blood glucose (SMBG) is problematic for many people with diabetes. Self-reports of adherence have been found to be unreliable, and existing paper-and-pencil measures have limitations. This study developed a brief measure of SMBG adherence with good psychometric properties and a useful factor structure that can be used in research and in practice. A total of 216 adults with diabetes responded to 30 items rated on a 9-point Likert scale that asked about blood monitoring habits. In part I of the study, items were evaluated and retained based on their psychometric properties. The sample was divided into exploratory and confirmatory halves. Using the exploratory half, items with acceptable psychometric properties were subjected to a principal components analysis. In part II of the study, structural equation modeling was used to confirm the component solution with the entire sample. Structural modeling was also used to test the relationship between these components. It was hypothesized that the scale would produce four correlated factors. Principal components analysis suggested a two-component solution, and confirmatory factor analysis confirmed this solution. The first factor measures the degree to which patients rely on others to help them test and thus was named "social influence." The second component measures the degree to which patients use physical symptoms of blood glucose levels to help them test and thus was named "physical influence." Results of the structural model show that the components are correlated and make up the higher-order latent variable adherence. The resulting 15-item scale provides a short, reliable way to assess patient adherence to SMBG. Despite the existence of several aspects of adherence, this study indicates that the construct consists of only two components. This scale is an improvement on previous measures of adherence because of its good psychometric properties, its interpretable factor structure, and its rigorous empirical development.
T-MATS Toolbox for the Modeling and Analysis of Thermodynamic Systems
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.
2014-01-01
The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a MATLAB/Simulink (The MathWorks, Inc.) plug-in for creating and simulating thermodynamic systems and controls. The package contains generic parameterized components that can be combined with a variable-input iterative solver and an optimization algorithm to create complex system models, such as gas turbines.
DRAINMOD-GIS: a lumped parameter watershed scale drainage and water quality model
G.P. Fernandez; G.M. Chescheir; R.W. Skaggs; D.M. Amatya
2006-01-01
A watershed scale lumped parameter hydrology and water quality model that includes an uncertainty analysis component was developed and tested on a lower coastal plain watershed in North Carolina. Uncertainty analysis was used to determine the impacts of uncertainty in field and network parameters of the model on the predicted outflows and nitrate-nitrogen loads at the...
Discrete event simulation tool for analysis of qualitative models of continuous processing systems
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)
1990-01-01
An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability of log file comparisons.
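The event-queue mechanism described above (execute events in time order until the queue empties) can be sketched in a few lines; the valve example and class names are hypothetical illustrations, not the patented tool's API:

```python
import heapq

class Simulator:
    """Minimal discrete-event core: events run in time order until the queue empties."""
    def __init__(self):
        self.queue, self.now, self.seq = [], 0.0, 0
        self.log = []

    def schedule(self, delay, name, action=None):
        self.seq += 1  # tie-breaker keeps ordering stable at equal times
        heapq.heappush(self.queue, (self.now + delay, self.seq, name, action))

    def run(self):
        while self.queue:
            self.now, _, name, action = heapq.heappop(self.queue)
            self.log.append((self.now, name))
            if action:
                action(self)  # an event may schedule follow-on events

# Continuous behaviour modelled discretely: a valve opens, then closes after a delay.
sim = Simulator()
sim.schedule(1.0, "valve_open", lambda s: s.schedule(2.5, "valve_close"))
sim.run()
```

Representing continuous behaviour as invocation statements, effects, and time delays reduces, at its core, to exactly this schedule/pop loop over a time-ordered event queue.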
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, E.R.
1983-09-01
The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; operating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing
NASA Astrophysics Data System (ADS)
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS) smoothing (STL). The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
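As a crude stand-in for the LOESS-based STL decomposition used in the study, the sketch below splits a synthetic daily pollen series into a day-of-year seasonal component and a residual; the series and peak location are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
years, days = 8, 365                      # an 8-year daily record, 2006-2013-style
doy = np.tile(np.arange(days), years)
# Toy Poaceae-like season: Gaussian pollen peak around day 150
seasonal_true = np.exp(-0.5 * ((doy - 150) / 20.0) ** 2) * 80
series = seasonal_true + rng.normal(0, 5, years * days)

# Crude seasonal/residual split: day-of-year mean as the seasonal component
# (a stand-in for the LOESS smoothing that STL applies).
seasonal = np.zeros(days)
for d in range(days):
    seasonal[d] = series[doy == d].mean()
residual = series - seasonal[doy]
```

The residual series is what the study then regresses on daily temperature and rainfall (via PLSR); modelling the two parts separately is exactly the point of the decomposition.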
Maneshi, Mona; Vahdat, Shahabeddin; Gotman, Jean; Grova, Christophe
2016-01-01
Independent component analysis (ICA) has been widely used to study functional magnetic resonance imaging (fMRI) connectivity. However, the application of ICA in multi-group designs is not straightforward. We have recently developed a new method named “shared and specific independent component analysis” (SSICA) to perform between-group comparisons in the ICA framework. SSICA is sensitive to extract those components which represent a significant difference in functional connectivity between groups or conditions, i.e., components that could be considered “specific” for a group or condition. Here, we investigated the performance of SSICA on realistic simulations, and task fMRI data and compared the results with one of the state-of-the-art group ICA approaches to infer between-group differences. We examined SSICA robustness with respect to the number of allowable extracted specific components and between-group orthogonality assumptions. Furthermore, we proposed a modified formulation of the back-reconstruction method to generate group-level t-statistics maps based on SSICA results. We also evaluated the consistency and specificity of the extracted specific components by SSICA. The results on realistic simulated and real fMRI data showed that SSICA outperforms the regular group ICA approach in terms of reconstruction and classification performance. We demonstrated that SSICA is a powerful data-driven approach to detect patterns of differences in functional connectivity across groups/conditions, particularly in model-free designs such as resting-state fMRI. Our findings in task fMRI show that SSICA confirms results of the general linear model (GLM) analysis and when combined with clustering analysis, it complements GLM findings by providing additional information regarding the reliability and specificity of networks. PMID:27729843
An approach for quantitative image quality analysis for CT
NASA Astrophysics Data System (ADS)
Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe
2016-03-01
An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and the image registration capabilities of CT machines, and to identify strengths and weaknesses of different CT imaging technologies in transportation security. To that end we have designed, developed, and constructed phantoms that allow systematic and repeatable measurement of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT-generated images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates a modified set of components, as compared with standard principal component analysis (PCA), with sparse loadings, used in conjunction with Hotelling's T2 statistical analysis to compare, qualify, and detect faults in the tested systems.
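A hedged sketch of fault detection with Hotelling's T2 on a vector of image-quality metrics (the metric values and the fault shift are invented; the actual framework pairs T2 with a modified sparse PCA rather than raw metrics):

```python
import numpy as np

rng = np.random.default_rng(5)
# 40 reference "scans", each summarized by 10 hypothetical image-quality metrics
baseline = rng.normal(0, 1, (40, 10))
faulty = baseline[0] + 6.0             # a scan with a large systematic metric shift

mean = baseline.mean(axis=0)
cov = np.cov(baseline, rowvar=False)   # metric covariance from the reference set
cov_inv = np.linalg.inv(cov)

def hotelling_t2(x):
    """Mahalanobis-type distance of a metric vector from the reference mean."""
    d = x - mean
    return float(d @ cov_inv @ d)

t2_ref = np.array([hotelling_t2(row) for row in baseline])
t2_fault = hotelling_t2(faulty)        # far above the reference distribution
```

A scanner whose T2 statistic falls well outside the reference distribution is flagged for inspection; in the paper's pipeline the input vectors would be sparse-PCA scores instead.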
High Fidelity System Simulation of Multiple Components in Support of the UEET Program
NASA Technical Reports Server (NTRS)
Plybon, Ronald C.; VanDeWall, Allan; Sampath, Rajiv; Balasubramaniam, Mahadevan; Mallina, Ramakrishna; Irani, Rohinton
2006-01-01
The High Fidelity System Simulation effort has addressed several important objectives to enable additional capability within the NPSS framework. The scope emphasized the High Pressure Turbine and High Pressure Compressor components. Initial effort was directed at developing and validating an intermediate-fidelity NPSS model using PD geometry, and was extended to a high-fidelity NPSS model by overlaying detailed geometry to validate CFD against rig data. Both "feed-forward" and "feedback" approaches to analysis zooming were employed to enable system simulation capability in NPSS. These approaches have different benefits and applicability for specific applications: "feedback" zooming allows information from a high-fidelity analysis to flow up and update the NPSS model results by forcing the NPSS solver to converge to the high-fidelity analysis predictions. This approach is effective in improving the accuracy of the NPSS model; however, it can only be used where there is a clear physics-based strategy for flowing the high-fidelity analysis results up to the NPSS system model. The "feed-forward" zooming approach is more broadly useful for enabling detailed analysis at early stages of design for a specified set of critical operating points and using those analysis results to drive design decisions early in the development process.
Overview of SDCM - The Spacecraft Design and Cost Model
NASA Technical Reports Server (NTRS)
Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.
1988-01-01
The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters of subsystem components are first calculated; the model then sums the contributions from the individual components to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.
Calculation and Analysis of Magnetic Gradient Tensor Components of Global Magnetic Models
NASA Astrophysics Data System (ADS)
Schiffler, M.; Queitsch, M.; Schneider, M.; Goepel, A.; Stolz, R.; Krech, W.; Meyer, H. G.; Kukowski, N.
2014-12-01
Global Earth magnetic field models such as the International Geomagnetic Reference Field (IGRF), the World Magnetic Model (WMM) and the High Definition Geomagnetic Model (HDGM) are harmonic-analysis regressions to available magnetic observations, stored as spherical harmonic coefficients. The input data combine recordings from magnetic observatories, airborne magnetic surveys, and satellites. Recent magnetic satellite missions such as SWARM, and predecessors like CHAMP, offer high-resolution measurements with full global coverage, which motivates expanding the theoretical framework of harmonic synthesis to magnetic gradient tensor components. Full Tensor Magnetic Gradiometry setups equipped with highly sensitive gradiometers, such as the JeSSY STAR system, can directly measure the gradient tensor components, which requires precise knowledge of the background regional gradients that can be calculated with this extension. In this study we develop the theoretical framework for calculating the magnetic gradient tensor components from the harmonic series expansion and apply our approach to the IGRF and the HDGM. Gradient tensor component maps of the entire Earth's surface produced for the IGRF show low gradients, reflecting the variation from the dipolar character, whereas maps for the HDGM (up to degree N=729) reveal new information about crustal structure, especially across the oceans, and deeply situated ore bodies. From the gradient tensor components, the rotational invariants, the eigenvalues, and the normalized source strength (NSS) are calculated. The NSS focuses on shallower and stronger anomalies, and it reveals the boundaries between the anomalies of major continental provinces such as southern Africa and the Eastern European Craton. Euler deconvolution using either the tensor components or the NSS, applied to the HDGM, yields an estimate of the average source depth of the entire magnetic crust as well as of individual plutons and ore bodies.
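The NSS can be computed from the eigenvalues of the symmetric gradient tensor; the sketch below uses the commonly cited invariant mu = sqrt(-lam2^2 - lam1*lam3) (for eigenvalues sorted lam1 >= lam2 >= lam3) with illustrative tensor values, which may differ in detail from the authors' formulation:

```python
import numpy as np

def nss(tensor):
    """Normalized source strength mu = sqrt(-lam2^2 - lam1*lam3),
    a rotational invariant of the symmetric magnetic gradient tensor."""
    lam = np.sort(np.linalg.eigvalsh(tensor))[::-1]   # lam1 >= lam2 >= lam3
    return float(np.sqrt(max(-lam[1] ** 2 - lam[0] * lam[2], 0.0)))

# A symmetric, trace-free gradient tensor (illustrative values), plus the same
# tensor in a rotated frame, to show that the NSS is rotationally invariant.
G = np.diag([2.0, 0.0, -2.0])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
G_rot = R @ G @ R.T
```

Because the NSS depends only on the eigenvalues, it is insensitive to the orientation of the measurement frame, which is what makes it attractive for Euler deconvolution of survey data.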
Finite Element Model Development For Aircraft Fuselage Structures
NASA Technical Reports Server (NTRS)
Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.
2000-01-01
The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results.
A guide to understanding meta-analysis.
Israel, Heidi; Richter, Randy R
2011-07-01
With the focus on evidence-based practice in healthcare, a well-conducted systematic review that includes a meta-analysis, where indicated, represents a high level of evidence for treatment effectiveness. The purpose of this commentary is to assist clinicians in understanding meta-analysis as a statistical tool, using both published articles and explanations of the components of the technique. We describe what meta-analysis is; what heterogeneity is and how it affects meta-analysis; effect size; the modeling techniques of meta-analysis; and the strengths and weaknesses of meta-analysis. Common components such as forest plot interpretation and software that may be used are covered, along with special cases of meta-analysis, such as subgroup analysis, individual patient data, and meta-regression, and a discussion of criticisms.
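As an illustration of one common modeling technique in meta-analysis, the sketch below pools hypothetical study effects with a DerSimonian-Laird random-effects model and reports the Q and I^2 heterogeneity statistics discussed above (all numbers are invented):

```python
import numpy as np

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q and I^2 heterogeneity."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                           # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)        # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (variances + tau2)               # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, tau2, i2

# Five hypothetical treatment-effect estimates (e.g. mean differences) and variances
pooled, se, tau2, i2 = random_effects([0.3, 0.5, 0.2, 0.8, 0.4],
                                      [0.04, 0.05, 0.03, 0.06, 0.04])
```

The pooled estimate and its standard error are what a forest plot's diamond displays, while I^2 summarizes how much of the variation among studies is due to heterogeneity rather than chance.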
A Study on Components of Internal Control-Based Administrative System in Secondary Schools
ERIC Educational Resources Information Center
Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda
2015-01-01
The aim of this study was to examine the components of the internal control-based administrative system in secondary schools and to conduct a confirmatory factor analysis (CFA) to confirm the goodness of fit between the empirical data and the resulting component model. The study consisted of three steps: 1) studying of principles, ideas, and theories…
Shape optimization of three-dimensional stamped and solid automotive components
NASA Technical Reports Server (NTRS)
Botkin, M. E.; Yang, R.-J.; Bennett, J. A.
1987-01-01
The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process (modeling, mesh generation, finite element and sensitivity analysis, and optimization) is stressed. Stamped components and solid components are treated separately. For stamped parts, a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement provide an automated analysis capability that requires only boundary data and takes into account the sensitivity of solution accuracy to boundary shape. For solid components, a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided by a generic modeling concept based upon isoparametric mapping patches, which also serves as the mesh generator. Emphasis is placed upon coupling the optimization with a commercially available finite element program. To do this, it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach, so that only boundary solution data are needed.
A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample
Chang, Yu-Yen; Chao, Rikon; Wang, Wei-Hao; ...
2012-01-01
Disney et al. (2008) found a striking correlation among global parameters of H I-selected galaxies and concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem by applying principal component analysis to a fivefold larger sample with additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, with a single physical parameter explaining 83% of the variation. When colour (g - i) is included, the first component still dominates but a second principal component develops. In addition, the near-infrared colour (i - J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates further theoretical work.
NASA Astrophysics Data System (ADS)
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics for evaluating temporal model performance. In contrast, spatial performance evaluation has not kept pace with the growing availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study contributes towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation, and histogram overlap. This multiple-component approach is advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics (connectivity analysis and the fractions skill score) are applied in a spatial-pattern-oriented calibration of a catchment model in Denmark. The results underline the importance of multiple-component metrics, because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. To optimally exploit the spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
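A sketch of the SPAEF metric as commonly formulated (1 minus the Euclidean distance of the three components from their ideal value of 1); the histogram-overlap detail may differ from the authors' exact implementation, and the test fields below are synthetic:

```python
import numpy as np

def spaef(obs, sim, bins=20):
    """SPAtial EFficiency: 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2).
    alpha: Pearson correlation; beta: ratio of coefficients of variation;
    gamma: histogram intersection of the z-scored fields (bias insensitive)."""
    obs, sim = obs.ravel(), sim.ravel()
    alpha = np.corrcoef(obs, sim)[0, 1]
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo, hi = min(z_obs.min(), z_sim.min()), max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.sum(np.minimum(h_obs, h_sim)) / np.sum(h_obs)
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(2)
pattern = rng.normal(10, 2, (50, 50))          # synthetic "observed" spatial field
score_perfect = spaef(pattern, pattern.copy()) # identical pattern scores 1
score_noisy = spaef(pattern, pattern + rng.normal(0, 2, (50, 50)))
```

Because the histogram term operates on z-scored fields, a simulation with a constant bias is not penalized, which is the "bias insensitive" property the study recommends.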
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung
2018-01-01
We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included the intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with APEs of 20.3% and 29.2%, respectively. The two-component ice cream cone, ellipsoidal, and Linskey's formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey's formulas, allow for more accurate estimation of vestibular schwannoma volume than all one-component formulas.
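For intuition, the one-component formulas and the APE measure can be sketched as follows; the diameters and reference volume are hypothetical, and the two-component formulas are omitted because their exact definitions are not given in the abstract.

```python
import numpy as np

# Hypothetical orthogonal tumor diameters (cm) and a planimetry reference
# volume (mL); values are illustrative, not patient data from the study.
a, b, c = 2.4, 1.8, 1.6
v_planimetry = 3.4

v_cuboidal    = a * b * c                # box volume: systematic overestimate
v_ellipsoidal = np.pi / 6 * a * b * c    # ellipsoid from the same diameters
d_mean = (a + b + c) / 3
v_spherical   = np.pi / 6 * d_mean**3    # sphere from the mean diameter

def ape(v_est, v_ref):
    """Absolute percentage error relative to the planimetry reference."""
    return abs(v_est - v_ref) / v_ref * 100.0

for name, v in [("cuboidal", v_cuboidal), ("ellipsoidal", v_ellipsoidal),
                ("spherical", v_spherical)]:
    print(f"{name:12s} {v:5.2f} mL  APE {ape(v, v_planimetry):5.1f}%")
```

Note that the cuboidal/ellipsoidal ratio is the fixed constant 6/π ≈ 1.91, consistent with the roughly twofold overestimation the abstract reports for cuboidal formulas.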
Impeller leakage flow modeling for mechanical vibration control
NASA Technical Reports Server (NTRS)
Palazzolo, Alan B.
1996-01-01
HPOTP and HPFTP vibration test results have exhibited transient and steady characteristics which may be due to impeller leakage path (ILP) related forces. For example, an axial shift in the rotor could suddenly change the ILP clearances and lengths, yielding dynamic coefficient and subsequent vibration changes. ILP models are more complicated than conventional single-component annular seal models due to their radial flow component (Coriolis and centrifugal acceleration), complex geometry (axial/radial clearance coupling), internal boundary (transition) flow conditions between mechanical components along the ILP, and longer length, requiring moment as well as force coefficients. Flow coupling between mechanical components results from mass and energy conservation applied at their interfaces. Typical components along the ILP include an inlet seal, curved shroud, and an exit seal, which may be a stepped labyrinth type. Von Pragenau (MSFC) has modeled labyrinth seals as a series of plain annular seals for leakage and dynamic coefficient prediction. These multi-tooth components increase the total number of 'flow coupled' components in the ILP. Childs developed an analysis for an ILP consisting of a single, constant-clearance shroud with an exit seal represented by a lumped flow-loss coefficient. This same geometry was later extended to include compressible flow. The objectives of the current work are to: supply ILP leakage-force impedance-dynamic coefficient modeling software to MSFC engineers, based on incompressible/compressible bulk flow theory; design the software to model a generic-geometry ILP described by a series of components lying along an arbitrarily directed path; validate the software by comparison to available test data, CFD and bulk models; and develop a hybrid CFD-bulk flow model of an ILP to improve modeling accuracy within practical run time constraints.
Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.
Siettos, Constantinos; Starke, Jens
2016-09-01
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single-neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection between the microscopic scale (single-neuron activity) and macroscopic behavior (the emergent behavior of the collective dynamics), and vice versa, is key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single-neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), integrate-and-fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
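As a concrete taste of the model families reviewed, here is a minimal forward-Euler sketch of the FitzHugh-Nagumo neuron; the parameter values are a standard textbook choice placing the model in its spiking regime, not values taken from the review.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20_000):
    """Forward-Euler integration of the FitzHugh-Nagumo model:
    dv/dt = v - v^3/3 - w + I,   dw/dt = eps * (v + a - b*w)."""
    v, w = -1.0, 1.0
    vs = np.empty(steps)
    for i in range(steps):
        dv = v - v**3 / 3.0 - w + I
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        vs[i] = v
    return vs

trace = fitzhugh_nagumo()
# for this parameter set the resting state is unstable and the membrane
# variable v settles onto a relaxation-oscillation (spiking) limit cycle
print(round(trace.min(), 2), round(trace.max(), 2))
```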
2010-05-01
has been an increasing move towards armor systems which are both structural and protection components at the same time. Analysis of material response...the materials can move. As the FE analysis progresses the component will move while the mesh remains motionless (Figure 4). Individual nodes and cells...this parameter. This subroutine needs many inputs, such as the speed of sound in the material , the FE size mesh and the safety factor, which prevents
NASA Astrophysics Data System (ADS)
Smiarowski, Adam; Mulè, Shane
2015-06-01
The AEM in-line component is added to the posterior model covariance matrix analysis done by Christensen and Lawrie, who estimated resolution of data in an inversion program. They compared two AEM systems: SkyTEM and CGG's TEMPEST™. Here, we clarify points made about TEMPEST™ and extend the analysis to include the in-line component.
Overview of the DAEDALOS project
NASA Astrophysics Data System (ADS)
Bisagni, Chiara
2015-10-01
The "Dynamics in Aircraft Engineering Design and Analysis for Light Optimized Structures" (DAEDALOS) project aimed to develop methods and procedures to determine dynamic loads by considering the effects of dynamic buckling, material damping and mechanical hysteresis during aircraft service. Advanced analysis and design principles were assessed with the aim of partly removing the uncertainty and the conservatism of today's design and certification procedures. To reach these objectives a DAEDALOS aircraft model representing a mid-size business jet was developed. Analysis and in-depth investigation of the dynamic response were carried out on full finite element models and on hybrid models. Material damping was experimentally evaluated, and different methods for damping evaluation were developed, implemented in finite element codes and experimentally validated. They include a strain energy method, a quasi-linear viscoelastic material model, and a generalized Maxwell viscous material damping model. Panels and shells representative of typical components of the DAEDALOS aircraft model were experimentally tested under static as well as dynamic loads. Composite and metallic components of the aircraft model were investigated to evaluate the benefit in terms of weight saving.
Groundwater flow in the Brunswick/Glynn County area, Georgia, 2000-04
Cherry, Gregory S.
2015-01-01
Analysis of simulated water-budget components for 2000 and 2004 indicates that specified-head boundaries in the Floridan aquifer system to the south and southwest of the regional model area control about 70 percent of inflows to and nearly 50 percent of outflows from the model region. Other water-budget components indicate an 80-million-gallon-per-day decrease in pumping from the Floridan aquifer system during this period.
2011-06-17
structure through quantitative assessment of stiffness and modal parameter changes resulting from modifications to the beam geometries and positions...power transmission assembly. If the power limit at a wheel exceeds the traction limit, then depending on the type of differential placed on the axle ...components with appropriate model connectivity instead to determine the free modal response of powertrain type components, without abstraction
Classification of breast tissue in mammograms using efficient coding.
Costa, Daniel D; Campos, Lúcio F; Barros, Allan K
2011-06-24
Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made to improve diagnostic accuracy by radiologists. Methods of lesion diagnosis in mammogram images have been developed based on principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used in computer vision applications and in modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the efficient coding model presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis performed the efficient coding successfully in order to discriminate mass from non-mass tissues. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets, providing significant support for a more detailed clinical investigation.
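The dimensionality-reduction-then-discriminant idea behind such pipelines can be sketched on synthetic stand-in features; this is not the paper's efficient-coding model or its mammogram data, just a minimal PCA-plus-Fisher-LDA analogue of the classification step.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-ins for coded "mass" vs "non-mass" ROI feature vectors
X0 = rng.normal(0.0, 1.0, (200, 16))         # non-mass class
X1 = rng.normal(0.8, 1.0, (200, 16))         # mass class, shifted mean
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

# PCA: project onto the leading principal components of the pooled data
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:4].T                            # keep 4 components

# Fisher LDA in the reduced space: w ~ Sw^-1 (mu1 - mu0)
mu0, mu1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
scores = Z @ w
thr = (scores[y == 0].mean() + scores[y == 1].mean()) / 2.0
acc = np.mean((scores > thr) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```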
Raman active components of skin cancer.
Feng, Xu; Moy, Austin J; Nguyen, Hieu T M; Zhang, Jason; Fox, Matthew C; Sebastian, Katherine R; Reichenberg, Jason S; Markey, Mia K; Tunnell, James W
2017-06-01
Raman spectroscopy (RS) has shown great potential in noninvasive cancer screening. Statistically based algorithms, such as principal component analysis, are commonly employed to provide tissue classification; however, they are difficult to relate to the chemical and morphological basis of the spectroscopic features and underlying disease. As a result, we propose the first Raman biophysical model applied to in vivo skin cancer screening data. We expand upon previous models by utilizing in situ skin constituents as the building blocks, and validate the model using previous clinical screening data collected from a Raman optical fiber probe. We built an 830nm confocal Raman microscope integrated with a confocal laser-scanning microscope. Raman imaging was performed on skin sections spanning various disease states, and multivariate curve resolution (MCR) analysis was used to resolve the Raman spectra of individual in situ skin constituents. The basis spectra of the most relevant skin constituents were combined linearly to fit in vivo human skin spectra. Our results suggest collagen, elastin, keratin, cell nucleus, triolein, ceramide, melanin and water are the most important model components. We make available for download (see supplemental information) a database of Raman spectra for these eight components for others to use as a reference. Our model reveals the biochemical and structural makeup of normal, nonmelanoma and melanoma skin cancers, and precancers and paves the way for future development of this approach to noninvasive skin cancer diagnosis.
Sparse principal component analysis in medical shape modeling
NASA Astrophysics Data System (ADS)
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
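The "simple thresholding of small loadings" baseline the article compares against can be sketched in a few lines of numpy; the data, threshold value, and two-block loading pattern below are illustrative assumptions, chosen to mimic the isolated, identifiable effects SPCA targets.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic shape data: two latent effects loading on disjoint variable
# subsets, with the first effect deliberately stronger
n = 500
f1 = 1.5 * rng.normal(size=n)
f2 = rng.normal(size=n)
X = np.zeros((n, 10))
X[:, :5] += np.outer(f1, [1.0, 0.9, 0.8, 0.7, 0.6])
X[:, 5:] += np.outer(f2, [0.6, 0.7, 0.8, 0.9, 1.0])
X += 0.1 * rng.normal(size=X.shape)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

def threshold(v, cut=0.15):
    """Sparse PCA by simple thresholding: zero small loadings, renormalize."""
    s = np.where(np.abs(v) < cut, 0.0, v)
    return s / np.linalg.norm(s)

pc1_sparse = threshold(Vt[0])
print("nonzero loadings in sparse PC1:", np.count_nonzero(pc1_sparse))
```

Thresholding recovers the first block of variables as one isolated effect; proper SPCA algorithms build such sparsity into the optimization instead of applying it after the fact.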
Global Qualitative Flow-Path Modeling for Local State Determination in Simulation and Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Fleming, Land D. (Inventor)
1998-01-01
For qualitative modeling and analysis, a general qualitative abstraction of power transmission variables (flow and effort) for elements of flow paths is discussed; it includes information on resistance, net flow, permissible directions of flow, and qualitative potential. Each type of component model has flow-related variables and an associated internal flow map, connected into an overall flow network of the system. For storage devices, the implicit power transfer to the environment is represented by "virtual" circuits that include an environmental junction. A heterogeneous aggregation method simplifies the path structure. A method determines global flow-path changes during dynamic simulation and analysis, and identifies corresponding local flow state changes that are effects of global configuration changes. Flow-path determination is triggered by any change in a flow-related device variable in a simulation or analysis. Components (path elements) that may be affected are identified, and flow-related attributes favoring flow in the two possible directions are collected for each of them. Next, flow-related attributes are determined for each affected path element, based on possibly conflicting indications of flow direction. Spurious qualitative ambiguities are minimized by using relative magnitudes and permissible directions of flow, and by favoring flow sources over effort sources when comparing flow tendencies. The results are output to local flow states of affected components.
A model for the progressive failure of laminated composite structural components
NASA Technical Reports Server (NTRS)
Allen, D. H.; Lo, D. C.
1991-01-01
Laminated continuous-fiber polymeric composites are capable of sustaining substantial load-induced microstructural damage prior to component failure. Because this damage eventually leads to catastrophic failure, it is essential to capture the mechanics of progressive damage in any cogent life prediction model. For the past several years the authors have been developing one solution approach to this problem. In this approach the mechanics of matrix cracking and delamination are accounted for via locally averaged internal variables which account for the kinematics of microcracking. Damage progression is predicted by using phenomenologically based damage evolution laws which depend on the load history. The result is a nonlinear and path-dependent constitutive model which has previously been implemented in a finite element computer code for analysis of structural components. Using an appropriate failure model, this algorithm can be used to predict component life. In this paper the model is utilized to demonstrate the ability to predict the load path dependence of the damage and stresses in plates subjected to fatigue loading.
Separation mechanism of nortriptyline and amytriptyline in RPLC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritti, Fabrice; Guiochon, Georges A
2005-08-01
The single-component and competitive equilibrium isotherms of nortriptyline and amytriptyline were acquired by frontal analysis (FA) on a C18-bonded Discovery column, using a 28/72 (v/v) mixture of acetonitrile and water buffered with phosphate (20 mM, pH 2.70). The adsorption energy distributions (AED) of each compound were calculated from the raw adsorption data. Both the fitting of the adsorption data using multi-linear regression analysis and the AEDs are consistent with a trimodal isotherm model. The single-component isotherm data fit well to the tri-Langmuir isotherm model. The extension to a competitive two-component tri-Langmuir isotherm model based on the best parameters of the single-component isotherms accounts well neither for the breakthrough curves nor for the overloaded band profiles measured for mixtures of nortriptyline and amytriptyline. However, it was possible to derive adjusted parameters of a competitive tri-Langmuir model based on the fitting of the adsorption data obtained for these mixtures. A very good agreement was then found between the calculated and the experimental overloaded band profiles of all the mixtures injected.
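A single-component tri-Langmuir fit of the kind described can be sketched with scipy; the site capacities and equilibrium constants below are invented for illustration, not the published column parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_langmuir(C, qs1, b1, qs2, b2, qs3, b3):
    """Three-site Langmuir isotherm: q(C) = sum_i qs_i * b_i*C / (1 + b_i*C)."""
    return (qs1 * b1 * C / (1 + b1 * C)
            + qs2 * b2 * C / (1 + b2 * C)
            + qs3 * b3 * C / (1 + b3 * C))

# synthetic frontal-analysis data generated from assumed parameters
true_params = (120.0, 0.01, 20.0, 0.2, 1.5, 5.0)
C = np.linspace(0.05, 50.0, 60)
noise = 1.0 + 0.005 * np.random.default_rng(3).normal(size=C.size)
q = tri_langmuir(C, *true_params) * noise

popt, _ = curve_fit(tri_langmuir, C, q, p0=(100, 0.02, 10, 0.1, 1, 1),
                    bounds=(0, np.inf), maxfev=20_000)
pred = tri_langmuir(C, *popt)
print("max relative misfit:", np.max(np.abs(pred - q) / q))
```

As the abstract illustrates, a good curve-level fit does not guarantee that the fitted site parameters transfer to competitive (mixture) conditions.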
Research on distributed heterogeneous data PCA algorithm based on cloud platform
NASA Astrophysics Data System (ADS)
Zhang, Jin; Huang, Gang
2018-05-01
Principal component analysis (PCA) of heterogeneous data sets can address the limited scalability of centralized data processing. In order to reduce the generation of intermediate data and error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets under a cloud platform is proposed. The algorithm performs eigenvalue processing by using Householder tridiagonalization and QR factorization to calculate the error component of the heterogeneous database associated with the public key, obtaining the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
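The core idea, pooling per-site sufficient statistics and then solving a symmetric eigenproblem (which LAPACK itself performs via Householder tridiagonalization), can be sketched as follows; the two-site setup and data are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
# two "sites" hold disjoint row blocks over the same 8 features; each
# shares only aggregate statistics, never its raw records
X1 = rng.normal(size=(300, 8))
X2 = rng.normal(size=(200, 8))

def local_stats(X):
    return X.shape[0], X.sum(axis=0), X.T @ X

n1, s1, G1 = local_stats(X1)
n2, s2, G2 = local_stats(X2)
n, s = n1 + n2, s1 + s2
mean = s / n
# exact pooled covariance from the merged sufficient statistics
cov = (G1 + G2 - np.outer(s, mean) - np.outer(mean, s)
       + n * np.outer(mean, mean)) / (n - 1)

# np.linalg.eigh reduces the symmetric matrix by Householder
# tridiagonalization internally before extracting eigenpairs
evals, evecs = np.linalg.eigh(cov)

cov_central = np.cov(np.vstack([X1, X2]).T)   # centralized reference
print("matches centralized PCA:", np.allclose(cov, cov_central))
```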
NASA Astrophysics Data System (ADS)
Rui, Zhenhua
This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have differing degrees of impact on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs in different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except in the case of material costs.
Overall average overrun rates for compressor station material, labor, miscellaneous, land, and total costs are 3%, 60%, 2%, -14%, and 11%, respectively, and cost overruns for cost components are influenced by location and year of completion to different degrees. Monte Carlo models are developed and simulated to evaluate the feasibility of an Alaska in-state gas pipeline by assigning triangular distributions to the values of economic parameters. Simulated results show that the construction of an Alaska in-state natural gas pipeline is feasible in all three scenarios: 500 million cubic feet per day (mmcfd), 750 mmcfd, and 1000 mmcfd.
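The Monte Carlo approach with triangular input distributions can be sketched as below; every numeric input (cost range, tariff, throughput, discount rate) is an illustrative assumption, not one of the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

# triangular(min, mode, max) draws for key economic inputs; all numbers
# here are hypothetical stand-ins, not values from the study
capex  = rng.triangular(6e9, 7.5e9, 10e9, N)    # construction cost, $
tariff = rng.triangular(2.0, 2.8, 3.6, N)       # transport tariff, $/mcf
volume_mcfd = 750_000.0                         # 750 mmcfd scenario
years, rate = 25, 0.08

annual_rev = tariff * volume_mcfd * 365.0       # revenue, $/yr
annuity = (1 - (1 + rate) ** -years) / rate     # present-value factor
npv = annual_rev * annuity - capex              # one NPV per simulated draw

p_positive = float(np.mean(npv > 0))
print(f"P(NPV > 0) = {p_positive:.2f}")
```

The feasibility question then reduces to whether the simulated probability of a positive NPV clears a chosen decision threshold.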
NASA Astrophysics Data System (ADS)
Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng
2016-11-01
Properly simulating hydrological processes in semi-arid areas remains challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel, rather than in series. In addition, the results show that parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process over semi-arid catchments is infiltration-excess surface runoff, a nonlinear component.
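The infiltration-excess mechanism identified as the key nonlinear component can be sketched with a simple Hortonian capacity curve; the decay parameters and storm series below are hypothetical, not taken from the study's catchments.

```python
import numpy as np

def infiltration_excess(rain, f0=20.0, fc=4.0, k=0.5, dt=1.0):
    """Hortonian infiltration-excess runoff: infiltration capacity decays
    from f0 to fc (mm/h) as f(t) = fc + (f0 - fc)*exp(-k*t), and runoff is
    the nonlinear excess max(0, rain - capacity)."""
    t = np.arange(len(rain)) * dt
    capacity = fc + (f0 - fc) * np.exp(-k * t)
    return np.maximum(0.0, rain - capacity)

storm = np.array([2.0, 8.0, 25.0, 30.0, 12.0, 3.0])   # mm/h, hypothetical
runoff = infiltration_excess(storm)
print(np.round(runoff, 2))
```

The max(0, ·) clipping is exactly the kind of nonlinearity the study finds indispensable: light rain produces no runoff at all, while intense rain produces runoff disproportionately.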
Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components
NASA Astrophysics Data System (ADS)
Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.
Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function provided the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. Testing every possible combination of component, level of potential damage, and repair-processing option would be expensive and time consuming, thus prohibiting broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in remanufacturing of high-value, high-demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the development of the CLP models and validation efforts comparing simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and the optimal time to remanufacture, (2) the qualification of used gearbox components for the remanufacturing process, and (3) the performance of the remanufactured component.
NASA Astrophysics Data System (ADS)
Chattopadhyay, Surajit; Chattopadhyay, Goutami
2012-10-01
In the work discussed in this paper we considered total ozone time series over Kolkata (22°34'10.92″N, 88°22'10.92″E), an urban area in eastern India. Using cloud cover, average temperature, and rainfall as the predictors, we developed an artificial neural network, in the form of a multilayer perceptron with sigmoid non-linearity, for prediction of monthly total ozone concentrations from values of the predictors in previous months. We also estimated total ozone from values of the predictors in the same month. Before development of the neural network model we removed multicollinearity by means of principal component analysis. On the basis of the variables extracted by principal component analysis, we developed three artificial neural network models. By rigorous statistical assessment it was found that cloud cover and rainfall can act as good predictors for monthly total ozone when they are considered as the set of input variables for the neural network model constructed in the form of a multilayer perceptron. In general, the artificial neural network has good potential for predicting and estimating monthly total ozone on the basis of the meteorological predictors. It was further observed that during pre-monsoon and winter seasons, the proposed models perform better than during and after the monsoon.
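The modeling chain, PCA to remove multicollinearity followed by a sigmoid multilayer perceptron, can be sketched with scikit-learn on synthetic stand-ins for the three predictors; the data-generating relationships are assumptions for illustration, not the Kolkata observations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 400
# correlated stand-ins for the predictors: cloud cover, temperature, rainfall
cloud = rng.normal(size=n)
temp  = 0.6 * cloud + 0.8 * rng.normal(size=n)
rain  = 0.7 * cloud + 0.7 * rng.normal(size=n)
X = np.column_stack([cloud, temp, rain])
ozone = 280.0 + 15.0 * cloud - 10.0 * rain + 3.0 * rng.normal(size=n)

# PCA decorrelates the predictors first, then a sigmoid MLP does regression
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),
    MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                 solver="lbfgs", max_iter=2000, random_state=0),
)
model.fit(X[:300], ozone[:300])
r2 = model.score(X[300:], ozone[300:])
print(f"held-out R^2 = {r2:.2f}")
```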
NASA Astrophysics Data System (ADS)
Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei
2014-08-01
The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). Smoothing radii of 300, 400, and 500 km are tested in the Gaussian smoothing kernel function to reduce the observation Gaussianity. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two capture the north-south striping and other noise. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH standard deviations for the 12 months of 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that sharp EWH changes in some areas, such as the Amazon, Mekong, and Zambezi basins, have an important global effect.
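The blind-separation step can be illustrated with plain FastICA (a stand-in for the RICA variant the abstract uses) on toy mixtures of a seasonal signal, a striping pattern, and heavy-tailed noise; all signals and the mixing matrix are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0.0, 12.0, 600)                   # pseudo "monthly" axis
seasonal = np.sin(2 * np.pi * t)                  # geophysical-signal proxy
stripes  = np.sign(np.sin(2 * np.pi * 25 * t))    # north-south stripe proxy
noise    = 0.3 * rng.standard_t(df=5, size=t.size)

S = np.column_stack([seasonal, stripes, noise])   # hidden sources
A = rng.normal(size=(3, 3))                       # unknown mixing matrix
X = S @ A.T                                       # "observed" mixtures

ica = FastICA(n_components=3, random_state=0, max_iter=1000)
S_est = ica.fit_transform(X)

# match each estimated component with its best-correlated true source
corr = np.abs(np.corrcoef(S.T, S_est.T)[:3, 3:])
print(np.round(corr.max(axis=1), 2))
```

Keeping only the component matching the geophysical source and discarding the stripe-like components mirrors how the first RICA mode is retained for the EWH estimate.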
Cognitive Task Analysis of Prioritization in Air Traffic Control.
ERIC Educational Resources Information Center
Redding, Richard E.; And Others
A cognitive task analysis was performed to analyze the key cognitive components of the en route air traffic controllers' jobs. The goals were to ascertain expert mental models and decision-making strategies and to identify important differences in controller knowledge, skills, and mental models as a function of expertise. Four groups of…
The Integration of Psycholinguistic and Discourse Processing Theories of Reading Comprehension.
ERIC Educational Resources Information Center
Beebe, Mona J.
To assess the compatibility of miscue analysis and recall analysis as independent elements in a theory of reading comprehension, a study was performed that operationalized each theory and separated its components into measurable units to allow empirical testing. A cueing strategy model was estimated, but the discourse processing model was broken…
New Representation of Bearings in LS-DYNA
NASA Technical Reports Server (NTRS)
Carney, Kelly S.; Howard, Samuel A.; Miller, Brad A.; Benson, David J.
2014-01-01
Non-linear, dynamic, finite element analysis is used in various engineering disciplines to evaluate high-speed, dynamic impact and vibration events. Some of these applications require connecting rotating to stationary components. For example, bird impact on rotating aircraft engine fan blades is a common analysis performed using this type of tool. Traditionally, rotating machines utilize some type of bearing to allow rotation in one degree of freedom while offering constraints in the other degrees of freedom. Most often, bearings are modeled simply as linear springs with rotation. This simplification is not necessarily accurate under high-velocity, high-energy dynamic events such as impact problems. For this reason, it is desirable to utilize the more realistic non-linear force-deflection characteristic of real bearings to model the interaction between rotating and non-rotating components during dynamic events. The present work describes a rolling element bearing model developed for use in non-linear, dynamic finite element analysis. This rolling element bearing model has been implemented in LS-DYNA as a new element, *ELEMENT_BEARING.
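A nonlinear bearing force-deflection characteristic of the kind motivating the new element can be sketched with the Hertzian point-contact law; the stiffness constant below is illustrative and not taken from the LS-DYNA implementation.

```python
import numpy as np

def ball_bearing_force(deflection_m, k=8.0e9):
    """Hertzian point-contact law for a ball bearing, F = k * delta^(3/2);
    k is an illustrative load-deflection constant, not a value from the
    *ELEMENT_BEARING implementation."""
    return k * np.maximum(deflection_m, 0.0) ** 1.5

d = np.array([0.0, 5e-6, 10e-6, 20e-6, 40e-6])   # deflections, m
F = ball_bearing_force(d)
# the tangent stiffness dF/d(delta) grows with load, so a single linear
# spring constant cannot match both light- and heavy-load behavior
stiffness = np.gradient(F, d)
print(np.round(F, 2))
```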
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
Exploring the Factor Structure of Neurocognitive Measures in Older Individuals
Santos, Nadine Correia; Costa, Patrício Soares; Amorim, Liliana; Moreira, Pedro Silva; Cunha, Pedro; Cotter, Jorge; Sousa, Nuno
2015-01-01
Here we focus on factor analysis from a best-practices point of view, investigating the factor structure of neuropsychological tests and using the results obtained to illustrate how to choose a reasonable solution. The sample (n=1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the “best fit” model via confirmatory factor analysis (CFA). For the exploratory step, three extraction methods (maximum likelihood, principal axis factoring and principal components) and two rotation methods (orthogonal and oblique) were used. The analysis methodology allowed exploring how different cognitive/psychological tests correlated with and discriminated between dimensions, indicating that to capture latent structures in similar sample sizes and measures, with approximately normal data distribution, reflective models with oblimin rotation might prove the most adequate. PMID:25880732
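As one illustration of the exploratory step, the number of components underlying a battery of test scores can be screened from the eigenvalues of their correlation matrix. A sketch on synthetic data (the Kaiser eigenvalue-greater-than-one rule used here is only one of several retention criteria, and the two-factor data are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Synthetic "test battery": two latent factors, three observed scores each.
f = rng.normal(size=(n, 2))
scores = np.column_stack([
    f[:, 0] + 0.5 * rng.normal(size=n),
    f[:, 0] + 0.5 * rng.normal(size=n),
    f[:, 0] + 0.5 * rng.normal(size=n),
    f[:, 1] + 0.5 * rng.normal(size=n),
    f[:, 1] + 0.5 * rng.normal(size=n),
    f[:, 1] + 0.5 * rng.normal(size=n),
])

corr = np.corrcoef(scores, rowvar=False)      # 6 x 6 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, descending
n_components = int(np.sum(eigvals > 1.0))     # Kaiser criterion
```

In practice this screen would be compared against scree plots and parallel analysis before committing to a factor count, as the abstract's comparison of extraction and rotation methods suggests.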
Eaton, Jennifer L; Mohr, David C; Hodgson, Michael J; McPhaul, Kathleen M
2018-02-01
To describe the development and validation of the work-related well-being (WRWB) index. Principal components analysis was performed using Federal Employee Viewpoint Survey (FEVS) data (N = 392,752) to extract variables representing worker well-being constructs. Confirmatory factor analysis was performed to verify the factor structure. To validate the WRWB index, we used multiple regression analysis to examine relationships with burnout-associated outcomes. Principal components analysis identified three positive psychology constructs: "Work Positivity", "Co-worker Relationships", and "Work Mastery". An 11-item index explaining 63.5% of the variance was achieved. The structural equation model provided a very good fit to the data. Higher WRWB scores were positively associated with all three employee experience measures examined in the regression models. The new WRWB index shows promise as a valid and widely accessible instrument to assess worker well-being.
NASA Astrophysics Data System (ADS)
Bonavita, M.; Torrisi, L.
2005-03-01
A new data assimilation system has been designed and implemented at the National Center for Aeronautic Meteorology and Climatology of the Italian Air Force (CNMCA) in order to improve its operational numerical weather prediction capabilities and provide more accurate guidance to operational forecasters. The system, which is undergoing testing before operational use, is based on an “observation space” version of the 3D-VAR method for the objective analysis component, and on the High Resolution Regional Model (HRM) of the Deutscher Wetterdienst (DWD) for the prognostic component. Notable features of the system include a completely parallel (MPI+OMP) implementation of the solution of analysis equations by a preconditioned conjugate gradient descent method; correlation functions in spherical geometry with thermal wind constraint between mass and wind field; derivation of the objective analysis parameters from a statistical analysis of the innovation increments.
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model using a time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). The method decomposes the rainfall over the MLYRV into three time-scale components: the interannual component with periods shorter than 8 years, the interdecadal component with periods from 8 to 30 years, and the long-term component with periods longer than 30 years. Predictors are then selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR during the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
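The decomposition step can be sketched with simple centered moving averages: a short window removes the sub-8-year variability, a longer window isolates the slow component, and the differences give the three bands. A sketch on a synthetic series (window lengths only mirror the stated periods; the paper's actual filtering scheme may differ):

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average with edge padding (a crude low-pass filter)."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(xp, kernel, mode="valid")[: len(x)]

rng = np.random.default_rng(1)
years = np.arange(1951, 2015)
rain = (10 * np.sin(2 * np.pi * years / 4.0)      # interannual (~4 yr period)
        + 5 * np.sin(2 * np.pi * years / 16.0)    # interdecadal (~16 yr period)
        + 0.1 * (years - years[0])                # slow change (> 30 yr)
        + rng.normal(0, 1, size=years.size))      # weather noise

low1 = moving_average(rain, 9)       # suppresses periods shorter than ~8 yr
low2 = moving_average(rain, 31)      # suppresses periods shorter than ~30 yr
interannual = rain - low1
interdecadal = low1 - low2
trend = low2
# By construction the three components sum back to the original series,
# so separate regressions on each band can be recombined into one forecast.
```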
Astronomical component estimation (ACE v.1) by time-variant sinusoidal modeling
NASA Astrophysics Data System (ADS)
Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan
2016-09-01
Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on (fast) Fourier transformation. This technique has no unique solution separating variations in amplitude and frequency. This characteristic can make it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. This drawback is circumvented by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach was proven useful to characterize audio signals (music and speech), which are non-stationary in nature. Paleoclimate proxy signals and audio signals share similar dynamics; the only difference is the frequency relationship between the different components. A harmonic-frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, this difference is irrelevant for the problem of separating simultaneous changes in amplitude and frequency. Using an approach with overlapping analysis frames, the model (Astronomical Component Estimation, version 1: ACE v.1) captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency, with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretations, whereas the latter are estimated by means of linear least-squares. As output, the model provides the orbital component waveform, either in the depth or time domain. Uncertainty analyses of the model estimates are performed using Monte Carlo simulations. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. 
Frequency modulation patterns reconstruct changes in accumulation rate, whereas amplitude modulation identifies eccentricity-modulated precession. The functioning of the time-variant sinusoidal model is illustrated and validated using a synthetic insolation signal. The new modeling approach is tested on two case studies: (1) a Pliocene-Pleistocene benthic δ18O record from Ocean Drilling Program (ODP) Site 846 and (2) a Danian magnetic susceptibility record from the Contessa Highway section, Gubbio, Italy.
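The core ACE estimation step, modulating a stationary sinusoid at a chosen mean frequency with a polynomial fitted by linear least squares, can be sketched for a single analysis frame (the overlapping frames and the geologic choice of f0 are omitted; the synthetic signal is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 1000)
f0 = 1.0                                   # mean frequency of the target component
true_amp = 1.0 + 0.05 * t                  # slowly varying amplitude
signal = true_amp * np.cos(2 * np.pi * f0 * t + 0.3) + 0.05 * rng.normal(size=t.size)

# Design matrix: t**k * cos and t**k * sin columns (polynomial order 2),
# so the fit is linear in the polynomial coefficients.
order = 2
cols = []
for k in range(order + 1):
    cols.append(t ** k * np.cos(2 * np.pi * f0 * t))
    cols.append(t ** k * np.sin(2 * np.pi * f0 * t))
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)

# Instantaneous amplitude from the two polynomial envelopes.
pc = sum(coef[2 * k] * t ** k for k in range(order + 1))
ps = sum(coef[2 * k + 1] * t ** k for k in range(order + 1))
inst_amp = np.hypot(pc, ps)
```

The same two envelopes also yield an instantaneous phase (arctan2 of the sine and cosine polynomials), whose derivative gives the instantaneous frequency deviation from f0.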
Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components
NASA Technical Reports Server (NTRS)
1999-01-01
Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainties or randomness also occur in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
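The basic idea of propagating input randomness to a probabilistic response can be sketched with a Monte Carlo sample of a cantilever tip deflection, δ = PL³/(3EI). The distributions, geometry, and the 1 mm limit below are illustrative, not taken from the PSAM codes:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Random inputs: load and elastic modulus scatter about their nominal values.
P = rng.normal(1000.0, 100.0, n)          # tip load [N], 10% scatter
E = rng.normal(200e9, 10e9, n)            # modulus of elasticity [Pa], 5% scatter
L = 1.0                                   # beam length [m]
I = 2.0e-6                                # second moment of area [m^4]

tip_deflection = P * L**3 / (3.0 * E * I)     # response sample [m]

mean_defl = tip_deflection.mean()
p_exceed = np.mean(tip_deflection > 1.0e-3)   # P(deflection > 1 mm)
```

The same sampling loop, with the closed-form beam formula replaced by a finite element solve, is the simplest (if expensive) route to the response distributions and reliabilities the abstract describes.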
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oang, Key Young; Yang, Cheolhee; Muniyappan, Srinivasan
Determination of the optimum kinetic model is an essential prerequisite for characterizing the dynamics and mechanism of a reaction. Here, we propose a simple method, termed singular value decomposition-aided pseudo principal-component analysis (SAPPA), to facilitate determination of the optimum kinetic model from time-resolved data by bypassing any need to examine candidate kinetic models. We demonstrate the wide applicability of SAPPA by examining three different sets of experimental time-resolved data and show that SAPPA can efficiently determine the optimum kinetic model. In addition, the results of SAPPA for both time-resolved X-ray solution scattering (TRXSS) and transient absorption (TA) data of the same protein reveal that global structural changes of the protein, which are probed by TRXSS, may occur more slowly than local structural changes around the chromophore, which are probed by TA spectroscopy.
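The SVD step underlying such analyses can be sketched with numpy: singular values well above the noise floor count the kinetic components, and the corresponding singular vectors carry the spectra and kinetics. Two-exponential toy data are used here; SAPPA's pseudo-PCA step itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
times = np.linspace(0.0, 10.0, 200)           # pump-probe delay times
q = np.linspace(0.1, 8.0, 100)                # spectral / scattering axis

# Two kinetic intermediates with distinct spectra and decay constants.
spec1 = np.exp(-((q - 2.0) ** 2))
spec2 = np.exp(-((q - 5.0) ** 2))
kin1 = np.exp(-times / 1.0)
kin2 = np.exp(-times / 5.0)
data = np.outer(spec1, kin1) + np.outer(spec2, kin2)
data += 0.01 * rng.normal(size=data.shape)    # measurement noise

U, s, Vt = np.linalg.svd(data, full_matrices=False)
# Significant components stand far above the noise floor of singular values
# (the 5x-median threshold is an ad hoc heuristic for this sketch).
n_significant = int(np.sum(s > 5.0 * np.median(s)))
```

The retained rows of Vt are the basis time traces that a candidate kinetic model would then have to reproduce.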
Developing a framework for transferring knowledge into action: a thematic analysis of the literature
Ward, Vicky; House, Allan; Hamer, Susan
2010-01-01
Objectives Although there is widespread agreement about the importance of transferring knowledge into action, we still lack high quality information about what works, in which settings and with whom. Whilst there are a large number of models and theories for knowledge transfer interventions, they are largely untested, meaning that their applicability and relevance are largely unknown. This paper describes the development of a conceptual framework of translating knowledge into action and discusses how it can be used for developing a useful model of the knowledge transfer process. Methods A narrative review of the knowledge transfer literature identified 28 different models which explained all or part of the knowledge transfer process. The models were subjected to a thematic analysis to identify individual components and the types of processes used when transferring knowledge into action. The results were used to build a conceptual framework of the process. Results Five common components of the knowledge transfer process were identified: problem identification and communication; knowledge/research development and selection; analysis of context; knowledge transfer activities or interventions; and knowledge/research utilization. We also identified three types of knowledge transfer processes: a linear process; a cyclical process; and a dynamic multidirectional process. From these results a conceptual framework of knowledge transfer was developed. The framework illustrates the five common components of the knowledge transfer process and shows that they are connected via a complex, multidirectional set of interactions. As such the framework allows for the individual components to occur simultaneously or in any given order and to occur more than once during the knowledge transfer process. Conclusion Our framework provides a foundation for gathering evidence from case studies of knowledge transfer interventions.
We propose that future empirical work be designed to test and refine the relative importance and applicability of each of the components in order to build more useful models of knowledge transfer which can serve as a practical checklist for planning or evaluating knowledge transfer activities. PMID:19541874
NASA Technical Reports Server (NTRS)
Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.
1974-01-01
The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 x 10^6 per MAC is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
Principal component analysis for fermionic critical points
NASA Astrophysics Data System (ADS)
Costa, Natanael C.; Hu, Wenjian; Bai, Z. J.; Scalettar, Richard T.; Singh, Rajiv R. P.
2017-11-01
We use determinant quantum Monte Carlo (DQMC), in combination with the principal component analysis (PCA) approach to unsupervised learning, to extract information about phase transitions in several of the most fundamental Hamiltonians describing strongly correlated materials. We first explore the zero-temperature antiferromagnet to singlet transition in the periodic Anderson model, the Mott insulating transition in the Hubbard model on a honeycomb lattice, and the magnetic transition in the 1/6-filled Lieb lattice. We then discuss the prospects for learning finite temperature superconducting transitions in the attractive Hubbard model, for which there is no sign problem. Finally, we investigate finite temperature charge density wave (CDW) transitions in the Holstein model, where the electrons are coupled to phonon degrees of freedom, and carry out a finite size scaling analysis to determine Tc. We examine the different behaviors associated with Hubbard-Stratonovich auxiliary field configurations on both the entire space-time lattice and on a single imaginary time slice, or other quantities, such as equal-time Green's and pair-pair correlation functions.
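The essence of the PCA approach to phase identification can be sketched on toy spin snapshots: ordered and disordered configurations separate along the first principal component. Simple ±1 variables stand in for the Hubbard-Stratonovich field configurations; no DQMC is performed here:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites, n_configs = 100, 200

# "Low-temperature" configurations: mostly aligned spins (either overall sign).
signs = rng.choice([-1, 1], size=n_configs // 2)
ordered = signs[:, None] * np.where(rng.random((n_configs // 2, n_sites)) < 0.9, 1, -1)
# "High-temperature" configurations: uncorrelated random spins.
disordered = rng.choice([-1, 1], size=(n_configs // 2, n_sites))

X = np.vstack([ordered, disordered]).astype(float)
X -= X.mean(axis=0)                          # center each site variable
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]                              # projection on first principal component

# Ordered snapshots sit at large |pc1|; disordered ones cluster near zero.
sep = np.abs(pc1[: n_configs // 2]).mean() / np.abs(pc1[n_configs // 2:]).mean()
```

In the quantum Monte Carlo setting, tracking how this separation (or the leading PCA eigenvalue) changes with temperature or coupling is what signals the transition, with finite size scaling used to locate Tc.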
Coping with Trial-to-Trial Variability of Event Related Signals: A Bayesian Inference Approach
NASA Technical Reports Server (NTRS)
Ding, Mingzhou; Chen, Youghong; Knuth, Kevin H.; Bressler, Steven L.; Schroeder, Charles E.
2005-01-01
In electro-neurophysiology, single-trial brain responses to a sensory stimulus or a motor act are commonly assumed to result from the linear superposition of a stereotypic event-related signal (e.g. the event-related potential or ERP) that is invariant across trials and some ongoing brain activity often referred to as noise. To extract the signal, one performs an ensemble average of the brain responses over many identical trials to attenuate the noise. To date, this simple signal-plus-noise (SPN) model has been the dominant approach in cognitive neuroscience. Mounting empirical evidence has shown that the assumptions underlying this model may be overly simplistic. More realistic models have been proposed that account for the trial-to-trial variability of the event-related signal as well as the possibility of multiple differentially varying components within a given ERP waveform. The variable-signal-plus-noise (VSPN) model, which has been demonstrated to provide the foundation for separation and characterization of multiple differentially varying components, has the potential to provide a rich source of information for questions related to neural functions that complement the SPN model. Thus, being able to estimate the amplitude and latency of each ERP component on a trial-by-trial basis provides a critical link between the perceived benefits of the VSPN model and its many concrete applications. In this paper we describe a Bayesian approach to deal with this issue; the resulting strategy is referred to as differentially Variable Component Analysis (dVCA). We compare the performance of dVCA on simulated data with Independent Component Analysis (ICA) and analyze neurobiological recordings from monkeys performing cognitive tasks.
An integrated framework for modeling freight mode and route choice.
DOT National Transportation Integrated Search
2013-10-01
A number of statewide travel demand models have included freight as a separate component in analysis. Unlike passenger travel, freight has not gained equivalent attention because of a lack of data and difficulties in modeling. In the current state ...
NASA Astrophysics Data System (ADS)
Golenko, Mariya; Golenko, Nikolay
2014-05-01
Numerical modeling of the spatial structure of currents in some regions of the Baltic Sea is performed on the basis of the Princeton Ocean Model (POM). The calculations were performed under westerly (most frequent in the Baltic) and north-easterly wind forcing. In the regions adjacent to the Kaliningrad Region, Polish, and Lithuanian coasts these winds generate oppositely directed geostrophic, drift, and other types of currents; on the whole, these processes can be considered as downwelling and upwelling. Apart from the regions mentioned above, the Slupsk Furrow region, which determines the mass and momentum exchange between the Western and Central Baltic, is also considered. In the analysis of the currents, not only the full model velocity but also the components directed along and across the barotropic geostrophic current velocity are considered. The along-geostrophic component is in turn separated into the geostrophic current itself and an ageostrophic part; the across-geostrophic component is entirely ageostrophic. The velocity components directed along and across the geostrophic current approximately describe the components directed along the coast (along isobaths) and from the coast towards the open sea. The suggested approach makes it possible to present the spatial structure of the currents typical of different wind forcings as two maps, showing the components directed along and across the barotropic geostrophic current velocity. On these maps the areas of intensive alongshore currents are clearly depicted (e.g., near the base of the Hel Spit and in the region of the Slupsk Sill). The combined analysis of the vectors of the full and geostrophic velocities reveals the areas where the geostrophic component is significantly strengthened or weakened by the ageostrophic component.
Under the westerly wind such current features are clearly observed near the end of the Hel Spit and at the southern border of the Slupsk Sill; under the north-easterly wind, near the base of the Hel Spit, at the southern border of the Slupsk Furrow, and near the Curonian Spit (where the bottom relief bends). The maps of the across-shore velocities discriminate the areas where mass and momentum are transported from the shore to the open sea in the surface layer, and vice versa. The analysis also reveals the areas where sharp changes of the different velocity components are expected as the wind changes, as well as the areas where such changes are expected to be minimal. The model is validated using field surveys of current velocities by ADCP in the area adjacent to the Kaliningrad Region, and the comparison of current velocities shows a close correspondence. Over a rather wide area the directions and amplitudes of the modeled and ADCP-measured surface velocities are close, which is additionally confirmed by a comparison of the local vorticity distributions. On vertical transects of the ADCP current velocity directed across the shoreline the geostrophic jet is clearly pronounced, and its horizontal and vertical scales correspond closely to those of the modeled jet. More detailed calculations, which the model allows, show that the geostrophic currents amount on average to 40-60% of the full velocity; the two components of the ageostrophic velocity directed along and across the geostrophic velocity are highly variable (from 10 to 60% of the full velocity). The ageostrophic component directed along the geostrophic current generally strengthens it (by 20-40% on average, and by up to 60-70% near the end of the Hel Spit), but in some regions, for example the Slupsk Furrow, the ageostrophic component slows the geostrophic current down (by 30-40%).
In some narrow local areas immediately adjacent to the coast, currents directed opposite to the general quasi-geostrophic jet were registered in both the field and model data. Before the comparison with the field data, these local jets seen in the model output had been considered improbable. As a result, the comparative analysis of the field and model data led to a more detailed understanding of the dynamic processes in some coastal parts of the Baltic Sea.
Demonstration of a Safety Analysis on a Complex System
NASA Technical Reports Server (NTRS)
Leveson, Nancy; Alfaro, Liliana; Alvarado, Christine; Brown, Molly; Hunt, Earl B.; Jaffe, Matt; Joslyn, Susan; Pinnell, Denise; Reese, Jon; Samarziya, Jeffrey;
1997-01-01
For the past 17 years, Professor Leveson and her graduate students have been developing a theoretical foundation for safety in complex systems and building a methodology upon that foundation. The methodology includes special management structures and procedures, system hazard analyses, software hazard analysis, requirements modeling and analysis for completeness and safety, special software design techniques including the design of human-machine interaction, verification, operational feedback, and change analysis. The Safeware methodology is based on system safety techniques that are extended to deal with software and human error. Automation is used to enhance our ability to cope with complex systems. Identification, classification, and evaluation of hazards are done using modeling and analysis. To be effective, the models and analysis tools must consider the hardware, software, and human components in these systems. They also need to include a variety of analysis techniques and orthogonal approaches: there exists no single safety analysis or evaluation technique that can handle all aspects of complex systems. Applying only one or two may make us feel satisfied, but will produce limited results. We report here on a demonstration, performed as part of a contract with NASA Langley Research Center, of the Safeware methodology on the Center-TRACON Automation System (CTAS) portion of the air traffic control (ATC) system and procedures currently employed at the Dallas/Fort Worth (DFW) TRACON (Terminal Radar Approach CONtrol). CTAS is an automated system to assist controllers in handling arrival traffic in the DFW area. Safety is a system property, not a component property, so our safety analysis considers the entire system and not simply the automated components. Because safety analysis of a complex system is an interdisciplinary effort, our team included system engineers, software engineers, human factors experts, and cognitive psychologists.
An integrated weather and sea-state forecasting system for the Arabian Peninsula (WASSF)
NASA Astrophysics Data System (ADS)
Kallos, George; Galanis, George; Spyrou, Christos; Mitsakou, Christina; Solomos, Stavros; Bartsotas, Nikolaos; Kalogrei, Christina; Athanaselis, Ioannis; Sofianos, Sarantis; Vervatis, Vassios; Axaopoulos, Panagiotis; Papapostolou, Alexandros; Qahtani, Jumaan Al; Alaa, Elyas; Alexiou, Ioannis; Beard, Daniel
2013-04-01
Nowadays, large industrial conglomerates such as Saudi ARAMCO require a series of weather and sea-state forecasting products that cannot be found in state meteorological offices or even from commercial data providers. The two major objectives of the system are prevention and mitigation of environmental problems, and early warning of local conditions associated with extreme weather events. The management and operations part is related to early warning of weather and sea-state events that affect operations of various facilities. The environmental part is related to air quality and especially the desert dust levels in the atmosphere. The components of the integrated system include: (i) a weather and desert dust prediction system with a forecasting horizon of 5 days; (ii) a wave analysis and prediction component for the Red Sea and Arabian Gulf; (iii) an ocean circulation and tidal analysis and prediction component for both the Red Sea and Arabian Gulf; and (iv) an aviation part specializing in the vertical structure of the atmosphere and extreme events that affect air transport and other operations. Specialized data sets required for on/offshore operations are provided on a regular basis. State-of-the-art modeling components are integrated into a unique system that distributes the produced analyses and forecasts to each department. The weather and dust prediction system is SKIRON/Dust; the wave analysis and prediction system is based on the WAM cycle 4 model from ECMWF; the ocean circulation model is MICOM, while the tidal analysis and prediction is a development of the Ocean Physics and Modeling Group of the University of Athens, incorporating the Tidal Model Driver. A nowcasting subsystem is included. An interactive system based on Google Maps gives the capability to extract and display the necessary information for any location of the Arabian Peninsula, the Red Sea, and the Arabian Gulf.
Global model of zenith tropospheric delay proposed based on EOF analysis
NASA Astrophysics Data System (ADS)
Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng
2017-07-01
Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models which do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on the empirical orthogonal function (EOF; essentially PCA applied to spatiotemporal fields) analysis method and the Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the characteristics of the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry; and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components; and P4 displays semiannual variation components. We validated the proposed GEOFT model using the GGOS ZTD grid data and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that the use of the EOF analysis method to characterize ZTD variation is reasonable.
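EOF analysis of a gridded ZTD field reduces to an SVD of the space-time anomaly matrix: the left singular vectors give the spatial base functions Ek, and their projections give the coefficient series Pk. A sketch on a synthetic field (the GGOS data handling and the GEOFT model fitting are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)
n_grid, n_epochs = 500, 365
t = np.arange(n_epochs)

# Synthetic ZTD field: one dominant annual mode plus noise.
pattern = rng.normal(size=n_grid)                   # spatial pattern
annual = np.cos(2 * np.pi * t / 365.25)             # annual coefficient series
field = np.outer(pattern, 2.0 * annual) + 0.1 * rng.normal(size=(n_grid, n_epochs))

anomaly = field - field.mean(axis=1, keepdims=True) # remove per-grid-point mean
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

E1 = U[:, 0]                          # leading spatial base function
P1 = s[0] * Vt[0]                     # its coefficient time series
var_explained = s[0] ** 2 / np.sum(s ** 2)
```

An empirical model like GEOFT then fits smooth analytic expressions (e.g. annual and semiannual harmonics) to the leading Pk series so the field can be evaluated without stored data.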
NASA Technical Reports Server (NTRS)
Kvaternik, R. G.
1976-01-01
The manner of representing a flight vehicle structure as an assembly of beam, spring, and rigid-body components for vibration analysis is described. The development is couched in terms of a substructures methodology which is based on the finite-element stiffness method. The particular manner of employing beam, spring, and rigid-body components to model such items as wing structures, external stores, pylons supporting engines or external stores, and sprung masses associated with launch vehicle fuel slosh is described by means of several simple qualitative examples. A detailed numerical example consisting of a tilt-rotor VTOL aircraft is included to provide a unified illustration of the procedure for representing a structure as an equivalent system of beams, springs, and rigid bodies, the manner of forming the substructure mass and stiffness matrices, and the mechanics of writing the equations of constraint which enforce deflection compatibility at the junctions of the substructures. Since many structures, or selected components of structures, can be represented in this manner for vibration analysis, the modeling concepts described and their application in the numerical example shown should prove generally useful to the dynamicist.
Towards the generation of a parametric foot model using principal component analysis: A pilot study.
Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan
2016-06-01
There have been many recent developments in patient-specific models, given their potential to provide more information on human pathophysiology and the increase in available computational power. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of a parametric foot model for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features of the foot and evaluate differences in shape between healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
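The parametric-model idea corresponds to a statistical shape model: stack each subject's landmark coordinates into a vector, run PCA on the aligned set, and generate new instances as the mean shape plus weighted modes. A toy 2-D sketch (the study's dense foot geometries are replaced here by random landmarks, and alignment is assumed already done):

```python
import numpy as np

rng = np.random.default_rng(6)
n_subjects, n_landmarks = 30, 20

# Toy training set: a template outline deformed along one dominant mode.
template = rng.normal(size=2 * n_landmarks)         # (x, y) pairs, flattened
mode = rng.normal(size=2 * n_landmarks)
mode /= np.linalg.norm(mode)
b_true = rng.normal(0.0, 3.0, size=n_subjects)      # per-subject mode weight
shapes = (template + b_true[:, None] * mode
          + 0.05 * rng.normal(size=(n_subjects, 2 * n_landmarks)))

mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
modes = Vt                                  # principal deformation modes
stddevs = s / np.sqrt(n_subjects - 1)       # per-mode standard deviations

def generate(b):
    """New shape instance from mode weights b (in standard deviations)."""
    b = np.asarray(b, dtype=float)
    k = b.size
    return mean_shape + (b * stddevs[:k]) @ modes[:k]

new_foot = generate([2.0])                  # +2 SD along the dominant mode
```

Generated instances can then seed mesh morphing, which is how such models shortcut the per-patient meshing bottleneck the abstract mentions.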
Chen, Ping; Harrington, Peter B
2008-02-01
A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined in a butterfly representation (i.e., negative spectra are reported with negative drift times and reflected with respect to the ordinate, juxtaposed with the positive ion mobility spectra). Temperature-constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that classification by neural networks using the SIMPLISMA components was statistically better than by neural network models trained with the fused ion mobility spectra (IMS).
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling errors and improving accuracy at the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis, and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the optimization objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Rank estimation and the multivariate analysis of in vivo fast-scan cyclic voltammetric data
Keithley, Richard B.; Carelli, Regina M.; Wightman, R. Mark
2010-01-01
Principal component regression has been used in the past to separate current contributions from different neuromodulators measured with in vivo fast-scan cyclic voltammetry. Traditionally, a percent cumulative variance approach has been used to determine the rank of the training-set voltammetric matrix during model development; however, this approach suffers from several disadvantages, including the use of arbitrary percentages and the requirement of extreme precision of training sets. Here we propose that Malinowski's F-test, a method based on a statistical analysis of the variance contained within the training set, can be used to improve factor selection for the analysis of in vivo fast-scan cyclic voltammetric data. These two methods of rank estimation were compared at all steps in the calibration protocol, including the number of principal components retained, overall noise levels, model validation as determined using a residual analysis procedure, and predicted concentration information. By analyzing 119 training sets from two different laboratories amassed over several years, we were able to gain insight into the heterogeneity of in vivo fast-scan cyclic voltammetric data and study how differences in factor selection propagate throughout the entire principal component regression analysis procedure. Visualizing cyclic voltammetric representations of the data contained in the retained and discarded principal components showed that using Malinowski's F-test for rank estimation of in vivo training sets allowed noise to be removed more accurately. Malinowski's F-test also improved the robustness of our criterion for judging multivariate model validity, even though signal-to-noise ratios of the data varied. In addition, pH change was the dominant noise carrier in in vivo training sets, while dopamine prediction was more sensitive to noise. PMID:20527815
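Malinowski's F-test compares each reduced eigenvalue against the pool of smaller ones and retains a factor only while its F statistic is significant. A minimal sketch, assuming the common form of the test, REV_j = λ_j / ((r−j+1)(c−j+1)) for an r×c matrix, with standard F(1, df) upper 5% critical values hard-coded; this is an illustration, not any laboratory's implementation:

```python
import numpy as np

def malinowski_rank(D):
    """Estimate the rank of an r x c data matrix D (r >= c) by Malinowski's
    F-test on reduced eigenvalues (REVs), at a fixed 5% significance level."""
    r, c = D.shape
    ev = np.linalg.svd(D, compute_uv=False) ** 2        # eigenvalues of D^T D
    j = np.arange(1, c + 1)
    rev = ev / ((r - j + 1) * (c - j + 1))              # reduced eigenvalues
    # standard upper 5% critical values of F(1, df)
    F_CRIT = {1: 161.45, 2: 18.51, 3: 10.13, 4: 7.71, 5: 6.61, 6: 5.99, 7: 5.59}
    rank = 0
    for n in range(1, c):                               # test factors sequentially
        pooled = rev[n:].sum() / (c - n)                # mean REV of the noise pool
        F = rev[n - 1] / pooled
        if F > F_CRIT[c - n]:                           # factor n is significant
            rank = n
        else:
            break
    return rank, rev

# Synthetic training set: rank-2 signal plus weak white noise
rng = np.random.default_rng(0)
D = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8)) * 10 \
    + rng.normal(scale=1e-3, size=(200, 8))
est, revs = malinowski_rank(D)
```

For white noise the REVs are approximately equal, so pooling the trailing REVs gives the noise baseline against which each candidate factor is tested.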
Standard surface-reflectance model and illuminant estimation
NASA Technical Reports Server (NTRS)
Tominaga, Shoji; Wandell, Brian A.
1989-01-01
A vector analysis technique was adopted to test the standard reflectance model. A computational model was developed to determine the components of the observed spectra and an estimate of the illuminant was obtained without using a reference white standard. The accuracy of the standard model is evaluated.
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually flattened into a one-dimensional array, causing issues such as the curse of dimensionality and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients acts as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. Two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment classifying eight hand motions using four-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, illustrating the efficiency and effectiveness of the proposed method for biomedical signal analysis.
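The two-directional two-dimensional PCA step operates directly on matrices, projecting each one from both sides rather than vectorizing it. A minimal sketch under the usual 2D²PCA formulation; the multi-scale matrices here are random placeholders, not EMG-derived:

```python
import numpy as np

def two_directional_2dpca(mats, kr, kc):
    """Two-directional 2D PCA: project each matrix A as Z^T A X, where X and Z
    hold leading eigenvectors of the column- and row-wise image covariance
    matrices (a sketch of the 2D^2PCA step, not the full SW2D2PCA pipeline)."""
    A = np.stack(mats).astype(float)               # (N, rows, cols)
    C = A - A.mean(axis=0)
    Gcol = np.einsum('nij,nik->jk', C, C)          # sum_i (A_i - Abar)^T (A_i - Abar)
    Grow = np.einsum('nij,nkj->ik', C, C)          # sum_i (A_i - Abar)(A_i - Abar)^T
    _, Vc = np.linalg.eigh(Gcol)
    _, Vr = np.linalg.eigh(Grow)
    X = Vc[:, ::-1][:, :kc]                        # top-kc right projection vectors
    Z = Vr[:, ::-1][:, :kr]                        # top-kr left projection vectors
    feats = np.einsum('ri,nrc,cj->nij', Z, A, X)   # (N, kr, kc) feature matrices
    return feats, Z, X

rng = np.random.default_rng(1)
mats = rng.normal(size=(20, 16, 12))               # stand-ins for multi-scale matrices
feats, Z, X = two_directional_2dpca(mats, kr=4, kc=3)
```

The feature matrices are kr × kc instead of rows × cols, which is exactly the dimension reduction the abstract describes.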
A quantitative analysis of the F18 flight control system
NASA Technical Reports Server (NTRS)
Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann
1993-01-01
This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
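The effect of imperfect coverage on reliability can be illustrated with a textbook two-unit redundant system, in which an undetected ("uncovered") first failure brings the whole system down. This is a generic illustration of why coverage matters, not the F18 FCS model itself, and the failure rate is an arbitrary round number:

```python
import math

def duplex_reliability(lmbda, t, c):
    """Mission reliability of a two-unit active-redundant system whose first
    failure is detected and handled ("covered") with probability c:
    R(t) = exp(-2*lambda*t) + 2*c*exp(-lambda*t)*(1 - exp(-lambda*t))."""
    return math.exp(-2 * lmbda * t) + 2 * c * math.exp(-lmbda * t) * (1 - math.exp(-lmbda * t))

lam, t = 1e-4, 1000.0                   # illustrative failure rate and mission time
r_perfect = duplex_reliability(lam, t, 1.0)    # ideal fault coverage
r_partial = duplex_reliability(lam, t, 0.99)   # 99% coverage
```

Even 99% coverage measurably reduces mission reliability relative to the ideal case, which is the qualitative conclusion the paper reaches for the F18 FCS.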
Management Accounting in School Food Service.
ERIC Educational Resources Information Center
Bryan, E. Lewis; Friedlob, G. Thomas
1982-01-01
Describes a model for establishing control of school food services through analysis of the aggregate variances of quantity, collection, and price, and of their separate components. The separable component variances are identified, measured, and compared monthly to help supervisors identify exactly where plans and operations vary. (Author/MLF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mian, Muhammad Umer, E-mail: umermian@gmail.com; Khir, M. H. Md.; Tang, T. B.
Pre-fabrication behavioral and performance analysis with computer-aided design (CAD) tools is a common and cost-effective practice. In light of this, we present a simulation methodology for a dual-mass-oscillator-based 3 degree of freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped-parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components that are counterparts of the respective mechanical components used to design and fabricate the 3-DoF MEMS gyroscope. The complete equivalent circuit model, mathematical modeling and simulation are presented in this paper. The behavior of the equivalent lumped models derived for the proposed device design is simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. The drive mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz, respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be a time-efficient approach for the analysis of complex MEMS devices such as 3-DoF gyroscopes, and an alternative to the complex and time-consuming coupled-field finite element analysis (FEA) used previously.
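The analytical check against which the equivalent-circuit simulation is compared follows from the lumped spring-mass parameters, f = √(k/m)/(2π). The stiffness and mass below are hypothetical round numbers chosen to land near the reported 1.59 kHz drive-mode frequency; they are not the fabricated device's parameters:

```python
import math

def resonant_frequency_hz(k, m):
    """Undamped natural frequency f = sqrt(k/m) / (2*pi) of a lumped
    spring-mass element, the quantity an equivalent-circuit (T-SPICE)
    drive-mode simulation resolves."""
    return math.sqrt(k / m) / (2.0 * math.pi)

k_drive = 200.0     # N/m, hypothetical equivalent stiffness
m_drive = 2.0e-6    # kg, hypothetical drive proof mass
f_drive = resonant_frequency_hz(k_drive, m_drive)   # ~1.59 kHz
```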
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
NASA Astrophysics Data System (ADS)
Miller, Shelly L.; Anderson, Melissa J.; Daly, Eileen P.; Milford, Jana B.
Four receptor-oriented source apportionment models were evaluated by applying them to simulated personal exposure data for select volatile organic compounds (VOCs) that were generated by Monte Carlo sampling from known source contributions and profiles. The exposure sources modeled are environmental tobacco smoke, paint emissions, cleaning and/or pesticide products, gasoline vapors, automobile exhaust, and wastewater treatment plant emissions. The receptor models analyzed are chemical mass balance, principal component analysis/absolute principal component scores, positive matrix factorization (PMF), and graphical ratio analysis for composition estimates/source apportionment by factors with explicit restriction, incorporated in the UNMIX model. All models identified only the major contributors to total exposure concentrations. PMF extracted factor profiles that most closely represented the major sources used to generate the simulated data. None of the models were able to distinguish between sources with similar chemical profiles. Sources that contributed <5% to the average total VOC exposure were not identified.
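The chemical mass balance model referenced above treats measured exposure concentrations as a linear mixture of source profiles, x = F·s, and recovers the source contributions s by least squares. The profiles and contributions below are synthetic, mirroring the paper's Monte Carlo setup in miniature; they are not the actual VOC profiles:

```python
import numpy as np

# Chemical mass balance: measured VOC concentrations x are modeled as
# x = F @ s, with F the species-by-sources profile matrix and s the source
# contributions, recovered here by ordinary least squares.
rng = np.random.default_rng(2)
n_species, n_sources = 12, 4
F = np.abs(rng.normal(size=(n_species, n_sources)))       # hypothetical profiles
s_true = np.array([5.0, 2.0, 0.5, 1.5])                   # true contributions
x = F @ s_true + rng.normal(scale=1e-3, size=n_species)   # simulated exposure
s_hat, *_ = np.linalg.lstsq(F, x, rcond=None)
```

The paper's central finding maps directly onto this setup: when two columns of F are nearly collinear (sources with similar chemical profiles), the least-squares solution cannot distinguish them.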
Modulation by EEG features of BOLD responses to interictal epileptiform discharges
LeVan, Pierre; Tyvaert, Louise; Gotman, Jean
2013-01-01
Introduction EEG-fMRI of interictal epileptiform discharges (IEDs) usually assumes a fixed hemodynamic response function (HRF). This study investigates HRF variability with respect to IED amplitude fluctuations using independent component analysis (ICA), with the goal of improving the specificity of EEG-fMRI analyses. Methods We selected EEG-fMRI data from 10 focal epilepsy patients with a good quality EEG. IED amplitudes were calculated in an average reference montage. The fMRI data were decomposed by ICA and a deconvolution method identified IED-related components by detecting time courses with a significant HRF time-locked to the IEDs (F-test, p<0.05). Individual HRF amplitudes were then calculated for each IED. Components with a significant HRF/IED amplitude correlation (Spearman test, p< 0.05) were compared to the presumed epileptogenic focus and to results of a general linear model (GLM) analysis. Results In 7 patients, at least one IED-related component was concordant with the focus, but many IED-related components were at distant locations. When considering only components with a significant HRF/IED amplitude correlation, distant components could be discarded, significantly increasing the relative proportion of activated voxels in the focus (p=0.02). In the 3 patients without concordant IED-related components, no HRF/IED amplitude correlations were detected inside the brain. Integrating IED-related amplitudes in the GLM significantly improved fMRI signal modeling in the epileptogenic focus in 4 patients (p< 0.05). Conclusion Activations in the epileptogenic focus appear to show significant correlations between HRF and IED amplitudes, unlike distant responses. These correlations could be integrated in the analysis to increase the specificity of EEG-fMRI studies in epilepsy. PMID:20026222
Modeling energy/economy interactions for conservation and renewable energy-policy analysis
NASA Astrophysics Data System (ADS)
Groncki, P. J.
Energy policy and the implications for policy analysis and the methodological tools are discussed. The evolution of one methodological approach and the combined modeling system of the component models, their evolution in response to changing analytic needs, and the development of the integrated framework are reported. The analyses performed over the past several years are summarized. The current philosophy behind energy policy is discussed and compared to recent history. Implications for current policy analysis and methodological approaches are drawn.
Analysis of model Titan atmospheric components using ion mobility spectrometry
NASA Technical Reports Server (NTRS)
Kojiro, D. R.; Cohen, M. J.; Wernlund, R. F.; Stimac, R. M.; Humphry, D. E.; Takeuchi, N.
1991-01-01
The Gas Chromatograph-Ion Mobility Spectrometer (GC-IMS) was proposed as an analytical technique for the analysis of Titan's atmosphere during the Cassini Mission. The IMS is an atmospheric-pressure chemical detector that produces an identifying spectrum of each chemical species measured. When the IMS is combined with a GC as a GC-IMS, the GC is used to separate the sample into its individual components, or perhaps small groups of components. The IMS is then used to detect, quantify, and identify each sample component. Conventional IMS detection and identification of sample components depends upon a source of energetic radiation, such as beta radiation, which ionizes the atmospheric-pressure host gas. This primary ionization initiates a sequence of ion-molecule reactions leading to the formation of sufficiently energetic positive or negative ions, which in turn ionize most constituents in the sample. In conventional IMS, this reaction sequence is dominated by the water cluster ion. However, many of the light hydrocarbons expected in Titan's atmosphere cannot be analyzed by IMS using this mechanism at the concentrations expected. Research at NASA Ames and PCP Inc. has demonstrated IMS analysis of expected Titan atmospheric components, including saturated aliphatic hydrocarbons, using two alternative sample ionization mechanisms. The sensitivity of the IMS to hydrocarbons such as propane and butane was increased by several orders of magnitude. Both ultra-dry (waterless) IMS sample ionization and metastable ionization were successfully used to analyze a model Titan atmospheric gas mixture.
Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz
2014-01-01
Introduction: A national health information system plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system can improve the quality of the health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system, for better planning and management of the factors influencing its performance, seems necessary, this study comparatively explores different perspectives on its components. Methods: This is a descriptive, comparative study. The study material comprises printed and electronic documents describing the components of national health information systems in three parts: input, process and output. Information was gathered through library resources and internet searches, and the data analysis was expressed using comparative tables and qualitative data. Results: The findings showed three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). In the input (resources and structure) section, all three models require components of management and leadership; planning and program design; and the supply of staff, software, hardware, facilities and equipment. In the process section, all three models emphasize actions ensuring the quality of the health information system, and in the output section, all models except Lippeveld's consider information products and the use and distribution of information as components of the national health information system.
Conclusion: The results showed that all three models discuss the components of health information only briefly in the input section, and the Lippeveld model overlooks the components of a national health information system in the process and output sections. The Health Metrics Network model therefore appears to present the components of a health information system comprehensively across all three sections: input, process and output. PMID:24825937
NASA Astrophysics Data System (ADS)
Jechumtálová, Z.; Šílený, J.; Trifu, C.-I.
2014-06-01
The resolution of the event mechanism is investigated in terms of the unconstrained moment tensor (MT) source model and the shear-tensile crack (STC) source model, which represents a slip along the fault with an off-plane component. Data are simulated as recorded by the actual seismic array installed at Ocnele Mari (Romania), where sensors are placed in shallow boreholes. Noise is superimposed on the synthetic data, and the analysis explores how the results are influenced (i) by data recorded by the complete seismic array compared to that provided by the subarray of surface sensors, (ii) by using three- or one-component sensors and (iii) by inverting P- and S-wave amplitudes versus P-wave amplitudes only. The orientation of the pure shear fracture component is almost always resolved well. On the other hand, increasing noise distorts the non-double-couple (non-DC) components of the MT unless a high-quality data set is available. The STC source model yields considerably smaller spurious non-shear fracture components. Incorporating recordings from deeper sensors in addition to those from the surface allows the processing of noisier data. The performance of a network equipped with three-component sensors is only slightly better than that with uniaxial sensors. Inverting both P- and S-wave amplitudes, compared to inverting P-wave amplitudes only, markedly improves the resolution of the orientation of the source mechanism. Comparison of the inversion results for the two alternative source models permits assessment of the reliability of the retrieved non-shear components. As an example, the approach is applied to three microseismic events that occurred at Ocnele Mari, for which both large and small non-DC components were found. The analysis confirms tensile fracturing for two of these events and a shear slip for the third.
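At its core, amplitude-based moment tensor inversion is a linear least-squares problem d = G·m, with G the matrix of Green's-function excitation coefficients and m the six independent MT components. A synthetic sketch; G below is random, not the Ocnele Mari array geometry:

```python
import numpy as np

# Observed P-wave amplitudes d are linear in the six independent moment
# tensor components m, d = G @ m. G here is a random stand-in for the
# Green's-function excitation matrix.
rng = np.random.default_rng(3)
n_amp = 24                                    # e.g. P amplitudes on 24 channels
G = rng.normal(size=(n_amp, 6))
m_true = np.array([1.0, -0.5, 0.2, 0.3, -0.1, 0.4])
d = G @ m_true + rng.normal(scale=0.005, size=n_amp)   # noisy amplitude data
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
```

Adding S-wave amplitudes corresponds to appending more rows to G, which better conditions the system; this is the linear-algebra view of the paper's finding that joint P- and S-wave inversion improves resolution.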
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tippett, Michael K.
2014-04-09
This report describes the accomplishments of the research grant “Collaborative Research: Separating Forced and Unforced Decadal Predictability in Models and Observations” during the period 1 May 2011 to 31 August 2013. This project is a collaboration between Columbia University and George Mason University; George Mason University will submit a final technical report at the conclusion of their no-cost extension. The purpose of the proposed research is to identify unforced predictable components on decadal time scales, distinguish these components from forced predictable components, and assess the reliability of model predictions of these components. Components of unforced decadal predictability will be isolated by maximizing the Average Predictability Time (APT) in long, multimodel control runs from state-of-the-art climate models. Components with decadal predictability have large APT, so maximizing APT ensures that components with decadal predictability will be detected. Optimal fingerprinting techniques, as used in detection and attribution analysis, will be used to separate variations due to natural and anthropogenic forcing from those due to unforced decadal predictability. This methodology will be applied to the decadal hindcasts generated by the CMIP5 project to assess the reliability of model projections. The question of whether anthropogenic forcing changes decadal predictability, or gives rise to new forms of decadal predictability, will also be investigated.
Hermida, Juan C; Flores-Hernandez, Cesar; Hoenecke, Heinz R; D'Lima, Darryl D
2014-03-01
This study undertook a computational analysis of a wedged glenoid component for correction of retroverted glenoid arthritic deformity, to determine whether a wedge-shaped glenoid component design with a built-in version correction reduces excessive stresses in the implant, cement, and glenoid bone. Recommendations for correcting retroversion deformity are asymmetric reaming of the anterior glenoid, bone grafting of the posterior glenoid, or a glenoid component with posterior augmentation. Eccentric reaming has the disadvantages of removing normal bone, reducing structural support for the glenoid component, and increasing the risk of bone perforation by the fixation pegs. Bone grafting to correct retroverted deformity does not consistently generate successful results. Finite element models of two scapulae, representing a normal and an arthritic retroverted glenoid, were implanted with a standard glenoid component (in retroversion or neutral alignment) or a wedged component. Glenohumeral forces representing in vivo loading were applied, and stresses and strains were computed in the bone, cement, and glenoid component. The retroverted glenoid components generated the highest compressive stresses and decreased cyclic fatigue life predictions for trabecular bone. Correction of retroversion by the wedged glenoid component significantly decreased stresses and predicted greater bone fatigue life. The cement volume estimated to survive 10 million cycles was lowest for the retroverted components and highest for neutrally implanted glenoid components and for wedged components. A wedged glenoid implant is a viable option to correct severe arthritic retroversion, reducing the need for eccentric reaming and the risk of implant failure. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change in its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
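The projection idea can be sketched in a few lines: the identification equation δd = S·δp is premultiplied by the leading principal directions of an ensemble of analytical responses before solving. All matrices below are synthetic, and a single-shot solve stands in for the paper's iterative model updating:

```python
import numpy as np

# Inverse sensitivity analysis with a principal-component projection:
# the residual between measured and analytical responses, dd = S @ dp,
# is projected onto the leading PCs of an ensemble of analytical responses
# before solving for the parameter perturbation dp.
rng = np.random.default_rng(4)
n_out, n_par = 60, 5
S = rng.normal(size=(n_out, n_par))                     # sensitivity matrix
# analytical response ensemble spanning the range of S, plus small noise
ensemble = S @ rng.normal(size=(n_par, 200)) + rng.normal(scale=1e-3, size=(n_out, 200))
U, _, _ = np.linalg.svd(ensemble, full_matrices=False)
Uk = U[:, :n_par]                                       # leading principal directions
dp_true = np.array([0.02, -0.01, 0.03, 0.0, -0.02])
dd = S @ dp_true + rng.normal(scale=1e-4, size=n_out)   # measured minus analytical
dp_hat, *_ = np.linalg.lstsq(Uk.T @ S, Uk.T @ dd, rcond=None)
```

Projecting onto the principal subspace discards response directions dominated by noise while retaining those the parameters actually excite.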
NASA Astrophysics Data System (ADS)
Milde, Ján; Morovič, Ladislav
2016-09-01
The paper investigates the influence of infill (the internal structure of components) in the Fused Deposition Modeling (FDM) method on the dimensional and geometrical accuracy of components. The components in this case were real models of a human mandible obtained by Computed Tomography (CT), which is mostly used in medical applications. In the production phase, the device used for manufacturing was a Zortrax M200 3D printer based on FDM technology. In the second phase, the mandibles made by the printer were digitized using a GOM ATOS Triple Scan II optical scanning device, and they were subsequently evaluated in the final phase. The practical part of this article describes the procedure of jaw model modification, the production of components using the 3D printer, the digitization of the printed parts by the optical scanning device, and the comparison procedure. The outcome of this article is a comparative analysis of the individual printed parts, containing tables with mean deviations for individual printed parts, as well as tables for groups of printed parts with the same infill parameter.
Lee, Yu; Yu, Chanki; Lee, Sang Wook
2018-01-10
We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection with multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.
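A Lafortune-model evaluation helps make the fitting target concrete: a diffuse term plus generalized cosine lobes of the form (Cx·ux·vx + Cy·uy·vy + Cz·uz·vz)^n. The lobe parameters below are illustrative, not fitted MERL-MIT materials:

```python
import numpy as np

def lafortune_brdf(u, v, rho_d, lobes):
    """Evaluate a Lafortune-style BRDF: f = rho_d/pi + sum over lobes of
    max(Cx*ux*vx + Cy*uy*vy + Cz*uz*vz, 0)^n, with u, v unit vectors toward
    light and viewer in the local surface frame."""
    val = rho_d / np.pi
    for Cx, Cy, Cz, n in lobes:
        dot = Cx * u[0] * v[0] + Cy * u[1] * v[1] + Cz * u[2] * v[2]
        val += max(dot, 0.0) ** n
    return val

u = np.array([0.0, 0.6, 0.8])           # light direction
v = np.array([0.0, -0.6, 0.8])          # mirror-reflected view direction
lobes = [(-1.0, -1.0, 1.0, 20.0)]       # Cx = Cy = -1, Cz = 1 mimics mirror reflection
f_spec = lafortune_brdf(u, v, rho_d=0.3, lobes=lobes)
f_off = lafortune_brdf(u, np.array([0.0, 0.8, 0.6]), rho_d=0.3, lobes=lobes)
```

Fitting then amounts to choosing the number of lobes and estimating (Cx, Cy, Cz, n) per lobe from measured reflectance, which is the nonlinear problem the paper's branch-and-bound method addresses.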
Construction of a Cyber Attack Model for Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varuttamaseni, Athi; Bari, Robert A.; Youngblood, Robert
Considering how a single compromised piece of digital equipment can impact neighboring equipment is critical to understanding the progression of cyber attacks. The degree of influence that one component may have on another depends on a variety of factors, including the sharing of resources such as network bandwidth or processing power, the level of trust between components, and the inclusion of segmentation devices such as firewalls. Interactions among components via mechanisms unique to the digital world are not usually considered in traditional PRA, meaning that potential sequences of events occurring during an attack may be missed if one looks only at conventional accident sequences. This paper presents a method by which, starting from the initial attack vector, the progression of a cyber attack can be modeled. The propagation of the attack is modeled by considering certain attributes of the digital components in the system. These attributes determine the potential vulnerability of a component to a class of attack and the capability gained by the attackers once they are in control of the equipment. The use of attributes allows similar components (components with the same set of attributes) to be modeled in the same way, thereby reducing the computing resources required for analysis of large systems.
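The attribute-based propagation can be sketched as a graph traversal: a component is compromised when the attacker's current capabilities intersect its vulnerability attributes, and each compromise grants new capabilities. Component names and attributes below are hypothetical, and the single-pass traversal is a simplification of a full attack-sequence model:

```python
from collections import deque

# Each component lists its neighbors, the capabilities it is vulnerable to,
# and the capabilities an attacker gains by controlling it. Names are made up.
components = {
    "hmi":        {"links": ["plc_a"],      "vuln": {"remote_exec"},  "grants": {"modbus_write"}},
    "plc_a":      {"links": ["sensor_bus"], "vuln": {"modbus_write"}, "grants": {"bus_access"}},
    "sensor_bus": {"links": [],             "vuln": {"bus_access"},   "grants": set()},
    "historian":  {"links": [],             "vuln": {"sql_inject"},   "grants": set()},
}

def propagate(entry, capabilities):
    """Breadth-first spread of an attack from an initial vector. Simplified:
    a component skipped once is not revisited if capabilities grow later."""
    caps = set(capabilities)
    compromised, queue = set(), deque([entry])
    while queue:
        node = queue.popleft()
        if node in compromised or not (components[node]["vuln"] & caps):
            continue
        compromised.add(node)
        caps |= components[node]["grants"]
        queue.extend(components[node]["links"])
    return compromised

hit = propagate("hmi", {"remote_exec"})   # attack vector: remote code execution on the HMI
```

Because similar components share attribute sets, the same vulnerability and grant records can be reused across many nodes, which is the computational saving the paper highlights.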
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T.
Cyber-physical computing infrastructures typically consist of a number of interconnected sites, and their operation critically depends on both cyber and physical components. Both types of components are subject to attacks of different kinds and frequencies, which must be accounted for in the initial provisioning and subsequent operation of the infrastructure via information security analysis. Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified against the results of game-theoretic analysis and further used to explore larger-scale, real-world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the electric sector failure scenarios and impact analyses produced by the NESCOR Working Group study. From the Section 5 electric sector representative failure scenarios, we extracted four generic failure scenarios and grouped them into three specific threat categories to the system: confidentiality, integrity, and availability. These specific failure scenarios serve as a demonstration of our simulation. The analysis using our ABGT simulation demonstrates how to model the electric sector functional domain using a set of rationalized game-theoretic rules decomposed from the failure scenarios, in terms of how those scenarios might impact the cyber-physical infrastructure network with respect to confidentiality, integrity, and availability (CIA).
Structural dynamic analysis of the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Scott, L. P.; Jamison, G. T.; Mccutcheon, W. A.; Price, J. M.
1981-01-01
This structural dynamic analysis supports development of the SSME by evaluating components subjected to critical dynamic loads, identifying significant parameters, and evaluating solution methods. Engine operating parameters at both rated and full power levels are considered. Detailed structural dynamic analyses of operationally critical and life limited components support the assessment of engine design modifications and environmental changes. Engine system test results are utilized to verify analytic model simulations. The SSME main chamber injector assembly is an assembly of 600 injector elements which are called LOX posts. The overall LOX post analysis procedure is shown.
Full waveform inversion using a decomposed single frequency component from a spectrogram
NASA Astrophysics Data System (ADS)
Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon
2018-06-01
Although many full waveform inversion methods have been developed to construct velocity models of the subsurface, various approaches have been presented to obtain an inversion result with long-wavelength features even when the seismic data lack low-frequency components. In this study, a new full waveform inversion algorithm is proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of the seismic data, using a single frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the signal decomposed from the spectrogram was used as the transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis of the spectrogram, numerical tests with several decomposed frequency components were performed on a modified SEG/EAGE salt dome (A-A‧) line to demonstrate the feasibility of the proposed inversion algorithm. The tests demonstrated that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that when strong noise occurs in part of the frequency band, a long-wavelength velocity model can still be obtained from the noisy data using a frequency component less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.
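Decomposing a single frequency component from a spectrogram can be illustrated with a complex Morlet wavelet transform, which extracts one frequency column of the time-frequency plane. A generic sketch on a toy two-tone trace, not the SEG/EAGE data or the paper's exact wavelet parameters:

```python
import numpy as np

def single_frequency_component(signal, fs, f0, n_cycles=6):
    """Extract one frequency component of a trace by convolving with a
    complex Morlet wavelet centred at f0 (one column of the spectrogram)."""
    sigma = n_cycles / (2 * np.pi * f0)                 # Gaussian width in seconds
    t = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()                    # unit-gain normalization
    return np.convolve(signal, wavelet, mode="same")

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)  # toy trace
comp8 = single_frequency_component(trace, fs, 8.0)      # component present in trace
comp50 = single_frequency_component(trace, fs, 50.0)    # component absent from trace
```

The complex output carries both amplitude and phase of the selected component, which is the single-frequency data that the inversion is then run against.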
Rautenberg, Tamlyn Anne; Zerwes, Ute; Lee, Way Seah
2018-01-01
Objective To perform cost utility (CU) and budget impact (BI) analyses augmented by scenario analyses of critical model structure components to evaluate racecadotril as adjuvant to oral rehydration solution (ORS) for children under 5 years with acute diarrhea in Malaysia. Methods A CU model was adapted to evaluate racecadotril plus ORS vs ORS alone for acute diarrhea in children younger than 5 years from a Malaysian public payer’s perspective. A bespoke BI analysis was undertaken in addition to detailed scenario analyses with respect to critical model structure components. Results According to the CU model, the intervention is less costly and more effective than comparator for the base case with a dominant incremental cost-effectiveness ratio of −RM 1,272,833/quality-adjusted life year (USD −312,726/quality-adjusted life year) in favor of the intervention. According to the BI analysis (assuming an increase of 5% market share per year for racecadotril+ORS for 5 years), the total cumulative incremental percentage reduction in health care expenditure for diarrhea in children is 0.136578%, resulting in a total potential cumulative cost savings of −RM 73,193,603 (USD −17,983,595) over a 5-year period. Results hold true across a range of plausible scenarios focused on critical model components. Conclusion Adjuvant racecadotril vs ORS alone is potentially cost-effective from a Malaysian public payer perspective subject to the assumptions and limitations of the model. BI analysis shows that this translates into potential cost savings for the Malaysian public health care system. Results hold true at evidence-based base case values and over a range of alternate scenarios. PMID:29588606
Multi-spectrometer calibration transfer based on independent component analysis.
Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong
2018-02-26
Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The measurement differences between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples, measured with three and four spectrometers respectively, were used to test the reliability of this method. The results for both datasets reveal that spectra measured on different spectrometers can be transferred simultaneously, and that partial least squares (PLS) models built with measurements from one spectrometer can correctly predict spectra transferred from another.
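A minimal sketch of the decomposition step, using scikit-learn's FastICA on a synthetic aligned spectral matrix. The constituent signatures, the drift model between instruments, and all dimensions are illustrative assumptions, not the authors' data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_wl = 200
wl = np.linspace(0.0, 1.0, n_wl)
# Three latent "constituent" signatures shared by all instruments.
S = np.vstack([np.exp(-((wl - c) ** 2) / 0.005) for c in (0.3, 0.5, 0.7)])
A_master = rng.random((40, 3))              # concentrations, instrument 1
A_slave = A_master * 1.1 + 0.02             # simulated inter-instrument drift
X = np.vstack([A_master @ S, A_slave @ S])  # aligned spectral matrix (80 x 200)

ica = FastICA(n_components=3, random_state=0, max_iter=1000)
scores = ica.fit_transform(X)  # per-spectrum coefficients on the ICs
ics = ica.components_          # estimated independent components
# Transfer idea: the per-instrument differences live in the coefficients
# (scores), which can be corrected to map slave spectra onto the master.
```

Since the toy spectra lie exactly in a three-dimensional subspace, the decomposition reconstructs them essentially losslessly via `ica.inverse_transform`.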
Conceptual design and analysis of a dynamic scale model of the Space Station Freedom
NASA Technical Reports Server (NTRS)
Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.
1994-01-01
This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, scale factor trade studies, and design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive the design, performance, and cost of an SSF dynamic scale model. Four scale model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale model is recommended because of the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower-fidelity dynamic similarity scaling can be used for non-critical components.
Designers' models of the human-computer interface
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Breedin, Sarah D.
1993-01-01
Understanding design models of the human-computer interface (HCI) may produce two types of benefits. First, interface development often requires input from two different types of experts: human factors specialists and software developers. Given the differences in their backgrounds and roles, human factors specialists and software developers may have different cognitive models of the HCI. Yet, they have to communicate about the interface as part of the design process. If they have different models, their interactions are likely to involve a certain amount of miscommunication. Second, the design process in general is likely to be guided by designers' cognitive models of the HCI, as well as by their knowledge of the user, tasks, and system. Designers do not start with a blank slate; rather they begin with a general model of the object they are designing. The authors' approach to a design model of the HCI was to have three groups make judgments of categorical similarity about the components of an interface: human factors specialists with HCI design experience, software developers with HCI design experience, and a baseline group of computer users with no experience in HCI design. The components of the user interface included both display components, such as windows, text, and graphics, and user interaction concepts, such as command language, editing, and help. The judgments of the three groups were analyzed using hierarchical cluster analysis and Pathfinder. These methods indicated, respectively, how the groups categorized the concepts, and network representations of the concepts for each group. The Pathfinder analysis provides greater information about local, pairwise relations among concepts, whereas the cluster analysis shows global, categorical relations to a greater extent.
Design and performance analysis of gas sorption compressors
NASA Technical Reports Server (NTRS)
Chan, C. K.
1984-01-01
Compressor kinetics based on gas adsorption and desorption by charcoal and on gas absorption and desorption by LaNi5 were analyzed using a two-phase model and a three-component model, respectively. The modeling assumed thermal and mechanical equilibria between phases or among components. The analyses predicted performance well for compressors with heaters located outside the adsorbent or absorbent bed. For the rapidly cycled compressor, where the heater was centrally located, only the transient pressure compared well with the experimental data.
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of Co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was performed successfully. Future plans include applying Co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
Xu, J; Durand, L G; Pibarot, P
2000-10-01
This paper describes a new approach based on the time-frequency representation of transient nonlinear chirp signals for modeling the aortic (A2) and the pulmonary (P2) components of the second heart sound (S2). It is demonstrated that each component is a narrow-band signal with decreasing instantaneous frequency, defined by its instantaneous amplitude and its instantaneous phase. Each component is also a polynomial phase signal, the instantaneous phase of which can be accurately represented by a polynomial of order thirty. A dechirping approach is used to obtain the instantaneous amplitude of each component while reducing the effect of the background noise. The analysis-synthesis procedure is applied to 32 isolated A2 and 32 isolated P2 components recorded in four pigs with pulmonary hypertension. The mean +/- standard deviation of the normalized root-mean-squared error (NRMSE) and the correlation coefficient (rho) between the original and the synthesized signal components were: NRMSE = 2.1 +/- 0.3% and rho = 0.97 +/- 0.02 for A2, and NRMSE = 2.52 +/- 0.5% and rho = 0.96 +/- 0.02 for P2. These results confirm that each component can be modeled as a mono-component nonlinear chirp signal of short duration, with an energy distribution concentrated along its decreasing instantaneous frequency.
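The analysis-synthesis idea, estimating instantaneous amplitude and phase, fitting the phase with a polynomial, and resynthesizing, can be sketched on a synthetic decaying chirp. The signal parameters and the degree-5 fit below are illustrative assumptions; the paper fits an order-30 polynomial to recorded A2/P2 components:

```python
import numpy as np
from scipy.signal import hilbert

fs = 4000.0
t = np.arange(0.0, 0.06, 1.0 / fs)               # ~60 ms, A2/P2-like duration
phase_true = 2 * np.pi * (90 * t - 300 * t**2)   # decreasing instantaneous frequency
x = np.exp(-t / 0.02) * np.cos(phase_true)       # decaying narrow-band chirp

analytic = hilbert(x)
amp = np.abs(analytic)                           # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))            # instantaneous phase
tau = t / t[-1]                                  # normalized time for a stable fit
coef = np.polyfit(tau, phase, 5)                 # low-order stand-in for order 30
phase_fit = np.polyval(coef, tau)
x_synth = amp * np.cos(phase_fit)                # resynthesized component

nrmse = np.linalg.norm(x_synth - x) / np.linalg.norm(x)
rho = np.corrcoef(x_synth, x)[0, 1]
```

On this toy signal the fit should reach NRMSE and correlation figures of the same flavor as those reported, though the numbers themselves are not comparable to the pig recordings.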
Revealing the microstructure of the giant component in random graph ensembles
NASA Astrophysics Data System (ADS)
Tishby, Ido; Biham, Ofer; Katzav, Eytan; Kühn, Reimer
2018-04-01
The microstructure of the giant component of the Erdős-Rényi network and other configuration model networks is analyzed using generating function methods. While configuration model networks are uncorrelated, the giant component exhibits a degree distribution which is different from the overall degree distribution of the network and includes degree-degree correlations of all orders. We present exact analytical results for the degree distributions as well as higher-order degree-degree correlations on the giant components of configuration model networks. We show that the degree-degree correlations are essential for the integrity of the giant component, in the sense that the degree distribution alone cannot guarantee that it will consist of a single connected component. To demonstrate the importance and broad applicability of these results, we apply them to the study of the distribution of shortest path lengths on the giant component, percolation on the giant component, and spectra of sparse matrices defined on the giant component. We show that by using the degree distribution on the giant component one obtains high quality results for these properties, which can be further improved by taking the degree-degree correlations into account. This suggests that many existing methods, currently used for the analysis of the whole network, can be adapted in a straightforward fashion to yield results conditioned on the giant component.
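The generating-function result for the degree distribution conditioned on the giant component can be reproduced numerically for the Erdős-Rényi case. The sketch below follows the standard formulation: u is the probability that a random edge does not lead to the giant component, satisfying u = G1(u), and the conditioned distribution is P(k)(1 - u^k) normalized by the giant-component fraction. The mean degree c = 2 is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.stats import poisson

def giant_component_degree_dist(c, kmax=60, iters=200):
    """Degree distribution conditioned on the giant component of an
    Erdos-Renyi network with mean degree c (Poisson degree distribution)."""
    k = np.arange(kmax + 1)
    P = poisson.pmf(k, c)
    u = 0.5
    for _ in range(iters):           # fixed point u = G1(u) = exp(c(u-1))
        u = np.exp(c * (u - 1.0))
    g = 1.0 - np.sum(P * u**k)       # giant-component fraction, 1 - G0(u)
    P_gc = P * (1.0 - u**k) / g      # conditioned degree distribution
    return k, P, P_gc, g, u

k, P, P_gc, g, u = giant_component_degree_dist(2.0)
```

Consistent with the abstract, the conditioned distribution assigns zero weight to degree-0 nodes and has a higher mean degree than the overall network.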
NASA Technical Reports Server (NTRS)
Perry, Bruce; Anderson, Molly
2015-01-01
The Cascade Distillation Subsystem (CDS) is a rotary multistage distiller being developed to serve as the primary processor for wastewater recovery during long-duration space missions. The CDS could be integrated with a system similar to the International Space Station (ISS) Water Processor Assembly (WPA) to form a complete Water Recovery System (WRS) for future missions. Independent chemical process simulations with varying levels of detail have previously been developed using Aspen Custom Modeler (ACM) to aid in the analysis of the CDS and several WPA components. The existing CDS simulation could not model behavior during thermal startup and lacked detailed analysis of several key internal processes, including heat transfer between stages. The first part of this paper describes modifications to the ACM model of the CDS that improve its capabilities and the accuracy of its predictions. Notably, the modified version of the model can accurately predict behavior during thermal startup for both NaCl solution and pretreated urine feeds. The model is used to predict how changing operating parameters and design features of the CDS affects its performance, and conclusions from these predictions are discussed. The second part of this paper describes the integration of the modified CDS model and the existing WPA component models into a single WRS model. The integrated model is used to demonstrate the effects that changes to one component can have on the dynamic behavior of the system as a whole.
Localization in covariance matrices of coupled heterogenous Ornstein-Uhlenbeck processes
NASA Astrophysics Data System (ADS)
Barucca, Paolo
2014-12-01
We define a random-matrix ensemble given by the infinite-time covariance matrices of Ornstein-Uhlenbeck processes at different temperatures, coupled by a Gaussian symmetric matrix. The spectral properties of this ensemble are shown to be in qualitative agreement with some stylized facts of financial markets. Through the presented model, formulas are given for the analysis of heterogeneous time series. Furthermore, evidence is found for a localization transition in the eigenvectors related to small and large eigenvalues in the cross-correlation analysis of this model, providing a simple explanation of localization phenomena in financial time series. Finally, we identify, both in our model and in real financial data, an inverted-bell effect in the correlation between localized components and their local temperature: the high- and low-temperature components are the most localized ones.
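The central object, the infinite-time covariance of coupled OU processes at heterogeneous temperatures, can be computed from a Lyapunov equation. The sketch below assumes dynamics of the form dx/dt = -Ax + noise with per-component noise variance 2*T_i; the coupling scale and temperature range are illustrative choices, not the paper's parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
N = 50
# Symmetric Gaussian coupling; entries scaled so the drift stays stable
# (the spectrum of J is approximately within [-1.5, 1.5]).
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
J = (J + J.T) / 2.0
A = 3.0 * np.eye(N) - J           # drift matrix: dx/dt = -A x + xi(t)
T = rng.uniform(0.5, 2.0, N)      # heterogeneous "temperatures"
D = np.diag(2.0 * T)              # <xi_i xi_j> = 2 T_i delta_ij

# The stationary covariance solves the Lyapunov equation A S + S A^T = 2 diag(T).
S = solve_continuous_lyapunov(A, D)
evals = np.linalg.eigvalsh(S)     # all positive: S is a valid covariance
```

Eigenvector localization would then be probed via inverse participation ratios of the eigenvectors of S across the spectrum.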
A topological multilayer model of the human body.
Barbeito, Antonio; Painho, Marco; Cabral, Pedro; O'Neill, João
2015-11-04
Geographical information systems deal with spatial databases in which topological models are described with alphanumeric information. Their graphical interfaces implement the multilayer concept and provide powerful interaction tools. In this study, we apply these concepts to the human body, creating a representation that allows an interactive, precise, and detailed anatomical study. A vector surface component of the human body is built using a three-dimensional (3-D) reconstruction methodology. The multilayer concept is implemented by associating raster components with the corresponding vector surfaces, which include neighbourhood topology enabling spatial analysis. A root mean square error of 0.18 mm validated the three-dimensional reconstruction technique for internal anatomical structures. The new tools provided in this model are an expanded identification capability and a neighbourhood analysis function.
Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki
2004-04-01
We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.
SLS Navigation Model-Based Design Approach
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. 
The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable, whereas a requirement is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The format of the models and the requirements are described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned associated with the implementation of the Model-based Design approach and process, from infancy to verification and certification, are discussed.
Barba, Lida; Sánchez-Macías, Davinia; Barba, Iván; Rodríguez, Nibaldo
2018-06-01
Guinea pig meat consumption is increasing exponentially worldwide. The evaluation of the contribution of carcass components to carcass quality potentially can allow for the estimation of the value added to food animal origin and make research in guinea pigs more practicable. The aim of this study was to propose a methodology for modelling the contribution of different carcass components to the overall carcass quality of guinea pigs by using non-invasive pre- and post mortem carcass measurements. The selection of predictors was developed through correlation analysis and statistical significance; whereas the prediction models were based on Multiple Linear Regression. The prediction results showed higher accuracy in the prediction of carcass component contribution expressed in grams, compared to when expressed as a percentage of carcass quality components. The proposed prediction models can be useful for the guinea pig meat industry and research institutions by using non-invasive and time- and cost-efficient carcass component measuring techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.
Peng, Mingguo; Li, Huajie; Li, Dongdong; Du, Erdeng; Li, Zhihong
2017-06-01
Carbon nanotubes (CNTs) were utilized to adsorb DOM in micro-polluted water. The characteristics of DOM adsorption on CNTs were investigated based on UV254, TOC, and fluorescence spectrum measurements. Based on PARAFAC (parallel factor) analysis, four fluorescent components were extracted, including one protein-like component (C4) and three humic acid-like components (C1, C2, and C3). The adsorption isotherms, kinetics, and thermodynamics of DOM adsorption on CNTs were further investigated. A Freundlich isotherm model fit the adsorption data well, with high correlation values. As a macro-porous and meso-porous adsorbent, CNTs preferentially adsorb humic acid-like substances rather than protein-like substances. Increasing temperature speeds up the adsorption process. Self-organizing map (SOM) analysis further explains the fluorescent properties of the water samples. The results provide new insight into the adsorption behaviour of DOM fluorescent components on CNTs.
Peleato, Nicolás M; Andrews, Robert C
2015-01-01
This work investigated the application of several fluorescence excitation-emission matrix analysis methods as natural organic matter (NOM) indicators for use in predicting the formation of trihalomethanes (THMs) and haloacetic acids (HAAs). Waters from four different sources (two rivers and two lakes) were subjected to jar testing followed by 24-hour disinfection by-product formation tests using chlorine. NOM was quantified using three common measures: dissolved organic carbon, ultraviolet absorbance at 254 nm, and specific ultraviolet absorbance, as well as by principal component analysis, peak picking, and parallel factor analysis of fluorescence spectra. Based on multi-linear modeling of THMs and HAAs, principal component (PC) scores resulted in the lowest mean squared prediction error on cross-folded test sets (THMs: 43.7 (μg/L)², HAAs: 233.3 (μg/L)²). Inclusion of principal components representative of protein-like material significantly decreased prediction error for both THMs and HAAs. Parallel factor analysis did not identify a protein-like component and resulted in prediction errors similar to those of traditional NOM surrogates and fluorescence peak picking. These results support the value of fluorescence excitation-emission matrix principal component analysis as a suitable NOM indicator for predicting the formation of THMs and HAAs for the water sources studied. Copyright © 2014. Published by Elsevier B.V.
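A toy version of the PCA-scores-to-disinfection-by-product regression pipeline. The synthetic "EEMs" are built from two Gaussian fluorophore patterns, and all concentrations, patterns, and the THM response are fabricated for illustration, not drawn from the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Each sample is a 20x20 "EEM": a mixture of a humic-like and a
# protein-like fluorophore pattern plus measurement noise.
ex = np.linspace(0.0, 1.0, 20)[:, None]
em = np.linspace(0.0, 1.0, 20)[None, :]
humic = np.exp(-((ex - 0.3) ** 2 + (em - 0.6) ** 2) / 0.02)
protein = np.exp(-((ex - 0.7) ** 2 + (em - 0.3) ** 2) / 0.02)

n = 60
conc = rng.uniform(0.1, 1.0, (n, 2))                 # mixing concentrations
X = np.array([c[0] * humic + c[1] * protein for c in conc]).reshape(n, -1)
X += rng.normal(0.0, 0.01, X.shape)                  # measurement noise
y = 30 * conc[:, 0] + 10 * conc[:, 1]                # synthetic THM (ug/L)

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                            # PC scores as NOM indicators
model = LinearRegression().fit(scores, y)            # multi-linear DBP model
r2 = model.score(scores, y)
```

Because the protein-like pattern carries part of the response, dropping the second PC would visibly degrade the fit, mirroring the study's finding that protein-like components matter.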
Mental health stigmatisation in deployed UK Armed Forces: a principal components analysis.
Fertout, Mohammed; Jones, N; Keeling, M; Greenberg, N
2015-12-01
UK military research suggests that there is a significant link between current psychological symptoms, mental health stigmatisation and perceived barriers to care (stigma/BTC). Few studies have explored the construct of stigma/BTC in depth amongst deployed UK military personnel. Three survey datasets containing a stigma/BTC scale obtained during UK deployments to Iraq and Afghanistan were combined (n=3405 personnel). Principal component analysis was used to identify the key components of stigma/BTC. The relationship between psychological symptoms, the stigma/BTC components and help seeking were examined. Two components were identified: 'potential loss of personal military credibility and trust' (stigma Component 1, five items, 49.4% total model variance) and 'negative perceptions of mental health services and barriers to help seeking' (Component 2, six items, 11.2% total model variance). Component 1 was endorsed by 37.8% and Component 2 by 9.4% of personnel. Component 1 was associated with both assessed and subjective mental health, medical appointments and admission to hospital. Stigma Component 2 was associated with subjective and assessed mental health but not with medical appointments. Neither component was associated with help-seeking for subjective psycho-social problems. Potential loss of credibility and trust appeared to be associated with help-seeking for medical reasons but not for help-seeking for subjective psychosocial problems. Those experiencing psychological symptoms appeared to minimise the effects of stigma by seeking out a socially acceptable route into care, such as the medical consultation, whereas those who experienced a subjective mental health problem appeared willing to seek help from any source. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics conveniently relates feasible system output performance modifications to predictions of future component health deterioration.
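A minimal sketch of the state-space idea: component health abstracted to a few discrete levels with an absorbing failed state, propagated under a fixed transition matrix. The transition probabilities below are invented for demonstration only; in the paper's framework they would depend on the chosen control action.

```python
import numpy as np

# Health states 0 (failed) .. 4 (fully healthy); rows = current state,
# columns = next state. A more aggressive control action would shift
# probability mass toward faster degradation.
P = np.array([
    [1.00, 0.00, 0.00, 0.00, 0.00],   # failed state is absorbing
    [0.15, 0.85, 0.00, 0.00, 0.00],
    [0.00, 0.12, 0.88, 0.00, 0.00],
    [0.00, 0.00, 0.10, 0.90, 0.00],
    [0.00, 0.00, 0.00, 0.08, 0.92],
])

state = np.zeros(5)
state[4] = 1.0                        # start fully healthy
for _ in range(50):                   # propagate 50 decision epochs
    state = state @ P
prob_failed = float(state[0])         # predicted probability of failure
```

A prognostics-based controller would compare `prob_failed` across candidate actions and trade output performance against predicted health deterioration.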
The TEF modeling and analysis approach to advance thermionic space power technology
NASA Astrophysics Data System (ADS)
Marshall, Albert C.
1997-01-01
Thermionic space power systems have been proposed as advanced power sources for future space missions that require electrical power levels significantly above the capabilities of current space power systems. The Defense Special Weapons Agency's (DSWA) Thermionic Evaluation Facility (TEF) is carrying out both experimental and analytical research to advance thermionic space power technology to meet this expected need. A Modeling and Analysis (M&A) project has been created at the TEF to develop analysis tools, evaluate concepts, and guide research. M&A activities are closely linked to the TEF experimental program, providing experiment support and using experimental data to validate models. A planning exercise has been completed for the M&A project, and a strategy for implementation was developed. All M&A activities will build on a framework provided by a system performance model for a baseline Thermionic Fuel Element (TFE) concept. The system model is composed of sub-models for each of the system components and sub-systems. Additional thermionic component options and model improvements will continue to be incorporated in the basic system model during the course of the program. All tasks are organized into four focus areas: 1) system models, 2) thermionic research, 3) alternative concepts, and 4) documentation and integration. The M&A project will provide a solid framework for future thermionic system development.
von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne
2014-12-01
To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company answered the HWQ scale, including three dimensions (efficiency, quality, and quantity) that the respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed with unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.
Development of a unified constitutive model for an isotropic nickel base superalloy Rene 80
NASA Technical Reports Server (NTRS)
Ramaswamy, V. G.; Vanstone, R. H.; Laflen, J. H.; Stouffer, D. C.
1988-01-01
Accurate analysis of stress-strain behavior is of critical importance in the evaluation of life capabilities of hot section turbine engine components such as turbine blades and vanes. The constitutive equations used in the finite element analysis of such components must be capable of modeling a variety of complex behavior exhibited at high temperatures by cast superalloys. The classical separation of plasticity and creep employed in most of the finite element codes in use today is known to be deficient in modeling elevated temperature time dependent phenomena. Rate dependent, unified constitutive theories can overcome many of these difficulties. A new unified constitutive theory was developed to model the high temperature, time dependent behavior of Rene' 80 which is a cast turbine blade and vane nickel base superalloy. Considerations in model development included the cyclic softening behavior of Rene' 80, rate independence at lower temperatures and the development of a new model for static recovery.
NASA Technical Reports Server (NTRS)
Drysdale, Alan; Thomas, Mark; Fresa, Mark; Wheeler, Ray
1992-01-01
Controlled Ecological Life Support System (CELSS) technology is critical to the Space Exploration Initiative. NASA's Kennedy Space Center has been performing CELSS research for several years, developing data related to CELSS design. We have developed OCAM (Object-oriented CELSS Analysis and Modeling), a CELSS modeling tool, and have used this tool to evaluate CELSS concepts, using this data. In using OCAM, a CELSS is broken down into components, and each component is modeled as a combination of containers, converters, and gates which store, process, and exchange carbon, hydrogen, and oxygen on a daily basis. Multiple crops and plant types can be simulated. Resource recovery options modeled include combustion, leaching, enzyme treatment, aerobic or anaerobic digestion, and mushroom and fish growth. Results include printouts and time-history graphs of total system mass, biomass, carbon dioxide, and oxygen quantities; energy consumption; and manpower requirements. The contributions of mass, energy, and manpower to system cost have been analyzed to compare configurations and determine appropriate research directions.
Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell
2008-07-01
Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.
Non-rigid image registration using a statistical spline deformation model.
Loeckx, Dirk; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul
2003-07-01
We propose a statistical spline deformation model (SSDM) as a method to solve non-rigid image registration. Within this model, the deformation is expressed using a statistically trained B-spline deformation mesh. The model is trained by principal component analysis of a training set. This approach makes it possible to reduce the number of degrees of freedom needed for non-rigid registration by retaining only the most significant modes of variation observed in the training set. User-defined transformation components, such as affine modes, are merged with the principal components into a unified framework. Optimization proceeds along the transformation components rather than along the individual spline coefficients. The concept of SSDMs is applied to the temporal registration of thorax CR images using pattern intensity as the registration measure. Our results show that, using 30 training pairs, a reduction of 33% in the number of degrees of freedom is possible without deterioration of the result. The same accuracy as without SSDMs is still achieved after a reduction of up to 66% of the degrees of freedom.
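The mode-retention step described above can be sketched with a plain PCA over flattened deformation fields. This is a minimal illustration on made-up random data, not the paper's trained model; the mesh size, variance threshold, and training-set size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 30 deformation fields, each flattened to a
# vector of B-spline mesh coefficients (here a 10x10 control grid, 2-D).
n_train, n_coeff = 30, 10 * 10 * 2
D = rng.normal(size=(n_train, n_coeff))

# Principal component analysis of the training deformations.
mean = D.mean(axis=0)
U, s, Vt = np.linalg.svd(D - mean, full_matrices=False)

# Retain only the most significant modes of variation, e.g. enough to
# explain 95% of the variance; this is the reduction in degrees of
# freedom the abstract describes.
var = s**2 / (n_train - 1)
k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95)) + 1
modes = Vt[:k]                     # k modes, each a full deformation field

# A new deformation is parameterised by k mode weights instead of
# n_coeff spline coefficients; optimisation proceeds over w.
w = rng.normal(size=k)
deformation = mean + w @ modes
```

Since the training set has only 30 members, at most 29 modes carry variance, so k is always far below the raw coefficient count.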
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Wang, Yuanjia; Chen, Huaihou
2012-01-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we embed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over the bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801
Wang, Yuanjia; Chen, Huaihou
2012-12-01
We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we embed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over the bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
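The spectral shortcut this abstract describes can be illustrated on a toy case: when a residual sum of squares is a quadratic form e'Ae in Gaussian errors, its null distribution is a weighted sum of independent chi-square(1) variables with the eigenvalues of A as weights, so simulating that cheap mixture replaces a full bootstrap refit. A minimal sketch with an invented design matrix, not the paper's penalized-spline construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Null-model design (made up): n observations, 3 covariates.
n = 50
X = rng.normal(size=(n, 3))
H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
A = np.eye(n) - H                              # RSS = e' A e under the null

# Eigenvalues of A give the weights of the chi-square mixture.
lam = np.linalg.eigvalsh(A)
lam = lam[lam > 1e-10]                         # drop null-space eigenvalues

# Null distribution by simulating the weighted chi-square mixture;
# no model refitting is needed, unlike the bootstrap.
n_sim = 100_000
draws = (lam * rng.chisquare(1, size=(n_sim, lam.size))).sum(axis=1)
crit = np.quantile(draws, 0.95)                # e.g. a 95% critical value
```

For this projector A the weights are all 1, so the mixture collapses to a chi-square with n-3 degrees of freedom; in the mixed-model test the eigenvalues are unequal and the mixture is nonstandard, which is where the speedup matters.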
Shabri, Ani; Samsudin, Ruhaidah
2014-01-01
Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR model for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with that of the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
Shabri, Ani; Samsudin, Ruhaidah
2014-01-01
Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR model for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with that of the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
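A minimal sketch of the decompose-then-regress pipeline, with a one-level Haar transform standing in for the Mallat filter bank, a made-up random-walk "price" series, and ordinary least squares in place of PSO-tuned parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_level(x):
    """One level of a Haar wavelet transform: approximation and detail."""
    pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))

# Made-up stand-in for a daily price series (random walk).
price = 50.0 + np.cumsum(rng.normal(size=512))

approx, detail = haar_level(price)
sub = np.column_stack([approx, detail])          # wavelet subseries

# PCA of the subseries; the scores feed the regression, as in WMLR.
sub_c = sub - sub.mean(axis=0)
_, _, Vt = np.linalg.svd(sub_c, full_matrices=False)
scores = sub_c @ Vt.T

# Multiple linear regression: predict the first price of the next pair
# from the current pair's PCA scores (PSO parameter tuning omitted).
y = price[2::2]                                  # one-step-ahead targets
Xr = np.column_stack([np.ones(len(y)), scores[:-1]])
beta, *_ = np.linalg.lstsq(Xr, y, rcond=None)
pred = Xr @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

For a random walk the last observed price is the best linear predictor, and it is recoverable from the approximation and detail coefficients, so the fit is close; the real study adds multilevel decomposition and PSO tuning on actual WTI data.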
Neural Networks for Rapid Design and Analysis
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.; Maghami, Peiman G.
1998-01-01
Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays as inputs, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.
Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM
NASA Technical Reports Server (NTRS)
Crane, Robert G.; Hewitson, Bruce
1990-01-01
Model simulations of global climate change are seen as an essential component of any program aimed at understanding human impact on the global environment. A major weakness of current general circulation models (GCMs), however, is their inability to predict reliably the regional consequences of a global scale change, and it is these regional scale predictions that are necessary for studies of human/environmental response. This research is directed toward the development of a methodology for the validation of the synoptic scale climatology of GCMs. This is developed with regard to the Goddard Institute for Space Studies (GISS) GCM Model 2, with the specific objective of using the synoptic circulation from a doubled-CO2 simulation to estimate regional climate change over North America, south of Hudson Bay. This progress report is specifically concerned with validating the synoptic climatology of the GISS GCM, and developing the transfer function to derive grid-point temperatures from the synoptic circulation. Principal Components Analysis is used to characterize the primary modes of the spatial and temporal variability in the observed and simulated climate, and the model validation is based on correlations between component loadings, and power spectral analysis of the component scores. The results show that the high resolution GISS model does an excellent job of simulating the synoptic circulation over the U.S., and that grid-point temperatures can be predicted with reasonable accuracy from the circulation patterns.
Computer models and output, Spartan REM: Appendix B
NASA Technical Reports Server (NTRS)
Marlowe, D. S.; West, E. J.
1984-01-01
A computer model of the Spartan Release Engagement Mechanism (REM) is presented in a series of numerical charts and engineering drawings. A crack growth analysis code is used to predict the fracture mechanics of critical components.
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-10-28
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, scalability, interoperability and correctness of component cooperation.
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-01-01
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, scalability, interoperability and correctness of component cooperation. PMID:27801829
NASA Astrophysics Data System (ADS)
Kolski, Jeffrey
The linear lattice properties of the Proton Storage Ring (PSR) at the Los Alamos Neutron Science Center (LANSCE) in Los Alamos, NM were measured and applied to determine an improved linear model of the accelerator. We found that the initial model was deficient in predicting the vertical focusing strength. The source of the additional vertical focusing was located through fundamental understanding of the experiment and statistically rigorous analysis. An improved model was constructed, compared against the initial model and measurement at operational set points and at set points far from nominal, and shown to indeed be an enhanced model. Independent component analysis (ICA) is a tool for data mining in many fields of science. Traditionally, ICA is applied to turn-by-turn beam position data as a means to measure the lattice functions of the real machine. Due to the diagnostic setup for the PSR, this method is not applicable. A new application method for ICA is derived: ICA applied along the length of the bunch. The ICA modes represent motions within the beam pulse. Several of the dominant ICA modes are experimentally identified.
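A hedged sketch of the ICA machinery itself (not the along-the-bunch diagnostic setup, which the thesis derives): whitening followed by the FastICA fixed-point iteration with a tanh contrast, applied to two invented non-Gaussian source motions mixed into several observed channels.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for beam data: two non-Gaussian source motions
# mixed into six "position" channels (all numbers invented).
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sign(np.sin(11 * 2 * np.pi * t)),     # square-wave mode
               np.sin(5 * 2 * np.pi * t) ** 3])         # flattened sine mode
A = rng.normal(size=(6, 2))
X = A @ S                                               # observed channels

# Whiten the observations down to the two significant directions.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
W_white = E[:, -2:] / np.sqrt(d[-2:])
Z = W_white.T @ Xc                                      # unit-covariance data

# FastICA fixed-point iteration with the tanh contrast (deflation scheme).
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = Z @ g / Z.shape[1] - (1 - g**2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)              # decorrelate from earlier modes
        w = w_new / np.linalg.norm(w_new)
    W[i] = w

modes = W @ Z                                           # recovered ICA modes
```

Each recovered mode should match one source up to sign and scale, which is the sense in which ICA "identifies" independent motions in mixed signals.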
Finite Element Model Development and Validation for Aircraft Fuselage Structures
NASA Technical Reports Server (NTRS)
Buehrle, Ralph D.; Fleming, Gary A.; Pappa, Richard S.; Grosveld, Ferdinand W.
2000-01-01
The ability to extend the valid frequency range for finite element based structural dynamic predictions using detailed models of the structural components and attachment interfaces is examined for several stiffened aircraft fuselage structures. This extended dynamic prediction capability is needed for the integration of mid-frequency noise control technology. Beam, plate and solid element models of the stiffener components are evaluated. Attachment models between the stiffener and panel skin range from a line along the rivets of the physical structure to a constraint over the entire contact surface. The finite element models are validated using experimental modal analysis results. The increased frequency range results in a corresponding increase in the number of modes, modal density and spatial resolution requirements. In this study, conventional modal tests using accelerometers are complemented with Scanning Laser Doppler Velocimetry and Electro-Optic Holography measurements to further resolve the spatial response characteristics. Whenever possible, component and subassembly modal tests are used to validate the finite element models at lower levels of assembly. Normal mode predictions for different finite element representations of components and assemblies are compared with experimental results to assess the most accurate techniques for modeling aircraft fuselage type structures.
Bayesian analysis of anisotropic cosmologies: Bianchi VIIh and WMAP
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Josset, T.; Feeney, S. M.; Peiris, H. V.; Lasenby, A. N.
2013-12-01
We perform a definitive analysis of Bianchi VIIh cosmologies with Wilkinson Microwave Anisotropy Probe (WMAP) observations of the cosmic microwave background (CMB) temperature anisotropies. Bayesian analysis techniques are developed to study anisotropic cosmologies using full-sky and partial-sky masked CMB temperature data. We apply these techniques to analyse the full-sky internal linear combination (ILC) map and a partial-sky masked W-band map of WMAP 9 yr observations. In addition to the physically motivated Bianchi VIIh model, we examine phenomenological models considered in previous studies, in which the Bianchi VIIh parameters are decoupled from the standard cosmological parameters. In the two phenomenological models considered, Bayes factors of 1.7 and 1.1 units of log-evidence favouring a Bianchi component are found in full-sky ILC data. The corresponding best-fitting Bianchi maps recovered are similar for both phenomenological models and are very close to those found in previous studies using earlier WMAP data releases. However, no evidence for a phenomenological Bianchi component is found in the partial-sky W-band data. In the physical Bianchi VIIh model, we find no evidence for a Bianchi component: WMAP data thus do not favour Bianchi VIIh cosmologies over the standard Λ cold dark matter (ΛCDM) cosmology. It is not possible to discount Bianchi VIIh cosmologies in favour of ΛCDM completely, but we are able to constrain the vorticity of physical Bianchi VIIh cosmologies at (ω/H)₀ < 8.6 × 10⁻¹⁰ with 95 per cent confidence.
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of the Space Shuttle Main Engine (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for the simulation of composite loads (dynamic, acoustic, high-pressure, high rotational speed, etc.) using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components, in conjunction with PSAM (Probabilistic Structural Analysis Method), to perform probabilistic load evaluation and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
The promise of the state space approach to time series analysis for nursing research.
Levy, Janet A; Elser, Heather E; Knobel, Robin B
2012-01-01
Nursing research, particularly related to physiological development, often depends on the collection of time series data. The state space approach to time series analysis has great potential to answer exploratory questions relevant to physiological development but has not been used extensively in nursing. The aim of the study was to introduce the state space approach to time series analysis and demonstrate potential applicability to neonatal monitoring and physiology. We present a set of univariate state space models; each one describing a process that generates a variable of interest over time. Each model is presented algebraically and a realization of the process is presented graphically from simulated data. This is followed by a discussion of how the model has been or may be used in two nursing projects on neonatal physiological development. The defining feature of the state space approach is the decomposition of the series into components that are functions of time; specifically, slowly varying level, faster varying periodic, and irregular components. State space models potentially simulate developmental processes where a phenomenon emerges and disappears before stabilizing, where the periodic component may become more regular with time, or where the developmental trajectory of a phenomenon is irregular. The ultimate contribution of this approach to nursing science will require close collaboration and cross-disciplinary education between nurses and statisticians.
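The decomposition named above (slowly varying level, periodic, and irregular components) can be simulated directly. A minimal sketch with invented parameters, including a periodic term whose amplitude grows over time, as the abstract suggests a developing rhythm might:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200   # e.g. 200 equally spaced monitoring observations (assumption)

# Slowly varying level: a random walk.
level = np.cumsum(rng.normal(scale=0.05, size=n))

# Periodic component whose amplitude grows with time, mimicking a
# physiological rhythm that becomes more pronounced as it develops
# (purely illustrative; period of 24 samples is an assumption).
amp = np.linspace(0.2, 1.0, n)
periodic = amp * np.sin(2 * np.pi * np.arange(n) / 24)

# Irregular component: white noise.
irregular = rng.normal(scale=0.3, size=n)

# One realization of the state space process: the sum of the components.
series = level + periodic + irregular
```

In a real analysis the components are not simulated but estimated from the observed series, typically with a Kalman filter and smoother; the simulation only shows what each component contributes.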
Impact analysis of composite aircraft structures
NASA Technical Reports Server (NTRS)
Pifko, Allan B.; Kushner, Alan S.
1993-01-01
The impact analysis of composite aircraft structures is discussed. Topics discussed include: background remarks on aircraft crashworthiness; comments on modeling strategies for crashworthiness simulation; initial study of simulation of progressive failure of an aircraft component constructed of composite material; and research direction in composite characterization for impact analysis.
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, a genetic algorithm (GA), as a variable selection technique, and an artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was applied to model the structure-activity relationships, and afterwards a combination of the PCA and ANN algorithms was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much as possible of the information contained in the original data set. The seven PCs most important to the studied activity were selected as inputs to the ANN by an efficient variable selection method, the GA. The best computational neural network model was a fully-connected, feed-forward model with a 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and the chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model is robust and satisfactory.
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of a combination of principal component analysis (PCA), as a well-known data reduction method, a genetic algorithm (GA), as a variable selection technique, and an artificial neural network (ANN), as a non-linear modeling method. First, a linear regression combined with PCA (principal component regression) was applied to model the structure–activity relationships, and afterwards a combination of the PCA and ANN algorithms was employed to accurately predict the biological activity of the P2X7 antagonists. PCA preserves as much as possible of the information contained in the original data set. The seven PCs most important to the studied activity were selected as inputs to the ANN by an efficient variable selection method, the GA. The best computational neural network model was a fully-connected, feed-forward model with a 7−7−1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and the chemical applicability domain. All validations showed that the constructed quantitative structure–activity relationship model is robust and satisfactory. PMID:26600858
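The PCA-plus-regression core of this pipeline (the principal component regression baseline step) can be sketched with synthetic data; the latent-factor structure, descriptor count, and noise levels below are invented, and a plain linear model stands in for the GA-selected ANN:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stand-in for the QSAR data: 49 molecules x 20 descriptors
# generated from a low-dimensional latent structure, with a made-up activity.
n, p = 49, 20
L = rng.normal(size=(n, 3))                          # latent factors
X = L @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
y = L[:, 0] - 0.5 * L[:, 1] + rng.normal(scale=0.1, size=n)

# PCA of the centered descriptor matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 7                                                # seven PCs, as in the abstract
T = Xc @ Vt[:k].T                                    # PC scores (model inputs)

# Principal component regression: ordinary least squares on the scores.
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), T]), y, rcond=None)
pred = np.column_stack([np.ones(n), T]) @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Because the synthetic descriptors are driven by three latent factors, the leading PCs capture the activity-relevant directions; the study replaces the linear step with a 7-7-1 ANN and uses a GA to pick the PCs.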
NASA Astrophysics Data System (ADS)
Avitabile, Peter; O'Callahan, John
2009-01-01
Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response analysis solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear when compared to the total assembled system. However, the joining of these linear subsystems using highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.
Comparative Analysis on Nonlinear Models for Ron Gasoline Blending Using Neural Networks
NASA Astrophysics Data System (ADS)
Aguilera, R. Carreño; Yu, Wen; Rodríguez, J. C. Tovar; Mosqueda, M. Elena Acevedo; Ortiz, M. Patiño; Juarez, J. J. Medel; Bautista, D. Pacheco
The blending process, being nonlinear, is difficult to model, since it may change significantly depending on the components and the process variables of each refinery. Different components can be blended depending on the existing stock, and the chemical characteristics of each component change dynamically; all are blended until the specifications required by the customer are met for the various properties of interest. One of the most relevant properties is the octane number, which is difficult to control in line (without component storage). Since each refinery process is quite different, a generic gasoline blending model is not useful when in-line blending is to be performed in a specific process. A mathematical gasoline blending model is presented in this paper for a given process, described in state space as a basic gasoline blending process description. The objective is to adjust the parameters so that the gasoline blending model describes a signal along its trajectory, representing the model with the extreme learning machine neural network method and with the nonlinear autoregressive-moving average (NARMA) neural network method, so that a comparative study can be developed.
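The extreme learning machine approach mentioned above has a compact generic form: random, fixed hidden-layer weights and a least-squares solve for the output weights only. A sketch on an invented octane-versus-volume-fraction blending curve (not refinery data; the curve, noise level, and hidden-layer size are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative stand-in for blending data: octane number as a nonlinear
# function of the volume fraction of a high-octane component (made up).
x = np.linspace(0, 1, 200)[:, None]
octane = 80 + 15 * x[:, 0] + 5 * np.sin(np.pi * x[:, 0])
y = octane + rng.normal(scale=0.2, size=200)          # noisy measurements

# Extreme learning machine: the input weights and biases are random and
# never trained; only the output weights are solved, by least squares.
n_hidden = 30
Win = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(x @ Win + b)                              # random hidden features
Wout, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(len(H)), H]), y, rcond=None)

pred = np.column_stack([np.ones(len(H)), H]) @ Wout
rmse = np.sqrt(((pred - y) ** 2).mean())
```

The single linear solve is what makes the ELM fast to fit; a NARMA-style model would instead feed delayed outputs back as inputs, which is the comparison the paper develops.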
Steindl, Theodora M; Crump, Carolyn E; Hayden, Frederick G; Langer, Thierry
2005-10-06
The development and application of a sophisticated virtual screening and selection protocol to identify potential, novel inhibitors of the human rhinovirus coat protein employing various computer-assisted strategies are described. A large commercially available database of compounds was screened using a highly selective, structure-based pharmacophore model generated with the program Catalyst. A docking study and a principal component analysis were carried out within the software package Cerius and served to validate and further refine the obtained results. These combined efforts led to the selection of six candidate structures, for which in vitro anti-rhinoviral activity could be shown in a biological assay.
Modelling safety of multistate systems with ageing components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna
An innovative approach to the safety analysis of multistate ageing systems is presented. Basic notions of ageing multistate system safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate system lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment of exceeding the critical safety state by the system are introduced. An application of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive "m out of n: F" system is presented as well.
System Modeling of Lunar Oxygen Production: Mass and Power Requirements
NASA Technical Reports Server (NTRS)
Steffen, Christopher J.; Freeh, Joshua E.; Linne, Diane L.; Faykus, Eric W.; Gallo, Christopher A.; Green, Robert D.
2007-01-01
A systems analysis tool for estimating the mass and power requirements for a lunar oxygen production facility is introduced. The individual modeling components involve the chemical processing and cryogenic storage subsystems needed to process a beneficiated regolith stream into liquid oxygen via ilmenite reduction. The power can be supplied from one of six different fission reactor-converter systems. A baseline system analysis, capable of producing 15 metric tons of oxygen per annum, is presented. The influence of reactor-converter choice was seen to have a small but measurable impact on the system configuration and performance. Finally, the mission concept of operations can have a substantial impact upon individual component size and power requirements.
An analysis method for multi-component airfoils in separated flow
NASA Technical Reports Server (NTRS)
Rao, B. M.; Duorak, F. A.; Maskew, B.
1980-01-01
The multi-component airfoil program (Langley-MCARF) for attached flow is modified to accept the free vortex sheet separation-flow model program (Analytical Methods, Inc.-CLMAX). The viscous effects are incorporated into the calculation by representing the boundary layer displacement thickness with an appropriate source distribution. The separation flow model incorporated into MCARF was applied to single component airfoils. Calculated pressure distributions for angles of attack up to the stall are in close agreement with experimental measurements. Even at higher angles of attack beyond the stall, correct trends of separation, decrease in lift coefficients, and increase in pitching moment coefficients are predicted.
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis, it can happen that the estimate of the residual additive genetic variance is bounded at zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
Dynamic analysis using superelements for a large helicopter model
NASA Technical Reports Server (NTRS)
Patel, M. P.; Shah, L. C.
1978-01-01
Using superelements (substructures), modal and frequency response analyses were performed for a large model of the Advanced Attack Helicopter developed for the U.S. Army. The whiffletree concept was employed so that the residual structure, along with the various superelements, could be represented as beam-like structures for economical and accurate dynamic analysis. A very large DMAP alter to the rigid format was developed so that the modal analysis, the frequency response, and the strain energy in each component could be computed in the same run.
A Model of Small Group Facilitator Competencies
ERIC Educational Resources Information Center
Kolb, Judith A.; Jin, Sungmi; Song, Ji Hoon
2008-01-01
This study used small group theory, quantitative and qualitative data collected from experienced practicing facilitators at three points of time, and a building block process of collection, analysis, further collection, and consolidation to develop a model of small group facilitator competencies. The proposed model has five components:…
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from systems of monitoring, telemetry, control, etc. They are required to be highly reliable so that they correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators, such as cost or power consumption. The systematic use of different component types increases the probability of successful task performance and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used to solve problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
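A small redundancy-allocation sketch can make the type-variety idea concrete. The component types, reliabilities, costs, and common-cause probabilities below are hypothetical, and the greedy heuristic only illustrates why mixing types pays off; it is not the paper's two-level optimization algorithm:

```python
# Hypothetical component types for one detection subsystem:
# (per-unit reliability, per-unit cost, common-cause failure probability).
# Units of one type can all fail together with probability q, so adding
# more of the same type saturates; mixing types breaks that ceiling.
types = {"optical": (0.90, 3.0, 0.02), "acoustic": (0.85, 2.0, 0.05)}

def group_failure(name, n):
    """Failure probability of n parallel units of one type: common-cause
    failure, or all n units failing independently."""
    r, _, q = types[name]
    if n == 0:
        return 1.0
    return q + (1.0 - q) * (1.0 - r) ** n

def subsystem_reliability(counts):
    """The subsystem fails only if every type group fails."""
    p_fail = 1.0
    for name, n in counts.items():
        p_fail *= group_failure(name, n)
    return 1.0 - p_fail

def greedy_allocate(budget):
    """Repeatedly add the unit with the best reliability gain per unit cost."""
    counts = {name: 0 for name in types}
    spent = 0.0
    while True:
        best = None
        for name, (r, cost, q) in types.items():
            if spent + cost > budget:
                continue
            trial = dict(counts)
            trial[name] += 1
            gain = subsystem_reliability(trial) - subsystem_reliability(counts)
            if best is None or gain / cost > best[0]:
                best = (gain / cost, name, cost)
        if best is None:
            return counts, subsystem_reliability(counts)
        counts[best[1]] += 1
        spent += best[2]

counts, rel = greedy_allocate(budget=10.0)
print(counts, round(rel, 6))
```

With common-cause terms included, the greedy search ends up buying units of both types rather than stacking the cheapest one, which is the type-variety effect in miniature.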
Guo, Jin-Cheng; Wu, Yang; Chen, Yang; Pan, Feng; Wu, Zhi-Yong; Zhang, Jia-Sheng; Wu, Jian-Yi; Xu, Xiu-E; Zhao, Jian-Mei; Li, En-Min; Zhao, Yi; Xu, Li-Yan
2018-04-09
Esophageal squamous cell carcinoma (ESCC) is the predominant subtype of esophageal carcinoma in China. The aim of this study was to develop a staging model to predict outcomes of patients with ESCC. Using Cox regression analysis, principal component analysis (PCA), partitioning clustering, Kaplan-Meier analysis, receiver operating characteristic (ROC) curve analysis, and classification and regression tree (CART) analysis, we mined the Gene Expression Omnibus database to determine the expression profiles of genes in 179 patients with ESCC from the GSE63624 and GSE63622 datasets. Univariate Cox regression analysis of the GSE63624 dataset revealed that 2404 protein-coding genes (PCGs) and 635 long non-coding RNAs (lncRNAs) were associated with the survival of patients with ESCC. PCA categorized these PCGs and lncRNAs into three principal components (PCs), which were used to cluster the patients into three groups. ROC analysis demonstrated that the predictive ability of the PCG-lncRNA PCs when applied to new patients was better than that of tumor-node-metastasis staging (area under ROC curve [AUC]: 0.69 vs. 0.65, P < 0.05). Accordingly, we constructed a molecular disaggregated model comprising one lncRNA and two PCGs, which we designated the LSB staging model, using CART analysis in the GSE63624 dataset. This LSB staging model classified the GSE63622 dataset of patients into three different groups, and its effectiveness was validated by analysis of another cohort of 105 patients. The LSB staging model has clinical significance for the prognosis prediction of patients with ESCC and may serve as a three-gene staging microarray.
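The PCA-then-partitioning step can be sketched on synthetic data. The matrix below is a made-up stand-in for the survival-associated PCG/lncRNA expression matrix from GSE63624, and the cluster count of three mirrors the study's grouping:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for an expression matrix: 90 patients x 20 features,
# drawn around three hypothetical group centroids.
centroids = rng.normal(0.0, 3.0, size=(3, 20))
X = np.vstack([c + rng.normal(0.0, 1.0, size=(30, 20)) for c in centroids])

# PCA via eigendecomposition of the covariance of centered data.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)           # eigenvalues ascending
pcs = Xc @ vecs[:, ::-1][:, :3]            # scores on the top 3 components

# Partitioning clustering (simple k-means) in PC space, with
# farthest-point initialization for a stable start.
idx = [0]
for _ in range(2):
    d = np.min(np.linalg.norm(pcs[:, None, :] - pcs[idx][None, :, :], axis=2), axis=1)
    idx.append(int(d.argmax()))
centers = pcs[idx].copy()
for _ in range(20):
    d = np.linalg.norm(pcs[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([pcs[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(3)])

print("group sizes:", np.bincount(labels, minlength=3))
```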
Time series analysis of collective motions in proteins
NASA Astrophysics Data System (ADS)
Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.
2004-01-01
The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters, and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm⁻¹ range, which are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions for two successive sampling times, showing the mode's tendency to stay close to a minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but takes place between energy barriers.
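The frequency/damping reading of the autoregressive description can be sketched numerically. This is a minimal illustration, not the authors' estimation procedure: it simulates an AR(2) process with an assumed pole pair r·exp(±iω) (ω playing the role of the mode frequency and r of the damping) and recovers both parameters by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a principal-component time series: an AR(2)
# process whose complex pole pair r*exp(+/- i*omega) encodes a damped
# oscillation, driven by random shocks.
r, omega = 0.95, 0.30                      # assumed "true" pole parameters
a1, a2 = 2 * r * np.cos(omega), -r ** 2    # corresponding AR(2) coefficients
x = np.zeros(20_000)
for t in range(2, len(x)):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Least-squares AR(2) fit: regress x_t on (x_{t-1}, x_{t-2}).
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
a1_hat, a2_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Recover frequency and damping from the fitted pole pair.
r_hat = np.sqrt(-a2_hat)
omega_hat = np.arccos(a1_hat / (2 * r_hat))
print(f"omega ~ {omega_hat:.3f}, damping r ~ {r_hat:.3f}")
```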
Cocco, S; Monasson, R; Sessak, V
2011-05-01
We consider the problem of inferring the interactions between a set of N binary variables from the knowledge of their frequencies and pairwise correlations. The inference framework is based on the Hopfield model, a special case of the Ising model where the interaction matrix is defined through a set of patterns in the variable space, and is of rank much smaller than N. We show that maximum likelihood inference is deeply related to principal component analysis when the amplitude of the pattern components ξ is negligible compared to √N. Using techniques from statistical mechanics, we calculate the corrections to the patterns to the first order in ξ/√N. We stress the need to generalize the Hopfield model and include both attractive and repulsive patterns in order to correctly infer networks with sparse and strong interactions. We present a simple geometrical criterion to decide how many attractive and repulsive patterns should be considered as a function of the sampling noise. We moreover discuss how many sampled configurations are required for a good inference, as a function of the system size N and of the amplitude ξ. The inference approach is illustrated on synthetic and biological data.
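The stated link between maximum-likelihood pattern inference and principal component analysis can be illustrated with a Gaussian proxy. The paper's variables are binary (exact Ising sampling would need MCMC), so this sketch only plants a pattern direction in a covariance matrix and checks that PCA recovers it; the amplitude and sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 50, 20_000                          # variables, samples

# Planted pattern xi, normalized so ||xi|| = 1 (amplitude small vs sqrt(N)).
xi = rng.choice([-1.0, 1.0], size=N) / np.sqrt(N)
C = np.eye(N) + 4.0 * np.outer(xi, xi)     # covariance with one planted direction
L = np.linalg.cholesky(C)
X = rng.standard_normal((M, N)) @ L.T      # Gaussian samples with covariance C

# PCA of the sample covariance: the top principal component should align
# with the planted pattern, mirroring the ML-inference <-> PCA connection.
S = X.T @ X / M
vals, vecs = np.linalg.eigh(S)
top = vecs[:, -1]                          # eigenvector of the largest eigenvalue
overlap = abs(top @ xi) / np.linalg.norm(xi)
print(f"overlap with planted pattern ~ {overlap:.3f}")
```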
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of circuit design, although initial designs and optimisations are still faster and more conveniently done entirely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between simulations and measurements. We here design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the observed differences, pointing out the importance of de-embedding measured parameters and appropriately modelling discrete components, and giving specific recipes for good modelling practice.
A verification procedure for MSC/NASTRAN Finite Element Models
NASA Technical Reports Server (NTRS)
Stockwell, Alan E.
1995-01-01
Finite Element Models (FEM's) are used in the design and analysis of aircraft to mathematically describe the airframe structure for such diverse tasks as flutter analysis and actively controlled landing gear design. FEM's are used to model the entire airplane as well as airframe components. The purpose of this document is to describe recommended methods for verifying the quality of the FEM's and to specify a step-by-step procedure for implementing the methods.
NASA Astrophysics Data System (ADS)
Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir
2017-06-01
This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly-detection tool is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
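A minimal single-scale sample entropy (SampEn) sketch; the multi-scale version applies the same statistic to coarse-grained copies of the signal. The test signals and tolerance below are illustrative, not the paper's current data:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of matching templates
    of length m and A those of length m+1, under the Chebyshev distance
    with tolerance r = r_frac * std(x). Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        n = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            n += int(np.sum(d <= r))
        return n

    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(3)
t = np.arange(1000)
regular = np.sin(2 * np.pi * t / 50)       # highly regular signal: low SampEn
noise = rng.standard_normal(1000)          # irregular signal: high SampEn
print(sample_entropy(regular), sample_entropy(noise))
```

Low entropy for the regular signal versus high entropy for noise is the contrast the anomaly detector exploits: a damaged gear perturbs the regularity of the current signature across scales.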
Kałka, Andrzej J; Turek, Andrzej M
2018-04-03
'White' and 'grey' methods of data modeling have been employed to resolve the heterogeneous fluorescence from a fluorophore mixture of 9-cyanoanthracene (CNA), 10-chloro-9-cyanoanthracene (ClCNA) and 9,10-dicyanoanthracene (DCNA) into the individual fluorescence spectra of the components. The three-component fluorescence quenching spectra in methanol were recorded for increasing amounts of lithium bromide used as a quencher. The associated intensity decay profiles of the differentially quenched fluorescence of the single components were modeled on the basis of a linear Stern-Volmer plot. These profiles are necessary to initiate the fitting procedure in both 'white' and 'grey' modeling of the original data matrices. 'White' methods of data modeling, also called 'hard' methods, are based on chemical/physical laws expressed in terms of well-known or generally accepted mathematical equations. The parameters of these models are not known and are estimated by least-squares curve fitting. 'Grey' approaches to data modeling, also known as hard-soft modeling techniques, make use of both hard-model and soft-model parts. In practice, the difference between 'white' and 'grey' methods lies in the way in which the 'crude' fluorescence intensity decays of the mixture components are estimated: in the former case they are given in functional form, while in the latter they are digitized curves which, in general, can only be obtained by using dedicated techniques of factor analysis.
In this paper, the initial values of the Stern-Volmer constants of the pure components were evaluated in several ways: by the 'point-by-point' and 'matrix' versions of the method based on the concept of wavelength-dependent intensity fractions, and by rank annihilation factor analysis applied to data matrices of difference fluorescence spectra constructed in two ways, either from spectra recorded for a few excitation lines at the same quencher concentration or, classically, from a series of spectra measured for one selected excitation line at increasing quencher concentrations. The results of multiple curve resolution obtained by all of the applied methods have been scrutinized and compared. In addition, the effects of inadequate sample preparation and increasing instrumental noise on the shape of the resolved spectral profiles have been studied on several datasets mimicking the measured data matrices.
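The linear Stern-Volmer relation used to initialize the fits, F0/F = 1 + Ksv·[Q], reduces to a least-squares slope estimate. The quencher concentrations and Ksv value below are hypothetical, not the measured LiBr series:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear Stern-Volmer model: F0/F = 1 + Ksv * [Q], with a planted constant
# and hypothetical quencher concentrations standing in for the LiBr series.
ksv_true = 12.0
q = np.linspace(0.0, 0.5, 11)              # quencher concentration, mol/L
f0_over_f = 1.0 + ksv_true * q + rng.normal(0.0, 0.02, size=q.size)

# Least-squares estimate of Ksv from the slope, fitting
# (F0/F - 1) = Ksv * [Q] through the origin.
ksv_hat = np.sum(q * (f0_over_f - 1.0)) / np.sum(q * q)
print(f"Ksv ~ {ksv_hat:.2f} L/mol")
```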
The Analysis of Three-Way Contingency Tables by Three-Mode Association Models.
ERIC Educational Resources Information Center
Anderson, Carolyn J.
1996-01-01
Generalizations of L. A. Goodman's RC(M) association model (1991 and earlier) are presented for three-way tables. These three-mode association models use L. R. Tucker's three-mode components model (1964, 1966) to represent the three-factor interaction or the combined effects of two- and three-factor interactions. (SLD)
Improved analyses using function datasets and statistical modeling
John S. Hogland; Nathaniel M. Anderson
2014-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...
Perceptions of the Students toward Studio Physics
ERIC Educational Resources Information Center
Gok, Tolga
2011-01-01
The purpose of this study was not only to report the development process of the studio model, but also to determine the students' perceptions about the studio model. This model retains the large lecture component but combines recitation and laboratory instruction into studio model. This research was based on qualitative analysis. The data of the…
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting the use of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for the few optical states corresponding to the most important principal components, and correction factors are applied to approximate the radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for the major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions.
The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work on which is under way.
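The core PCA compression idea can be sketched on synthetic data: profiles built from a few smooth basis functions (standing in for binned, redundant optical-property profiles) are reconstructed from a handful of empirical orthogonal functions. The profile shapes below are assumptions, not UPCART inputs:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-in for binned optical-property profiles: 500 atmospheric
# states x 40 layers, built from 3 smooth basis profiles plus small noise,
# mimicking the redundancy the PCA method exploits.
layers = np.linspace(0.0, 1.0, 40)
basis = np.array([np.exp(-layers / scale) for scale in (0.2, 0.5, 1.0)])
weights = rng.lognormal(0.0, 0.3, size=(500, 3))
X = weights @ basis + rng.normal(0.0, 1e-3, size=(500, 40))

# Empirical orthogonal functions from the centered data; keep the leading k.
mean = X.mean(axis=0)
U, sv, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 3
X_hat = mean + (U[:, :k] * sv[:k]) @ Vt[:k]

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error with {k} EOFs ~ {rel_err:.2e}")
```

In the RT setting, the expensive multiple-scattering solver would be run only at the k retained component states, with correction factors recovering the full field; the sketch shows only the compression step.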
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
Normal mixture distributions models have been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
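Given fitted parameters, VaR and CVaR for a two-component normal mixture can be sketched as a quantile search on the mixture CDF plus a tail average. The weights, means, and standard deviations below are illustrative, not the FBMKLCI estimates:

```python
import math
import numpy as np

# Two-component normal mixture for returns; parameter values are
# illustrative only.
w = np.array([0.8, 0.2])                   # component weights
mu = np.array([0.01, -0.02])               # component means
sd = np.array([0.03, 0.08])                # component standard deviations

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mixture_cdf(x):
    return sum(wi * norm_cdf((x - mi) / si) for wi, mi, si in zip(w, mu, sd))

def var(alpha, lo=-1.0, hi=1.0):
    """Value at risk: the alpha-quantile of the return distribution,
    found by bisection on the mixture CDF."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cvar(alpha, n=200_000, seed=0):
    """Conditional VaR: mean return given return <= VaR, by Monte Carlo."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(2, size=n, p=w)
    r = rng.normal(mu[comp], sd[comp])
    return r[r <= var(alpha)].mean()

q05 = var(0.05)
print(f"VaR(5%) ~ {q05:.4f}, CVaR(5%) ~ {cvar(0.05):.4f}")
```

The heavy-tailed second component fattens the loss tail, which is exactly the leptokurtosis a single normal misses when estimating these risk measures.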
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
NASA Technical Reports Server (NTRS)
Ling, Lisa
2014-01-01
For the purpose of performing safety analysis and risk assessment for a probable off-nominal suborbital/orbital atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. This report discusses the capabilities, modeling, and validation of the SPEAD analysis tool. SPEAD is applicable for Earth or Mars, with the option for 3 or 6 degrees-of-freedom (DOF) trajectory propagation. The atmosphere and aerodynamics data are supplied in tables, for linear interpolation of up to 4 independent variables. The gravitation model can include up to 20 zonal harmonic coefficients. The modeling of a single motor is available and can be adapted to multiple motors. For thermal analysis, the aerodynamic radiative and free-molecular/continuum convective heating, black-body radiative cooling, conductive heat transfer between adjacent nodes, and node ablation are modeled. In a 6-DOF simulation, the local convective heating on a node is a function of Mach number, angle-of-attack, and sideslip angle, and is dependent on 1) the location of the node in the spacecraft and its orientation to the flow, modeled by an exposure factor, and 2) the geometries of the spacecraft and the node, modeled by a heating factor and convective area. Node failure is evaluated using criteria based on melting temperature, reference heat load, g-load, or a combination of the above. The failure of a liquid propellant tank is evaluated based on burnout flux from nucleate boiling or excess internal pressure. Following a component failure, updates are made as needed to the spacecraft mass and aerodynamic properties, nodal exposure and heating factors, and nodal convective and conductive areas.
This allows the trajectory to be propagated seamlessly in a single run, inclusive of the trajectories of components that have separated from the spacecraft. The node ablation simulates the decreasing mass and convective/reference areas, and variable heating factor. A built-in database provides the thermo-mechanical properties of
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for future work in this area of research are presented.
NASA Technical Reports Server (NTRS)
DiStefano, III, Frank James (Inventor); Wobick, Craig A. (Inventor); Chapman, Kirt Auldwin (Inventor); McCloud, Peter L. (Inventor)
2014-01-01
A thermal fluid system modeler including a plurality of individual components. A solution vector is configured and ordered as a function of one or more inlet dependencies of the plurality of individual components. A fluid flow simulator simulates thermal energy being communicated with the flowing fluid and between first and second components of the plurality of individual components. The simulation extends from an initial time to a later time step and bounds heat transfer to be substantially between the flowing fluid, walls of tubes formed in each of the individual components of the plurality, and between adjacent tubes. Component parameters of the solution vector are updated with simulation results for each of the plurality of individual components of the simulation.
A Conceptual Framework for Analysis of Communication in Rural Social Systems.
ERIC Educational Resources Information Center
Axinn, George H.
This paper describes a five-component system with ten major internal linkages which may be used as a model for studying information flow in any rural agricultural social system. The major components are production, supply, marketing, research, and extension education. In addition, definitions are offered of the crucial variables affecting…
Solid Modeling of Crew Exploration Vehicle Structure Concepts for Mass Optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2006-01-01
Parametric solid and surface models of the crew exploration vehicle (CEV) command module (CM) structure concepts are developed for rapid finite element analyses, structural sizing, and estimation of optimal structural mass. The effects of the structural configuration and critical design parameters on the stress distribution are visualized and examined to arrive at an efficient design. The CM structural components consist of the outer heat shield, inner pressurized crew cabin, ring bulkhead and spars. For this study only the internal cabin pressure load case is considered. Component stress, deflection, margins of safety and mass are used as design goodness criteria. The design scenario is explored by changing the component thickness parameters and materials until an acceptable design is achieved. Properties of an aluminum alloy, a titanium alloy, and an advanced composite material are considered for the stress analysis, and the results are compared as part of lessons learned and to build up a structural component sizing knowledge base for future CEV technology support. This independent structural analysis and the design-scenario-based optimization process may also facilitate better CM structural definition and rapid prototyping.
The Decadal Climate Prediction Project (DCPP) contribution to CMIP6
Boer, George J.; Smith, Douglas M.; Cassou, Christophe; ...
2016-01-01
The Decadal Climate Prediction Project (DCPP) is a coordinated multi-model investigation into decadal climate prediction, predictability, and variability. The DCPP makes use of past experience in simulating and predicting decadal variability and forced climate change gained from the fifth Coupled Model Intercomparison Project (CMIP5) and elsewhere. It builds on recent improvements in models, in the reanalysis of climate data, in methods of initialization and ensemble generation, and in data treatment and analysis to propose an extended comprehensive decadal prediction investigation as a contribution to CMIP6 (Eyring et al., 2016) and to the WCRP Grand Challenge on Near Term Climate Prediction (Kushnir et al., 2016). The DCPP consists of three components. Component A comprises the production and analysis of an extensive archive of retrospective forecasts to be used to assess and understand historical decadal prediction skill, as a basis for improvements in all aspects of end-to-end decadal prediction, and as a basis for forecasting on annual to decadal timescales. Component B undertakes ongoing production, analysis and dissemination of experimental quasi-real-time multi-model forecasts as a basis for potential operational forecast production. Component C involves the organization and coordination of case studies of particular climate shifts and variations, both natural and naturally forced (e.g. the “hiatus”, volcanoes), including the study of the mechanisms that determine these behaviours.
Furthermore, groups are invited to participate in as many or as few of the components of the DCPP, each of which is separately prioritized, as are of interest to them. The Decadal Climate Prediction Project addresses a range of scientific issues involving the ability of the climate system to be predicted on annual to decadal timescales, the skill that is currently and potentially available, the mechanisms involved in long-timescale variability, and the production of forecasts of benefit to both science and society.
NASA Astrophysics Data System (ADS)
Darma Tarigan, Suria
2016-01-01
Flooding is caused by excessive rainfall flowing downstream as cumulative surface runoff. A flooding event is the result of complex interactions among natural system components such as rainfall events, land use, soil, topography, and channel characteristics. Modeling flooding events as the result of the interaction of those components is a central theme in watershed management. Such models are usually used to test the performance of various management practices in flood mitigation. There are various types of management practices for flood mitigation, including vegetative and structural practices. Existing hydrological models such as SWAT and HEC-HMS have limited ability to accommodate discrete management practices such as infiltration wells, small farm reservoirs, and silt pits in their analysis, due to the lumped structure of these models. The aim of this research is to use the raster spatial analysis functions of a Geo-Information System (RGIS-HM) to model a flooding event in the Ciliwung watershed and to simulate the impact of discrete management practices on surface runoff reduction. The model was validated using data from the flooding event in the Ciliwung watershed on 29 January 2004. Hourly hydrograph data and rainfall data were available during the period of model validation. The model validation gave good results, with a Nash-Sutcliffe efficiency of 0.8. We also compared the RGIS-HM with the Netlogo Hydrological Model (NL-HM). The RGIS-HM has a capability similar to that of the NL-HM in simulating discrete management practices at the watershed scale.
Absolute wind velocities in the lower thermosphere of Venus using infrared heterodyne spectroscopy
NASA Technical Reports Server (NTRS)
Goldstein, Jeffrey J.; Mumma, Michael J.; Kostiuk, Theodor; Deming, Drake; Espenak, Fred; Zipoy, David
1991-01-01
NASA's IR Telescope Facility and the McMath Solar Telescope have yielded absolute wind velocities in the Venus thermosphere for December 1985 to March 1987 with sufficient spatial resolution for circulation model discrimination. A qualitative analysis of beam-integrated winds indicates subsolar-to-antisolar circulation in the lower thermosphere; horizontal wind velocity was derived from a two-parameter model wind field of subsolar-antisolar and zonal components. A unique model fit common to all observing periods possessed 120 m/sec subsolar-antisolar and 25 m/sec zonal retrograde components, consistent with the Bougher et al. (1986, 1988) hydrodynamical models for 110 km.
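The two-parameter wind-field fit described above can be illustrated with a toy grid search. The projection geometry below is a deliberately simplified assumption (an idealized equatorial cut), not the authors' actual disk model:

```python
import math

def los_velocity(v_ss, v_z, lon):
    # Simplified, hypothetical projection: the subsolar-to-antisolar flow
    # projects onto the line of sight as sin(longitude from the subsolar
    # point), and the zonal retrograde flow as cos(longitude).
    return v_ss * math.sin(lon) + v_z * math.cos(lon)

# Synthetic "observations" generated from the reported best-fit values.
true_ss, true_z = 120.0, 25.0
lons = [math.radians(d) for d in range(-80, 81, 10)]
obs = [los_velocity(true_ss, true_z, lon) for lon in lons]

# Grid search for the (v_ss, v_z) pair minimizing the sum of squared residuals.
best = min(
    ((ss, z) for ss in range(0, 201, 5) for z in range(0, 51, 5)),
    key=lambda p: sum((los_velocity(p[0], p[1], lon) - o) ** 2
                      for lon, o in zip(lons, obs)),
)
```

With noise-free synthetic data the search recovers the generating parameters exactly; real beam-integrated data would require the full disk geometry and error weighting.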
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
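As a toy illustration of the posterior-probability step in such reduced-dimensional classifiers (this is not the TSDCN itself, and the class parameters are hypothetical), posteriors under single-Gaussian class models with equal priors can be computed as:

```python
import math

def gaussian_pdf(x, mu, var):
    """Density of a univariate Gaussian with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def class_posteriors(x, classes):
    """Posterior P(class | x) for equal priors: normalized class likelihoods.

    `classes` maps a class name to hypothetical (mean, variance) parameters
    in the one-dimensional reduced feature space."""
    likes = {name: gaussian_pdf(x, mu, var) for name, (mu, var) in classes.items()}
    total = sum(likes.values())
    return {name: like / total for name, like in likes.items()}

post = class_posteriors(0.8, {"rest": (-1.0, 1.0), "task": (1.0, 1.0)})
```

A feature value nearer the "task" mean yields a higher "task" posterior; the TSDCN additionally learns the dimensionality reduction and the mixture parameters jointly by backpropagation through time.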
Design of ceramic components with the NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.
1990-01-01
The ceramics analysis and reliability evaluation of structures (CARES) computer program is described. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. CARES uses results from MSC/NASTRAN or ANSYS finite-element analysis programs to evaluate how inherent surface and/or volume type flaws affect component reliability. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using a least-squares analysis or a maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, 90 percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan 90 percent confidence band values are also provided. Examples are provided to illustrate the various features of CARES.
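A minimal sketch of the two statistical ingredients named above, the two-parameter Weibull failure probability and the principle of independent action (PIA), follows. It is illustrative only and not the CARES implementation (element volumes, stress integration, and the Batdorf model are omitted):

```python
import math

def weibull_pf(stress, sigma0, m):
    """Two-parameter Weibull cumulative failure probability for one
    stressed element: Pf = 1 - exp(-(stress/sigma0)**m)."""
    if stress <= 0.0:
        return 0.0
    return 1.0 - math.exp(-((stress / sigma0) ** m))

def pia_pf(principal_stresses, sigma0, m):
    """Principle of independent action: each tensile principal stress
    contributes an independent Weibull risk, so survival probabilities
    multiply (risks of rupture add in the exponent)."""
    risk = sum((s / sigma0) ** m for s in principal_stresses if s > 0.0)
    return 1.0 - math.exp(-risk)
```

At the characteristic strength (stress equal to sigma0), the element failure probability is 1 - 1/e, about 63.2 percent, independent of the Weibull modulus m.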
Zhang, Xiaolei; Liu, Fei; He, Yong; Li, Xiaoli
2012-01-01
Hyperspectral imaging in the visible and near infrared (VIS-NIR) region was used to develop a novel method for discriminating different varieties of commodity maize seeds. Firstly, hyperspectral images of 330 samples of six varieties of maize seeds were acquired using a hyperspectral imaging system in the 380–1,030 nm wavelength range. Secondly, principal component analysis (PCA) and kernel principal component analysis (KPCA) were used to explore the internal structure of the spectral data. Thirdly, three optimal wavelengths (523, 579 and 863 nm) were selected by implementing PCA directly on each image. Then four textural variables including contrast, homogeneity, energy and correlation were extracted from gray level co-occurrence matrix (GLCM) of each monochromatic image based on the optimal wavelengths. Finally, several models for maize seeds identification were established by least squares-support vector machine (LS-SVM) and back propagation neural network (BPNN) using four different combinations of principal components (PCs), kernel principal components (KPCs) and textural features as input variables, respectively. The recognition accuracy achieved in the PCA-GLCM-LS-SVM model (98.89%) was the most satisfactory one. We conclude that hyperspectral imaging combined with texture analysis can be implemented for fast classification of different varieties of maize seeds. PMID:23235456
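The GLCM texture step can be sketched in a few lines; the tiny image below is hypothetical, and the correlation feature used in the paper is omitted for brevity:

```python
def glcm(image, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    h, w = len(image), len(image[0])
    counts, total = {}, 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                pair = (image[y][x], image[y2][x2])
                counts[pair] = counts.get(pair, 0) + 1
                total += 1
    return {pair: c / total for pair, c in counts.items()}

def texture_features(P):
    """Contrast, homogeneity, and energy of a normalized GLCM
    (the paper also uses correlation, omitted here for brevity)."""
    contrast = sum(p * (i - j) ** 2 for (i, j), p in P.items())
    homogeneity = sum(p / (1.0 + (i - j) ** 2) for (i, j), p in P.items())
    energy = sum(p * p for p in P.values())
    return contrast, homogeneity, energy
```

A perfectly uniform image has zero contrast and energy of 1; alternating columns give nonzero contrast for a horizontal offset.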
Development of a Predictive Corrosion Model Using Locality-Specific Corrosion Indices
2017-09-12
3.2.1 Statistical data analysis methods; 3.2.2 Algorithm development method. The components and methods were compiled into an executable program that uses mathematical models of materials degradation and statistical calculations.
MDOT Pavement Management System : Prediction Models and Feedback System
DOT National Transportation Integrated Search
2000-10-01
As a primary component of a Pavement Management System (PMS), prediction models are crucial for one or more of the following analyses: : maintenance planning, budgeting, life-cycle analysis, multi-year optimization of maintenance works program, and a...
Marketing and Distribution: Better Learning Experiences through Proper Coordination.
ERIC Educational Resources Information Center
Coakley, Carroll B.
1979-01-01
Presents a cooperative education model that correlates the student's occupational objective with his/her training station. Components of the model discussed are (1) the task analysis, (2) the job description, (3) training plans, and (4) student evaluation. (LRA)
Wilhelmsen, Øivind; Bedeaux, Dick; Kjelstrup, Signe; Reguera, David
2014-01-14
Formation of nanosized droplets/bubbles from a metastable bulk phase is connected to many unresolved scientific questions. We analyze the properties and stability of multicomponent droplets and bubbles in the canonical ensemble, and compare with single-component systems. The bubbles/droplets are described on the mesoscopic level by square gradient theory. Furthermore, we compare the results to a capillary model which gives a macroscopic description. Remarkably, the solutions of the square gradient model, representing bubbles and droplets, are accurately reproduced by the capillary model except in the vicinity of the spinodals. The solutions of the square gradient model form closed loops, which shows the inherent symmetry and connected nature of bubbles and droplets. A thermodynamic stability analysis is carried out, where the second variation of the square gradient description is compared to the eigenvalues of the Hessian matrix in the capillary description. The analysis shows that it is impossible to stabilize arbitrarily small bubbles or droplets in closed systems and gives insight into metastable regions close to the minimum bubble/droplet radii. Despite the large difference in complexity, the square gradient and the capillary model predict the same finite threshold sizes and very similar stability limits for bubbles and droplets, both for single-component and two-component systems.
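The capillary (macroscopic) description referred to above reduces, for the critical cluster, to the Laplace condition. A minimal sketch with illustrative water-like numbers (not values from the paper):

```python
import math

def critical_radius(surface_tension, delta_p):
    """Laplace condition for mechanical equilibrium of a spherical
    bubble/droplet: r* = 2*sigma / delta_p."""
    return 2.0 * surface_tension / delta_p

def formation_work(surface_tension, delta_p):
    """Classical work of forming the critical cluster:
    W* = 16*pi*sigma**3 / (3*delta_p**2)."""
    return 16.0 * math.pi * surface_tension ** 3 / (3.0 * delta_p ** 2)

# Illustrative water-like numbers: sigma = 0.072 N/m, delta_p = 1e5 Pa.
r_star = critical_radius(0.072, 1.0e5)
w_star = formation_work(0.072, 1.0e5)
```

The critical radius marks the unstable equilibrium separating shrinking from growing clusters, which is why, as the abstract notes, arbitrarily small bubbles or droplets cannot be stabilized in closed systems.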
[Discrimination of varieties of brake fluid using visual-near infrared spectra].
Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong
2008-06-01
A new method was developed to rapidly discriminate brands of brake fluid by means of visible-near infrared spectroscopy. Five different brands of brake fluid were analyzed using a handheld near infrared spectrograph manufactured by ASD Company, and 60 samples were obtained from each brand. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A 2-dimensional plot was drawn based on the first and second principal components, and the plot indicated that the clustering of the different brake fluids is distinct. The first 6 principal components were taken as input variables, and the brand of brake fluid as the output variable, to build the discriminant model by the stepwise discriminant analysis method. Two hundred twenty-five samples selected randomly were used to create the model, and the remaining 75 samples to verify it. The result showed that the distinguishing rate was 94.67%, indicating that the method proposed in this paper has good performance in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.
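The standard normal variate (SNV) pretreatment mentioned above simply centers and scales each spectrum by its own statistics; a minimal sketch with a hypothetical six-point spectrum:

```python
import statistics

def snv(spectrum):
    """Standard normal variate: subtract the spectrum's own mean and divide
    by its own standard deviation, suppressing baseline and scatter effects
    before PCA or discriminant analysis."""
    mu = statistics.fmean(spectrum)
    sd = statistics.stdev(spectrum)
    return [(v - mu) / sd for v in spectrum]

corrected = snv([0.41, 0.40, 0.44, 0.52, 0.63, 0.58])
```

After SNV, every spectrum has zero mean and unit standard deviation, so only its shape carries information into the subsequent PCA.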
Deepak Condenser Model (DeCoM)
NASA Technical Reports Server (NTRS)
Patel, Deepak
2013-01-01
Development of DeCoM comes from the requirement of analyzing the performance of a condenser. A component of a loop heat pipe (LHP), the condenser, is interfaced with the radiator in order to reject heat. DeCoM simulates the condenser given certain input parameters. Systems Improved Numerical Differencing Analyzer (SINDA), a thermal analysis software package, calculates the adjoining component temperatures based on the DeCoM parameters and the interface temperatures to the radiator. Application of DeCoM is (at the time of this reporting) restricted to small-scale analysis, without the need for in-depth LHP component integrations. DeCoM was developed to simulate the LHP condenser with the least complexity. DeCoM is a single-condenser, single-pass simulator for analyzing condenser behavior. The analysis is based on the interactions between the condenser fluid, the wall, and the interface between the wall and the radiator. DeCoM is based on conservation of energy, two-phase equations, and flow equations. For two-phase flow, the Lockhart-Martinelli correlation has been used in order to calculate the convection value between fluid and wall. Software such as SINDA (for thermal analysis) and Thermal Desktop (for modeling) is required. DeCoM also includes the ability to implement a condenser into a thermal model, with the capability of understanding the code process and editing it to user-specific needs. DeCoM requires no license and is open-source code. Advantages of DeCoM include time dependency, reliability, and the ability for the user to view the code process and edit it to their needs.
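The Lockhart-Martinelli step can be sketched with Chisholm's algebraic form of the correlation; the choice C = 20 (turbulent liquid, turbulent vapour) is an assumption for illustration, not necessarily the value DeCoM uses:

```python
def two_phase_multiplier_sq(X, C=20.0):
    """Chisholm's fit to the Lockhart-Martinelli correlation:
    phi_l**2 = 1 + C/X + 1/X**2, where X is the Martinelli parameter
    (square root of the ratio of liquid-alone to gas-alone pressure
    gradients) and C depends on the two flow regimes."""
    return 1.0 + C / X + 1.0 / (X * X)

def liquid_pressure_gradient(dp_dz_liquid_alone, X, C=20.0):
    """Two-phase pressure gradient from the single-phase liquid gradient."""
    return dp_dz_liquid_alone * two_phase_multiplier_sq(X, C)
```

For X = 1 (equal liquid and gas gradients) and C = 20, the two-phase gradient is 22 times the liquid-alone gradient, which illustrates why two-phase condenser sections dominate the pressure budget.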
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space, where the string-induced CMB component has statistical properties distinct from those of the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10^-7 for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
Integrated smart panel and support structure response
NASA Astrophysics Data System (ADS)
DeGiorgi, Virginia G.
1998-06-01
The performance of smart structures is a complex interaction between active and passive components. Active components, even when non-activated, can have an impact on structural performance and, conversely, structural characteristics of passive components can have a measurable impact on active component performance. The present work is an evaluation of the structural characteristics of an active panel designed for acoustic quieting. The support structure is included in the panel design as evaluated. Finite element methods are used to determine the active panel-support structure response. Two conditions are considered: a hollow unfilled support structure and the same structure filled with a polymer compound. Finite element models were defined so that stiffness values corresponding to the center of individual pistons could be determined. Superelement techniques were used to define mass and stiffness values representative of the combined active and support structure at the center of each piston. Results of interest obtained from the analysis include mode shapes, natural frequencies, and equivalent spring stiffness for use in structural response models to represent the support structure. The effects of plate motion on piston performance cannot be obtained from this analysis; however, mass and stiffness matrices for use in an integrated system model to determine piston head velocities can be obtained from this work.
NASA Astrophysics Data System (ADS)
Almurshedi, Ahmed; Ismail, Abd Khamim
2015-04-01
EEG source localization was studied in order to determine the location of the brain sources that are responsible for the measured potentials at the scalp electrodes, using EEGLAB with the Independent Component Analysis (ICA) algorithm. Neuron source locations are responsible for generating current dipoles in different states of the brain through the measured potentials. The current dipole sources are localized by fitting an equivalent current dipole model using a non-linear optimization technique with the implementation of a standardized boundary element head model. To fit dipole models to ICA components in an EEGLAB dataset, ICA decomposition is performed and appropriate components to be fitted are selected. The topographical scalp distributions of delta, theta, alpha, and beta power spectra and the cross coherence of EEG signals are observed. In the closed-eyes condition, during both resting and action states of the brain, the alpha band was activated from the occipital (O1, O2) and parietal (P3, P4) areas. Therefore, the parieto-occipital area of the brain is active in both the resting and action states. However, the cross coherence shows more coherence between the right and left hemispheres in the action state of the brain than in the resting state. The preliminary result indicates that these potentials arise from the same generators in the brain.
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes.
Once the processes that exert the major influence in the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results and it provides information that allows exploration of uncertainty at the process level, and how it might affect model output. We present an example using the vegetation model BIOME-BGC.
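The process-level idea can be sketched on a toy two-process rainfall-runoff model (the model and numbers are illustrative, not BIOME-BGC): each process output is scaled by a factor, and the normalized sensitivity of the model output to each process is estimated by a one-at-a-time perturbation:

```python
def run_model(scale):
    """Toy process-based model: rainfall is split by an infiltration process,
    and the remainder becomes runoff. `scale` multiplies each process output
    to emulate a process-level perturbation."""
    rain = 10.0
    infiltration = scale["infiltration"] * 0.4 * rain
    runoff = scale["runoff"] * (rain - infiltration)
    return runoff

def process_sensitivity(process, delta=0.01):
    """Normalized sensitivity: relative change in model output per relative
    change in one process, all other processes at nominal behaviour."""
    nominal = {"infiltration": 1.0, "runoff": 1.0}
    base = run_model(nominal)
    perturbed_scales = dict(nominal)
    perturbed_scales[process] += delta
    perturbed = run_model(perturbed_scales)
    return ((perturbed - base) / base) / delta
```

Here perturbing the processes rather than the raw parameters directly ranks which mechanism drives the output, which is the dimensionality reduction the method above exploits.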
Gautam, Arvind; Callejas, Miguel A; Acharyya, Amit; Acharyya, Swati Ghosh
2018-05-01
This study introduced a shape memory alloy (SMA)-based smart knee spacer for total knee arthroplasty (TKA). Subsequently, a 3D CAD model of a smart tibial component of TKA was designed in Solidworks software and verified using a finite element analysis in ANSYS Workbench. The two major properties of the SMA (NiTi), pseudoelasticity (PE) and the shape memory effect (SME), were exploited, modelled, and analysed for a TKA application. The effectiveness of the proposed model was verified in ANSYS Workbench through finite element analysis (FEA) of the maximum deformation and the equivalent (von Mises) stress distribution. The proposed model was also compared with a polymethylmethacrylate (PMMA)-based spacer for the upper portion of the tibial component for three subjects with body mass indices (BMI) of 23.88, 31.09, and 38.39. The proposed SMA-based smart knee spacer exhibited 96.66978% less deformation (with a standard deviation of 0.01738) than the corresponding PMMA-based counterpart for the same load and flexion angle. Based on the maximum deformation analysis, the PMMA-based spacer had 30 times more permanent deformation than the proposed SMA-based spacer for the same load and flexion angle. The SME property of the lower portion of the tibial component, for fixation of the spacer at its position, was verified by an FEA in ANSYS, in which a strain-life-based fatigue analysis was performed and tested for the PE- and SME-built spacers. Therefore, the SMA-based smart knee spacer eliminated the drawbacks of the PMMA-based spacer, including spacer fracture, loosening, dislocation, tilting or translation, and knee subluxation. Copyright © 2018. Published by Elsevier Ltd.
John S. Hogland; Nathaniel M. Anderson
2015-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space, often limiting the types of analyses that can be performed. To address this issue, we have developed Function Modeling. Function Modeling is a new modeling framework that...
NASA Technical Reports Server (NTRS)
Penny, Stephen G.; Akella, Santha; Buehner, Mark; Chevallier, Matthieu; Counillon, Francois; Draper, Clara; Frolov, Sergey; Fujii, Yosuke; Karspeck, Alicia; Kumar, Arun
2017-01-01
The purpose of this report is to identify fundamental issues for coupled data assimilation (CDA), such as gaps in science and limitations in forecasting systems, in order to provide guidance to the World Meteorological Organization (WMO) on how to facilitate more rapid progress internationally. Coupled Earth system modeling provides the opportunity to extend skillful atmospheric forecasts beyond the traditional two-week barrier by extracting skill from low-frequency state components such as the land, ocean, and sea ice. More generally, coupled models are needed to support seamless prediction systems that span timescales from weather, subseasonal to seasonal (S2S), multiyear, and decadal. Therefore, initialization methods are needed for coupled Earth system models, either applied to each individual component (called Weakly Coupled Data Assimilation - WCDA) or applied to the coupled Earth system model as a whole (called Strongly Coupled Data Assimilation - SCDA). Using CDA, in which model forecasts and potentially the state estimation are performed jointly, each model domain benefits from observations in other domains either directly using error covariance information known at the time of the analysis (SCDA), or indirectly through flux interactions at the model boundaries (WCDA). Because the non-atmospheric domains are generally under-observed compared to the atmosphere, CDA provides a significant advantage over single-domain analyses. Next, we provide a synopsis of goals, challenges, and recommendations to advance CDA: Goals: (a) Extend predictive skill beyond the current capability of NWP (e.g. 
as demonstrated by improving forecast skill scores), (b) produce physically consistent initial conditions for coupled numerical prediction systems and reanalyses (including consistent fluxes at the domain interfaces), (c) make best use of existing observations by allowing observations from each domain to influence and improve the full earth system analysis, (d) develop a robust observation-based identification and understanding of mechanisms that determine the variability of weather and climate, (e) identify critical weaknesses in coupled models and the earth observing system, (f) generate full-field estimates of unobserved or sparsely observed variables, (g) improve the estimation of the external forcings causing changes to climate, (h) transition successes from idealized CDA experiments to real-world applications. Challenges: (a) Modeling at the interfaces between interacting components of coupled Earth system models may be inadequate for estimating uncertainty or error covariances between domains, (b) current data assimilation methods may be insufficient to simultaneously analyze domains containing multiple spatiotemporal scales of interest, (c) there is no standardization of observation data or their delivery systems across domains, (d) the size and complexity of many large-scale coupled Earth system models makes it difficult to accurately represent uncertainty due to model parameters and coupling parameters, (e) model errors lead to local biases that can transfer between the different Earth system components and lead to coupled model biases and long-term model drift, (f) information propagation across model components with different spatiotemporal scales is extremely complicated, and must be improved in current coupled modeling frameworks, (g) there is insufficient knowledge on how to represent evolving errors in non-atmospheric model components (e.g. sea ice, land, and ocean) on the timescales of NWP.
Meta-analysis of learning design on sciences to develop a teacher’s professionalism training model
NASA Astrophysics Data System (ADS)
Alimah, S.; Anggraito, Y. U.; Prasetyo, A. P. B.; Saptono, S.
2018-03-01
This research explored a meta-analysis of the teaching designs in science teachers' lesson plans to develop a training model for achieving 21st-century learning competence and implementing the scientifically literate school model. This is qualitative research with descriptive qualitative analysis. The sample was the members of science teachers' organizations in Brebes, Central Java, Indonesia. Data were collected by documentation, observation, interviews, and understanding-scale questionnaires. Analysis of the lesson plans focused on the correctness of concept development and the integration of Strengthening Character Education; the School Literacy Movement; Communication, Collaboration, Critical Thinking and Creativity; and Higher Order Thinking Skills. The science teachers had a good understanding of the components of the lesson plan but needed further training. The integration of character education by the teachers was not explicitly written into their lesson plans. The teachers' skill in integrating the components still needed improvement. It was found that training and mentoring in lesson plan development to improve the skills of science teachers in achieving 21st-century learning competencies are still urgently needed. The training and mentoring model proposed here is the Peretipe model, to help teachers skillfully design good lesson plans based on Technological, Pedagogical, and Content Knowledge.
Source Analysis of the Crandall Canyon, Utah, Mine Collapse
Dreger, D. S.; Ford, S. R.; Walter, W. R.
2008-07-11
Analysis of seismograms from a magnitude 3.9 seismic event on August 6, 2007 in central Utah reveals an anomalous radiation pattern that is contrary to that expected for a tectonic earthquake, and which is dominated by an implosive component. The results show the seismic event is best modeled as a shallow underground collapse. Interestingly, large transverse surface waves require a smaller additional non-collapse source component that represents either faulting in the rocks above the mine workings or deformation of the medium surrounding the mine.
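The implosive component discussed above corresponds to a negative isotropic part of the seismic moment tensor; a minimal decomposition sketch (the tensor values are illustrative, not the event's solution):

```python
def iso_dev_split(M):
    """Split a 3x3 moment tensor into its isotropic scalar (one third of
    the trace) and the remaining trace-free deviatoric tensor. A collapse
    source gives iso < 0; pure shear faulting gives iso = 0."""
    iso = sum(M[i][i] for i in range(3)) / 3.0
    dev = [[M[i][j] - (iso if i == j else 0.0) for j in range(3)]
           for i in range(3)]
    return iso, dev

# Illustrative closing-cavity tensor: strongly negative volumetric part.
iso, dev = iso_dev_split([[-2.0, 0.0, 0.0],
                          [0.0, -2.0, 0.0],
                          [0.0, 0.0, -1.0]])
```

The deviatoric remainder is what must carry any non-collapse component, such as the faulting or deformation above the mine workings inferred from the transverse surface waves.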
A managerial accounting analysis of hospital costs.
Frank, W G
1976-01-01
Variance analysis, an accounting technique, is applied to an eight-component model of hospital costs to determine the contribution each component makes to cost increases. The method is illustrated by application to data on total costs from 1950 to 1973 for all U.S. nongovernmental not-for-profit short-term general hospitals. The costs of a single hospital are analyzed and compared to the group costs. The potential uses and limitations of the method as a planning and research tool are discussed. PMID:965233
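The variance-analysis technique applied above can be sketched as a standard two-way decomposition of one cost component (the figures are hypothetical, not the study's data):

```python
def cost_variances(base_volume, base_rate, actual_volume, actual_rate):
    """Two-way variance decomposition of a cost component
    (cost = volume * rate): the volume, rate, and joint effects
    sum exactly to the total cost change."""
    total = actual_volume * actual_rate - base_volume * base_rate
    volume_var = (actual_volume - base_volume) * base_rate
    rate_var = (actual_rate - base_rate) * base_volume
    joint_var = (actual_volume - base_volume) * (actual_rate - base_rate)
    return total, volume_var, rate_var, joint_var

# Hypothetical hospital component: patient-days up 10%, cost per day up 20%.
total, vol, rate, joint = cost_variances(1000, 50.0, 1100, 60.0)
```

Applied per component across an eight-component cost model, this attributes each year's cost increase to volume growth, price/rate growth, and their interaction.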
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Irshad; Gnedin, Nickolay Y.
Baryonic effects are amongst the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future generations of cosmological surveys like LSST, Euclid, etc. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we conducted the investigation further and addressed two critical aspects. Firstly, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted thoroughly with an RMS of ∼0.0011. Secondly, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct employment of this method in the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but reasonable models, such that reality is included inside the parameter domain sampled by the training set. The baryonic effects can be parameterized as the coefficients of these principal components and should be marginalized over the cosmological parameter space.
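The train/test PCA procedure can be sketched with synthetic curves; the data here are illustrative (the test curves lie exactly in the span of two underlying modes, so a two-component training basis reconstructs them almost perfectly):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
modes = np.vstack([np.sin(np.pi * x), x])      # two underlying "baryonic" modes
train = rng.normal(size=(20, 2)) @ modes       # training curves
test = rng.normal(size=(5, 2)) @ modes         # held-out curves

# Principal components of the training set via SVD of the centered data.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:2]                                 # two largest eigenmodes

# Fit the held-out curves using only the training-set eigenmodes.
recon = mean + ((test - mean) @ basis.T) @ basis
rms = float(np.sqrt(np.mean((recon - test) ** 2)))
```

If the test scenarios fell outside the subspace spanned by the training set, as with the excluded outliers in the abstract, the residual RMS would grow accordingly.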
Jiang, Dan; Hao, Guozhu; Huang, Liwen; Zhang, Dan
2016-01-01
A water traffic system is a huge, nonlinear, complex system, and its stability is affected by various factors. Water traffic accidents can be considered to be a kind of mutation of a water traffic system caused by the coupling of multiple navigational environment factors. In this study, the catastrophe theory, principal component analysis (PCA), and multivariate statistics are integrated to establish a situation recognition model for a navigational environment with the aim of performing a quantitative analysis of the situation of this environment via the extraction and classification of its key influencing factors; in this model, the natural environment and traffic environment are considered to be two control variables. The Three Gorges Reservoir area of the Yangtze River is considered as an example, and six critical factors, i.e., the visibility, wind, current velocity, route intersection, channel dimension, and traffic flow, are classified into two principal components: the natural environment and traffic environment. These two components are assumed to have the greatest influence on the navigation risk. Then, the cusp catastrophe model is employed to identify the safety situation of the regional navigational environment in the Three Gorges Reservoir area. The simulation results indicate that the situation of the navigational environment of this area is gradually worsening from downstream to upstream. PMID:27391057
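As a minimal sketch of the cusp catastrophe machinery referred to above: the paper's calibration of the control variables to the natural and traffic environments is not reproduced here; the potential and discriminant below are the standard canonical forms.

```python
# The canonical cusp catastrophe has potential V(x) = x**4/4 + a*x**2/2 + b*x,
# with control variables a, b (here standing in for the natural-environment
# and traffic-environment components). Equilibria satisfy x**3 + a*x + b = 0,
# which has two stable states (bistability) when 4*a**3 + 27*b**2 < 0.

def cusp_state(a, b):
    """Classify the equilibrium structure of the canonical cusp model."""
    disc = 4 * a ** 3 + 27 * b ** 2
    if disc < 0:
        return "bimodal"      # two stable states: abrupt transitions possible
    return "unimodal"         # a single stable state

# a < 0 opens the cusp; small |b| keeps the system in the bimodal region.
```

Within the bimodal region, a small change in the control variables can trigger a sudden jump between states, which is the "mutation" interpretation of accidents used above.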
Modelling of Damage Evolution in Braided Composites: Recent Developments
NASA Astrophysics Data System (ADS)
Wang, Chen; Roy, Anish; Silberschmidt, Vadim V.; Chen, Zhong
2017-12-01
Composites reinforced with woven or braided textiles exhibit high structural stability and excellent damage tolerance thanks to yarn interlacing. With their high stiffness-to-weight and strength-to-weight ratios, braided composites are attractive for aerospace and automotive components as well as sports protective equipment. In these potential applications, components are typically subjected to multi-directional static, impact and fatigue loadings. To enhance material analysis and design for such applications, understanding the mechanical behaviour of braided composites and developing predictive capabilities are crucial. Significant progress has been made in recent years in the development of new modelling techniques that elucidate the static and dynamic responses of braided composites. However, because of their unique interlaced geometry and complicated failure modes, predicting damage initiation and evolution in components remains a challenge. Therefore, this work presents a comprehensive literature analysis focused on the state of the art in progressive damage analysis of braided composites with finite-element simulations. Models recently employed in studies of the mechanical behaviour, impact response and fatigue of braided composites are presented systematically. This review highlights the importance, advantages and limitations of the applied failure criteria and damage evolution laws for yarns and composite unit cells. In addition, this work provides a useful reference for future research on FE simulations of braided composites.
Riccardi, M; Mele, G; Pulvento, C; Lavini, A; d'Andria, R; Jacobsen, S-E
2014-06-01
Leaf chlorophyll content provides valuable information about the physiological status of plants; it is directly linked to photosynthetic potential and primary production. In vitro assessment by wet chemical extraction is the standard method for leaf chlorophyll determination, but this measurement is expensive, laborious, and time consuming. Over the years, rapid and non-destructive alternative methods have been explored. The aim of this work was to evaluate the applicability of a fast and non-invasive field method for estimating chlorophyll content in quinoa and amaranth leaves based on RGB component analysis of digital images acquired with a standard SLR camera. Digital images of leaves from different genotypes of quinoa and amaranth were acquired directly in the field. Mean values of each RGB component were evaluated via image analysis software and correlated to the leaf chlorophyll obtained by the standard laboratory procedure. Single and multiple regression models using the RGB color components as independent variables were tested and validated. The performance of the proposed method was compared to that of the widely used non-destructive SPAD method. The sensitivity of the best regression models for different genotypes of quinoa and amaranth was also checked. Color data acquisition of the leaves in the field with a digital camera was quicker, more effective, and lower in cost than SPAD. The proposed RGB models provided better correlation (highest R²) and prediction (lowest RMSEP) of the true foliar chlorophyll content, with less noise over the whole range of chlorophyll studied, than SPAD and other models based on leaf image processing when applied to quinoa and amaranth.
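The regression step can be sketched as follows on synthetic numbers; the genotypes, calibration data, and units of the study are not reproduced here, and the coefficients are hypothetical.

```python
import numpy as np

# Sketch: multiple linear regression of lab-measured chlorophyll on the mean
# R, G, B values of leaf images, then the R^2 and RMSEP used to judge the fit.
rng = np.random.default_rng(1)
rgb = rng.uniform(40, 200, size=(30, 3))            # mean R, G, B per leaf
true_beta = np.array([-0.05, 0.12, -0.02])          # hypothetical coefficients
chl = 10.0 + rgb @ true_beta + rng.normal(0, 0.1, 30)   # "measured" chlorophyll

X = np.column_stack([np.ones(30), rgb])             # design matrix + intercept
beta, *_ = np.linalg.lstsq(X, chl, rcond=None)      # least-squares fit
pred = X @ beta
rmsep = np.sqrt(np.mean((chl - pred) ** 2))         # root-mean-square error
r2 = 1 - np.sum((chl - pred) ** 2) / np.sum((chl - chl.mean()) ** 2)
```

In the study the RMSEP would be evaluated on a held-out validation set rather than in-sample as here.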
Multilayer neural networks for reduced-rank approximation.
Diamantaras, K I; Kung, S Y
1994-01-01
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. Their treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. Their analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear backpropagation can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained so as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, the authors show the application of their results to the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation, and therefore cannot be applied to this case.
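A simplified instance of the reduced-rank linear approximation problem can be sketched under the stronger assumption of approximately white inputs, where the best rank-r map is the truncated SVD of the full least-squares solution (Eckart-Young); the paper's generalized-SVD treatment of non-invertible input autocorrelations is not reproduced here.

```python
import numpy as np

# Sketch: fit the full least-squares map from inputs to teacher outputs, then
# truncate its SVD to rank r. Under (approximately) white inputs this is the
# optimal rank-r linear approximation, mirroring what a two-layer linear
# network with r hidden units converges to under these assumptions.
rng = np.random.default_rng(2)
n, d_in, d_out, r = 200, 8, 5, 2
X = rng.standard_normal((n, d_in))                 # approximately white inputs
A = rng.standard_normal((d_out, d_in))             # teacher mapping
Y = X @ A.T + 0.01 * rng.standard_normal((n, d_out))

W_full, *_ = np.linalg.lstsq(X, Y, rcond=None)     # full-rank solution
U, s, Vt = np.linalg.svd(W_full, full_matrices=False)
W_r = U[:, :r] * s[:r] @ Vt[:r]                    # rank-r truncation
err_full = np.mean((Y - X @ W_full) ** 2)          # unconstrained error
err_r = np.mean((Y - X @ W_r) ** 2)                # rank-constrained error
```

The r hidden units of the corresponding network span the same subspace as the r retained singular vectors.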
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the LOD score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust LOD score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
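The diagnostic behind the correction can be sketched. The paper's exact correction constant, a function of kurtosis and heritability, is not reproduced here; the example only computes the excess kurtosis that flags a leptokurtic trait as a risk for the naive analysis.

```python
import numpy as np

# Sketch: excess kurtosis is zero for a normal trait and positive for a
# leptokurtic (heavy-tailed) one; the latter is the case where the naive
# variance-component LOD score is anti-conservative.

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth standardized moment minus 3."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

rng = np.random.default_rng(3)
normal_trait = rng.standard_normal(100_000)        # excess kurtosis near 0
heavy_trait = rng.standard_t(df=5, size=100_000)   # leptokurtic trait
```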
Hanley, James A
2008-01-01
Most survival analysis textbooks explain how the hazard ratio parameters in Cox's life-table regression model are estimated. Fewer explain how the components of the nonparametric baseline survivor function are derived. Those that do often relegate the explanation to an "advanced" section and merely present the components as algebraic or iterative solutions to estimating equations. None comment on the structure of these estimators. This note brings out a heuristic representation that may help to demystify that structure.
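As a hedged sketch of one standard way the baseline survivor function is built (Breslow's estimator; whether this is the representation the note discusses is not stated in the abstract):

```python
import numpy as np

# Sketch: Breslow's baseline estimator for the Cox model. At each distinct
# event time the baseline cumulative hazard increases by d / sum(exp(x.beta))
# over the risk set; the survivor function is S0(t) = exp(-H0(t)). With
# beta = 0 this reduces to the Nelson-Aalen estimator.

def breslow_baseline(times, events, risk_scores):
    """Return distinct event times and the baseline survivor function S0(t)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    risk = np.exp(risk_scores[order])              # exp(x.beta) per subject
    cum_hazard, out_t, out_s = 0.0, [], []
    for t in np.unique(times[events == 1]):
        at_risk = risk[times >= t].sum()           # risk set at time t
        d = int(((times == t) & (events == 1)).sum())  # events at time t
        cum_hazard += d / at_risk
        out_t.append(t)
        out_s.append(np.exp(-cum_hazard))          # S0(t) = exp(-H0(t))
    return np.array(out_t), np.array(out_s)

t = np.array([2.0, 3.0, 5.0, 7.0])
e = np.array([1, 1, 0, 1])                         # 1 = event, 0 = censored
s_times, s0 = breslow_baseline(t, e, np.zeros(4))  # beta = 0 case
```

In the beta = 0 case the first increment is 1/4 (one event among four at risk), so S0 drops to exp(-0.25) at t = 2.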