Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, model component instantiation, and behaviour polymorphism. In addition to traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and for defining model quantity types and quantity units. It supports explicit definition of model input, output, and state quantities, as well as of model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. The paper includes both a language tutorial and a description of the formal language syntax and semantics.
Using Structural Equation Modeling To Fit Models Incorporating Principal Components.
ERIC Educational Resources Information Center
Dolan, Conor; Bechger, Timo; Molenaar, Peter
1999-01-01
Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…
Coupling of snow and permafrost processes using the Basic Modeling Interface (BMI)
NASA Astrophysics Data System (ADS)
Wang, K.; Overeem, I.; Jafarov, E. E.; Piper, M.; Stewart, S.; Clow, G. D.; Schaefer, K. M.
2017-12-01
We developed a permafrost modeling tool by implementing the Kudryavtsev empirical permafrost active-layer depth model (the so-called "Ku" component). The model is specifically set up with a Basic Model Interface (BMI), which facilitates coupling to other earth surface process model components, and it is accessible through the Web Modeling Tool of the Community Surface Dynamics Modeling System (CSDMS). The Kudryavtsev model has been applied across the whole of Alaska to model permafrost distribution at high spatial resolution, and model predictions have been verified against Circumpolar Active Layer Monitoring (CALM) in-situ observations. The Ku component uses monthly meteorological forcing, including air temperature, snow depth, and snow density, and predicts active layer thickness (ALT) and the temperature at the top of permafrost (TTOP), which are important factors in snow-hydrological processes. BMI provides an easy way to couple models with each other. Here, we present a case of coupling the Ku component to snow process components, namely the Snow-Degree-Day (SDD) and Snow-Energy-Balance (SEB) methods, which are existing components in the hydrological model TopoFlow. The workflow is: (1) get variables from the meteorology component, set the values in the snow process component, and advance the snow process component; (2) get variables from the meteorology and snow components, provide these to the Ku component, and advance it; (3) get variables from the snow process component, set the values in the meteorology component, and advance the meteorology component. The next phase is to couple the permafrost component with the fully BMI-compliant TopoFlow hydrological model, which could provide a useful tool for investigating permafrost effects on hydrology.
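The get/set/advance coupling cycle described in this abstract can be sketched with two toy components. These classes are hypothetical stand-ins, not the actual CSDMS or TopoFlow implementations; only the get_value/set_value/update pattern follows the BMI convention, and all numbers are illustrative.

```python
# Hypothetical toy components illustrating the BMI-style
# get -> set -> advance coupling loop. NOT the CSDMS/TopoFlow code.

class SnowComponent:
    """Toy snow model: the snowpack melts when air temperature is above freezing."""
    def __init__(self):
        self.snow_depth = 1.0    # m
        self.air_temp = -10.0    # deg C

    def get_value(self, name):
        return getattr(self, name)

    def set_value(self, name, value):
        setattr(self, name, value)

    def update(self):
        if self.air_temp > 0.0:
            self.snow_depth = max(0.0, self.snow_depth - 0.01 * self.air_temp)


class KuComponent:
    """Toy permafrost model: ALT deepens with warm air, shallows under snow cover."""
    def __init__(self):
        self.air_temp = -10.0
        self.snow_depth = 1.0
        self.alt = 0.5           # active layer thickness, m

    def get_value(self, name):
        return getattr(self, name)

    def set_value(self, name, value):
        setattr(self, name, value)

    def update(self):
        self.alt += 0.001 * self.air_temp - 0.005 * self.snow_depth


def couple(snow, ku, air_temps):
    """One get/set/advance cycle per monthly forcing value, as in steps (1)-(2)."""
    for temp in air_temps:
        snow.set_value("air_temp", temp)
        snow.update()
        ku.set_value("air_temp", temp)
        ku.set_value("snow_depth", snow.get_value("snow_depth"))
        ku.update()
    return ku.get_value("alt")
```

The point of the pattern is that the components never import each other; the driver loop moves values across the interface, which is what makes the coupling order interchangeable.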
Design of a component-based integrated environmental modeling framework
Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...
Component-specific modeling. [jet engine hot section components
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.
1992-01-01
Accomplishments are described for a 3-year program to develop methodology for component-specific modeling of aircraft hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models, (2) geometry model generators, (3) remeshing, (4) specialty three-dimensional inelastic structural analysis, (5) computationally efficient solvers, (6) adaptive solution strategies, (7) engine performance parameters/component response variables decomposition and synthesis, (8) integrated software architecture and development, and (9) validation cases for software developed.
Brackney, Larry; Parker, Andrew; Long, Nicholas; Metzger, Ian; Dean, Jesse; Lisell, Lars
2016-04-12
A building energy analysis system includes: a building component library configured to store a plurality of building components; a modeling tool configured to access the building component library and create a building model of a building under analysis, using building spatial data and selected building components from the library; a building analysis engine configured to operate the building model, generate a baseline energy model of the building under analysis, and apply one or more energy conservation measures to the baseline energy model in order to generate one or more corresponding optimized energy models; and a recommendation tool configured to assess the optimized energy models against the baseline energy model and generate recommendations for substitute building components or modifications.
Mouse Models for Unraveling the Importance of Diet in Colon Cancer Prevention
Tammariello, Alexandra E.; Milner, John A.
2010-01-01
Diet and genetics are both considered important risk determinants for colorectal cancer, a leading cause of death worldwide. Several genetically engineered mouse models have been created, including the ApcMin mouse, to aid in the identification of key cancer-related processes and to assist with the characterization of environmental factors, including the diet, that influence risk. Current research using these models provides evidence that several bioactive food components can inhibit genetically predisposed colorectal cancer, while others increase risk. Specifically, calorie restriction or increased exposure to n-3 fatty acids, sulforaphane, chafuroside, curcumin, and dibenzoylmethane were reported to be protective. Total fat, calories, and all-trans retinoic acid are associated with an increased risk. Unraveling the importance of specific dietary components in these models is complicated by the basal diet used, the quantity of test components provided, and interactions among food components. Newer models are increasingly available to evaluate fundamental cellular processes, including DNA mismatch repair, immune function, and inflammation, as markers for colon cancer risk. Unfortunately, these models have been used infrequently to examine the influence of specific dietary components. Enhanced use of these models can shed mechanistic insight on the involvement of specific bioactive food components and energy as determinants of colon cancer risk. However, using available mouse models to exactly represent processes important to human gastrointestinal cancers will remain a continued scientific challenge. PMID:20122631
Longley, Susan L; Watson, David; Noyes, Russell; Yoder, Kevin
2006-01-01
A dimensional and psychometrically informed taxonomy of anxiety is emerging, but the specific and nonspecific dimensions of panic and phobic anxiety require greater clarification. In this study, confirmatory factor analyses of data from a sample of 438 college students were used to validate a model of panic and phobic anxiety with six content factors; multiple scales from self-report measures were indicators of each model component. The model included a nonspecific component of (1) neuroticism and two specific components of panic attack, (2) physiological hyperarousal, and (3) anxiety sensitivity. The model also included three phobia components of (4) classically defined agoraphobia, (5) social phobia, and (6) blood-injection phobia. In these data, agoraphobia correlated more strongly with both the social phobia and blood phobia components than with either the physiological hyperarousal or the anxiety sensitivity components. These findings suggest that the association between panic attacks and agoraphobia warrants greater attention.
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
Accomplishments are described for the second-year effort of a 3-year program to develop methodology for component-specific modeling of aircraft engine hot section components (turbine blades, turbine vanes, and burner liners). These accomplishments include: (1) engine thermodynamic and mission models; (2) geometry model generators; (3) remeshing; (4) specialty 3-D inelastic structural analysis; (5) computationally efficient solvers; (6) adaptive solution strategies; (7) engine performance parameters/component response variables decomposition and synthesis; (8) integrated software architecture and development; and (9) validation cases for software developed.
The Multiple Component Alternative for Gifted Education.
ERIC Educational Resources Information Center
Swassing, Ray
1984-01-01
The Multiple Component Model (MCM) of gifted education includes instruction which may overlap in literature, history, art, enrichment, languages, science, physics, math, music, and dance. The model rests on multifactored identification and requires systematic development and selection of components with ongoing feedback and evaluation. (CL)
Dynamics of Rotating Multi-component Turbomachinery Systems
NASA Technical Reports Server (NTRS)
Lawrence, Charles
1993-01-01
The ultimate objective of turbomachinery vibration analysis is to predict both the overall and the component-level dynamic response. Accomplishing this objective requires complete engine structural models, including multiple stages of bladed-disk assemblies, flexible rotor shafts and bearings, and engine support structures and casings. In the present approach, each component is analyzed as a separate structure and boundary information is exchanged at the inter-component connections. The advantage of this tactic is that, even though readily available detailed component models are utilized, accurate and comprehensive system response information may be obtained. Sample problems, which include a fixed-base rotating blade and a blade on a flexible rotor, are presented.
Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit
Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin; ...
2017-04-26
Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several, coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
A single factor underlies the metabolic syndrome: a confirmatory factor analysis.
Pladevall, Manel; Singal, Bonita; Williams, L Keoki; Brotons, Carlos; Guyer, Heidi; Sadurni, Josep; Falces, Carles; Serrano-Rios, Manuel; Gabriel, Rafael; Shaw, Jonathan E; Zimmet, Paul Z; Haffner, Steven
2006-01-01
Confirmatory factor analysis (CFA) was used to test the hypothesis that the components of the metabolic syndrome are manifestations of a single common factor. Three different datasets were used to test and validate the model. The Spanish and Mauritian studies included 207 men and 203 women and 1,411 men and 1,650 women, respectively. A third analytical dataset including 847 men was obtained from a previously published CFA of a U.S. population. The one-factor model included the metabolic syndrome core components (central obesity, insulin resistance, blood pressure, and lipid measurements). We also tested an expanded one-factor model that included uric acid and leptin levels. Finally, we used CFA to compare the goodness of fit of one-factor models with the fit of two previously published four-factor models. The simplest one-factor model showed the best goodness-of-fit indexes (comparative fit index 1, root mean-square error of approximation 0.00). Comparisons of one-factor with four-factor models in the three datasets favored the one-factor model structure. The selection of variables to represent the different metabolic syndrome components and model specification explained why previous exploratory and confirmatory factor analysis, respectively, failed to identify a single factor for the metabolic syndrome. These analyses support the current clinical definition of the metabolic syndrome, as well as the existence of a single factor that links all of the core components.
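The single-common-factor hypothesis can be illustrated with a quick eigenvalue screen on simulated indicator data. This is not the CFA machinery used in the study, and the loadings, sample size, and number of indicators below are illustrative assumptions; the point is only that when all indicators load on one latent factor, the first eigenvalue of their correlation matrix dominates.

```python
# Eigenvalue screen of a simulated one-factor structure (illustrative,
# not the confirmatory factor analysis reported in the abstract).
import numpy as np

rng = np.random.default_rng(3)
n = 1000
factor = rng.normal(size=n)                        # single latent factor
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65])   # one loading per indicator
noise_sd = np.sqrt(1.0 - loadings**2)              # keeps indicator variance ~1
X = np.outer(factor, loadings) + rng.normal(size=(n, 5)) * noise_sd

R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
explained = eigvals[0] / eigvals.sum()             # share of variance on PC1
```

With loadings in this range, the leading eigenvalue accounts for well over half of the total variance, which is the pattern a one-factor model predicts for the correlation matrix.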
Multivariate Analysis of Seismic Field Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen
1999-06-01
This report includes the details of the model building procedure and prediction of seismic field data. Principal Components Regression, a multivariate analysis technique, was used to model seismic data collected as two pieces of equipment were cycled on and off. Models built that included only the two pieces of equipment of interest had trouble predicting data containing signals not included in the model. Evidence for poor predictions came from the prediction curves as well as spectral F-ratio plots. Once the extraneous signals were included in the model, predictions improved dramatically. While Principal Components Regression performed well for the present data sets, the present data analysis suggests further work will be needed to develop more robust modeling methods as the data become more complex.
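Principal Components Regression as described here can be sketched in a few lines: project the multichannel signals onto their leading principal components, then regress the response on those scores. The synthetic "sensor" data and the NumPy-only implementation below are assumptions standing in for the report's actual data and software.

```python
# Sketch of Principal Components Regression (PCR) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 200, 20

# Two latent equipment signals mixed into 20 measurement channels.
sources = rng.normal(size=(n_samples, 2))
mixing = rng.normal(size=(2, n_channels))
X = sources @ mixing + 0.01 * rng.normal(size=(n_samples, n_channels))
y = sources @ np.array([1.5, -0.7])      # response driven by the two sources

# PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                   # scores on the first two PCs

# Ordinary least squares of the response on the PC scores.
A = np.column_stack([np.ones(n_samples), scores])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The report's failure mode corresponds to a response that depends on a source direction absent from the retained components: the scores then carry no information about it, and predictions degrade, exactly as the prediction curves and F-ratio plots indicated.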
NASTRAN Modeling of Flight Test Components for UH-60A Airloads Program Test Configuration
NASA Technical Reports Server (NTRS)
Idosor, Florentino R.; Seible, Frieder
1993-01-01
Based upon the recommendations of the UH-60A Airloads Program Review Committee, work towards a NASTRAN remodeling effort has been conducted. This effort modeled and added the necessary structural/mass components to the existing UH-60A baseline NASTRAN model to reflect the addition of the flight test components currently in place on the UH-60A Airloads Program Test Configuration used in NASA-Ames Research Center's Modern Technology Rotor Airloads Program (MTRAP). These components include necessary flight hardware such as instrument booms, a movable ballast cart, and equipment mounting racks. Recent modeling revisions have also been included in the analyses to reflect new and updated primary and secondary structural components (i.e., the tail rotor shaft service cover and tail rotor pylon) and improvements to the existing finite element mesh (i.e., revised material property estimates). Mode frequency and shape results have shown that components such as the Trimmable Ballast System baseplate and its payload ballast cause a significant frequency change in a limited number of modes, while the addition of the other MTRAP flight components brings about only small percentage changes in mode frequency. With the addition of the MTRAP flight components, the update of the primary and secondary structural model, and the imposition of the final MTRAP weight distribution, the computed modal results are representative of the 'best' model presently available.
Collisional-radiative model including recombination processes for W27+ ion
NASA Astrophysics Data System (ADS)
Murakami, Izumi; Sasaki, Akira; Kato, Daiji; Koike, Fumihiro
2017-10-01
We have constructed a collisional-radiative (CR) model for W27+ ions, including 226 configurations with n ≤ 9 and l ≤ 5, for spectroscopic diagnostics. We newly include recombination processes in the model, and this is the first extreme ultraviolet spectrum calculated for a recombining plasma component. Calculated spectra in the 40-70 Å range for the ionizing and recombining plasma components are similar, showing three strong lines and one line that is weak in the recombining component at 45-50 Å, together with many weak lines at 50-65 Å for both components. Recombination processes do not contribute much to the spectrum at around 60 Å for the W27+ ion. Dielectronic satellite lines also make only a minor contribution to the spectrum of the recombining plasma component. The dielectronic recombination (DR) rate coefficient from W28+ to W27+ ions is also calculated with the same atomic data as in the CR model. We found that a larger set of energy levels including many autoionizing states gave larger DR rate coefficients, but our rates agree within a factor of 6 with other works at electron temperatures around 1 keV, at which W27+ and W28+ ions are usually observed in plasmas. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, and Grzegorz Karwasz.
System Testing of Ground Cooling System Components
NASA Technical Reports Server (NTRS)
Ensey, Tyler Steven
2014-01-01
This internship focused primarily upon software unit testing of Ground Cooling System (GCS) components, one of the three types of tests (unit, integrated, and COTS/regression) utilized in software verification. Unit tests are used to test the software of the necessary components before it is implemented into the hardware. A unit test exercises the control data, usage procedures, and operating procedures of a particular component to determine whether the program is fit for use. Three different files are used to build and complete an efficient unit test: a Model Test file (.mdl), a Simulink SystemTest file (.test), and an autotest script (.m). The Model Test file includes the component that is being tested, with the appropriate Discrete Physical Interface (DPI) for testing. The Simulink SystemTest is a program used to test all of the requirements of the component. The autotest script verifies that the component passes Model Advisor and System Testing, and puts the results into the proper files. Once unit testing is completed on the GCS components, they can be implemented into the GCS schematic, and the software of the GCS model as a whole can be tested using integrated testing. Unit testing is a critical part of software verification; it allows more basic components to be tested before a higher-fidelity model is tested, making the testing process flow in an orderly manner.
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
NASA Astrophysics Data System (ADS)
Xu, S.; Uneri, A.; Khanna, A. Jay; Siewerdsen, J. H.; Stayman, J. W.
2017-04-01
Metal artifacts can cause substantial image quality issues in computed tomography. This is particularly true in interventional imaging, where surgical tools or metal implants are in the field-of-view. Moreover, the region-of-interest is often near such devices, which is exactly where image quality degradations are largest. Previous work on known-component reconstruction (KCR) has shown that incorporating a physical model (e.g. shape, material composition, etc.) of the metal component into the reconstruction algorithm can significantly reduce artifacts, even near the edge of a metal component. However, for such approaches to be effective, they must have an accurate model of the component that includes the energy-dependent properties of both the metal device and the CT scanner, placing a burden on system characterization and knowledge of component materials. In this work, we propose a modified KCR approach that adopts a mixed forward model: a polyenergetic model for the component and a monoenergetic model for the background anatomy. This new approach, called Poly-KCR, jointly estimates a spectral transfer function associated with known components in addition to the background attenuation values. It thus eliminates both the need to know the component's material composition a priori and the requirement for an energy-dependent characterization of the CT scanner. We demonstrate the efficacy of this novel approach and illustrate its improved performance over traditional and model-based iterative reconstruction methods in both simulation studies and physical data, including an implanted cadaver sample.
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data, including cases in which two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes a stretched exponential, and with a proposed two-component model. The appropriateness of each fit was assessed by the Akaike Information Criterion. We found that extending the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model, and where appropriate a two-component model, will result in better estimates of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
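The model-selection step in this study can be sketched by fitting one versus two gamma-variate components to synthetic double-peaked data and ranking the fits by AIC. The peak-normalized parametrization below (amplitude a, peak time tp, shape b) and all data values are assumptions chosen for numerical stability, not the paper's exact forms.

```python
# Hypothetical one- vs two-component gamma-variate fit, ranked by AIC.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, a, tp, b):
    """Gamma variate with peak value a at time tp; evaluated in log space."""
    x = t / tp
    return a * np.exp(b * (np.log(x) + 1.0 - x))

def two_component(t, a1, tp1, b1, a2, tp2, b2):
    """Sum of two gamma variates, for double-peaked emptying curves."""
    return gamma_variate(t, a1, tp1, b1) + gamma_variate(t, a2, tp2, b2)

# Synthetic excretion-rate data with two distinct peaks plus noise.
t = np.linspace(0.1, 8.0, 80)
rng = np.random.default_rng(2)
y = two_component(t, 1.0, 1.0, 2.0, 0.8, 5.0, 10.0) + 0.02 * rng.normal(size=t.size)

lo1, hi1 = [0.0, 0.2, 0.5], [10.0, 10.0, 30.0]
p1, _ = curve_fit(gamma_variate, t, y, p0=[1.0, 2.0, 2.0], bounds=(lo1, hi1))
p2, _ = curve_fit(two_component, t, y, p0=[0.9, 1.2, 2.0, 0.7, 4.5, 8.0],
                  bounds=(lo1 * 2, hi1 * 2))

def aic(model, params):
    """Least-squares AIC: n*log(SSR/n) + 2k."""
    ssr = np.sum((y - model(t, *params)) ** 2)
    return t.size * np.log(ssr / t.size) + 2 * len(params)

aic1, aic2 = aic(gamma_variate, p1), aic(two_component, p2)
```

On double-peaked data the single-component fit leaves large structured residuals, so the two-component model wins on AIC despite its three extra parameters, mirroring the study's conclusion.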
Impeller leakage flow modeling for mechanical vibration control
NASA Technical Reports Server (NTRS)
Palazzolo, Alan B.
1996-01-01
HPOTP and HPFTP vibration test results have exhibited transient and steady-state characteristics that may be due to impeller leakage path (ILP) related forces. For example, an axial shift in the rotor could suddenly change the ILP clearances and lengths, yielding changes in dynamic coefficients and, subsequently, in vibration. ILP models are more complicated than conventional single-component annular seal models because of their radial flow component (Coriolis and centrifugal acceleration), complex geometry (axial/radial clearance coupling), internal boundary (transition) flow conditions between mechanical components along the ILP, and greater length, which requires moment as well as force coefficients. Flow coupling between mechanical components results from mass and energy conservation applied at their interfaces. Typical components along the ILP include an inlet seal, a curved shroud, and an exit seal, which may be a stepped labyrinth type. Von Pragenau (MSFC) has modeled labyrinth seals as a series of plain annular seals for leakage and dynamic coefficient prediction. These multi-tooth components increase the total number of flow-coupled components in the ILP. Childs developed an analysis for an ILP consisting of a single, constant-clearance shroud with an exit seal represented by a lumped flow-loss coefficient. This same geometry was later extended to include compressible flow. The objectives of the current work are to: supply ILP leakage, force-impedance, and dynamic-coefficient modeling software to MSFC engineers, based on incompressible/compressible bulk flow theory; design the software to model a generic-geometry ILP described by a series of components lying along an arbitrarily directed path; validate the software by comparison to available test data, CFD, and bulk models; and develop a hybrid CFD-bulk flow model of an ILP to improve modeling accuracy within practical run time constraints.
Transport of Solar Wind Fluctuations: A Two-Component Model
NASA Technical Reports Server (NTRS)
Oughton, S.; Matthaeus, W. H.; Smith, C. W.; Breech, B.; Isenberg, P. A.
2011-01-01
We present a new model for the transport of solar wind fluctuations which treats them as two interacting incompressible components: quasi-two-dimensional turbulence and a wave-like piece. Quantities solved for include the energy, cross helicity, and characteristic transverse length scale of each component, plus the proton temperature. The development of the model is outlined and numerical solutions are compared with spacecraft observations. Compared to previous single-component models, this new model incorporates a more physically realistic treatment of fluctuations induced by pickup ions and yields improved agreement with observed values of the correlation length, while maintaining good observational accord with the energy, cross helicity, and temperature.
Component Models for Semantic Web Languages
NASA Astrophysics Data System (ADS)
Henriksson, Jakob; Aßmann, Uwe
Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.
School Nurse Summer Institute: A Model for Professional Development
ERIC Educational Resources Information Center
Neighbors, Marianne; Barta, Kathleen
2004-01-01
The components of a professional development model designed to empower school nurses to become leaders in school health services is described. The model was implemented during a 3-day professional development institute that included clinical and leadership components, especially coalition building, with two follow-up sessions in the fall and…
Thermal cut-off response modelling of universal motors
NASA Astrophysics Data System (ADS)
Thangaveloo, Kashveen; Chin, Yung Shin
2017-04-01
This paper presents a model to predict the thermal cut-off (TCO) response behaviour in universal motors. The mathematical model includes calculations of the heat loss in the universal motor and of the flow characteristics around the TCO component, which together are the main inputs for TCO response prediction. In order to accurately predict the TCO component temperature, factors such as the TCO component resistance, the effect of the ambient environment, and the flow conditions through the motor are taken into account to improve the prediction accuracy of the model.
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
Nambe Pueblo Water Budget and Forecasting model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brainard, James Robert
2009-10-01
This report documents the Nambe Pueblo Water Budget and Water Forecasting model. The model has been constructed using Powersim Studio (PS), a software package designed to investigate complex systems where flows and accumulations are central to the system. Here PS has been used as a platform for modeling various aspects of Nambe Pueblo's current and future water use. The model contains three major components, the Water Forecast Component, Irrigation Scheduling Component, and the Reservoir Model Component. In each of the components, the user can change variables to investigate the impacts of water management scenarios on future water use. The Water Forecast Component includes forecasting for industrial, commercial, and livestock use. Domestic demand is also forecasted based on user-specified current population, population growth rates, and per capita water consumption. Irrigation efficiencies are quantified in the Irrigated Agriculture component using critical information concerning diversion rates, acreages, ditch dimensions and seepage rates. Results from this section are used in the Water Demand Forecast, Irrigation Scheduling, and the Reservoir Model components. The Reservoir Component contains two sections, (1) Storage and Inflow Accumulations by Categories and (2) Release, Diversion and Shortages. Results from both sections are derived from the calibrated Nambe Reservoir model, where historic, pre-dam or above-dam USGS stream flow data is fed into the model and releases are calculated.
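The domestic-demand piece of the forecasting logic described above (user-specified population, growth rate, and per-capita consumption) can be sketched as a simple compounding projection. The numbers below are invented for illustration and do not come from the Nambe Pueblo model.

```python
def domestic_demand_forecast(population, growth_rate, per_capita_m3_yr, years):
    """Project annual domestic water demand (m^3/yr), compounding the
    user-specified population growth rate each year."""
    demands = []
    for _ in range(years):
        demands.append(population * per_capita_m3_yr)
        population *= 1.0 + growth_rate
    return demands

# 2000 people growing at 2%/yr, using 100 m^3 per person per year:
demand = domestic_demand_forecast(2000, 0.02, 100.0, 3)
# -> [200000.0, 204000.0 (approx), 208080.0 (approx)]
```

A stock-and-flow tool like Powersim Studio expresses the same relation as an accumulation fed by a growth flow; the arithmetic is identical.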
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
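The mixed ICA/PCA algorithm itself is not reproduced here, but the abstract's point that cross-validation is a workable model-selection tool can be illustrated generically: choose the model order that maximizes held-out log-likelihood under probabilistic PCA (Tipping-Bishop maximum-likelihood covariance). This is a stand-in for the authors' method; the data and dimensions below are synthetic.

```python
import numpy as np

def gaussian_loglik(x_test, mean, cov):
    """Average Gaussian log-likelihood of the rows of x_test."""
    d = x_test.shape[1]
    diff = x_test - mean
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return float(np.mean(-0.5 * (d * np.log(2 * np.pi) + logdet + quad)))

def fit_ppca_cov(x_train, n):
    """ML probabilistic-PCA covariance with n retained components."""
    mean = x_train.mean(axis=0)
    s = np.cov(x_train, rowvar=False)
    evals, evecs = np.linalg.eigh(s)
    evals, evecs = evals[::-1], evecs[:, ::-1]      # sort descending
    sigma2 = evals[n:].mean()                       # ML noise variance
    w = evecs[:, :n] * np.sqrt(np.maximum(evals[:n] - sigma2, 0.0))
    return mean, w @ w.T + sigma2 * np.eye(s.shape[1])

def select_n_components(x, candidates, n_folds=5):
    """Pick the model order with the best cross-validated likelihood."""
    folds = np.array_split(np.arange(len(x)), n_folds)
    scores = {}
    for n in candidates:
        lls = []
        for k in range(n_folds):
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            mean, cov = fit_ppca_cov(x[train], n)
            lls.append(gaussian_loglik(x[folds[k]], mean, cov))
        scores[n] = np.mean(lls)
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Two strong source directions plus isotropic noise in 10 dimensions:
latent = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10)) * 3.0
x = latent + rng.normal(size=(500, 10))
best = select_n_components(x, [1, 2, 3, 4])  # ideally recovers 2
```

With strong sources the held-out likelihood peaks at the planted dimensionality, whereas (as the abstract notes for their setting) an AIC-style penalty need not behave.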
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.
2011-01-01
The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.
Finite Element Models and Properties of a Stiffened Floor-Equipped Composite Cylinder
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.
2010-01-01
Finite element models were developed of a floor-equipped, frame and stringer stiffened composite cylinder including a coarse finite element model of the structural components, a coarse finite element model of the acoustic cavities above and below the beam-supported plywood floor, and two dense models consisting of only the structural components. The report summarizes the geometry, the element properties, the material and mechanical properties, the beam cross-section characteristics, the beam element representations and the boundary conditions of the composite cylinder models. The expressions used to calculate the group speeds for the cylinder components are presented.
Regression Models for Identifying Noise Sources in Magnetic Resonance Images
Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.
2009-01-01
Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
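As a toy illustration of the Rician magnitude-noise model (not the authors' estimation algorithms, which maximize the full regression likelihood), the amplitude and noise scale of simulated magnitude data can be recovered by maximizing the Rician log-likelihood over a coarse grid:

```python
import numpy as np

def rician_loglik(m, a, sigma):
    """Log-likelihood of magnitude data m under a Rician(A, sigma) model:
    p(m) = m/s^2 * exp(-(m^2 + A^2)/(2 s^2)) * I0(m A / s^2)."""
    s2 = sigma ** 2
    return float(np.sum(np.log(m) - np.log(s2)
                        - (m ** 2 + a ** 2) / (2 * s2)
                        + np.log(np.i0(m * a / s2))))

def fit_rician(m, a_grid, sigma_grid):
    """Coarse grid-search ML fit; a simple stand-in for a real optimizer."""
    return max(((a, s) for a in a_grid for s in sigma_grid),
               key=lambda p: rician_loglik(m, p[0], p[1]))

rng = np.random.default_rng(1)
true_a, true_sigma = 4.0, 1.0
noise = rng.normal(scale=true_sigma, size=(5000, 2))
# Magnitude image: modulus of a complex signal with Gaussian channel noise.
m = np.hypot(true_a + noise[:, 0], noise[:, 1])
a_hat, s_hat = fit_rician(m, np.linspace(3, 5, 41), np.linspace(0.5, 1.5, 41))
```

At low SNR a Gaussian fit to magnitude data is biased upward, which is exactly why the Rician model (and the diagnostics above for departures from it) matters in DWI and fMRI.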
This article describes the governing equations, computational algorithms, and other components entering into the Community Multiscale Air Quality (CMAQ) modeling system. This system has been designed to approach air quality as a whole by including state-of-the-science capabiliti...
CICE, The Los Alamos Sea Ice Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth; Lipscomb, William; Jones, Philip
The Los Alamos sea ice model (CICE) is the result of an effort to develop a computationally efficient sea ice component for a fully coupled atmosphere–land–ocean–ice global climate model. It was originally designed to be compatible with the Parallel Ocean Program (POP), an ocean circulation model developed at Los Alamos National Laboratory for use on massively parallel computers. CICE has several interacting components: a vertical thermodynamic model that computes local growth rates of snow and ice due to vertical conductive, radiative and turbulent fluxes, along with snowfall; an elastic-viscous-plastic model of ice dynamics, which predicts the velocity field of the ice pack based on a model of the material strength of the ice; an incremental remapping transport model that describes horizontal advection of the areal concentration, ice and snow volume and other state variables; and a ridging parameterization that transfers ice among thickness categories based on energetic balances and rates of strain. It also includes a biogeochemical model that describes evolution of the ice ecosystem. The CICE sea ice model is used for climate research as one component of complex global earth system models that include atmosphere, land, ocean and biogeochemistry components. It is also used for operational sea ice forecasting in the polar regions and in numerical weather prediction models.
NASA Astrophysics Data System (ADS)
Kirkegaard, Casper; Foged, Nikolaj; Auken, Esben; Christiansen, Anders Vest; Sørensen, Kurt
2012-09-01
Helicopter-borne time domain EM systems have historically measured only the Z-component of the secondary field, whereas fixed-wing systems often measure all field components. For the latter systems the X-component is often used to map discrete conductors, whereas it finds little use in mapping layered settings. Measuring the horizontal X-component with an offset-loop helicopter system probes the earth with a complementary sensitivity function that is very different from that of the Z-component, and could potentially improve the resolution of layered structures in one-dimensional modeling. This area is largely unexplored in terms of quantitative results in the literature, since measuring and inverting X-component data from a helicopter system is not straightforward: the signal strength is low, the noise level is high, the signal is very sensitive to instrument pitch, and the sensitivity function also has a complex lateral behavior. The basis of our study is a state-of-the-art inversion scheme using a local 1D forward model description, combined with experience gathered in extending the SkyTEM system to measure the X-component. By means of a 1D sensitivity analysis we show that, in principle, resolution of layered structures can be improved by including an X-component signal in a 1D inversion, provided that a low-pass filter with a suitably low cut-off frequency can be employed. In presenting our practical experiences with modifying the SkyTEM system we discuss why this prerequisite can unfortunately be very difficult to fulfill in practice. Having discussed instrumental limitations, we show what can be obtained in practice using actual field data. Here, we demonstrate how the high sensitivity to instrument pitch can be overcome by including the pitch angle as an inversion parameter, and how joint inversion of the Z- and X-components produces virtually the same model result as the Z-component alone.
We conclude that adding helicopter system X-component to a 1D inversion can be used to facilitate higher confidence in the layered result, as the requirements for fitting the data into a 1D model envelope becomes more stringent and the model result thus less prone to misinterpretation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, William Scott
This seminar presentation describes amplitude models and yield estimation that draw on the data to inform legislation. The following points were brought forth in the summary: global models that predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of Co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was successfully performed. Future plans include applying Co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
Accurate and efficient modeling of the detector response in small animal multi-head PET systems.
Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto
2013-10-07
In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. 
In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction as detector response component. The comparisons confirm previous research results, showing that the usage of an accurate system model with a realistic detector response leads to reconstructed images with better resolution and contrast recovery at low levels of image roughness.
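The payoff of the factored system model can be sketched generically: the forward projection is applied one factor at a time, so the full dense system matrix never has to be formed or stored. The matrix shapes and contents below are invented placeholders, not the YAP-(S)PET II factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_lor = 400, 250   # toy image and line-of-response counts

# Illustrative stand-ins for the five factors (contents are invented):
positron = np.eye(n_vox)                            # positron range blur
geom = rng.uniform(0, 1e-3, size=(n_lor, n_vox))    # geometric projection
attn = np.diag(rng.uniform(0.6, 1.0, n_lor))        # photon attenuation
norm = np.diag(rng.uniform(0.9, 1.1, n_lor))        # detector normalization
det = np.eye(n_lor)                                 # detector response blur

def forward_project(x):
    """Apply H = det @ norm @ attn @ geom @ positron factor by factor."""
    for factor in (positron, geom, attn, norm, det):
        x = factor @ x
    return x

x = rng.uniform(size=n_vox)
y_factored = forward_project(x)
# Same result as the explicit dense product, without ever storing H:
y_dense = (det @ norm @ attn @ geom @ positron) @ x
```

In a real scanner each factor is sparse and several are diagonal, which is what makes the per-factor storage and per-iteration cost manageable during iterative reconstruction.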
Xian, George Z.; Homer, Collin G.; Rigge, Matthew B.; Shi, Hua; Meyer, Debbie
2015-01-01
Accurate and consistent estimates of shrubland ecosystem components are crucial to a better understanding of ecosystem conditions in arid and semiarid lands. An innovative approach was developed by integrating multiple sources of information to quantify shrubland components as continuous field products within the National Land Cover Database (NLCD). The approach consists of several procedures including field sample collections, high-resolution mapping of shrubland components using WorldView-2 imagery and regression tree models, Landsat 8 radiometric balancing and phenological mosaicking, medium resolution estimates of shrubland components following different climate zones using Landsat 8 phenological mosaics and regression tree models, and product validation. Fractional covers of nine shrubland components were estimated: annual herbaceous, bare ground, big sagebrush, herbaceous, litter, sagebrush, shrub, sagebrush height, and shrub height. Our study area included the footprint of six Landsat 8 scenes in the northwestern United States. Results show that most components have relatively significant correlations with validation data, have small normalized root mean square errors, and correspond well with expected ecological gradients. While some uncertainties remain with height estimates, the model formulated in this study provides a cross-validated, unbiased, and cost effective approach to quantify shrubland components at a regional scale and advances knowledge of horizontal and vertical variability of these components.
Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission
NASA Technical Reports Server (NTRS)
Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan
2010-01-01
The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. 
The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
DIETARY EXPOSURES OF YOUNG CHILDREN, PART 3: MODELLING
A deterministic model was used to model dietary exposure of young children. Parameters included pesticide residue on food before handling, surface pesticide loading, transfer efficiencies and children's activity patterns. Three components of dietary pesticide exposure were includ...
Tucci, Patrick; McKay, Robert M.
2006-01-01
The greatest limitation to the model is the lack of measured or estimated water-budget components for comparison to simulated water-budget components. Because the model is only calibrated to measured water levels, and not to water-budget components, the model results are nonunique. Other model limitations include the relatively coarse grid scale, lack of detailed information on pumpage from the quarry and from private developments and domestic wells, and the lack of separate water-level data for the Silurian- and Devonian-age rocks.
NASA Astrophysics Data System (ADS)
McIntyre, N.; Keir, G.
2014-12-01
Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor fault or component fault, but not both, this method considers sensor fault, actuator fault, and component fault under one systemic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need of linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
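The residual-evaluation idea can be sketched in a few lines. The threshold law and all constants below are invented for illustration; this is not the TBM-based scheme used in the article.

```python
# Residual-based fault detection with an adaptive threshold that widens
# with input activity, so transients do not trigger false alarms.
def adaptive_threshold(baseline, input_magnitude, k):
    """Detection threshold grows with the magnitude of the commanded input."""
    return baseline + k * abs(input_magnitude)

def detect_fault(measured, predicted, input_magnitude, baseline=0.5, k=0.1):
    """Flag a fault when |measurement - model prediction| exceeds the
    adaptive threshold."""
    residual = abs(measured - predicted)
    return residual > adaptive_threshold(baseline, input_magnitude, k)

# Nominal operation: residual 0.4 stays below threshold 0.5 + 0.1*5 = 1.0.
print(detect_fault(100.4, 100.0, 5.0))   # False
# Faulty sensor: residual 8.0 exceeds the threshold.
print(detect_fault(108.0, 100.0, 5.0))   # True
```

Isolation then follows from running one such estimator per hypothesized fault and reasoning over which residuals fire, which is where the article's belief-function machinery comes in.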
The solvent component of macromolecular crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weichenberger, Christian X.; Afonine, Pavel V.; Kantardjieff, Katherine
2015-04-30
On average, the mother liquor or solvent and its constituents occupy about 50% of a macromolecular crystal. Ordered as well as disordered solvent components need to be accurately accounted for in modelling and refinement, often with considerable complexity. The mother liquor from which a biomolecular crystal is grown will contain water, buffer molecules, native ligands and cofactors, crystallization precipitants and additives, various metal ions, and often small-molecule ligands or inhibitors. On average, about half the volume of a biomolecular crystal consists of this mother liquor, whose components form the disordered bulk solvent. Its scattering contributions can be exploited in initial phasing and must be included in crystal structure refinement as a bulk-solvent model. Concomitantly, distinct electron density originating from ordered solvent components must be correctly identified and represented as part of the atomic crystal structure model. Reviewed herein are (i) probabilistic bulk-solvent content estimates, (ii) the use of bulk-solvent density modification in phase improvement, (iii) bulk-solvent models and refinement of bulk-solvent contributions and (iv) modelling and validation of ordered solvent constituents. A brief summary is provided of current tools for bulk-solvent analysis and refinement, as well as of modelling, refinement and analysis of ordered solvent components, including small-molecule ligands.
ERIC Educational Resources Information Center
Busseri, Michael; Sadava, Stanley; DeCourville, Nancy
2007-01-01
The primary components of subjective well-being (SWB) include life satisfaction (LS), positive affect (PA), and negative affect (NA). There is little consensus, however, concerning how these components form a model of SWB. In this paper, six longitudinal studies varying in demographic characteristics, length of time between assessment periods,…
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Viewgraphs on Rubber Airplane: Constraint-based Component-Modeling for Knowledge Representation in Computer Aided Conceptual Design are presented. Topics covered include: computer aided design; object oriented programming; airfoil design; surveillance aircraft; commercial aircraft; aircraft design; and launch vehicles.
Modelling robot construction systems
NASA Technical Reports Server (NTRS)
Grasso, Chris
1990-01-01
TROTERs are small, inexpensive robots that can work together to accomplish sophisticated construction tasks. To understand the issues involved in designing and operating a team of TROTERs, the robots and their components are being modeled. A TROTER system that features standardized component behavior is introduced. An object-oriented model implemented in the Smalltalk programming language is described, and the advantages of the object-oriented approach for simulating robot and component interactions are discussed. The presentation includes preliminary results and a discussion of outstanding issues.
A Career and Learning Transitional Model for Those Experiencing Labour Market Disadvantage
ERIC Educational Resources Information Center
Cameron, Roslyn
2009-01-01
Research investigating the learning and career transitions of those disadvantaged in the labour market has resulted in the development of a four-component model to enable disadvantaged groups to navigate learning and career transitions. The four components of the model include: the self-concept; learning and recognition; career and life planning;…
Model reduction by weighted Component Cost Analysis
NASA Technical Reports Server (NTRS)
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful to compute the modal costs of very high order systems. A numerical example for MINIMAST system is presented.
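For the white-noise case the computation can be sketched directly: solve the steady-state Lyapunov equation for the state covariance and read off each state's share of the quadratic cost. This is a bare-bones illustration of the component-cost idea for state components with a diagonal example, not the paper's modal-cost formulas.

```python
import numpy as np

def state_covariance(a, w_intensity):
    """Steady-state covariance X of dx = A x dt + dw, from the Lyapunov
    equation A X + X A^T + W = 0, solved by Kronecker vectorization."""
    n = a.shape[0]
    eye = np.eye(n)
    lhs = np.kron(a, eye) + np.kron(eye, a)
    return np.linalg.solve(lhs, -w_intensity.flatten()).reshape(n, n)

def component_costs(a, w_intensity, q):
    """Per-state contribution to the quadratic cost V = E[x^T Q x]:
    the diagonal of X Q, whose sum equals V."""
    return np.diag(state_covariance(a, w_intensity) @ q)

# Two uncoupled stable modes, equally driven; the slow mode dominates
# the cost and would be retained in a reduced-order model.
a = np.array([[-0.5, 0.0],
              [0.0, -5.0]])
w = np.diag([1.0, 1.0])
q = np.eye(2)
costs = component_costs(a, w, q)   # [1.0, 0.1] for this diagonal case
```

Deleting the low-cost component here removes only ~9% of the total cost, which is the reduction criterion the abstract describes.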
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful for obtaining unbiased estimates of true effects or for predicting future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
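The two components can be made concrete with ordinary least squares on synthetic data; the 1.96 multiplier below assumes an approximately normal sampling distribution for the slope.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
# True systematic component: 2.0 + 0.5*x; error component: N(0, 1) noise.
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

design = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)   # fitted systematic part
residuals = y - design @ coef                       # estimated error part

# The residual variance feeds the precision of the point estimates:
sigma2 = residuals @ residuals / (len(y) - 2)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(design.T @ design)))
ci_slope = (coef[1] - 1.96 * se[1], coef[1] + 1.96 * se[1])
```

The fitted coefficients summarize the systematic component, while the residuals (the error component) determine the width of `ci_slope`, mirroring the decomposition in the text.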
bioWidgets: data interaction components for genomics.
Fischer, S; Crabtree, J; Brunk, B; Gibson, M; Overton, G C
1999-10-01
The presentation of genomics data in a perspicuous visual format is critical for its rapid interpretation and validation. Relatively few public database developers have the resources to implement sophisticated front-end user interfaces themselves. Accordingly, these developers would benefit from a reusable toolkit of user interface and data visualization components. We have designed the bioWidget toolkit as a set of JavaBean components. It includes a wide array of user interface components and defines an architecture for assembling applications. The toolkit is founded on established software engineering design patterns and principles, including componentry, Model-View-Controller, factored models and schema neutrality. As a proof of concept, we have used the bioWidget toolkit to create three extendible applications: AnnotView, BlastView and AlignView.
An Integrated High Resolution Hydrometeorological Modeling Testbed using LIS and WRF
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.; Peters-Lidard, Christa D.; Eastman, Joseph L.; Tao, Wei-Kuo
2007-01-01
Scientists have made great strides in modeling physical processes that represent various weather and climate phenomena. Many modeling systems that represent the major earth system components (the atmosphere, land surface, and ocean) have been developed over the years. However, developing advanced Earth system applications that integrate these independently developed modeling systems has remained a daunting task due to limitations in computer hardware and software. Recently, efforts such as the Earth System Modeling Framework (ESMF) and Assistance for Land Modeling Activities (ALMA) have focused on developing standards, guidelines, and computational support for coupling earth system model components. In this article, the development of a coupled land-atmosphere hydrometeorological modeling system that adopts these community interoperability standards is described. The land component is represented by the Land Information System (LIS), developed by scientists at the NASA Goddard Space Flight Center. The Weather Research and Forecasting (WRF) model, a mesoscale numerical weather prediction system, is used as the atmospheric component. LIS includes several community land surface models that can be executed at spatial scales as fine as 1 km. The data management capabilities in LIS enable the direct use of high resolution satellite and observation data for modeling. Similarly, WRF includes several parameterizations and schemes for modeling radiation, microphysics, PBL and other processes. Thus the integrated LIS-WRF system facilitates several multi-model studies of land-atmosphere coupling that can be used to advance earth system studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant-specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • development of time-dependent fire heat release rate profiles (required as input to CFAST), • calculation of fire severity factors based on CFAST detailed fire modeling, and • calculation of fire non-suppression probabilities.
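The sampling-plus-suppression workflow above can be sketched in a few lines. Everything here is illustrative: the lognormal damage-time distribution stands in for CFAST output, and the suppression rate and detection time are assumed values, not the facility's data. The time-dependent non-suppression model is the exponential form outlined in NUREG/CR-6850:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 1000

# Latin Hypercube style stratified uniforms: stratum midpoints in random order
u = (rng.permutation(n) + 0.5) / n

# Stand-in for CFAST-computed component damage times (minutes); in practice these
# come from fire zone model runs, here a lognormal is assumed for illustration
t_damage = stats.lognorm.ppf(u, s=0.4, scale=20.0)

lam = 0.1        # assumed manual-suppression rate per minute (plant-specific in reality)
t_detect = 5.0   # assumed detection time (minutes)

# Time-dependent non-suppression probability: exp(-lambda * time available to suppress)
p_ns = np.exp(-lam * np.clip(t_damage - t_detect, 0.0, None))
mean_pns = p_ns.mean()
```

Averaging the per-sample probabilities gives a point estimate of PNS for the component; the same samples can feed the severity-factor calculation.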
James A. Powell; Barbara J. Bentz
2014-01-01
For species with irruptive population behavior, dispersal is an important component of outbreak dynamics. We developed and parameterized a mechanistic model describing mountain pine beetle (Dendroctonus ponderosae Hopkins) population demographics and dispersal across a landscape. Model components include temperature-dependent phenology, host tree colonization...
Analyzing the Impact of a Data Analysis Process to Improve Instruction Using a Collaborative Model
ERIC Educational Resources Information Center
Good, Rebecca B.
2006-01-01
The Data Collaborative Model (DCM) assembles assessment literacy, reflective practices, and professional development into a four-component process. The sub-components include assessing students, reflecting over data, professional dialogue, professional development for the teachers, interventions for students based on data results, and re-assessing…
Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)
1999-01-01
Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.
ERIC Educational Resources Information Center
Borders, L. DiAnne
2012-01-01
Models that meet the Psychology Board of Australia's definition of peer consultation include dyadic, triadic, and group formats. Components of these models (e.g., goals, theoretical basis, role of leader, members' roles, structure, and steps in procedure, stages in group development) are presented, and evidence of their effectiveness is reviewed.…
Tervo, Christopher J.; Reed, Jennifer L.
2013-01-01
The success of genome-scale metabolic modeling is contingent on a model's ability to accurately predict growth and metabolic behaviors. To date, little focus has been directed towards developing systematic methods of proposing, modifying and interrogating an organism's biomass requirements that are used in constraint-based models. To address this gap, the biomass modification and generation (BioMog) framework was created and used to generate lists of biomass components de novo, as well as to modify predefined biomass component lists, for models of Escherichia coli (iJO1366) and of Shewanella oneidensis (iSO783) from high-throughput growth phenotype and fitness datasets. BioMog's de novo biomass component lists included, either implicitly or explicitly, up to seventy percent of the components included in the predefined biomass equations, and the resulting de novo biomass equations outperformed the predefined biomass equations at qualitatively predicting mutant growth phenotypes by up to five percent. Additionally, the BioMog procedure can quantify how many experiments support or refute a particular metabolite's essentiality to a cell, and it facilitates the determination of inconsistent experiments and inaccurate reaction and/or gene to reaction associations. To further interrogate metabolite essentiality, the BioMog framework includes an experiment generation algorithm that allows for the design of experiments to test whether a metabolite is essential. Using BioMog, we correct experimental results relating to the essentiality of the thyA gene in E. coli, as well as perform knockout experiments supporting the essentiality of protoheme. With these capabilities, BioMog can be a valuable resource for analyzing growth phenotyping data and a component of a model developer's toolbox. PMID:24339916
Monitoring Wind Turbine Loading Using Power Converter Signals
NASA Astrophysics Data System (ADS)
Rieg, C. A.; Smith, C. J.; Crabtree, C. J.
2016-09-01
The ability to detect faults and predict loads on a wind turbine drivetrain's mechanical components cost-effectively is critical to making the cost of wind energy competitive. In order to investigate whether this is possible using the readily available power converter current signals, an existing permanent magnet synchronous generator based wind energy conversion system computer model was modified to include a grid-side converter (GSC) for an improved converter model and a gearbox. The GSC maintains a constant DC link voltage via vector control. The gearbox was modelled as a 3-mass model to allow faults to be included. Gusts and gearbox faults were introduced to investigate the ability of the machine side converter (MSC) current (I_q) to detect and quantify loads on the mechanical components. In this model, gearbox faults were not detectable in the I_q signal due to shaft stiffness and damping interaction. However, a model that predicts the load change on mechanical wind turbine components using I_q was developed and verified using synthetic and real wind data.
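A 3-mass drivetrain model of the kind described (rotor, gearbox and generator inertias linked by flexible, damped shafts) can be sketched as a small ODE system. All parameter values below are illustrative placeholders, not taken from the paper, and no gear ratio is modelled:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical drivetrain parameters (illustrative only)
J = np.array([100.0, 5.0, 2.0])   # rotor, gearbox, generator inertias (kg m^2)
k = np.array([5.0e3, 2.0e3])      # shaft stiffnesses (N m / rad)
c = np.array([50.0, 20.0])        # shaft damping coefficients (N m s / rad)
T_aero, T_gen = 200.0, 195.0      # aerodynamic drive and generator reaction torques (N m)

def rhs(t, y):
    th, w = y[:3], y[3:]
    # Internal shaft torques transmitted between adjacent masses
    Ts1 = k[0] * (th[0] - th[1]) + c[0] * (w[0] - w[1])
    Ts2 = k[1] * (th[1] - th[2]) + c[1] * (w[1] - w[2])
    dw = np.array([(T_aero - Ts1) / J[0],
                   (Ts1 - Ts2) / J[1],
                   (Ts2 - T_gen) / J[2]])
    return np.concatenate([w, dw])

# Integrate from rest; gusts or faults would enter as time-varying T_aero or k
sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(6), max_step=0.01)
```

A gearbox fault could then be represented as a step change in a shaft stiffness, and the resulting torque transient compared against the converter current response.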
NASA Technical Reports Server (NTRS)
Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.
2011-01-01
A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for a curved orthogrid panel typical of launch vehicle skin structures. Several test article configurations were produced by adding component equipment of differing weights to the flight-like vehicle panel. The test data were used to anchor computational predictions of a variety of spatially distributed responses including acceleration, strain and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite element based modal solutions and test-derived damping estimates. A diffuse acoustic field model was employed to describe the assumed correlation of phased input sound pressures across the energized panel. This application demonstrates the ability to quickly and accurately predict a variety of responses to acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component and model form. Convergence metrics include spectral densities and cumulative root-mean squared (RMS) functions for acceleration, velocity, displacement, strain and interface force. Minimum frequencies for response convergence were established as well as recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data. 
The software modules developed for the RPTF method can be easily adapted for quick replacement of the diffuse acoustic field with other pressure field models; for example a turbulent boundary layer (TBL) model suitable for vehicle ascent. Wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this type of environment. Finally, component vibration environments for design were developed from the measured and predicted responses and compared with those derived from traditional techniques such as Barrett scaling methods for unloaded and component-loaded panels.
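The core of the Response/Pressure Transfer Function idea from the two preceding records can be sketched with a single hypothetical mode and a flat input pressure PSD (all values illustrative, not from the test program):

```python
import numpy as np

f = np.linspace(20.0, 2000.0, 2000)   # frequency grid (Hz)
df = f[1] - f[0]

# Hypothetical single-mode transfer function from pressure (Pa) to acceleration (g)
fn, zeta, gain = 180.0, 0.03, 1.0e-3
r = f / fn
H = gain * r**2 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)

Spp = np.full_like(f, 1.0e2)          # input pressure PSD (Pa^2/Hz), flat for illustration

# Response PSD through the transfer function: S_aa(f) = |H(f)|^2 S_pp(f)
Saa = H**2 * Spp

# Cumulative RMS, one of the convergence metrics cited in the paper
cum_rms = np.sqrt(np.cumsum(Saa) * df)
overall_rms = cum_rms[-1]
```

Swapping the flat `Spp` for a diffuse acoustic field or TBL pressure model changes only the input PSD; the transfer-function machinery is unchanged, which is the modularity the abstract emphasizes.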
46 CFR 164.019-7 - Non-standard components; acceptance criteria and procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Inner Envelope Fabric; (iv) Closure (including zippers) or Adjustment Hardware; (v) Body Strap; (vi... in detail and including the unique style, part, or model number, the identification data required by the applicable subpart of this part, and any other manufacturer's identifying data. No two components...
46 CFR 164.019-7 - Non-standard components; acceptance criteria and procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Inner Envelope Fabric; (iv) Closure (including zippers) or Adjustment Hardware; (v) Body Strap; (vi... in detail and including the unique style, part, or model number, the identification data required by the applicable subpart of this part, and any other manufacturer's identifying data. No two components...
46 CFR 164.019-7 - Non-standard components; acceptance criteria and procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Inner Envelope Fabric; (iv) Closure (including zippers) or Adjustment Hardware; (v) Body Strap; (vi... in detail and including the unique style, part, or model number, the identification data required by the applicable subpart of this part, and any other manufacturer's identifying data. No two components...
NASA Astrophysics Data System (ADS)
Raffray, A. René; Federici, Gianfranco
1997-04-01
RACLETTE (Rate Analysis Code for pLasma Energy Transfer Transient Evaluation), a comprehensive but relatively simple and versatile model, was developed to help in the design analysis of plasma facing components (PFCs) under 'slow' high power transients, such as those associated with plasma vertical displacement events. The model includes all the key surface heat transfer processes, such as evaporation, melting, and radiation, and their interaction with the PFC block thermal response and the coolant behaviour. This paper is the first of two complementary papers. It covers the model description, calibration and validation, and presents a number of parametric analyses identifying trends in the PFC armour block response to high plasma energy deposition transients. Parameters investigated include the plasma energy density and deposition time, the armour thickness and the presence of vapour shielding effects. Part II of the paper focuses on specific design analyses of ITER plasma facing components (divertor, limiter, primary first wall and baffle), including improvements in the thermal-hydraulic modelling required to better understand the consequences of high energy deposition transients, in particular for the ITER limiter case.
Computer-aided operations engineering with integrated models of systems and operations
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Ryan, Dan; Fleming, Land
1994-01-01
CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle by enabling isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1991-01-01
Model-based and performance-based control techniques are combined in an electrical robotic control system. Two distinct design philosophies have thus been merged into a single control system whose control law formulation includes two separate components, each of which yields a signal component; these are combined into a total command signal for the system. The two components are a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller that generates an adaptive control signal to complement the nominal feedforward signal.
Models of borderline personality disorder: recent advances and new perspectives.
D'Agostino, Alessandra; Rossi Monti, Mario; Starcevic, Vladan
2018-01-01
The purpose of this article is to review the most relevant conceptual models of borderline personality disorder (BPD), with a focus on recent developments in this area. Several conceptual models have been proposed with the aim of better understanding BPD: the borderline personality organization, emotion dysregulation, reflective (mentalization) dysfunction, interpersonal hypersensitivity and hyperbolic temperament models. These models have all been supported to some extent and their common components include disorganized attachment and traumatic early experiences, emotion dysregulation, interpersonal sensitivity and difficulties with social cognition. An attempt to integrate some components of the conceptual models of BPD has resulted in an emerging new perspective, the interpersonal dysphoria model, which emphasizes dysphoria as an overarching phenomenon that connects the dispositional and situational aspects of BPD. Various conceptual models have expanded our understanding of BPD, but it appears that further development entails theoretical integration. More research is needed to better understand interactions between various components of BPD, including the situational factors that activate symptoms of BPD. This will help develop therapeutic approaches that are more tailored to the heterogeneous psychopathology of BPD.
Early Risers. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
"Early Risers" is a multi-year prevention program for elementary school children demonstrating early aggressive and disruptive behavior. The intervention model includes two child-focused components and two parent/family components. The Child Skills component is designed to teach skills that enhance children's emotional and behavioral…
AAC Modeling Intervention Research Review
ERIC Educational Resources Information Center
Sennott, Samuel C.; Light, Janice C.; McNaughton, David
2016-01-01
A systematic review of research on the effects of interventions that include communication partner modeling of aided augmentative and alternative communication (AAC) on the language acquisition of individuals with complex communication needs was conducted. Included studies incorporated AAC modeling as a primary component of the intervention,…
Evaluating models of healthcare delivery using the Model of Care Evaluation Tool (MCET).
Hudspeth, Randall S; Vogt, Marjorie; Wysocki, Ken; Pittman, Oralea; Smith, Susan; Cooke, Cindy; Dello Stritto, Rita; Hoyt, Karen Sue; Merritt, T Jeanne
2016-08-01
Our aim was to present a structured Model of Care (MoC) Evaluation Tool (MCET), developed by an FAANP Best-practices Workgroup, that can be used to guide the evaluation of existing MoCs being considered for use in clinical practice. Multiple MoCs are available, but deciding which model of health care delivery to use can be confusing. This five-component tool provides a structured assessment approach to model selection and has universal application. A literature review using CINAHL, PubMed, Ovid, and EBSCO was conducted. The MCET evaluation process includes five sequential components with a feedback loop from component 5 back to component 3 for reevaluation of any refinements. The components are as follows: (1) Background, (2) Selection of an MoC, (3) Implementation, (4) Evaluation, and (5) Sustainability and Future Refinement. This practical resource offers an evidence-based approach for determining the best model to implement based on need, stakeholder considerations, and feasibility. ©2015 American Association of Nurse Practitioners.
Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?
ERIC Educational Resources Information Center
Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia
2011-01-01
Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…
Theoretical models of parental HIV disclosure: a critical review.
Qiao, Shan; Li, Xiaoming; Stanton, Bonita
2013-01-01
This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]) and the existing studies that could provide empirical support for these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components of these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.
Side-Bet Theory and the Three-Component Model of Organizational Commitment
ERIC Educational Resources Information Center
Powell, Deborah M.; Meyer, John P.
2004-01-01
We tested Becker's (1960) side-bet conceptualization of commitment within the context of Meyer and Allen's (1991) three-component model of organizational commitment. Employees (N=202) from various organizations completed a survey including measures of (a) seven categories of side bets, (b) affective, normative, and continuance commitment, and (c)…
How-to-Do-It: A Physical Model Illustrating Protein Synthesis on the Ribosome.
ERIC Educational Resources Information Center
Rogerson, Allen C.; Cheney, Richard W., Jr.
1989-01-01
Describes a way to help students grasp intermediate steps in the movement and relationships of the various components involved in the addition of an amino acid to a nascent peptide chain. Includes drawings of the model in operation, construction details, and suggested shapes and labeling of components. (RT)
Does the context of reinforcement affect resistance to change?
Nevin, J A; Grace, R C
1999-04-01
Eight pigeons were trained on multiple schedules of reinforcement where pairs of components alternated in blocks on different keys to define 2 local contexts. On 1 key, components arranged 160 and 40 reinforcers/hr; on the other, components arranged 40 and 10 reinforcers/hr. Response rates in the 40/hr component were higher in the latter pair. Within pairs, resistance to prefeeding and resistance to extinction were generally greater in the richer component. The two 40/hr components did not differ in resistance to prefeeding, but the 40/hr component that alternated with 10/hr was more resistant to extinction. This discrepancy was interpreted by an algebraic model relating response strength to component reinforcer rate, including generalization decrement. According to this model, strength is independent of context, consistent with research on schedule preference.
Body composition analysis: Cellular level modeling of body component ratios.
Wang, Z; Heymsfield, S B; Pi-Sunyer, F X; Gallagher, D; Pierson, R N
2008-01-01
During the past two decades, a major outgrowth of efforts by our research group at St. Luke's-Roosevelt Hospital is the development of body composition models that include cellular level models, models based on body component ratios, total body potassium models, multi-component models, and resting energy expenditure-body composition models. This review summarizes these models with emphasis on component ratios that we believe are fundamental to understanding human body composition during growth and development and in response to disease and treatments. In-vivo measurements reveal that in healthy adults some component ratios show minimal variability and are relatively 'stable', for example total body water/fat-free mass and fat-free mass density. These ratios can be effectively applied for developing body composition methods. In contrast, other ratios, such as total body potassium/fat-free mass, are highly variable in vivo and therefore are less useful for developing body composition models. In order to understand the mechanisms governing the variability of these component ratios, we have developed eight cellular level ratio models and from them we derived simplified models that share as a major determining factor the ratio of extracellular to intracellular water (E/I). The E/I value varies widely among adults. Model analysis reveals that the magnitude and variability of each body component ratio can be predicted by correlating the cellular level model with the E/I value. Our approach thus provides new insights into and improved understanding of body composition ratios in adults.
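A minimal sketch of how a 'stable' component ratio is used in practice: the hydration of fat-free mass (total body water/fat-free mass) is close to 0.73 in healthy adults, so a measured total body water yields a fat-free mass estimate. The subject values below are hypothetical:

```python
# Assumed 'stable' hydration ratio TBW/FFM (~0.73 in healthy adults)
HYDRATION = 0.732

def fat_free_mass(tbw_kg: float) -> float:
    # Invert the stable ratio: FFM = TBW / (TBW/FFM)
    return tbw_kg / HYDRATION

def fat_mass(weight_kg: float, tbw_kg: float) -> float:
    # Two-component model: body weight = fat mass + fat-free mass
    return weight_kg - fat_free_mass(tbw_kg)

ffm = fat_free_mass(42.0)    # hypothetical 42 kg total body water
fm = fat_mass(75.0, 42.0)    # hypothetical 75 kg body weight
```

A highly variable ratio such as total body potassium/fat-free mass could not be inverted this way with any reliability, which is the review's central distinction.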
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali
A computer implemented method monetizes the security of a cyber-system in terms of the losses each stakeholder may expect if a security breakdown occurs. A non-transitory medium stores instructions for generating a stakes structure that includes the costs each stakeholder of a system would incur if the system failed to meet security requirements, and for generating a requirement structure that includes probabilities of failing requirements when computer components fail. The system generates a vulnerability model that includes probabilities of a component failing given threats materializing, and a perpetrator model that includes probabilities of threats materializing. The system generates a dot product of the stakes structure, the requirement structure, the vulnerability model and the perpetrator model. The system can further be used to compare, contrast and evaluate alternative courses of action best suited to the stakeholders and their requirements.
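The chained dot product described above can be sketched directly. The matrix shapes and every numeric value here are hypothetical, chosen only to show how the four structures compose into an expected loss per stakeholder:

```python
import numpy as np

# Hypothetical sizes: 2 stakeholders, 3 requirements, 2 components, 2 threats
ST = np.array([[10_000.0, 5_000.0, 2_000.0],   # stakes: loss ($) per failed requirement
               [ 8_000.0, 1_000.0, 4_000.0]])
DP = np.array([[0.9, 0.2],                     # P(requirement fails | component fails)
               [0.1, 0.8],
               [0.5, 0.5]])
IM = np.array([[0.3, 0.1],                     # P(component fails | threat materializes)
               [0.2, 0.4]])
PT = np.array([0.05, 0.01])                    # P(threat materializes) over the period

# Dot product of stakes, requirement, vulnerability and perpetrator structures:
# expected monetized loss per stakeholder
mfc = ST @ DP @ IM @ PT
print(mfc)
```

Comparing `mfc` under alternative `IM` or `PT` matrices is how alternative courses of action would be ranked for the stakeholders.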
Toward a Multicultural Model of the Stress Process.
ERIC Educational Resources Information Center
Slavin, Lesley A.; And Others
1991-01-01
Attempts to expand Lazarus and Folkman's stress model to include culture-relevant dimensions. Discusses cultural factors that influence each component of the stress model, including types and frequency of events experienced, appraisals of stressfulness of events, appraisals of available coping resources, selection of coping strategies, and…
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating the reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems, possibly with several versions of the system that share some common components. It models reliability as a function of age and up to two other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
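The series-system, age-dependent backbone of such a calculation can be sketched simply: each component gets a reliability-versus-age curve, and the series system multiplies them. The Weibull life models and all parameters below are illustrative assumptions, not SRFYDO's actual model:

```python
import numpy as np

def weibull_rel(t, scale, shape):
    # Component reliability at age t under an assumed Weibull life model
    return np.exp(-(t / scale) ** shape)

# Hypothetical components with different aging characteristics (scale, shape)
params = [(50.0, 1.2), (80.0, 0.9), (120.0, 1.5)]
ages = np.linspace(0.0, 20.0, 5)

# Series system: every component must work, so reliabilities multiply
r_components = np.array([weibull_rel(ages, s, b) for s, b in params])
r_system = r_components.prod(axis=0)
```

A Bayesian treatment, as in SRFYDO, would place priors on the component parameters and propagate posterior samples through the same product to get system-level uncertainty bands.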
Patterns of IgE responses to multiple allergen components and clinical symptoms at age 11 years
Simpson, Angela; Lazic, Nevena; Belgrave, Danielle C.M.; Johnson, Phil; Bishop, Christopher; Mills, Clare; Custovic, Adnan
2015-01-01
Background The relationship between sensitization to allergens and disease is complex. Objective We sought to identify patterns of response to a broad range of allergen components and investigate associations with asthma, eczema, and hay fever. Methods Serum specific IgE levels to 112 allergen components were measured by using a multiplex array (Immuno Solid-phase Allergen Chip) in a population-based birth cohort. Latent variable modeling was used to identify underlying patterns of component-specific IgE responses; these patterns were then related to asthma, eczema, and hay fever. Results Two hundred twenty-one of 461 children had IgE to 1 or more components. Seventy-one of the 112 components were recognized by 3 or more children. By using latent variable modeling, 61 allergen components clustered into 3 component groups (CG1, CG2, and CG3); protein families within each CG were exclusive to that group. CG1 comprised 27 components from 8 plant protein families. CG2 comprised 7 components of mite allergens from 3 protein families. CG3 included 27 components of plant, animal, and fungal origin from 12 protein families. Each CG included components from different biological sources with structural homology and also nonhomologous proteins arising from the same biological source. Sensitization to CG3 was most strongly associated with asthma (odds ratio [OR], 8.20; 95% CI, 3.49-19.24; P < .001) and lower FEV1 (P < .001). Sensitization to CG1 was associated with hay fever (OR, 12.79; 95% CI, 6.84-23.90; P < .001). Sensitization to CG2 was associated with both asthma (OR, 3.60; 95% CI, 2.05-6.29) and hay fever (OR, 2.52; 95% CI, 1.38-4.61). Conclusions Latent variable modeling with a large number of allergen components identified 3 patterns of IgE responses, each including different protein families. In 11-year-old children the pattern of response to components of multiple allergens appeared to be associated with current asthma and hay fever but not eczema. PMID:25935108
NASA Technical Reports Server (NTRS)
Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.
2011-01-01
A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for component-loaded curved orthogrid panels typical of launch vehicle skin structures. The test data were used to anchor computational predictions of a variety of spatially distributed responses including acceleration, strain and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite element based modal solutions and test-derived damping estimates. A diffuse acoustic field model was applied to correlate the measured input sound pressures across the energized panel. This application quantifies the ability to quickly and accurately predict a variety of responses of acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component and model form. Convergence metrics include spectral densities and cumulative root-mean-square (RMS) functions for acceleration, velocity, displacement, strain and interface force. Minimum frequencies for response convergence were established as well as recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data. The software developed for the RPTF method allows easy replacement of the diffuse acoustic field with other pressure fields such as a turbulent boundary layer (TBL) model suitable for vehicle ascent.
Structural responses using a TBL model were demonstrated, and wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this environment. Finally, design load factors were developed from the measured and predicted responses and compared with those derived from traditional techniques such as historical Mass Acceleration Curves and Barrett scaling methods for acreage and component-loaded panels.
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
2016-10-01
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and these models provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the standpoint of principles, the main contributions are that the framework can achieve unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher level cognition of an object. From the standpoint of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision-for a class of objects without a priori knowledge, the key components, their spatial relations and cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features-within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming the general knowledge of a class of objects-the general knowledge of a class of objects can be formed, mainly including the key components, their spatial relations and average semantic values, which is a concise description of the class; and 4) achieving higher level cognition and dynamic updating-for a test image, the model can achieve classification and subclass semantic descriptions. Test samples with high confidence are then selected to dynamically update the whole model. Experiments are conducted on face images, and a good performance is achieved in each layer of the DNN and the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.
Peridigm summary report : lessons learned in development with agile components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salinger, Andrew Gerhard; Mitchell, John Anthony; Littlewood, David John
2011-09-01
This report details efforts to deploy Agile Components for rapid development of a peridynamics code, Peridigm. The goal of Agile Components is to enable the efficient development of production-quality software by providing a well-defined, unifying interface to a powerful set of component-based software. Specifically, Agile Components facilitate interoperability among packages within the Trilinos Project, including data management, time integration, uncertainty quantification, and optimization. Development of the Peridigm code served as a testbed for Agile Components and resulted in a number of recommendations for future development. Agile Components successfully enabled rapid integration of Trilinos packages into Peridigm. A cost of this approach, however, was a set of restrictions on Peridigm's architecture which impacted the ability to track history-dependent material data, dynamically modify the model discretization, and interject user-defined routines into the time integration algorithm. These restrictions resulted in modifications to the Agile Components approach, as implemented in Peridigm, and in a set of recommendations for future Agile Components development. Specific recommendations include improved handling of material states, a more flexible flow control model, and improved documentation. A demonstration mini-application, SimpleODE, was developed at the onset of this project and is offered as a potential supplement to Agile Components documentation.
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through application of computer simulation packages employing 'component mode synthesis' techniques. A Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which included running experimental tests on flexible multibody test articles. From these tests, data were to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
This report describes the work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods. Three probabilistic methods were implemented for the engine system influence model; RASCAL was chosen as the principal method because most component load models were implemented with it. Validation of RASCAL was performed: accuracy comparable to the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, either a real flight or a test, has three mission phases: the engine start transient phase, the steady-state phase, and the engine cutoff transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were developed for specific conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads with specific conditions. For SSME components, turbine blades, transfer ducts, the LOX post, and the high pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static and dynamic pressure loads for all four components, centrifugal force for the turbine blades, thermal (temperature) loads for all four components, and structural vibration loads for the ducts and LOX posts.
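Of the generic load models listed above, the Poisson-arrival transient spike model is the easiest to sketch: spike arrival times come from a Poisson process, i.e. exponential inter-arrival gaps. The rate, duration, and magnitude below are invented for illustration, not values from the CLS program.

```python
import random

def poisson_spike_loads(rate_per_s, duration_s, spike_magnitude):
    """Sample (time, magnitude) spike events from a Poisson arrival process."""
    t, events = 0.0, []
    while True:
        t += random.expovariate(rate_per_s)   # exponential inter-arrival gap
        if t > duration_s:
            break
        events.append((t, spike_magnitude))
    return events

random.seed(7)
spikes = poisson_spike_loads(rate_per_s=2.0, duration_s=100.0, spike_magnitude=1.5)
# The expected spike count is rate * duration = 200 on average.
```

A rare-event model follows the same pattern with a much smaller rate; the fixed transient spike model replaces the random arrival times with deterministic ones.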
Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.
Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi
2015-02-01
We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
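The copula construction described above, an empirical-CDF transform to normal scores followed by normal-theory time-series modeling, can be sketched in a few lines. This is a generic stdlib illustration, not the authors' implementation; the exponential series and the lag-1 AR coefficient estimate are illustrative choices.

```python
import random
import statistics

random.seed(1)
nd = statistics.NormalDist()

# A non-Gaussian (exponential) stationary series stands in for the data.
x = [random.expovariate(1.0) for _ in range(500)]
n = len(x)

# 1) Marginal: the empirical CDF maps each observation into (0, 1).
rank = {v: (i + 1) / (n + 1) for i, v in enumerate(sorted(x))}
u = [rank[v] for v in x]

# 2) Internal dynamics: probit transform to normal scores, then a lag-1
#    autocorrelation estimate plays the role of the AR(1) coefficient.
z = [nd.inv_cdf(ui) for ui in u]
mz = sum(z) / n
phi = (sum((z[t] - mz) * (z[t - 1] - mz) for t in range(1, n))
       / sum((zt - mz) ** 2 for zt in z))
```

Simulating from the fitted model runs the transform in reverse: draw a Gaussian AR(1) path, apply the normal CDF, then the inverse empirical CDF to recover the non-Gaussian marginal.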
Dirichlet Component Regression and its Applications to Psychiatric Data.
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2008-08-15
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.
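The core of such a Dirichlet regression, component percentages modeled jointly with covariate effects entering through the concentration parameters, can be sketched as follows. The log-linear link and parameter names are illustrative assumptions, not the authors' specification.

```python
import math

def dirichlet_loglik(alpha, p):
    """Log-density of a composition p (summing to 1) under Dirichlet(alpha)."""
    assert abs(sum(p) - 1.0) < 1e-9
    return (math.lgamma(sum(alpha))
            - sum(math.lgamma(a) for a in alpha)
            + sum((a - 1.0) * math.log(pi) for a, pi in zip(alpha, p)))

def alphas(x, beta0, beta1):
    """Assumed log-linear link: alpha_j(x) = exp(beta0_j + beta1_j * x)."""
    return [math.exp(b0 + b1 * x) for b0, b1 in zip(beta0, beta1)]

# A symmetric Dirichlet(1, 1, 1) is uniform over the simplex, so its
# log-density equals lgamma(3) = log(2) for any composition.
ll = dirichlet_loglik([1.0, 1.0, 1.0], [0.2, 0.3, 0.5])
```

Fitting maximizes the summed log-likelihood over the beta coefficients; residual and influence diagnostics such as Cook's distance are then computed on the fitted compositions.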
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
Extended Day Treatment: A Comprehensive Model of after School Behavioral Health Services for Youth
ERIC Educational Resources Information Center
Vanderploeg, Jeffrey J.; Franks, Robert P.; Plant, Robert; Cloud, Marilyn; Tebes, Jacob Kraemer
2009-01-01
Extended day treatment (EDT) is an innovative intermediate-level service for children and adolescents with serious emotional and behavioral disorders delivered during the after school hours. This paper describes the core components of the EDT model of care within the context of statewide systems of care, including its core service components,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Lehua; Oldenburg, Curtis M.
TOGA is a numerical reservoir simulator for modeling non-isothermal flow and transport of water, CO2, multicomponent oil, and related gas components for applications including CO2-enhanced oil recovery (CO2-EOR) and geologic carbon sequestration in depleted oil and gas reservoirs. TOGA uses an approach based on the Peng-Robinson equation of state (PR-EOS) to calculate the thermophysical properties of the gas and oil phases including the gas/oil components dissolved in the aqueous phase, and uses a mixing model to estimate the thermophysical properties of the aqueous phase. The phase behavior (e.g., occurrence and disappearance of the three phases, gas + oil + aqueous) and the partitioning of non-aqueous components (e.g., CO2, CH4, and n-oil components) between coexisting phases are modeled using K-values derived from assumptions of equal fugacity that have been demonstrated to be very accurate as shown by comparison to measured data. Models for saturated (water) vapor pressure and water solubility (in the oil phase) are used to calculate the partitioning of the water (H2O) component between the gas and oil phases. All components (e.g., CO2, H2O, and n hydrocarbon components) are allowed to be present in all phases (aqueous, gaseous, and oil). TOGA uses a multiphase version of Darcy's Law to model flow and transport through porous media of mixtures with up to three phases over a range of pressures and temperatures appropriate to hydrocarbon recovery and geologic carbon sequestration systems. Transport of the gaseous and dissolved components is by advection and Fickian molecular diffusion. New methods for phase partitioning and thermophysical property modeling in TOGA have been validated against experimental data published in the literature for describing phase partitioning and phase behavior. Flow and transport have been verified by testing against related TOUGH2 EOS modules and CMG. The code has also been validated against a CO2-EOR experimental core flood involving flow of three phases and 12 components. Results of simulations of a hypothetical 3D CO2-EOR problem involving three phases and multiple components are presented to demonstrate the field-scale capabilities of the new code. This user guide provides instructions for use and sample problems for verification and demonstration.
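K-value partitioning of components between two coexisting phases, as used in flash calculations generally, can be sketched with the standard Rachford-Rice balance. This is a textbook two-phase illustration, not TOGA's three-phase implementation; the feed fractions and K-values below are invented.

```python
def rachford_rice(z, K):
    """Solve sum_i z_i*(K_i - 1)/(1 + V*(K_i - 1)) = 0 for vapour fraction V."""
    def f(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0          # bisection; assumes a root in (0, 1)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

z, K = [0.5, 0.5], [2.0, 0.5]   # feed mole fractions and K-values (invented)
V = rachford_rice(z, K)
x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid phase
y = [Ki * xi for Ki, xi in zip(K, x)]                      # vapour phase
```

In an equal-fugacity formulation such as TOGA's, the K-values themselves come from the equation of state rather than being fixed inputs as they are here.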
Optimization of replacement and inspection decisions for multiple components on a power system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauney, D.A.
1994-12-31
The use of optimization in the rescheduling of replacement dates provided a very proactive approach to deciding when components on individual units need to be addressed with a run/repair/replace decision. Including the effects of the time value of money, taxes, and unit need inside the spreadsheet model allowed the decision maker to concentrate on the effects of engineering input and replacement date decisions on the final net present value (NPV). The personal computer (PC)-based model was applied to a group of 140 forced outage critical fossil plant tube components across a power system. The estimated resulting NPV of the optimization was in the tens of millions of dollars. This PC spreadsheet model allows the interaction of inputs from structural reliability risk assessment models, plant foreman interviews, and actual failure history on a by-component, by-unit basis across a complete power production system. This model includes not only the forced outage performance of these components caused by tube failures but, in addition, the forecasted need of the individual units on the power system and the expected cost of their replacement power if forced off line. The use of cash flow analysis techniques in the spreadsheet model results in the calculation of an NPV for a whole combination of replacement dates. This allows rapid assessments of "what if" scenarios of major maintenance projects on a systemwide basis and not just on a unit-by-unit basis.
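The cash-flow core of such a spreadsheet model is small: discount every replacement cost and forced-outage loss to present value, then compare candidate replacement dates. The figures below are invented for illustration; a real model would add taxes and unit-need forecasts as described above.

```python
def npv(cash_flows, rate):
    """Net present value of (year, cash) pairs at a given discount rate."""
    return sum(c / (1.0 + rate) ** t for t, c in cash_flows)

def replacement_npv(replace_year, cost, annual_outage_loss, rate):
    """Pay outage losses each year until replacement, then pay the cost."""
    flows = [(t, -annual_outage_loss) for t in range(replace_year)]
    flows.append((replace_year, -cost))
    return npv(flows, rate)

# "What if" comparison: replace now versus defer two years.
now = replacement_npv(0, cost=100.0, annual_outage_loss=30.0, rate=0.10)
deferred = replacement_npv(2, cost=100.0, annual_outage_loss=30.0, rate=0.10)
```

With these invented numbers, deferral accumulates outage losses faster than discounting shrinks the replacement cost, so replacing now has the higher NPV; the optimization searches over all components' dates at once.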
Space Station Freedom electric power system availability study
NASA Technical Reports Server (NTRS)
Turnquist, Scott R.
1990-01-01
The results of follow-on availability analyses performed on the Space Station Freedom electric power system (EPS) are detailed. The scope includes analyses of several EPS design variations: the 4-photovoltaic (PV) module baseline EPS design, a 6-PV module EPS design, and a 3-solar dynamic module EPS design which included a 10 kW PV module. The analyses performed included: determining the discrete power levels at which the EPS will operate upon various component failures and the availability of each of these operating states; ranking EPS components by the relative contribution each component type makes to the power availability of the EPS; determining the availability impacts of including structural and long-life EPS components in the availability models used in the analyses; determining optimum sparing strategies, for storing spare EPS components on-orbit, to maintain high average-power capability with low lift-mass requirements; and analyses to determine the sensitivity of EPS availability to uncertainties in the component reliability and maintainability data used.
Response Surface Modeling of Combined-Cycle Propulsion Components using Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.
2002-01-01
Three examples of response surface modeling with CFD are presented for combined cycle propulsion components. The examples include a mixed-compression-inlet during hypersonic flight, a hydrogen-fueled scramjet combustor during hypersonic flight, and a ducted-rocket nozzle during all-rocket flight. Three different experimental strategies were examined, including full factorial, fractionated central-composite, and D-optimal with embedded Plackett-Burman designs. The response variables have been confined to integral data extracted from multidimensional CFD results. Careful attention to uncertainty assessment and modeling bias has been addressed. The importance of automating experimental setup and effectively communicating statistical results are emphasized.
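A full factorial design of the kind compared above is a few lines of code: every combination of factor levels becomes one CFD run. The factor names and levels below are invented for illustration.

```python
import itertools

# Hypothetical factors for an inlet study (names and levels are made up).
factors = {
    "mach":      [4.0, 5.0, 6.0],
    "aoa_deg":   [0.0, 2.0],
    "cowl_pos":  [0.3, 0.5, 0.7],
}

# One dict per run: the full factorial enumerates every level combination.
runs = [dict(zip(factors, combo))
        for combo in itertools.product(*factors.values())]
# 3 * 2 * 3 = 18 runs
```

Fractionated central-composite and D-optimal designs trade runs for confounding: they select a subset of points like these under an assumed response-surface model, which is why uncertainty and bias assessment matter.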
Giant gain from spontaneously generated coherence in Y-type double quantum dot structure
NASA Astrophysics Data System (ADS)
Al-Nashy, B.; Razzaghi, Sonia; Al-Musawi, Muwaffaq Abdullah; Rasooli Saghai, H.; Al-Khursan, Amin H.
A theoretical model is presented for the linear susceptibility, using density matrix theory, of a Y-configuration double quantum dot (QD) system including spontaneously generated coherence (SGC). Two SGC components are included for this system: the V and Λ subsystems. It is shown that at a high V component the system has a giant gain. At a low Λ-system component, it is possible to control the light speed between superluminal and subluminal with a single parameter by increasing the SGC component of the V subsystem. This has applications in quantum information storage and spatially varying temporal clocks.
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
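Horn's parallel analysis itself is simple to sketch for two variables, where the correlation-matrix eigenvalues are 1 ± |r| in closed form: retain a component only if its observed eigenvalue exceeds a percentile of eigenvalues obtained from uncorrelated data of the same size. The data, sample size, and 95% cutoff below are illustrative choices.

```python
import random

def eig2_first(r):
    """Largest eigenvalue of the 2x2 correlation matrix [[1, r], [r, 1]]."""
    return 1.0 + abs(r)

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(0)
n, reps = 200, 200

# Observed data: two strongly correlated variables -> one dominant component.
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [0.9 * a + 0.3 * random.gauss(0.0, 1.0) for a in x]
obs_first = eig2_first(corr(x, y))

# Horn's reference distribution: first eigenvalues from uncorrelated data.
sims = sorted(eig2_first(corr([random.gauss(0.0, 1.0) for _ in range(n)],
                              [random.gauss(0.0, 1.0) for _ in range(n)]))
              for _ in range(reps))
threshold = sims[int(0.95 * reps)]   # 95th-percentile cutoff
retain_first = obs_first > threshold
```

The Tracy-Widom test recommended by the authors replaces this simulated reference with an asymptotic distribution for the largest eigenvalue, which is what makes higher-order components testable in a principled way.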
NASA Astrophysics Data System (ADS)
Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac
2016-10-01
Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. 
Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
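The coarse-grained component concurrency described above, radiation running in parallel with a composite of the other atmospheric components, can be mimicked with two threads standing in for the components. The sleeps stand in for compute work and the component names are invented; this is a toy illustration, not FMS code.

```python
import threading
import time

results = {}

def radiation_step():
    time.sleep(0.1)                     # stand-in for radiative transfer work
    results["radiation"] = "heating rates"

def rest_of_atmosphere_step():
    time.sleep(0.1)                     # stand-in for dynamics + other physics
    results["dynamics"] = "updated state"

start = time.perf_counter()
threads = [threading.Thread(target=radiation_step),
           threading.Thread(target=rest_of_atmosphere_step)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start   # ~0.1 s concurrent, not 0.2 s serial
```

In a real coupled model the two components would exchange state at the end of each step through the framework, which is where the algorithmic challenges mentioned above arise.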
A Practical Application of Microcomputers to Control an Active Solar System.
ERIC Educational Resources Information Center
Goldman, David S.; Warren, William
1984-01-01
Describes the design and implementation of a microcomputer-based model active solar heating system. Includes discussions of: (1) the active solar components (solar collector, heat exchanger, pump, and fan necessary to provide forced air heating); (2) software components; and (3) hardware components (in the form of sensors and actuators). (JN)
Interactions between Flight Dynamics and Propulsion Systems of Air-Breathing Hypersonic Vehicles
2013-01-01
The propulsion flowpath includes an inlet coupled with the combustor; the combustor, a component for subsonic or supersonic combustion; and the nozzle, which expands the flow for high thrust and may provide lift. A supersonic solution method is used for both the inlet and nozzle components. The supersonic model SAMURI is a substantial improvement over previous models for purely supersonic inviscid flow. As a result, the model is also appropriate for other applications, including the nozzle.
A Catalog of Galaxy Clusters Observed by XMM-Newton
NASA Technical Reports Server (NTRS)
Snowden, S. L.; Mushotzky, R. M.; Kuntz, K. D.; Davis, David S.
2007-01-01
Images and the radial profiles of the temperature, abundance, and brightness for 70 clusters of galaxies observed by XMM-Newton are presented along with a detailed discussion of the data reduction and analysis methods, including background modeling, which were used in the processing. Proper consideration of the various background components is vital to extend the reliable determination of cluster parameters to the largest possible cluster radii. The various components of the background including the quiescent particle background, cosmic diffuse emission, soft proton contamination, and solar wind charge exchange emission are discussed along with suggested means of their identification, filtering, and/or their modeling and subtraction. Every component is spectrally variable, sometimes significantly so, and all components except the cosmic background are temporally variable as well. The distributions of the events over the FOV vary between the components, and some distributions vary with energy. The scientific results from observations of low surface brightness objects and the diffuse background itself can be strongly affected by these background components and therefore great care should be taken in their consideration.
Advanced Turbine Technology Applications Project (ATTAP)
NASA Technical Reports Server (NTRS)
1994-01-01
Reports technical effort by AlliedSignal Engines in sixth year of DOE/NASA funded project. Topics include: gas turbine engine design modifications of production APU to incorporate ceramic components; fabrication and processing of silicon nitride blades and nozzles; component and engine testing; and refinement and development of critical ceramics technologies, including: hot corrosion testing and environmental life predictive model; advanced NDE methods for internal flaws in ceramic components; and improved carbon pulverization modeling during impact. ATTAP project is oriented toward developing high-risk technology of ceramic structural component design and fabrication to carry forward to commercial production by 'bridging the gap' between structural ceramics in the laboratory and near-term commercial heat engine application. Current ATTAP project goal is to support accelerated commercialization of advanced, high-temperature engines for hybrid vehicles and other applications. Project objectives are to provide essential and substantial early field experience demonstrating ceramic component reliability and durability in modified, available, gas turbine engine applications; and to scale-up and improve manufacturing processes of ceramic turbine engine components and demonstrate application of these processes in the production environment.
The Schoolwide Enrichment Model: A Focus on Student Strengths and Interests
ERIC Educational Resources Information Center
Renzulli, Joseph S.; Renzulli, Sally Reis
2010-01-01
This article includes an introduction to the Schoolwide Enrichment Model (SEM), with its three components: a total talent portfolio for each child, curriculum differentiation and modification, and enrichment opportunities from the Enrichment Triad Model. Also included is a brief history of the SEM and a summary of 30 years of research underlying…
Using WNTR to Model Water Distribution System Resilience ...
The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate resilience of water distribution systems. WNTR can be used to simulate a wide range of disruptive events, including earthquakes, contamination incidents, floods, climate change, and fires. The software includes the EPANET solver as well as a WNTR solver with the ability to model pressure-driven demand hydraulics, pipe breaks, component degradation and failure, changes to supply and demand, and cascading failure. Damage to individual components in the network (e.g., pipes, tanks) can be selected probabilistically using fragility curves. WNTR can also simulate different types of resilience-enhancing actions, including scheduled pipe repair or replacement, water conservation efforts, addition of back-up power, and use of contamination warning systems. The software can be used to estimate potential damage in a network, evaluate preparedness, prioritize repair strategies, and identify worst-case scenarios. As a Python package, WNTR takes advantage of many existing Python capabilities, including parallel processing of scenarios and graphics capabilities. This presentation will outline the modeling components in WNTR, demonstrate their use, give the audience information on how to get started using the code, and invite others to participate in this open source project.
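The probabilistic damage selection described above, picking damaged components with fragility curves, reduces to comparing a uniform draw against a per-component damage probability. The lognormal curve below is a common fragility form; the pipe names, medians, and hazard value are invented, and this stdlib sketch is an illustration of the idea, not WNTR's API.

```python
import math
import random

def lognormal_fragility(demand, median, beta):
    """P(damage) = Phi(ln(demand / median) / beta), a lognormal fragility curve."""
    z = math.log(demand / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(42)
pga = 0.4                                  # hypothetical earthquake demand (g)
pipes = {"p1": 0.5, "p2": 0.2, "p3": 1.0}  # hypothetical fragility medians (g)
beta = 0.4                                 # hypothetical lognormal std dev

# A pipe is damaged when a uniform draw falls below its damage probability.
damaged = [name for name, median in pipes.items()
           if random.random() < lognormal_fragility(pga, median, beta)]
```

Repeating this draw over many scenarios, in parallel, is how a resilience study builds up the damage distributions used to prioritize repair strategies.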
NDARC NASA Design and Analysis of Rotorcraft. Appendix 5; Theory
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2017-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC: NASA Design and Analysis of Rotorcraft. Appendix 3; Theory
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft - Input, Appendix 2
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tilt-rotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft. Appendix 6; Input
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2017-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne R.
2009-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool intended to support both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility; a hierarchy of models; and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter; tandem helicopter; coaxial helicopter; and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC - NASA Design and Analysis of Rotorcraft
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2015-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
NDARC NASA Design and Analysis of Rotorcraft Theory Appendix 1
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2016-01-01
The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
CELSS scenario analysis: Breakeven calculations
NASA Technical Reports Server (NTRS)
Mason, R. M.
1980-01-01
A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.
Lateral density anomalies and the earth's gravitational field
NASA Technical Reports Server (NTRS)
Lowrey, B. E.
1978-01-01
The interpretation of gravity is valuable for understanding lithospheric plate motion and mantle convection. Postulated models of anomalous mass distributions in the earth and the observed geopotential as expressed in the spherical harmonic expansion are compared. In particular, models of the anomalous density as a function of radius are found which can closely match the average magnitude of the spherical harmonic coefficients of a degree. These models include: (1) a two-component model consisting of an anomalous layer at 200 km depth (below the earth's surface) and at 1500 km depth; (2) a two-component model where the upper component is distributed in the region between 1000 and 2800 km depth; and (3) a model with density anomalies which continuously increase with depth by more than an order of magnitude.
Peer Models in Mental Health for Caregivers and Families.
Acri, Mary; Hooley, Cole D; Richardson, Nicole; Moaba, Lily B
2017-02-01
Peer-delivered mental health models may hold important benefits for family members, yet their prevalence, components, and outcomes are unknown. We conducted a review of peer-delivered services for families of children and adults with mental health problems. Randomized studies of interventions published between 1990 and 2014 were included if the intervention contained a component for family members and examined familial outcomes. Of 77 studies that were assessed for their eligibility, six met criteria. Familial components included coping and parenting skills, knowledge about mental health, and emotional support. Outcomes were uneven, although significant improvements in family functioning, knowledge about mental illness, parental concerns about their child, and parenting skills were associated with the intervention. Peer-delivered services for family members may have important benefits to family members and individuals with mental health problems; however, the research base remains thin. A research agenda to develop and examine these models is discussed.
Mangalgiri, Kiranmayi P; Timko, Stephen A; Gonsior, Michael; Blaney, Lee
2017-07-18
Parallel factor analysis (PARAFAC) applied to fluorescence excitation emission matrices (EEMs) allows quantitative assessment of the composition of fluorescent dissolved organic matter (DOM). In this study, we fit a four-component EEM-PARAFAC model to characterize DOM extracted from poultry litter. The data set included fluorescence EEMs from 291 untreated, irradiated (253.7 nm, 310-410 nm), and oxidized (UV-H2O2, ozone) poultry litter extracts. The four components were identified as microbial humic-, terrestrial humic-, tyrosine-, and tryptophan-like fluorescent signatures. The Tucker's congruence coefficients for components from the global (i.e., aggregated sample set) model and local (i.e., single poultry litter source) models were greater than 0.99, suggesting that the global EEM-PARAFAC model may be suitable to study poultry litter DOM from individual sources. In general, the transformation trends of the four fluorescence components were comparable for all poultry litter sources tested. For irradiation at 253.7 nm, ozonation, and UV-H2O2 advanced oxidation, transformation of the humic-like components was slower than that of the tryptophan-like component. The opposite trend was observed for irradiation at 310-410 nm, due to differences in UV absorbance properties of components. Compared to the other EEM-PARAFAC components, the tyrosine-like component was fairly recalcitrant in irradiation and oxidation processes. This novel application of EEM-PARAFAC modeling provides insight into the composition and fate of agricultural DOM in natural and engineered systems.
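The component comparison described in the abstract above rests on Tucker's congruence coefficient between loading vectors. A minimal sketch of that calculation follows; the loading vectors are invented for illustration and do not come from the study's data:

```python
import math

def tucker_congruence(x, y):
    # Tucker's congruence coefficient between two component loading vectors:
    # the cosine of the angle between them, 1.0 for identical shapes.
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

# Hypothetical loadings for one component from a "global" and a "local" model.
global_loading = [0.10, 0.40, 0.90, 0.50]
local_loading = [0.12, 0.38, 0.92, 0.49]
phi = tucker_congruence(global_loading, local_loading)
```

A coefficient above 0.99, as reported in the study, is conventionally read as the two models having recovered essentially the same component.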
O'Donnell, Allison N; Williams, Mark; Kilbourne, Amy M
2013-12-01
The Chronic Care Model (CCM) has been shown to improve medical and psychiatric outcomes for persons with mental disorders in primary care settings, and has been proposed as a model to integrate mental health care in the patient-centered medical home under healthcare reform. However, the CCM has not been widely implemented in primary care settings, primarily because of a lack of a comprehensive reimbursement strategy to compensate providers for day-to-day provision of its core components, including care management and provider decision support. Drawing upon the existing literature and regulatory guidelines, we provide a critical analysis of challenges and opportunities in reimbursing CCM components under the current fee-for-service system, and describe an emerging financial model involving bundled payments to support core CCM components to integrate mental health treatment into primary care settings. Ultimately, for the CCM to be used and sustained over time to integrate physical and mental health care, effective reimbursement models will need to be negotiated across payers and providers. Such payments should provide sufficient support for primary care providers to implement practice redesigns around core CCM components, including care management, measurement-based care, and mental health specialist consultation.
Advanced and secure architectural EHR approaches.
Blobel, Bernd
2006-01-01
Electronic Health Records (EHRs) provided as a lifelong patient record advance towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives contain the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all views mentioned have to be established for enabling both application and communication security services as an integral part of the system's architecture. Besides decomposition and simplification of systems regarding the different viewpoints on their components, different levels of systems' granularity can be defined hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both structure and behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. 
In that context, the Australian GEHR project, the openEHR initiative, the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, the HL7's clinical document architecture (CDA) as well as the set of models from use cases, activity diagrams, and sequence diagrams up to Domain Message Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. The future-proof EHR architecture as open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks has to follow advanced architectural paradigms.
Sinha, Samir K; Bessman, Edward S; Flomenbaum, Neal; Leff, Bruce
2011-06-01
We inform the future development of a new geriatric emergency management practice model. We perform a systematic review of the existing evidence for emergency department (ED)-based case management models designed to improve the health, social, and health service utilization outcomes for noninstitutionalized older patients within the context of an index ED visit. This was a systematic review of English-language articles indexed in MEDLINE and CINAHL (1966 to 2010), describing ED-based case management models for older adults. Bibliographies of the retrieved articles were reviewed to identify additional references. A systematic qualitative case study analytic approach was used to identify the core operational components and outcome measures of the described clinical interventions. The authors of the included studies were also invited to verify our interpretations of their work. The determined patterns of component adherence were then used to postulate the relative importance and effect of the presence or absence of a particular component in influencing the overall effectiveness of their respective interventions. Eighteen of 352 studies (reported in 20 articles) met study criteria. Qualitative analyses identified 28 outcome measures and 8 distinct model characteristic components that included having an evidence-based practice model, nursing clinical involvement or leadership, high-risk screening processes, focused geriatric assessments, the initiation of care and disposition planning in the ED, interprofessional and capacity-building work practices, post-ED discharge follow-up with patients, and evaluation and monitoring processes. Of the 15 positive study results, 6 had all 8 characteristic components and 9 were found to be lacking at least 1 component. Two studies with positive results lacked 2 characteristic components and none lacked more than 2 components. 
Of the 3 studies with negative results demonstrating no positive effects based on any outcome tested, one lacked 2, one lacked 3, and one lacked 4 of the 8 model components. Successful models of ED-based case management models for older adults share certain key characteristics. This study builds on the emerging literature in this area and leverages the differences in these models and their associated outcomes to support the development of an evidence-based normative and effective geriatric emergency management practice model designed to address the special care needs and thereby improve the health and health service utilization outcomes of older patients. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saxena, Vikrant, E-mail: vikrant.saxena@desy.de; Hamburg Center for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg; Ziaja, Beata, E-mail: ziaja@mail.desy.de
The irradiation of an atomic cluster with a femtosecond x-ray free-electron laser pulse results in a nanoplasma formation. This typically occurs within a few hundred femtoseconds. By this time the x-ray pulse is over, and the direct photoinduced processes are no longer contributing. All created electrons within the nanoplasma are thermalized. The nanoplasma thus formed is a mixture of atoms, electrons, and ions of various charges. While expanding, it is undergoing electron impact ionization and three-body recombination. Below we present a hydrodynamic model to describe the dynamics of such multi-component nanoplasmas. The model equations are derived by taking the moments of the corresponding Boltzmann kinetic equations. We include the equations obtained, together with the source terms due to electron impact ionization and three-body recombination, in our hydrodynamic solver. Model predictions for a test case, expanding spherical Ar nanoplasma, are obtained. With this model, we complete the two-step approach to simulate x-ray created nanoplasmas, enabling computationally efficient simulations of their picosecond dynamics. Moreover, the hydrodynamic framework including collisional processes can be easily extended for other source terms and then applied to follow relaxation of any finite non-isothermal multi-component nanoplasma with its components relaxed into local thermodynamic equilibrium.
Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.
2012-03-01
Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because they are expensive, such tests are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.
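The hierarchical idea in the abstract above, characterizing aleatoric variability at the component level and propagating it to a system-level prediction, can be illustrated with a minimal Monte Carlo sketch. This is not the paper's Bayes-network framework; the component distributions and the system response function below are invented for illustration:

```python
import random
import statistics

def system_response(stiffness, damping):
    # Hypothetical system-level model assembled from component-level quantities.
    return stiffness / (1.0 + damping)

def propagate(n=20000, seed=1):
    # Sample component-level aleatoric variability (as characterized from
    # component tests) and push it through the system model.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        stiffness = rng.gauss(100.0, 5.0)  # assumed component distribution
        damping = rng.gauss(0.25, 0.02)    # assumed component distribution
        samples.append(system_response(stiffness, damping))
    return statistics.mean(samples), statistics.stdev(samples)

mean, sd = propagate()
```

The spread of the output samples is the uncertainty in the system-level prediction attributable to component-level variability; a Bayes network generalizes this by also conditioning on whatever subsystem test data are available.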
Pedler, Ashley; Kamper, Steven J; Sterling, Michele
2016-08-01
The fear avoidance model (FAM) has been proposed to explain the development of chronic disability in a variety of conditions including whiplash-associated disorders (WADs). The FAM does not account for symptoms of posttraumatic stress and sensory hypersensitivity, which are associated with poor recovery from whiplash injury. The aim of this study was to explore a model for the maintenance of pain and related disability in people with WAD including symptoms of PTSD, sensory hypersensitivity, and FAM components. The relationship between individual components in the model and disability and how these relationships changed over the first 12 weeks after injury were investigated. We performed a longitudinal study of 103 (74 female) patients with WAD. Measures of pain intensity, cold and mechanical pain thresholds, symptoms of posttraumatic stress, pain catastrophising, kinesiophobia, and fear of cervical spine movement were collected within 6 weeks of injury and at 12 weeks after injury. Mixed-model analysis using Neck Disability Index (NDI) scores and average 24-hour pain intensity as the dependent variables revealed that overall model fit was greatest when measures of fear of movement, posttraumatic stress, and sensory hypersensitivity were included. The interactive effects of time with catastrophising and time with fear of activity of the cervical spine were also included in the best model for disability. These results provide preliminary support for the addition of neurobiological and stress system components to the FAM to explain poor outcome in patients with WAD.
Facione, N C
1993-03-01
The Triandis model of social behavior offers exceptional promise to nurse researchers whose goal is to achieve cultural sensitivity in their research investigations. The model includes six components: consequential beliefs, affect, social influences, previous behavioral habits, physiologic arousal, and facilitating environmental resources. A directed methodology to include culture-relevant items in the measurement of each of these model components allows researchers to capture the diverse explanations of health and illness behavior that might pertain in diverse populations. Researchers utilizing the model can achieve theory-based explanations of differences they observe by gender, race/ethnicity, social class, and sexual orientation. The Triandis model can help studies target variables for future intervention research, as well as highlight areas for needed political action to equalize access to and delivery of nursing care.
Markstrom, Steven L.; Niswonger, Richard G.; Regan, R. Steven; Prudic, David E.; Barlow, Paul M.
2008-01-01
The need to assess the effects of variability in climate, biota, geology, and human activities on water availability and flow requires the development of models that couple two or more components of the hydrologic cycle. An integrated hydrologic model called GSFLOW (Ground-water and Surface-water FLOW) was developed to simulate coupled ground-water and surface-water resources. The new model is based on the integration of the U.S. Geological Survey Precipitation-Runoff Modeling System (PRMS) and the U.S. Geological Survey Modular Ground-Water Flow Model (MODFLOW). Additional model components were developed, and existing components were modified, to facilitate integration of the models. Methods were developed to route flow among the PRMS Hydrologic Response Units (HRUs) and between the HRUs and the MODFLOW finite-difference cells. This report describes the organization, concepts, design, and mathematical formulation of all GSFLOW model components. An important aspect of the integrated model design is its ability to conserve water mass and to provide comprehensive water budgets for a location of interest. This report includes descriptions of how water budgets are calculated for the integrated model and for individual model components. GSFLOW provides a robust modeling system for simulating flow through the hydrologic cycle, while allowing for future enhancements to incorporate other simulation techniques.
Stigson, Helena; Krafft, Maria; Tingvall, Claes
2008-10-01
To evaluate if the Swedish Road Administration (SRA) model for a safe road transport system, which includes the interaction between the road user, the vehicle, and the road, could be used to classify fatal car crashes according to some safety indicators, and to present a development of the model to better identify system weaknesses. Real-life crashes with a fatal outcome were classified according to the vehicle's safety rating by Euro NCAP (European New Car Assessment Programme) and fitment of ESC (Electronic Stability Control). For each crash, the road was also classified according to EuroRAP (European Road Assessment Programme) criteria, and human behavior in terms of speeding, seat belt use, and driving under the influence of alcohol. Each crash was compared with the model criteria to identify components that might have contributed to the fatal outcome. All fatal crashes in which a car occupant was killed that occurred in Sweden during 2004 were included: in all, 215 crashes with 248 fatalities. The data were collected from the in-depth fatal crash data of the SRA. It was possible to classify 93% of the fatal car crashes according to the SRA model. A number of shortcomings in the criteria were identified, since the model did not address rear-end or animal collisions or collisions with stationary/parked vehicles or trailers (18 out of 248 cases). Using the further developed model, it was possible to identify that most of the crashes occurred when two or all three components interacted (in 85 of the total 230 cases). Noncompliance with safety criteria for the road user, the vehicle, and the road led to fatal outcomes in 43, 27, and 75 cases, respectively. The SRA model was found to be useful for classifying fatal crashes but needs to be further developed to identify how the components interact and thereby identify weaknesses in the road traffic system.
This developed model might be a tool to systematically identify which of the components are linked to fatal outcome. In the presented study, fatal outcomes were mostly related to an interaction between the three components: the road, the vehicle, and the road user. Of the three components, the road was the one that was most often linked to a fatal outcome.
A Model-Driven, Science Data Product Registration Service
NASA Astrophysics Data System (ADS)
Hardman, S.; Ramirez, P.; Hughes, J. S.; Joyner, R.; Cayanan, M.; Lee, H.; Crichton, D. J.
2011-12-01
The Planetary Data System (PDS) has undertaken an effort to overhaul the PDS data architecture (including the data model, data structures, data dictionary, etc.) and to deploy an upgraded software system (including data services, a distributed data catalog, etc.) that fully embraces the PDS federation as an integrated system while taking advantage of modern innovations in information technology (including networking capabilities, processing speeds, and software breakthroughs). A core component of this new system is the Registry Service, which provides functionality for tracking, auditing, locating, and maintaining artifacts within the system. These artifacts range from data files and label files to schemas, dictionary definitions for objects and elements, documents, and services. This service offers a single reference implementation of the registry capabilities detailed in the Consultative Committee for Space Data Systems (CCSDS) Registry Reference Model White Book. The CCSDS Reference Model in turn relies heavily on the Electronic Business using eXtensible Markup Language (ebXML) standards for registry services and the registry information model, managed by the OASIS consortium. Registries are pervasive components in most information systems. For example, data dictionaries, service registries, LDAP directory services, and even databases provide registry-like services. These all keep an account of informational items used in large-scale information systems, ranging from data values such as names and codes to vocabularies, services, and software components. The problem is that many of these registry-like services were designed with their own data models tied to the specific type of artifact they track. Additionally, these services each have their own specific interface for interacting with the service.
This Registry Service implements the data model specified in the ebXML Registry Information Model (RIM) specification, which supports the artifacts above as well as offering the flexibility to support customer-defined artifacts. Key features of the Registry Service include:
- Model-based configuration specifying customer-defined artifact types, the metadata attributes to capture for each artifact type, and the supported associations and classification schemes.
- A REST-based external interface accessible via the Hypertext Transfer Protocol (HTTP).
- Federation of Registry Service instances, allowing associations between registered artifacts across registries as well as queries for artifacts across those same registries. A federation also enables features such as replication and synchronization if desired for a given deployment.
In addition to its use as a core component of the PDS, the generic implementation of the Registry Service facilitates its applicability as a core component in any science data archive or science data system.
Bio-chemo-mechanical models of vascular mechanics
Kim, Jungsil; Wagenseil, Jessica E.
2014-01-01
Models of vascular mechanics are necessary to predict the response of an artery under a variety of loads, for complex geometries, and in pathological adaptation. Classic constitutive models for arteries are phenomenological and the fitted parameters are not associated with physical components of the wall. Recently, microstructurally-linked models have been developed that associate structural information about the wall components with tissue-level mechanics. Microstructurally-linked models are useful for correlating changes in specific components with pathological outcomes, so that targeted treatments may be developed to prevent or reverse the physical changes. However, most treatments, and many causes, of vascular disease have chemical components. Chemical signaling within cells, between cells, and between cells and matrix constituents affects the biology and mechanics of the arterial wall in the short- and long-term. Hence, bio-chemo-mechanical models that include chemical signaling are critical for robust models of vascular mechanics. This review summarizes bio-mechanical and bio-chemo-mechanical models with a focus on large elastic arteries. We provide applications of these models and challenges for future work. PMID:25465618
Glass Transition Temperature- and Specific Volume- Composition Models for Tellurite Glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, Brian J.; Vienna, John D.
This report provides models for predicting composition-properties for tellurite glasses, namely specific gravity and glass transition temperature. Included are the partial specific coefficients for each model, the component validity ranges, and model fit parameters.
What is the "Clim-Likely" aerosol product?
Atmospheric Science Data Center
2014-12-08
... identifying a range of components and mixtures for the MISR Standard Aerosol Retrieval Algorithm climatology, and as one standard against ... retrieval results. Six component aerosols included in the model were medium and coarse mode mineral dust, sulfate, sea salt, black ...
NASA Astrophysics Data System (ADS)
Kovács, G.
2009-09-01
The current status of (the lack of) understanding of the Blazhko effect is reviewed. We focus mostly on the various ways in which models fail and touch upon observational issues only to the degree needed for the theoretical background. Particular attention is paid to models based on radial mode resonances, since they seem not to have been fully explored yet, especially when possible non-standard effects (e.g., heavy-element enhancement) are considered. To aid further modeling efforts, we stress the need for accurate time-series spectral line analysis to reveal any possible non-radial component(s) and thereby allow non-radial modes to be included in (or excluded from) explanations of the Blazhko phenomenon.
Joint Spatio-Temporal Shared Component Model with an Application in Iran Cancer Data
Mahaki, Behzad; Mehrabi, Yadollah; Kavousi, Amir; Schmid, Volker J
2018-06-25
Background: Among the proposals for joint disease mapping, the shared component model has become more popular. Another advance to strengthen inference from disease data is the extension of purely spatial models to include a time aspect. We aim to combine the idea of multivariate shared components with spatio-temporal modelling in a joint disease mapping model and apply it to incidence rates of seven prevalent cancers in Iran, which together account for approximately 50% of all cancers. Methods: In the proposed model, each component is shared by a different subset of diseases, spatial and temporal trends are considered for each component, and the relative weight of these trends for each component for each relevant disease can be estimated. Results: For esophagus and stomach cancers, the Northern provinces were the high-risk area. For colorectal cancer, Gilan, Semnan, Fars, Isfahan, Yazd, and East-Azerbaijan were the highest-risk provinces. For bladder and lung cancer, the northwest was the highest-risk area. For prostate and breast cancers, Isfahan, Yazd, Fars, Tehran, Semnan, Mazandaran, and Khorasane-Razavi were the highest-risk areas. The smoking component, shared by esophagus, stomach, bladder, and lung cancers, had the largest effect in Gilan, Mazandaran, Chaharmahal and Bakhtiari, Kohgilouyeh and Boyerahmad, Ardebil, and Tehran provinces, in turn. For the overweight and obesity component, shared by esophagus, colorectal, prostate, and breast cancers, the largest effect was found for Tehran, Khorasane-Razavi, Semnan, Yazd, Isfahan, Fars, Mazandaran, and Gilan, in turn. For the low physical activity component, shared by colorectal and breast cancers, North-Khorasan, Ardebil, Golestan, Ilam, Khorasane-Razavi, and South-Khorasan had the largest effects, in turn. The smoking component is significantly more important for stomach than for esophagus, bladder, and lung cancer. The overweight and obesity component had a significantly larger effect for colorectal than for esophagus cancer.
Conclusions: The presented model is a valuable tool for modelling geographical and temporal variation among diseases and has some potentially useful features and benefits over other joint models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, Jacob G.
2013-01-11
Partial molar properties are the changes in a property of a mixture that occur when the mole fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Because more than one component fraction changes at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property of that constituent. Plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH){sub 4}-H{sub 2}O system, a simplified analogue of Hanford nuclear waste. The partial molar properties of H{sub 2}O, NaOH, and NaAl(OH){sub 4} are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.
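The correspondence between model coefficients and partial molar properties can be illustrated on synthetic data: for an ideally (linearly) blending property, a no-intercept least-squares fit of the property against the mole fractions recovers the partial molar values directly as the coefficients. A sketch with invented heat-capacity values, not the Hanford data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical partial molar heat capacities, J/(mol K), for three components.
true_cp = np.array([75.3, 87.0, 120.0])

# Random mixture compositions: mole fractions of each sample sum to 1.
X = rng.dirichlet(np.ones(3), size=50)
# Measured property = ideal linear blend + measurement noise.
y = X @ true_cp + rng.normal(0.0, 0.1, size=50)

# No-intercept least-squares fit: the coefficients estimate the
# partial molar properties of the three components.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Real mixtures are generally non-ideal, which is where the full CSLM (with interaction terms and the slope interpretation above) earns its keep.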
Service Modeling Language Applied to Critical Infrastructure
NASA Astrophysics Data System (ADS)
Baldini, Gianmarco; Fovino, Igor Nai
The modeling of dependencies in complex infrastructure systems is still a very difficult task. Many methodologies have been proposed, but a number of challenges remain, including the definition of the right level of abstraction, the presence of different views on the same critical infrastructure, and how to adequately represent the temporal evolution of systems. We propose a modeling methodology where dependencies are described in terms of the services offered by the critical infrastructure and its components. The model provides a clear separation between services and the underlying organizational and technical elements, which may change in time. The model uses the Service Modeling Language proposed by the World Wide Web Consortium (W3C) to describe critical infrastructure in terms of interdependent service nodes, including constraints, behavior, information flows, relations, rules, and other features. Each service node is characterized by its technological, organizational, and process components. The model is then applied to a real case of an ICT system for user authentication.
The Quark's Model and Confinement
ERIC Educational Resources Information Center
Novozhilov, Yuri V.
1977-01-01
Quarks are elementary particles considered to be components of the proton, the neutron, and others. This article presents the quark model as a mathematical concept. Also discussed are gluons and bag models. A bibliography is included. (MA)
The anatomy of group dysfunction.
Hayes, David F
2014-04-01
The dysfunction of the radiology group has 2 components: (1) the thinking component-the governance structure of the radiology group; how we manage the group; and (2) the structural component-the group's business model and its conflict with the partner's personal business model. Of the 2 components, governance is more important. Governance must be structured on classic, immutable business management principles. The structural component, the business model, is not immutable. In fact, it must continually change in response to the marketplace. Changes in the business model should occur only if demanded or permitted by the marketplace; instituting changes for other reasons, including personal interests or deficient knowledge of the deciders, is fundamentally contrary to the long-term interests of the group and its owners. First, we must learn basic business management concepts to appreciate the function and necessity of standard business models and standard business governance. Peter Drucker's The Effective Executive is an excellent primer on the subjects of standard business practices and the importance of a functional, authorized, and fully accountable chief executive officer. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Khlaisang, Jintavee
2010-01-01
The purpose of this study was to investigate proper website and courseware for e-learning in higher education. Methods used in this study included the data collection, the analysis surveys, the experts' in-depth interview, and the experts' focus group. Results indicated that there were 16 components for website, as well as 16 components for…
What Is the Right RFID for Your Process?
2006-01-30
Support Model for Valuing Proposed Improvements in Component Reliability. June 2005. NPS-PM-05-007 Dillard, John T., and Mark E. Nissen...Arlington, VA. 2005. Kang, Keebom, Ken Doerr, Uday Apte, and Michael Boudreau. “Decision Support Models for Valuing Improvements in Component...courses in the Executive and Full-time MBA programs. Areas of Uday’s research interests include managing service operations, supply chain
Predicting the tensile strength of compacted multi-component mixtures of pharmaceutical powders.
Wu, Chuan-Yu; Best, Serena M; Bentham, A Craig; Hancock, Bruno C; Bonfield, William
2006-08-01
Pharmaceutical tablets are generally produced by compacting a mixture of several ingredients, including active drugs and excipients. It is of practical importance that the properties of such tablets can be predicted from those of the constituent components. The purpose of this work is to develop a theoretical model that can predict the tensile strength of compacted multi-component pharmaceutical mixtures. The model was derived on the basis of the Ryshkewitch-Duckworth equation, originally proposed for porous materials. The required input parameters for the model are the relative density or solid fraction (the ratio of the volume of solid material to the total volume of the tablet) of the multi-component tablets and parameters associated with the constituent single-component powders, which are readily accessible. The tensile strength of tablets made of various powder blends at different relative densities was also measured using diametrical compression. It has been shown that the tensile strength of the multi-component powder compacts is primarily a function of the solid fraction. Excellent agreement between predictions and experimental data was obtained for tablets of binary, ternary, and four-component blends of some widely used pharmaceutical excipients. It has been demonstrated that the proposed model can accurately predict the tensile strength of multi-component pharmaceutical tablets. Thus, the model will be a useful design tool for formulation engineers in the pharmaceutical industry.
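As a rough illustration of this class of model, the sketch below assumes the Ryshkewitch-Duckworth form sigma = sigma0 * exp(-k * porosity) for each single-component powder and a log-linear blending rule over mass fractions. Both the parameter values and the blending rule are invented for illustration and are not necessarily the paper's exact formulation:

```python
import math

# Single-component Ryshkewitch-Duckworth parameters (hypothetical values):
# sigma0 = tensile strength at zero porosity (MPa), k = bonding constant.
components = {
    "MCC":     {"sigma0": 38.0, "k": 6.0,  "frac": 0.5},
    "lactose": {"sigma0": 12.0, "k": 9.0,  "frac": 0.3},
    "starch":  {"sigma0": 8.0,  "k": 11.0, "frac": 0.2},
}

def tensile_strength(solid_fraction):
    """Log-linear mass-fraction blend of component RD equations (an assumption)."""
    porosity = 1.0 - solid_fraction
    log_sigma = sum(c["frac"] * (math.log(c["sigma0"]) - c["k"] * porosity)
                    for c in components.values())
    return math.exp(log_sigma)
```

Consistent with the paper's central finding, the predicted strength here depends on composition only through fixed per-component parameters, with the solid fraction driving the exponential term.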
Atmospheric Constituents in GEOS-5: Components for an Earth System Model
NASA Technical Reports Server (NTRS)
Pawson, Steven; Douglass, Anne; Duncan, Bryan; Nielsen, Eric; Ott, Leslie; Strode, Sarah
2011-01-01
The GEOS-5 model is being developed for weather and climate processes, including the implementation of "Earth System" components. While the stratospheric chemistry capabilities are mature, we are presently extending these to include predictions of tropospheric composition and chemistry, including CO2, CH4, CO, nitrogen species, etc. (Aerosols are also implemented, but are beyond the scope of this paper.) This work will give an overview of our chemistry modules, the approaches taken to represent surface emissions and uptake of chemical species, and some studies of the sensitivity of the atmospheric circulation to changes in atmospheric composition. Results are obtained through focused experiments and multi-decadal simulations.
Revisiting the pole tide for and from satellite altimetry
NASA Astrophysics Data System (ADS)
Desai, Shailen; Wahr, John; Beckley, Brian
2015-12-01
Satellite altimeter sea surface height observations include the geocentric displacements caused by the pole tide, namely the response of the solid Earth and oceans to polar motion. Most users of these data remove these effects using a model that was developed more than 20 years ago. We describe two improvements to the pole tide model for satellite altimeter measurements. Firstly, we recommend an approach that improves the model for the response of the oceans by including the effects of self-gravitation, loading, and mass conservation. Our recommended approach also specifically includes the previously ignored displacement of the solid Earth due to the load of the ocean response, and includes the effects of geocenter motion. Altogether, this improvement amplifies the modeled geocentric pole tide by 15 %, or up to 2 mm of sea surface height displacement. We validate this improvement using two decades of satellite altimeter measurements. Secondly, we recommend that the altimetry pole tide model exclude geocentric sea surface displacements resulting from the long-term drift in polar motion. The response to this particular component of polar motion requires a more rigorous approach than is used by conventional models. We show that erroneously including the response to this component of polar motion in the pole tide model impacts interpretation of regional sea level rise by ± 0.25 mm/year.
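The centimeter-level size of the effect can be sanity-checked from the centrifugal-potential perturbation alone. The toy calculation below uses an assumed lumped response factor (GAMMA is illustrative; the actual Love-number combination and sign conventions depend on the adopted standard):

```python
import math

OMEGA = 7.292115e-5    # Earth rotation rate (rad/s)
A     = 6.378137e6     # Earth equatorial radius (m)
G     = 9.81           # surface gravity (m/s^2)
GAMMA = 0.70           # assumed lumped Love-number response factor (illustrative)

def pole_tide_height(lat_deg, lon_deg, m1_arcsec, m2_arcsec):
    """Rough equilibrium pole tide height (m) for wobble components (m1, m2).

    Based on the centrifugal-potential perturbation
      dV = -(OMEGA^2 * A^2 / 2) * sin(2*colat) * (m1*cos(lon) + m2*sin(lon)),
    divided by g and scaled by GAMMA. Sign conventions and the exact
    Love-number combination differ between published standards.
    """
    as2rad = math.pi / (180.0 * 3600.0)
    colat = math.radians(90.0 - lat_deg)
    lam = math.radians(lon_deg)
    dV = -(OMEGA**2 * A**2 / 2.0) * math.sin(2.0 * colat) * (
        m1_arcsec * as2rad * math.cos(lam) + m2_arcsec * as2rad * math.sin(lam))
    return GAMMA * dV / G

h = pole_tide_height(45.0, 0.0, 0.3, 0.0)   # a typical ~0.3 arcsec wobble
```

The result is on the order of a centimeter, which is why the 15% amplitude change and the drift-handling recommendation above matter for sea level interpretation.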
Dirichlet Component Regression and its Applications to Psychiatric Data
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2011-01-01
Summary: We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms, which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each, in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook’s distance, and a local jackknife influence metric. PMID:22058582
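The core of such a model is the Dirichlet log-likelihood with component-wise log-linear predictors for the concentration parameters. A self-contained sketch on simulated data (the coefficients and the exponential link are illustrative choices, not the paper's PANSS analysis):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
n, K = 300, 3
x = rng.normal(size=n)                      # one covariate
true_b0 = np.array([1.5, 1.2, 1.0])         # hypothetical intercepts
true_b1 = np.array([0.0, 0.6, -0.6])        # hypothetical covariate effects
alpha = np.exp(true_b0 + np.outer(x, true_b1))
Y = np.array([rng.dirichlet(a) for a in alpha])   # observed component shares
Y = np.clip(Y, 1e-12, 1.0)                  # guard against log(0)

def negloglik(params):
    b0, b1 = params[:K], params[K:]
    a = np.exp(b0 + np.outer(x, b1))        # concentration parameters per subject
    # Dirichlet log density: logGamma(sum a) - sum logGamma(a) + sum (a-1) log y
    ll = (gammaln(a.sum(axis=1)) - gammaln(a).sum(axis=1)
          + ((a - 1.0) * np.log(Y)).sum(axis=1))
    return -ll.sum()

fit = minimize(negloglik, np.zeros(2 * K), method="BFGS")
b0_hat, b1_hat = fit.x[:K], fit.x[K:]
```

A positive fitted slope for a component means the covariate shifts relative contribution toward that component, which is exactly the kind of effect the method is designed to read off.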
Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A
2006-06-01
When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
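Under the one-way random effects model, the variance of the estimated group mean with k workers and n replicates each is sigma_B^2/k + sigma_W^2/(k*n), which makes the "many workers, few replicates" result easy to verify numerically. A sketch with illustrative variance components and a fixed measurement budget:

```python
def var_mean(sigma2_b, sigma2_w, k_workers, n_reps):
    """Variance of the estimated group mean under one-way random effects ANOVA."""
    return sigma2_b / k_workers + sigma2_w / (k_workers * n_reps)

budget = 60                               # total number of measurements
designs = [(k, budget // k) for k in (5, 10, 15, 20, 30)]
# With illustrative between-worker and within-worker variances of 0.5 and 1.0,
# spreading the budget over more workers always tightens the estimate.
variances = {(k, n): var_mean(0.5, 1.0, k, n) for k, n in designs}
```

Note that this ranking holds for the one-way model only; with a shared temporal component (the paper's second model), the optimal allocation depends on the relative sizes of the variance components.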
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
Model-based and performance-based control techniques are combined in an electrical robotic control system. Two distinct design philosophies are merged into a single control system whose control law formulation includes two separate components, each of which yields a signal component that is combined into a total command signal for the system. The two components are a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller that generates an adaptive control signal to complement the nominal feedforward signal.
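The control-law split described above can be sketched as a one-degree-of-freedom simulation: a model-based feedforward term computed from nominal plant parameters plus a feedback PID term on the tracking error. All masses, friction coefficients, and gains below are invented for illustration and are not from the patent:

```python
m_true, b_true = 2.0, 3.0         # actual plant: mass (kg), viscous friction
m_nom,  b_nom  = 1.8, 2.5         # nominal model used by the feedforward term
kp, ki, kd = 400.0, 20.0, 40.0    # PID feedback gains
dt, T = 0.001, 2.0                # time step and horizon (s)

x = v = integ = 0.0
x_des = 1.0                       # step setpoint (desired velocity/accel = 0)
prev_e = x_des - x                # avoids a derivative kick on the first step
for _ in range(int(round(T / dt))):
    e = x_des - x
    integ += e * dt
    de = (e - prev_e) / dt
    prev_e = e
    u_ff = b_nom * v              # model-based part: cancel (nominal) friction
    u_fb = kp * e + ki * integ + kd * de   # performance-based PID part
    a = ((u_ff + u_fb) - b_true * v) / m_true   # true plant dynamics
    v += a * dt
    x += v * dt
```

Because the nominal model is imperfect (m_nom != m_true, b_nom != b_true), the feedback term absorbs the residual, which is the point of the two-component design.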
Modeling of NASA's 30/20 GHz satellite communications system
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Maples, B. W.; Stevens, G. A.
1984-01-01
NASA is in the process of developing technology for a 30/20 GHz satellite communications link. Currently, hardware is being assembled for a test transponder. A simulation package is being developed to study the link performance in the presence of interference and noise. This requires developing models for the components of the system. This paper describes the techniques used to model the components for which data are available. Results of experiments performed using these models are described. A brief overview of NASA's 30/20 GHz communications satellite program is also included.
Return-to-Work Within a Complex and Dynamic Organizational Work Disability System.
Jetha, Arif; Pransky, Glenn; Fish, Jon; Hettinger, Lawrence J
2016-09-01
Background: Return-to-work (RTW) within a complex organizational system can be associated with suboptimal outcomes. Purpose: To apply a sociotechnical systems perspective to investigate complexity in RTW; to utilize system dynamics modeling (SDM) to examine how feedback relationships between individual, psychosocial, and organizational factors make up the work disability system and influence RTW. Methods: SDMs were developed within two companies. Thirty stakeholders, including senior managers and frontline supervisors and workers, participated in model-building sessions. Participants were asked questions that elicited information about the structure of the work disability system, and their answers were translated into feedback loops. To parameterize the model, participants were asked to estimate the shape and magnitude of the relationships between key model components. Data from the published literature were also used to supplement participant estimates. Data were entered into a model created in the software program Vensim. Simulations were conducted to examine how financial-incentive and light-duty work disability-related policies, utilized by the participating companies, influenced RTW likelihood and preparedness. Results: The SDMs were multidimensional, including individual attitudinal characteristics, health factors, and organizational components. Among the causal pathways uncovered, psychosocial components including workplace social support, supervisor and co-worker pressure, and supervisor-frontline worker communication impacted RTW likelihood and preparedness. Interestingly, SDM simulations showed that work disability-related policies in both companies had a diminishing or opposing impact on RTW preparedness and likelihood. Conclusion: SDM provides a novel systems view of RTW. Policy and psychosocial component relationships within the system have important implications for RTW and may contribute to unanticipated outcomes.
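A system dynamics model of this kind reduces to stocks, flows, and feedback loops integrated over time. Below is a deliberately tiny stock-and-flow sketch (all variables and coefficients are hypothetical, loosely echoing the psychosocial components named above), not the Vensim models built in the study:

```python
# Toy SDM: RTW "preparedness" is a stock in [0, 1]; social support drives a
# goal-seeking buildup loop, supervisor pressure drives an erosion loop.
dt, T = 0.1, 52.0                 # weeks
preparedness = 0.2                # initial stock
support, pressure = 0.6, 0.3      # exogenous psychosocial inputs (hypothetical)

history = []
for _ in range(int(round(T / dt))):
    inflow = 0.10 * support * (1.0 - preparedness)   # balancing buildup loop
    outflow = 0.05 * pressure * preparedness         # erosion loop
    preparedness += (inflow - outflow) * dt          # Euler integration
    history.append(preparedness)
```

Even this two-loop toy settles at an equilibrium set by the loop strengths rather than by either input alone, which is the qualitative behavior that made the companies' policies produce unanticipated effects in the full models.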
Time Series Decomposition into Oscillation Components and Phase Estimation.
Matsuda, Takeru; Komaki, Fumiyasu
2017-02-01
Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model in a manner similar to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and frequencies of the oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of a neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and in detecting phase reset phenomena. We apply the proposed method to real data from fields such as astronomy, ecology, tidology, and neuroscience.
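The oscillator state-space model can be written with a damped rotation matrix as the state transition, and the phase read off the filtered two-dimensional state. A minimal single-oscillator sketch with a standard Kalman filter (the model parameters are assumed known here rather than estimated by empirical Bayes, and the arctan phase convention is one common choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n, f, fs = 500, 6.0, 100.0          # samples, oscillation freq (Hz), sampling rate
omega = 2.0 * np.pi * f / fs
a = 0.995                           # damping of the stochastic oscillator

# Simulate: latent 2-D oscillator x_t = a*R(omega)*x_{t-1} + noise; y = x1 + noise
R = a * np.array([[np.cos(omega), -np.sin(omega)],
                  [np.sin(omega),  np.cos(omega)]])
Q, r = 0.01 * np.eye(2), 0.5
x, xs, ys = np.zeros(2), [], []
for _ in range(n):
    x = R @ x + rng.multivariate_normal([0.0, 0.0], Q)
    xs.append(x.copy())
    ys.append(x[0] + rng.normal(0.0, np.sqrt(r)))
xs, ys = np.array(xs), np.array(ys)

# Kalman filter with the (assumed known) model; phase = atan2 of filtered state.
H = np.array([[1.0, 0.0]])
m, P = np.zeros(2), np.eye(2)
est, phase = [], []
for y in ys:
    m, P = R @ m, R @ P @ R.T + Q                  # predict
    S = H @ P @ H.T + r                            # innovation variance
    K = P @ H.T / S                                # Kalman gain
    m = m + (K * (y - H @ m)).ravel()              # update mean
    P = P - K @ H @ P                              # update covariance
    est.append(m[0])
    phase.append(np.arctan2(m[1], m[0]))
```

Repeating this with several oscillators stacked in one state vector, and estimating (a, omega, Q, r) per component, gives the full decomposition described in the abstract.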
Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.
Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis
2016-07-01
Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description on the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. At the end, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scope of decision-analytic models used in the economic evaluation of pharmacological treatments of AD. 
The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and its structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, there remain several areas of improvement that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.
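A decision-analytic model of the kind reviewed here is often a discounted Markov cohort simulation over disease-severity states. Below is a deliberately simplified sketch with invented transition probabilities, costs, and utilities (not taken from any of the reviewed studies):

```python
import numpy as np

states = ["mild", "moderate", "severe", "dead"]
# Annual transition probabilities (hypothetical, for illustration only).
P_nat = np.array([[0.70, 0.20, 0.05, 0.05],
                  [0.00, 0.65, 0.25, 0.10],
                  [0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])
P_trt = np.array([[0.80, 0.12, 0.03, 0.05],     # treatment slows progression
                  [0.00, 0.72, 0.18, 0.10],
                  [0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])
cost = np.array([5_000.0, 15_000.0, 40_000.0, 0.0])   # annual care cost per state
util = np.array([0.69, 0.53, 0.38, 0.0])              # annual utility per state
drug_cost = 2_000.0                                    # annual treatment cost

def run(P, extra_cost=0.0, years=10, disc=0.03):
    """Discounted total costs and QALYs for a cohort starting in 'mild'."""
    dist = np.array([1.0, 0.0, 0.0, 0.0])
    tot_c = tot_q = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t
        alive = dist[:3].sum()
        tot_c += d * (dist @ cost + extra_cost * alive)
        tot_q += d * (dist @ util)
        dist = dist @ P
    return tot_c, tot_q

c0, q0 = run(P_nat)
c1, q1 = run(P_trt, extra_cost=drug_cost)
icer = (c1 - c0) / (q1 - q0)        # incremental cost-effectiveness ratio
```

The review's criticisms map directly onto sketches like this one: real models need behavior and function domains beyond a single severity axis, justified transition data, and itemized cost components.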
NASA Astrophysics Data System (ADS)
Kamal, S.; Maslowski, W.; Roberts, A.; Osinski, R.; Cassano, J. J.; Seefeldt, M. W.
2017-12-01
The Regional Arctic System Model (RASM) has been developed and used to advance the current state of Arctic modeling and increase the skill of sea ice forecasts. RASM is a fully coupled, limited-area model that includes atmosphere, ocean, sea ice, land hydrology and runoff routing components, together with a flux coupler to exchange information among them. Boundary conditions are derived from the NCEP Climate Forecast System Reanalysis (CFSR) or ERA-Interim (ERA-I) for hindcast simulations, or from the NCEP Coupled Forecast System Model version 2 (CFSv2) for seasonal forecasts. We have used RASM to produce sea ice forecasts for September 2016 and 2017, as contributions to the Sea Ice Outlook (SIO) of the Sea Ice Prediction Network (SIPN). Each year, we produced three SIOs for the September minimum, initialized on June 1, July 1 and August 1. In 2016, predictions used a simple linear regression model to correct for systematic biases and included the mean September sea ice extent, the daily minimum and the week of the minimum. In 2017, we produced 12-member ensembles on June 1 and July 1, and a 28-member ensemble on August 1. The predictions for September 2017 included the pan-Arctic and regional Alaskan sea ice extent, and daily and monthly mean pan-Arctic maps of sea ice probability, concentration and thickness. No bias correction was applied to the 2017 forecasts. Finally, we will also discuss future plans for RASM forecasts, which include increased resolution for model components, ecosystem predictions with marine biogeochemistry extensions (mBGC) to the ocean and sea ice components, and the feasibility of optional boundary conditions using the Navy Global Environmental Model (NAVGEM).
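The bias-correction step described for the 2016 outlooks can be sketched as a simple ordinary-least-squares fit of observed against predicted extent over a hindcast period, then applied to a new raw forecast. All numbers below are hypothetical placeholders, not RASM output:

```python
import numpy as np

# Hypothetical hindcast pairs: model-predicted vs. observed September
# pan-Arctic sea ice extent (10^6 km^2); values are illustrative only.
pred = np.array([5.1, 4.8, 5.6, 4.3, 5.0])
obs = np.array([4.6, 4.4, 5.1, 3.9, 4.5])

# Fit observed = a * predicted + b over the hindcast years
a, b = np.polyfit(pred, obs, deg=1)

def correct(raw_forecast):
    """Apply the fitted systematic-bias correction to a new raw forecast."""
    return a * raw_forecast + b

corrected = correct(5.2)
```

With a slope below one, the correction pulls a high-biased raw forecast down toward the observed climatological relationship.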
VEEP - Vehicle Economy, Emissions, and Performance program
NASA Technical Reports Server (NTRS)
Heimburger, D. A.; Metcalfe, M. A.
1977-01-01
VEEP is a general-purpose discrete event simulation program being developed to study the performance, fuel economy, and exhaust emissions of a vehicle modeled as a collection of its separate components. It is written in SIMSCRIPT II.5. The purpose of this paper is to present the design methodology, describe the simulation model and its components, and summarize the preliminary results. Topics include chief programmer team concepts, the SDDL design language, program portability, user-oriented design, the program's user command syntax, the simulation procedure, and model validation.
NASA Astrophysics Data System (ADS)
Allard, R. A.; Campbell, T. J.; Edwards, K. L.; Smith, T.; Martin, P.; Hebert, D. A.; Rogers, W.; Dykes, J. D.; Jacobs, G. A.; Spence, P. L.; Bartels, B.
2014-12-01
The Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®) is an atmosphere-ocean-wave modeling system developed by the Naval Research Laboratory which can be configured to cycle regional forecasts/analysis models in single-model (atmosphere, ocean, and wave) or coupled-model (atmosphere-ocean, ocean-wave, and atmosphere-ocean-wave) modes. The model coupling is performed using the Earth System Modeling Framework (ESMF). The ocean component is the Navy Coastal Ocean Model (NCOM), and the wave components include Simulating WAves Nearshore (SWAN) and WaveWatch-III. NCOM has been modified to include wetting and drying, the effects of Stokes drift current, wave radiation stresses due to horizontal gradients of the momentum flux of surface waves, enhancement of bottom drag in shallow water, and enhanced vertical mixing due to Langmuir turbulence. An overview of the modeling system including ocean data assimilation and specification of boundary conditions will be presented. Results from a high-resolution (10-250m) modeling study from the Surfzone Coastal Oil Pathways Experiment (SCOPE) near Ft. Walton Beach, Florida in December 2013 will be presented. ®COAMPS is a registered trademark of the Naval Research Laboratory
Psychological Empowerment Among Urban Youth: Measurement Model and Associations with Youth Outcomes
Eisman, Andria B.; Zimmerman, Marc A.; Kruger, Daniel; Reischl, Thomas M.; Miller, Alison L.; Franzen, Susan P.; Morrel-Samuels, Susan
2016-01-01
Empowerment-based strategies have become a widely used method to address health inequities and promote social change. Few researchers, however, have tested theoretical models of empowerment, including multidimensional, higher-order models. We empirically test a multidimensional, higher-order model of psychological empowerment (PE), guided by Zimmerman's (1995) conceptual framework, which includes three components of PE: intrapersonal, interactional and behavioral. We also investigate whether PE is associated with positive and negative outcomes among youth. The sample included 367 middle school youth aged 11–16 (M = 12.71; SD = 0.91); 60% were female, 32% (n = 117) white youth, 46% (n = 170) African-American youth, and 22% (n = 80) identified as mixed race, Asian-American, Latino, Native American or another ethnic/racial group; schools reported 61–75% free/reduced lunch students. Our results indicated that each of the latent factors for the three PE components demonstrated a good fit with the data. Our results also indicated that these components loaded onto a higher-order PE factor (χ2 = 32.68, df = 22, p = 0.07; RMSEA: 0.04, 95% CI: 0.00, 0.06; CFI: 0.99). We found that the second-order PE factor was negatively associated with aggressive behavior and positively associated with prosocial engagement. Our results suggest that empowerment-focused programs would benefit from incorporating components addressing how youth think about themselves in relation to their social contexts (intrapersonal), understanding of the social and material resources needed to achieve specific goals (interactional), and actions taken to influence outcomes (behavioral). Our results also suggest that integrating the three components and promoting PE may help increase the likelihood of positive behaviors (e.g., prosocial involvement); we did not find an association between PE and aggressive behavior. Implications and future directions for empowerment research are discussed. PMID:27709632
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of grid-connected photovoltaic (PV) systems, which exhibits nonstationarity and randomness, an output power prediction model is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets for the prediction date, the time series data of output power on a similar day, with 15-minute intervals, are built. Second, the time series data of the output power are decomposed into a series of components, including intrinsic mode function components IMFn and a trend component Res, at different scales using EMD. A corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system are obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than either the single SVM prediction model or the EMD-SVM prediction model without optimization. PMID:28912803
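The decompose-predict-reconstruct structure of the method can be sketched with stand-ins: a moving-average split in place of EMD, and a one-parameter autoregressive predictor per component in place of the ABC-tuned SVM. This shows only the pipeline shape, not the paper's actual algorithms:

```python
import numpy as np

def decompose(series, window=4):
    """Stand-in for EMD: split the series into a smooth trend (the 'Res'
    role) and an oscillatory residual (the 'IMF' role)."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return series - trend, trend

def fit_ar1(component):
    """Per-component one-step predictor x[t+1] ~ a * x[t]
    (a trivial stand-in for the per-component SVM)."""
    x, y = component[:-1], component[1:]
    return x @ y / (x @ x + 1e-12)

def predict_next(series):
    """Decompose, predict each component one step ahead, then
    reconstruct the forecast by summing the component predictions."""
    parts = []
    for comp in decompose(series):
        a = fit_ar1(comp)
        parts.append(a * comp[-1])
    return sum(parts)

t = np.arange(64)
power = np.sin(2 * np.pi * t / 16) + 0.02 * t   # toy PV output series
forecast = predict_next(power)
```

The reconstruction-by-summation step mirrors the final stage of the EMD-based model, where each component's prediction is added back together.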
Aspects of the Cognitive Model of Physics Problem Solving.
ERIC Educational Resources Information Center
Brekke, Stewart E.
Various aspects of the cognitive model of physics problem solving are discussed in detail including relevant cues, encoding, memory, and input stimuli. The learning process involved in the recognition of familiar and non-familiar sensory stimuli is highlighted. Its four components include selection, acquisition, construction, and integration. The…
NASA Astrophysics Data System (ADS)
Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng
2016-11-01
Properly simulating hydrological processes in semi-arid areas remains challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel, rather than in series. In addition, the results show that parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process over semi-arid catchments is infiltration-excess surface runoff, a nonlinear component.
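The contrast between multiplicative and additive combination of parallel nonlinear components can be illustrated with two hypothetical runoff-generation functions; the functional forms below are invented for illustration and are not taken from the four models in the study:

```python
def infiltration_excess(rain, capacity=10.0):
    """Nonlinear component 1 (hypothetical form): fraction of rainfall
    becoming infiltration-excess runoff once intensity exceeds capacity."""
    excess = max(rain - capacity, 0.0)
    return excess / rain if rain > 0 else 0.0

def wetness(soil_moisture, max_storage=1.0):
    """Nonlinear component 2 (hypothetical form): wetter soils
    convert a larger share of rainfall to runoff."""
    return (soil_moisture / max_storage) ** 2

def runoff_multiplicative(rain, sm):
    # Parallel nonlinear components combined by multiplication,
    # the strategy the study found most suitable
    return rain * infiltration_excess(rain) * wetness(sm)

def runoff_additive(rain, sm):
    # The same components combined by (averaged) addition, for contrast
    return rain * 0.5 * (infiltration_excess(rain) + wetness(sm))
```

Because both component outputs lie in [0, 1], the multiplicative combination is never larger than the additive one, and it produces no runoff at all unless the infiltration-excess threshold is crossed, which matches the threshold-dominated behaviour of semi-arid catchments.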
Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction
NASA Astrophysics Data System (ADS)
Aarts, Fides; Jonsson, Bengt; Uijen, Johan
In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.
Development of a Rubber-Based Product Using a Mixture Experiment: A Challenging Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaya, Yahya; Piepel, Gregory F.; Caniyilmaz, Erdal
2013-07-01
Many products used in daily life are made by blending two or more components. The properties of such products typically depend on the relative proportions of the components. Experimental design, modeling, and data analysis methods for mixture experiments provide for efficiently determining the component proportions that will yield a product with desired properties. This article presents a case study of the work performed to develop a new rubber formulation for an o-ring (a circular gasket) with requirements specified on 10 product properties. Each step of the study is discussed, including: 1) identifying the objective of the study and requirements for properties of the o-ring, 2) selecting the components to vary and specifying the component constraints, 3) constructing a mixture experiment design, 4) measuring the responses and assessing the data, 5) developing property-composition models, 6) selecting the new product formulation, and 7) confirming the selected formulation in manufacturing. The case study includes some challenging and new aspects, which are discussed in the article.
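Property-composition models for mixture experiments are commonly fitted as Scheffé polynomials, in which the terms are the component proportions and their cross-products, with no intercept because proportions sum to one. A minimal sketch for a three-component mixture on a simplex-centroid design, with invented coefficients rather than the article's rubber data:

```python
import numpy as np

def scheffe_design(X):
    """Second-degree Scheffé model terms for a 3-component mixture:
    x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept; proportions sum to 1)."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Simplex-centroid design: vertices, edge midpoints, overall centroid
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3]], dtype=float)

true_b = np.array([10.0, 8.0, 5.0, 4.0, -2.0, 6.0])  # hypothetical coefficients
y = scheffe_design(X) @ true_b                        # noise-free toy responses

# Least-squares fit of the property-composition model
b_hat, *_ = np.linalg.lstsq(scheffe_design(X), y, rcond=None)
```

The fitted blending coefficients `b_hat` recover `true_b` exactly here because the toy responses are noise-free; with measured responses, the same fit yields the property-composition model used to choose the formulation.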
The GEMS Model of Volunteer Administration.
ERIC Educational Resources Information Center
Culp, Ken, III; Deppe, Catherine A.; Castillo, Jaime X.; Wells, Betty J.
1998-01-01
Describes GEMS, a spiral model that profiles volunteer administration. Components include Generate, Educate, Mobilize, and Sustain, four sets of processes that span volunteer recruitment and selection to retention or disengagement. (SK)
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Ma, Hsin-I; Hsu, Hsian-He; Juan, Chun-Jung
2018-01-01
We estimated the volume of vestibular schwannomas with an ice cream cone formula using thin-slice magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey's, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included the intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were no less than 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with an APE of 20.3% and 29.2%, respectively. The two-component ice cream cone method, ellipsoidal formula, and Linskey's formula significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey's formulas, allow for more accurate estimation of vestibular schwannoma volume than all one-component formulas.
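The one-component formulas are standard solid-geometry estimates from orthogonal diameters, and the "ice cream cone" idea treats the tumor as a rounded cisternal part plus a tapered intracanalicular part. The sketch below uses an ellipsoidal "scoop" plus a cone as an illustrative two-component geometry; the paper's exact formula is not reproduced here and may differ:

```python
from math import pi

def cuboid(a, b, c):
    """One-component cuboidal estimate from three orthogonal diameters."""
    return a * b * c

def ellipsoid(a, b, c):
    """One-component ellipsoidal estimate: V = pi * a * b * c / 6."""
    return pi * a * b * c / 6.0

def ice_cream_cone(a, b, c, iac_length, iac_base):
    """Illustrative two-component estimate: an ellipsoidal 'scoop' for the
    cisternal part plus a cone for the intracanalicular part (the 'cone'
    has base diameter iac_base and length iac_length). Hypothetical
    geometry, not the paper's exact formula."""
    cone = pi * (iac_base / 2.0) ** 2 * iac_length / 3.0
    return ellipsoid(a, b, c) + cone
```

For the same diameters, the cuboidal estimate exceeds the ellipsoidal one by a factor of 6/pi (about 1.9), consistent with the direction of overestimation the study reports for cuboidal formulas.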
util_2comp: Planck-based two-component dust model utilities
NASA Astrophysics Data System (ADS)
Meisner, Aaron
2014-11-01
The util_2comp software utilities generate predictions of far-infrared Galactic dust emission and reddening based on a two-component dust emission model fit to Planck HFI, DIRBE and IRAS data from 100 GHz to 3000 GHz. These predictions and the associated dust temperature map have angular resolution of 6.1 arcminutes and are available over the entire sky. Implementations in IDL and Python are included.
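A two-component dust emission model of this kind sums two modified blackbodies (a cold and a warm component), each a power-law emissivity times a Planck function. The sketch below shows that functional form; the temperatures, emissivity indices, and amplitudes are illustrative placeholders, not the fitted Planck/DIRBE/IRAS values:

```python
import numpy as np

H, K, C = 6.62607e-34, 1.38065e-23, 2.99792e8  # SI: Planck, Boltzmann, c

def planck(nu, T):
    """Planck function B_nu(T) in SI units."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def two_component_emission(nu, T1, T2, beta1, beta2, q1, q2, nu0=3.0e12):
    """Sum of two modified blackbodies: each component is an amplitude
    times a power-law emissivity (nu/nu0)**beta times B_nu(T)."""
    return (q1 * (nu / nu0) ** beta1 * planck(nu, T1)
            + q2 * (nu / nu0) ** beta2 * planck(nu, T2))

# Evaluate over 100-3000 GHz, the fit range quoted in the abstract;
# all parameter values below are hypothetical.
nu = np.linspace(100e9, 3000e9, 50)
sed = two_component_emission(nu, T1=9.7, T2=15.7, beta1=1.7, beta2=2.7,
                             q1=0.0012, q2=1.0)
```

Given a fitted temperature map and component amplitudes per line of sight, evaluating this expression at any frequency in the fit range yields the predicted emission, which is the role the util_2comp utilities play.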
Callahan, Damien M.; Umberger, Brian R.; Kent-Braun, Jane A.
2013-01-01
The pathway of voluntary joint torque production includes motor neuron recruitment and rate-coding, sarcolemmal depolarization and calcium release by the sarcoplasmic reticulum, force generation by motor proteins within skeletal muscle, and force transmission by tendon across the joint. The direct source of energetic support for this process is ATP hydrolysis. It is possible to examine portions of this physiologic pathway using various in vivo and in vitro techniques, but an integrated view of the multiple processes that ultimately impact joint torque remains elusive. To address this gap, we present a comprehensive computational model of the combined neuromuscular and musculoskeletal systems that includes novel components related to intracellular bioenergetics function. Components representing excitatory drive, muscle activation, force generation, metabolic perturbations, and torque production during voluntary human ankle dorsiflexion were constructed, using a combination of experimentally-derived data and literature values. Simulation results were validated by comparison with torque and metabolic data obtained in vivo. The model successfully predicted peak and submaximal voluntary and electrically-elicited torque output, and accurately simulated the metabolic perturbations associated with voluntary contractions. This novel, comprehensive model could be used to better understand the impact of global effectors such as age and disease on various components of the neuromuscular system, and ultimately, voluntary torque output. PMID:23405245
Contribution of the GOCE gradiometer components to regional gravity solutions
NASA Astrophysics Data System (ADS)
Naeimi, Majid; Bouman, Johannes
2017-05-01
The contribution of the GOCE gravity gradients to regional gravity field solutions is investigated in this study. We employ radial basis functions to recover the gravity field on regional scales over the Amazon and Himalayas as our test regions. In the first step, four individual solutions based on the more accurate gravity gradient components Txx, Tyy, Tzz and Txz are derived. The Tzz component gives a better solution than the other single-component solutions, despite Tzz being less accurate than Txx and Tyy. Furthermore, we determine five more solutions based on several selected combinations of the gravity gradient components, including a combined solution using all four gradient components. The Tzz and Tyy components are shown to be the main contributors in all combined solutions, whereas Txz adds the least value to the regional gravity solutions. We also investigate the contribution of the regularization term. We show that the contribution of the regularization decreases significantly as more gravity gradients are included. For the solution using all gravity gradients, the regularization term contributes about 5 per cent of the total solution. Finally, we demonstrate that in our test areas, regional gravity modelling based on GOCE data provides a more reliable gravity signal at medium wavelengths than pre-GOCE global gravity field models such as EGM2008.
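Regional recovery with radial basis functions reduces to a regularized least-squares problem: observations are linear functionals of the unknown basis coefficients, and a Tikhonov term stabilizes the solution. A 1-D toy stand-in with Gaussian basis functions and synthetic noisy data (not GOCE gradients) illustrates the structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_rbf(x, centers, width=0.2):
    """Design matrix of Gaussian radial basis functions."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def solve_tikhonov(A, y, alpha):
    """Regularized normal equations: (A^T A + alpha I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Synthetic 1-D stand-in for a regional field observed at scattered points
centers = np.linspace(0.0, 1.0, 15)
x_obs = rng.uniform(0.0, 1.0, 40)
A = gaussian_rbf(x_obs, centers)
truth = np.sin(2.0 * np.pi * centers)           # hypothetical coefficients
y = A @ truth + 0.01 * rng.normal(size=x_obs.size)

coeff = solve_tikhonov(A, y, alpha=1e-3)
```

As more (or more accurate) observations are added, A^T A grows relative to alpha*I, so the regularization term's share of the solution shrinks, which is the effect the study quantifies at about 5 per cent for the all-gradient solution.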
NASA Astrophysics Data System (ADS)
Di Martino, Gerardo; Iodice, Antonio; Natale, Antonio; Riccio, Daniele; Ruello, Giuseppe
2015-04-01
The recently proposed polarimetric two-scale two-component model (PTSTCM) in principle allows us to obtain a reasonable estimate of soil moisture even in moderately vegetated areas, where the volumetric scattering contribution is non-negligible, provided that the surface component is dominant and the double-bounce component is negligible. Here we test the PTSTCM validity range by applying it to polarimetric SAR data acquired over areas for which ground measurements of soil moisture were performed at the times of the SAR acquisitions. In particular, we employ the AGRISAR'06 database, which includes data from several fields covering a period that spans all phases of vegetation growth.
Tiezzi, F; de Los Campos, G; Parker Gaddis, K L; Maltecca, C
2017-03-01
Genotype by environment interaction (G × E) in dairy cattle productive traits has been shown to exist, but current genetic evaluation methods do not take this component into account. As several environmental descriptors (e.g., climate, farming system) are known to vary within the United States, not accounting for the G × E could lead to reranking of bulls and loss in genetic gain. Using test-day records on milk yield, somatic cell score, fat, and protein percentage from all over the United States, we computed within herd-year-season daughter yield deviations for 1,087 Holstein bulls and regressed them on genetic and environmental information to estimate variance components and to assess prediction accuracy. Genomic information was obtained from a 50k SNP marker panel. Environmental effect inputs included herd (160 levels), geographical region (7 levels), geographical location (2 variables), climate information (7 variables), and management conditions of the herds (16 total variables divided in 4 subgroups). For each set of environmental descriptors, environmental, genomic, and G × E components were sequentially fitted. Variance components estimates confirmed the presence of G × E on milk yield, with its effect being larger than main genetic effect and the environmental effect for some models. Conversely, G × E was moderate for somatic cell score and small for milk composition. Genotype by environment interaction, when included, partially eroded the genomic effect (as compared with the models where G × E was not included), suggesting that the genomic variance could at least in part be attributed to G × E not appropriately accounted for. Model predictive ability was assessed using 3 cross-validation schemes (new bulls, incomplete progeny test, and new environmental conditions), and performance was compared with a reference model including only the main genomic effect. 
In each scenario, at least 1 of the models including G × E was able to perform better than the reference model, although it was not possible to find the overall best-performing model that included the same set of environmental descriptors. In general, the methodology used is promising in accounting for G × E in genomic predictions, but challenges exist in identifying a unique set of covariates capable of describing the entire variety of environments. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz
2014-01-01
Introduction: A national health information system plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system can improve the quality of the health data, information and knowledge used to support decision making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and management of the factors influencing its performance, this study comparatively explores different perspectives on the components of the system. Methods: This is a descriptive, comparative study. The study material consists of printed and electronic documents describing components of the national health information system in three parts: input, process and output. In this context, searches of library resources and the internet were conducted, and the analysis was expressed using comparative tables and qualitative data. Results: The findings showed three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. In the input (resources and structure) section, all three models require components of management and leadership; planning and design of programs; and supply of staff, software, hardware, facilities and equipment. In the process section, all three models highlight actions ensuring the quality of the health information system, and in the output section, all models except the Lippeveld model consider information products and the use and distribution of information as components of the national health information system. 
Conclusion: The results showed that all three models discuss the components of health information only briefly in the input section, and the Lippeveld model overlooks the components of the national health information system in the process and output sections. Therefore, it seems that the Health Metrics Network model presents the components of the health information system most comprehensively across all three sections: input, process and output. PMID:24825937
Human middle-ear model with compound eardrum and airway branching in mastoid air cells
Keefe, Douglas H.
2015-01-01
An acoustical/mechanical model of normal adult human middle-ear function is described for forward and reverse transmission. The eardrum model included one component bound along the manubrium and another bound by the tympanic cleft. Eardrum components were coupled by a time-delayed impedance. The acoustics of the middle-ear cleft was represented by an acoustical transmission-line model for the tympanic cavity, aditus, antrum, and mastoid air cell system with variable amounts of excess viscothermal loss. Model parameters were fitted to published measurements of energy reflectance (0.25–13 kHz), equivalent input impedance at the eardrum (0.25–11 kHz), temporal-bone pressure in scala vestibuli and scala tympani (0.1–11 kHz), and reverse middle-ear impedance (0.25–8 kHz). Inner-ear fluid motion included cochlear and physiological third-window pathways. The two-component eardrum with time delay helped fit intracochlear pressure responses. A multi-modal representation of the eardrum and high-frequency modeling of the middle-ear cleft helped fit ear-canal responses. Input reactance at the eardrum was small at high frequencies due to multiple modal resonances. The model predicted the middle-ear efficiency between ear canal and cochlea, and the cochlear pressures at threshold. PMID:25994701
NASA Astrophysics Data System (ADS)
Tucker, G. E.; Adams, J. M.; Doty, S. G.; Gasparini, N. M.; Hill, M. C.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.
2016-12-01
Developing a better understanding of catchment hydrology and geomorphology ideally involves quantitative hypothesis testing. Often one seeks to identify the simplest mathematical and/or computational model that accounts for the essential dynamics in the system of interest. Development of alternative hypotheses involves testing and comparing alternative formulations, but the process of comparison and evaluation is made challenging by the rigid nature of many computational models, which are often built around a single assumed set of equations. Here we review a software framework for two-dimensional computational modeling that facilitates the creation, testing, and comparison of surface-dynamics models. Landlab is essentially a Python-language software library. Its gridding module allows for easy generation of a structured (raster, hex) or unstructured (Voronoi-Delaunay) mesh, with the capability to attach data arrays to particular types of element. Landlab includes functions that implement common numerical operations, such as gradient calculation and summation of fluxes within grid cells. Landlab also includes a collection of process components, which are encapsulated pieces of software that implement a numerical calculation of a particular process. Examples include downslope flow routing over topography, shallow-water hydrodynamics, stream erosion, and sediment transport on hillslopes. Individual components share a common grid and data arrays, and they can be coupled through the use of a simple Python script. We illustrate Landlab's capabilities with a case study of Holocene landscape development in the northeastern US, in which we seek to identify a collection of model components that can account for the formation of a series of incised canyons that have developed since the Laurentide ice sheet last retreated. We compare sets of model ingredients related to (1) catchment hydrologic response, (2) hillslope evolution, and (3) stream channel and gully incision. 
The case-study example demonstrates the value of exploring multiple working hypotheses, in the form of multiple alternative model components.
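The gradient-and-flux pattern described above can be illustrated without depending on Landlab itself. The following is a minimal NumPy sketch of one linear-diffusion step on a 1-D hillslope profile; the helper names are hypothetical stand-ins for the kind of grid operations the library provides, not Landlab's API.

```python
import numpy as np

def grad_at_links_1d(z, dx):
    """Gradient of a node field on the links (faces) between adjacent nodes;
    a hypothetical stand-in for a grid library's gradient operation."""
    return np.diff(z) / dx

def net_flux_at_cells_1d(q, dx):
    """Net outgoing flux per unit length at each interior node, given fluxes
    q defined on the links."""
    return np.diff(q) / dx

# Toy linear-diffusion (hillslope sediment transport) step
dx, D, dt = 10.0, 0.01, 100.0
z = np.array([0.0, 1.0, 3.0, 2.0, 0.5])   # elevations at 5 nodes
g = grad_at_links_1d(z, dx)               # slopes on the 4 links
q = -D * g                                # downslope sediment flux
dzdt = -net_flux_at_cells_1d(q, dx)       # mass balance at 3 interior nodes
z[1:-1] += dt * dzdt
```

In Landlab proper, the analogous operations act on fields attached to the nodes and links of a raster, hex, or Voronoi-Delaunay grid, and process components chain such steps together.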
A Leisure Activities Curricular Component for Severely Handicapped Youth: Why and How.
ERIC Educational Resources Information Center
Voeltz, Luanna M.; Apffel, James A.
1981-01-01
A rationale for including a leisure time activities curriculum component in educational programing for severely handicapped individuals is presented. The importance of play and the constructive use of leisure time is described through the use of a model demonstration project. (JN)
Gender and the Development of Wisdom.
ERIC Educational Resources Information Center
Orwoll, Lucinda; Achenbaum, W. Andrew
1993-01-01
Drawing on a model of wisdom that includes components in three domains (personality, cognition, and conation) and across three levels (intrapersonal, interpersonal, and transpersonal), highlights potential differences in the ways women and men attain and express wisdom; and examines interactive patterns across the components of wisdom. (BC)
Aerodynamic Characteristics of Two Waverider-Derived Hypersonic Cruise Configurations
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Huebner, Lawrence D.; Finley, Dennis B.
1996-01-01
An evaluation was made on the effects of integrating the required aircraft components with hypersonic high-lift configurations known as waveriders to create hypersonic cruise vehicles. Previous studies suggest that waveriders offer advantages in aerodynamic performance and propulsion/airframe integration (PAI) characteristics over conventional non-waverider hypersonic shapes. A wind-tunnel model was developed that integrates vehicle components, including canopies, engine components, and control surfaces, with two pure waverider shapes, both conical-flow-derived waveriders for a design Mach number of 4.0. Experimental data and limited computational fluid dynamics (CFD) solutions were obtained over a Mach number range of 1.6 to 4.63. The experimental data show the component build-up effects and the aerodynamic characteristics of the fully integrated configurations, including control surface effectiveness. The aerodynamic performance of the fully integrated configurations is not comparable to that of the pure waverider shapes, but is comparable to previously tested hypersonic models. Both configurations exhibit good lateral-directional stability characteristics.
NASA Technical Reports Server (NTRS)
Strahler, Alan H.; Li, Xiao-Wen; Jupp, David L. B.
1991-01-01
The bidirectional radiance or reflectance of a forest or woodland can be modeled using principles of geometric optics and Boolean models for random sets in a three-dimensional space. This model may be defined at two levels. At the tree level, the scene includes four components: sunlit and shadowed canopy, and sunlit and shadowed background. The reflectance of the scene is modeled as the sum of the reflectances of the individual components as weighted by their areal proportions in the field of view. At the leaf level, the canopy envelope is an assemblage of leaves, and thus the reflectance is a function of the areal proportions of sunlit and shadowed leaf, and sunlit and shadowed background. Because the proportions of scene components are dependent upon the directions of irradiance and exitance, the model accounts for the hotspot that is well known in leaf and tree canopies.
FARSITE: Fire Area Simulator-model development and evaluation
Mark A. Finney
1998-01-01
A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.
The infinitesimal model: Definition, derivation, and implications.
Barton, N H; Etheridge, A M; Véber, A
2017-12-01
Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed within families as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. Thus, the variance that segregates within families is not perturbed by selection, and can be predicted from the variance components. This does not necessarily imply that the trait distribution across the whole population should be Gaussian, and indeed selection or population structure may have a substantial effect on the overall trait distribution. One of our main aims is to identify some general conditions on the allelic effects for the infinitesimal model to be accurate. We first review the long history of the infinitesimal model in quantitative genetics. Then we formulate the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, …. We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. We prove in particular that, within each family, the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order 1∕M. 
Simulations suggest that in some cases the convergence may be as fast as 1∕M. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
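The within-family rule at the core of the infinitesimal model is easy to simulate: each offspring's genetic component is drawn from a normal distribution centred at the midparent value, with a segregation variance that does not depend on the parental traits. A minimal sketch with illustrative parameter values:

```python
import random

def offspring_trait(mother_g, father_g, v_seg, v_env, rng):
    """One offspring under the infinitesimal model: genetic component is
    Normal(midparent, v_seg); the trait adds independent environmental noise.
    Parameter values here are illustrative only."""
    g = rng.gauss((mother_g + father_g) / 2.0, v_seg ** 0.5)
    return g, g + rng.gauss(0.0, v_env ** 0.5)

rng = random.Random(42)
kids = [offspring_trait(2.0, 1.0, v_seg=0.5, v_env=0.2, rng=rng)[0]
        for _ in range(20000)]
mean_g = sum(kids) / len(kids)
var_g = sum((k - mean_g) ** 2 for k in kids) / len(kids)
```

The family mean tracks the midparent (1.5), while the within-family variance stays at v_seg regardless of how extreme the parents are; that is the property selection cannot erode under this model.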
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
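Controlling a flap or slat deflection in a local hinge-axis frame amounts to rotating the component's surface points about an arbitrary axis. A minimal sketch of that transformation using Rodrigues' rotation formula; this illustrates the geometry only and is not OpenVSP's internal API:

```python
import numpy as np

def rotate_about_hinge(points, hinge_origin, hinge_axis, deflection_deg):
    """Rotate surface points about an arbitrary hinge axis (Rodrigues'
    formula). Hypothetical helper, not an OpenVSP function."""
    k = np.asarray(hinge_axis, float)
    k /= np.linalg.norm(k)                    # unit hinge-axis direction
    theta = np.radians(deflection_deg)
    p = np.asarray(points, float) - hinge_origin
    rotated = (p * np.cos(theta)
               + np.cross(k, p) * np.sin(theta)
               + k * (p @ k)[:, None] * (1 - np.cos(theta)))
    return rotated + hinge_origin
```

In a tool like OpenVSP, parameter linking can then map a single deflection-angle parameter in the hinge frame to the equivalent chain of translations and rotations in the modeler's own frame of reference.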
Xue, Yang; Yang, Zhongyang; Wang, Xiaoyan; Lin, Zhipan; Li, Dunxi; Su, Shaofeng
2016-01-01
Casuarina equisetifolia is commonly planted and used in the construction of coastal shelterbelt protection in Hainan Island. Thus, it is critical to accurately estimate the tree biomass of Casuarina equisetifolia L. for forest managers to evaluate the biomass stock in Hainan. The data for this work consisted of 72 trees, which were divided into three age groups: young forest, middle-aged forest, and mature forest. The proportion of biomass from the trunk significantly increased with age (P<0.05). However, the biomass of the branch and leaf decreased, and the biomass of the root did not change. To test whether the crown radius (CR) can improve biomass estimates of C. equisetifolia, we introduced CR into the biomass models. Here, six models were used to estimate the biomass of each component, including the trunk, the branch, the leaf, and the root. In each group, we selected one model among these six models for each component. The results showed that including the CR greatly improved the model performance and reduced the error, especially for the young and mature forests. In addition, to ensure biomass additivity, the selected equation for each component was fitted as a system of equations using seemingly unrelated regression (SUR). The SUR method not only gave efficient and accurate estimates but also achieved the logical additivity. The results in this study provide a robust estimation of tree biomass components and total biomass over three groups of C. equisetifolia.
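A biomass model of the kind described, a log-linear allometric equation extended with crown radius, can be fitted by ordinary least squares. The sketch below uses synthetic data with made-up coefficients and noise levels, and single-equation OLS stands in for the paper's seemingly-unrelated-regression (SUR) system:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand (hypothetical numbers, not the paper's data)
dbh = rng.uniform(5, 30, 100)                          # diameter, cm
cr = 0.4 * dbh ** 0.8 * rng.lognormal(0, 0.2, 100)     # crown radius, m
trunk = 0.05 * dbh ** 2.2 * cr ** 0.3 * rng.lognormal(0, 0.1, 100)  # kg

# ln(B) = b0 + b1*ln(DBH) + b2*ln(CR), fitted by ordinary least squares
X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(cr)])
coef, *_ = np.linalg.lstsq(X, np.log(trunk), rcond=None)
```

A nonzero fitted coefficient on ln(CR) is the sense in which crown radius "improves" the model; SUR additionally constrains the component equations so that predicted component biomasses add up to the predicted total.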
Reading component skills of learners in adult basic education.
MacArthur, Charles A; Konold, Timothy R; Glutting, Joseph J; Alamprese, Judith A
2010-01-01
The purposes of this study were to investigate the reliability and construct validity of measures of reading component skills with a sample of adult basic education (ABE) learners, including both native and nonnative English speakers, and to describe the performance of those learners on the measures. Investigation of measures of reading components is needed because available measures were neither developed for nor normed on ABE populations or with nonnative speakers of English. The study included 486 students, 334 born or educated in the United States (native) and 152 not born or educated in the United States (nonnative) but who spoke English well enough to participate in English reading classes. All students had scores on 11 measures covering five constructs: decoding, word recognition, spelling, fluency, and comprehension. Confirmatory factor analysis (CFA) was used to test three models: a two-factor model with print and meaning factors; a three-factor model that separated out a fluency factor; and a five-factor model based on the hypothesized constructs. The five-factor model fit best. In addition, the CFA model fit both native and nonnative populations equally well without modification, showing that the tests measure the same constructs with the same accuracy for both groups. Group comparisons found no difference between the native and nonnative samples on word recognition, but the native sample scored higher on fluency and comprehension and lower on decoding than did the nonnative sample. Students with self-reported learning disabilities scored lower on all reading components. Differences by age and gender were also analyzed.
Nedrelow, David S; Bankwala, Danesh; Hyypio, Jeffrey D; Lai, Victor K; Barocas, Victor H
2018-05-01
The mechanical behavior of collagen-fibrin (col-fib) co-gels is both scientifically interesting and clinically relevant. Collagen-fibrin networks are a staple of tissue engineering research, but the mechanical consequences of changes in co-gel composition have remained difficult to predict or even explain. We previously observed fundamental differences in failure behavior between collagen-rich and fibrin-rich co-gels, suggesting an essential change in how the two components interact as the co-gel's composition changes. In this work, we explored the hypothesis that the co-gel behavior is due to a lack of percolation by the dilute component. We generated a series of computational models based on interpenetrating fiber networks. In these models, the major network component percolated the model space but the minor component did not, instead occupying a small island embedded within the larger network. Each component was assigned properties based on a fit of single-component gel data. Island size was varied to match the relative concentrations of the two components. The model predicted that networks rich in collagen, the stiffer component, would roughly match pure-collagen gel behavior with little additional stress due to the fibrin, as seen experimentally. For fibrin-rich gels, however, the model predicted a smooth increase in the overall network strength with added collagen, as seen experimentally but not consistent with an additive parallel model. We thus conclude that incomplete percolation by the low-concentration component of a co-gel is a major determinant of its macroscopic properties, especially if the low-concentration component is the stiffer component. Models for the behavior of fibrous networks have useful applications in many different fields, including polymer science, textiles, and tissue engineering. 
In addition to being important structural components in soft tissues and blood clots, these protein networks can serve as scaffolds for bioartificial tissues. Thus, their mechanical behavior, especially in co-gels, is both interesting from a materials science standpoint and significant with regard to tissue engineering. Copyright © 2018 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
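The percolation question at the heart of this hypothesis, whether a component's fiber network spans the sample, can be posed on a simple lattice. Below is a minimal site-percolation check; it is an illustrative stand-in, not the authors' interpenetrating-fiber-network model:

```python
from collections import deque

def percolates(grid):
    """True if occupied sites of a square grid connect its left edge to its
    right edge (4-neighbour connectivity); a toy test of whether a network
    component spans the sample."""
    n = len(grid)
    seen = set()
    queue = deque((r, 0) for r in range(n) if grid[r][0])
    seen.update(queue)
    while queue:
        r, c = queue.popleft()
        if c == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

In the co-gel setting, the dilute component corresponds to an occupied fraction below the percolation threshold: its "island" never spans the sample, so it cannot carry load in parallel with the majority network.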
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
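The computational payoff of the low-rank-plus-diagonal assumption comes from the Woodbury identity: inverting D + VVᵀ costs O(pr²) rather than O(p³) when the rank r is small. A sketch of that inversion shortcut (not the COP algorithm itself, just the structure it exploits):

```python
import numpy as np

rng = np.random.default_rng(1)
p, r = 200, 3
V = rng.standard_normal((p, r))   # low-rank factor (sum of rank-one parts)
d = rng.uniform(1.0, 2.0, p)      # positive diagonal part

# Inverse covariance Omega = D + V V^T; its inverse (the covariance) via the
# Woodbury identity: D^-1 - D^-1 V (I + V^T D^-1 V)^-1 V^T D^-1
Dinv = 1.0 / d
small = np.eye(r) + (V.T * Dinv) @ V               # only an r x r solve needed
Sigma = np.diag(Dinv) - (Dinv[:, None] * V) @ np.linalg.solve(small, V.T * Dinv)
```

Only the small r-by-r system is ever factored, which is what makes the greedy rank-one updates in a component-pursuit scheme cheap at scale.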
Construction of a Cyber Attack Model for Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varuttamaseni, Athi; Bari, Robert A.; Youngblood, Robert
The consideration of how one compromised piece of digital equipment can impact neighboring equipment is critical to understanding the progression of cyber attacks. The degree of influence that one component may have on another depends on a variety of factors, including the sharing of resources such as network bandwidth or processing power, the level of trust between components, and the inclusion of segmentation devices such as firewalls. The interactions among components via mechanisms that are unique to the digital world are not usually considered in traditional PRA. This means potential sequences of events that may occur during an attack may be missed if one were to only look at conventional accident sequences. This paper presents a method where, starting from the initial attack vector, the progression of a cyber attack can be modeled. The propagation of the attack is modeled by considering certain attributes of the digital components in the system. These attributes determine the potential vulnerability of a component to a class of attack and the capability gained by the attackers once they are in control of the equipment. The use of attributes allows similar components (components with the same set of attributes) to be modeled in the same way, thereby reducing the computing resources required for analysis of large systems.
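Attribute-driven propagation of this kind can be sketched as a graph traversal in which an attack spreads to a neighbor only when the capability gained at the current component matches one of the neighbor's vulnerability attributes. All component names and the single-capability matching rule below are hypothetical, chosen for illustration:

```python
from collections import deque

def attack_reach(vulnerabilities, links, initial, capability_gained):
    """Set of components an attacker can compromise, starting from `initial`.
    `vulnerabilities[c]` is the set of capabilities component c is vulnerable
    to; `capability_gained[c]` is what control of c gives the attacker.
    A toy propagation rule, not the paper's full attribute model."""
    compromised = set(initial)
    queue = deque(initial)
    while queue:
        node = queue.popleft()
        gained = capability_gained[node]
        for nbr in links.get(node, ()):
            if nbr not in compromised and gained in vulnerabilities[nbr]:
                compromised.add(nbr)
                queue.append(nbr)
    return compromised
```

Because propagation depends only on attribute sets, components sharing the same attributes behave identically, which is what keeps the analysis tractable for large systems.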
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
An integrated framework for modeling freight mode and route choice.
DOT National Transportation Integrated Search
2013-10-01
A number of statewide travel demand models have included freight as a separate component in analysis. Unlike passenger travel, freight has not gained equivalent attention because of lack of data and difficulties in modeling. In the current state ...
Component, Context, and Manufacturing Model Library (C2M2L)
2012-11-01
[Front-matter excerpt: Section 5.1, MML Population and Web Service Interface; Table 41, Relevant Questions with Associated Web Services. The report covers populating the models and implementing Web services that provide semantically aware programmatic access to the models, including implementing the MS&T]
A Generic Modeling Process to Support Functional Fault Model Development
NASA Technical Reports Server (NTRS)
Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.
2016-01-01
Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
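A qualitative FFM diagnostic reduces to reachability in the failure-effect propagation graph: a failure mode is a candidate explanation if its propagation paths cover every tripped observation point. A minimal sketch with hypothetical node names, not the AGSM/IHM tooling itself:

```python
def reachable(graph, start):
    """All nodes reachable from `start` in a failure-effect propagation graph
    (adjacency-list dict)."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def candidate_failures(graph, failure_modes, observed):
    """Failure modes whose propagation paths cover every tripped observation
    point -- the ambiguity group a diagnostic would report."""
    return {f for f in failure_modes if observed <= reachable(graph, f)}
```

Generic component models help precisely here: if every instance of a component type encodes the same failure-space structure, the resulting system graphs have a common appearance and predictable diagnostic behavior.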
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock-capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
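The source-term mechanism can be illustrated with a toy conservative update: a Lax-Friedrichs step for linear advection with a localized source standing in for bleed or component effects. This is deliberately much simpler than LAPIN's shock-capturing scheme; the grid, wave speed, and source values are all made up:

```python
import numpy as np

def step(u, a, dx, dt, source):
    """One Lax-Friedrichs step for u_t + (a*u)_x = source on a periodic grid.
    The source array is where component/bleed effects would enter."""
    f = a * u                                   # linear flux f(u) = a*u
    um, up = np.roll(u, 1), np.roll(u, -1)
    fm, fp = np.roll(f, 1), np.roll(f, -1)
    return 0.5 * (um + up) - dt / (2 * dx) * (fp - fm) + dt * source

nx, dx, dt, a = 50, 1.0, 0.4, 1.0               # CFL = a*dt/dx = 0.4
u = np.zeros(nx)
s = np.zeros(nx)
s[25] = 0.1                                     # localized mass source term
for _ in range(10):
    u = step(u, a, dx, dt, s)
```

Distributing a component's source terms over several axial grid points, as LAPIN does, is what distinguishes this representation from a lumped-parameter model.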
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, Jacob G.
2013-01-11
Partial molar properties are the changes occurring when the fraction of one component is varied while the fractions of all other component mole fractions change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in fraction of one component must be offset with a change in one or more other components. Given that more than one component fraction is changing at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on a graph of this curve is the partial molar property for that constituent. Actually plotting this graph has been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. This model is demonstrated by applying it to the constant pressure heat capacity data from the NaOH-NaAl(OH){sub 4}-H{sub 2}O system, a system that simplifies Hanford nuclear waste. The partial molar properties of H{sub 2}O, NaOH, and NaAl(OH){sub 4} are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM model has been previously used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperature of the glass components.
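The graphical method the CSLM formalizes can be reproduced numerically: vary one mole fraction while rescaling the others proportionally, take the slope along that path, and apply the tangent-intercept relation to recover the partial molar property. A sketch with an illustrative linear molar property y = Σ bᵢxᵢ (for which the partial molar property of component i is exactly bᵢ); the coefficients are made up:

```python
def partial_molar_slope(prop, x, i, h=1e-6):
    """Central-difference slope of a mixture property as component i's mole
    fraction varies while the other fractions change proportionally."""
    def along_path(t):
        scale = (1.0 - x[i] - t) / (1.0 - x[i])
        xt = [x[i] + t if j == i else x[j] * scale for j in range(len(x))]
        return prop(xt)
    return (along_path(h) - along_path(-h)) / (2.0 * h)

def prop(x):
    # Toy linear molar property with illustrative coefficients b = (2, 3, 5)
    return 2 * x[0] + 3 * x[1] + 5 * x[2]

x0 = [0.2, 0.3, 0.5]
slope = partial_molar_slope(prop, x0, 0)
# Tangent-intercept relation: partial molar property of component i
pm = prop(x0) + (1 - x0[0]) * slope
```

For the linear case the recovered value equals the CSLM coefficient b₀ = 2, which is the correspondence between model coefficients and intensive partial molar properties that the study establishes.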
Project Super Heart--Year One.
ERIC Educational Resources Information Center
Bellardini, Harry; And Others
1980-01-01
A model cardiovascular disease prevention program for young children is described. Components include physical examinations, health education (anatomy and physiology of the cardiovascular system), nutrition instruction, first aid techniques, role modeling, and environmental engineering. (JN)
Beyond Microskills: Toward a Model of Counseling Competence
ERIC Educational Resources Information Center
Ridley, Charles R.; Mollen, Debra; Kelly, Shannon M.
2011-01-01
Heeding the call to the profession, the authors present both a definition and model of counseling competence. Undergirding the model are 15 foundational principles. The authors conceptualize counseling competence as more complex and nuanced than do traditional microskills models and include cognitive, affective, and behavioral components. The…
Elbeik, Tarek; Dalessandro, Ralph; Loftus, Richard A; Beringer, Scott
2007-11-01
Comparative cost models were developed to assess cost-per-reportable result and annual costs for the HIV-1 and HCV bDNA and AmpliPrep/TaqMan (PCR) tests. Model cost components included kit, disposables, platform and related equipment, equipment service plan, equipment maintenance, equipment footprint, waste, and labor. The models showed bDNA to be most cost-effective with 36 or more clinical samples per run and PCR with 30 or fewer. Lower costs are attained when the maximum number of samples (84-168) is run daily. The highest cost contributors include the kit, platform, and PCR proprietary disposables. Understanding component costs and the most economic use of HIV-1 and HCV viral load assays will aid in attaining the lowest costs through selection of the appropriate assay and effective negotiations.
NASA Astrophysics Data System (ADS)
Thomas, W. A.; McAnally, W. H., Jr.
1985-07-01
TABS-2 is a generalized numerical modeling system for open-channel flows, sedimentation, and constituent transport. It consists of more than 40 computer programs to perform modeling and related tasks. The major modeling components--RMA-2V, STUDH, and RMA-4--calculate two-dimensional, depth-averaged flows, sedimentation, and dispersive transport, respectively. The other programs in the system perform digitizing, mesh generation, data management, graphical display, output analysis, and model interfacing tasks. Utilities include file management and automatic generation of computer job control instructions. TABS-2 has been applied to a variety of waterways, including rivers, estuaries, bays, and marshes. It is designed for use by engineers and scientists who may not have a rigorous computer background. Use of the various components is described in Appendices A-O. The bound version of the report does not include the appendices. A looseleaf form with Appendices A-O is distributed to system users.
Probabilistic analysis for fatigue strength degradation of materials
NASA Technical Reports Server (NTRS)
Royce, Lola
1989-01-01
This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.
NASA Technical Reports Server (NTRS)
Johnson, Barry
1992-01-01
The topics covered include the following: (1) CO2 laser kinetics modeling; (2) gas lifetimes in pulsed CO2 lasers; (3) frequency chirp and laser pulse spectral analysis; (4) LAWS A' Design Study; and (5) discharge circuit components for LAWS. The appendices include LAWS Memos, computer modeling of pulsed CO2 lasers for lidar applications, discharge circuit considerations for pulsed CO2 lidars, and presentation made at the Code RC Review.
School nurse summer institute: a model for professional development.
Neighbors, Marianne; Barta, Kathleen
2004-06-01
The components of a professional development model designed to empower school nurses to become leaders in school health services is described. The model was implemented during a 3-day professional development institute that included clinical and leadership components, especially coalition building, with two follow-up sessions in the fall and spring. Coalition building is an important tool to enhance the influence of the school nurse in improving the health of individuals, families, and communities. School nurses and nursing educators with expertise in the specialty of school nursing could replicate this model in their own regions.
Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C
2013-03-01
Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
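A toy simulation (hypothetical numbers, not the paper's EEG data) illustrates the variance-component point: the subject-level random effect cancels in a within-subject contrast, so the standard error of the task-related difference is governed by the intrasubject variance alone, not the (much larger) intersubject variance.

```python
import numpy as np

# Toy group study: n_sub subjects, n_trial trials per condition each.
# All variances and the true task effect are illustrative assumptions.
rng = np.random.default_rng(1)
n_sub, n_trial = 20, 50
sigma_between, sigma_within = 2.0, 0.5
true_diff = 0.3

subj = rng.normal(0.0, sigma_between, n_sub)[:, None]         # intersubject effect
a = subj + rng.normal(0.0, sigma_within, (n_sub, n_trial))    # condition A trials
b = subj + true_diff + rng.normal(0.0, sigma_within, (n_sub, n_trial))

d = b.mean(axis=1) - a.mean(axis=1)        # per-subject contrast: subj cancels
se_within = d.std(ddof=1) / np.sqrt(n_sub) # proper SE, intrasubject variance only
se_naive = sigma_between / np.sqrt(n_sub)  # what intersubject variance would imply
print(round(float(d.mean()), 2), round(float(se_within), 3), round(se_naive, 3))
```

The gain in power comes from `se_within` being far smaller than the naive intersubject-based standard error.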
Revealing the microstructure of the giant component in random graph ensembles
NASA Astrophysics Data System (ADS)
Tishby, Ido; Biham, Ofer; Katzav, Eytan; Kühn, Reimer
2018-04-01
The microstructure of the giant component of the Erdős-Rényi network and other configuration model networks is analyzed using generating function methods. While configuration model networks are uncorrelated, the giant component exhibits a degree distribution which is different from the overall degree distribution of the network and includes degree-degree correlations of all orders. We present exact analytical results for the degree distributions as well as higher-order degree-degree correlations on the giant components of configuration model networks. We show that the degree-degree correlations are essential for the integrity of the giant component, in the sense that the degree distribution alone cannot guarantee that it will consist of a single connected component. To demonstrate the importance and broad applicability of these results, we apply them to the study of the distribution of shortest path lengths on the giant component, percolation on the giant component, and spectra of sparse matrices defined on the giant component. We show that by using the degree distribution on the giant component one obtains high quality results for these properties, which can be further improved by taking the degree-degree correlations into account. This suggests that many existing methods, currently used for the analysis of the whole network, can be adapted in a straightforward fashion to yield results conditioned on the giant component.
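The generating-function machinery has a particularly compact form in the Erdős-Rényi (Poisson degree) case, where G₁ = G₀. The sketch below (standard textbook results, not the paper's code) solves the self-consistency equation for the giant-component fraction and evaluates the degree distribution conditioned on the giant component.

```python
import math

# u = probability that a random neighbor is NOT in the giant component;
# for a Poisson degree distribution with mean c it solves u = exp(c*(u-1)),
# and the giant-component fraction is S = 1 - u.
def giant_component_fraction(c, tol=1e-12):
    u = 0.5
    for _ in range(10_000):
        u_new = math.exp(c * (u - 1.0))
        if abs(u_new - u) < tol:
            break
        u = u_new
    return 1.0 - u

c = 2.0                                   # mean degree
S = giant_component_fraction(c)
u = 1.0 - S

# Degree distribution conditioned on the giant component:
# P(k | GC) = P(k) * (1 - u**k) / S  -- shifted toward higher degrees,
# with P(0 | GC) = 0 since isolated nodes cannot belong to it.
pk = lambda k: math.exp(-c) * c**k / math.factorial(k)
pk_gc = [pk(k) * (1.0 - u**k) / S for k in range(20)]
print(round(S, 3))                        # ~0.797 for c = 2
```

Note how the conditioned distribution assigns zero weight to degree 0, a simple instance of the paper's point that the giant component's degree distribution differs from the network's overall one.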
ERIC Educational Resources Information Center
Gravina, Nicole E.; Siers, Brian P.
2011-01-01
Models of comprehensive Performance Management systems include both employee development and evaluative components. The Organizational Behavior Management discipline focuses almost exclusively on the developmental component, while the Industrial and Organizational Psychology discipline is focused on use of performance appraisals. Performance…
Integrated Evaluation of Closed Loop Air Revitalization System Components
NASA Technical Reports Server (NTRS)
Murdock, K.
2010-01-01
NASA's vision and mission statements include an emphasis on human exploration of space, which requires environmental control and life support technologies. This Contractor Report (CR) describes the development and evaluation of an Air Revitalization System, modeling and simulation of the components, and integrated hardware testing, with the goal of better understanding the inherent capabilities and limitations of this closed-loop system. Major components integrated and tested included a 4-Bed Molecular Sieve, a Mechanical Compressor Engineering Development Unit, a Temperature Swing Adsorption Compressor, and a Sabatier Engineering and Development Unit. The requisite methodology and technical results are contained in this CR.
NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations
NASA Astrophysics Data System (ADS)
Frisbie, T. E.; Hall, C. M.
2006-12-01
Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.
NASA Astrophysics Data System (ADS)
Abdul-Aziz, Ali; Woike, Mark R.; Clem, Michelle; Baaklini, George Y.
2014-04-01
Rotating engine components generally operate in a high centrifugal loading environment that subjects them to various failure initiation mechanisms. Health monitoring of these components is a necessity and is often challenging to implement, primarily due to numerous factors, including scattered loading conditions, flaw sizes, component geometry, and material properties, all of which complicate the application of health monitoring techniques. This paper summarizes combined experimental and analytical modeling work that included data collection from a spin test of a rotor disk addressing the aforementioned durability issues. It also presents results from a finite element modeling study characterizing the structural durability of a cracked rotor as it relates to the experimental findings. The experimental data include blade tip clearance, blade tip timing, and shaft displacement measurements. The tests were conducted at the NASA Glenn Research Center's Rotordynamics Laboratory, a high-precision spin rig. The results are evaluated and examined to determine their significance for the development of a health monitoring system to predict cracks and other anomalies and to assist in initiating a supplemental physics-based fault prediction analytical model.
Validation of the Fully-Coupled Air-Sea-Wave COAMPS System
NASA Astrophysics Data System (ADS)
Smith, T.; Campbell, T. J.; Chen, S.; Gabersek, S.; Tsu, J.; Allard, R. A.
2017-12-01
A fully-coupled, air-sea-wave numerical model, COAMPS®, has been developed by the Naval Research Laboratory to further enhance understanding of oceanic, atmospheric, and wave interactions. The fully-coupled air-sea-wave system consists of an atmospheric component with full physics parameterizations, an ocean model, NCOM (Navy Coastal Ocean Model), and two wave components, SWAN (Simulating Waves Nearshore) and WaveWatch III. Air-sea interactions between the atmosphere and ocean components are accomplished through bulk flux formulations of wind stress and sensible and latent heat fluxes. Wave interactions with the ocean include the Stokes' drift, surface radiation stresses, and enhancement of the bottom drag coefficient in shallow water due to the wave orbital velocities at the bottom. In addition, NCOM surface currents are provided to SWAN and WaveWatch III to simulate wave-current interaction. The fully-coupled COAMPS system was executed for several regions at both regional and coastal scales for the entire year of 2015, including the U.S. East Coast, Western Pacific, and Hawaii. Validation of COAMPS® includes observational data comparisons and evaluating operational performance on the High Performance Computing (HPC) system for each of these regions.
Marine mammals' influence on ecosystem processes affecting fisheries in the Barents Sea is trivial.
Corkeron, Peter J
2009-04-23
Some interpretations of ecosystem-based fishery management include culling marine mammals as an integral component. The current Norwegian policy on marine mammal management is one example. Scientific support for this policy includes the Scenario Barents Sea (SBS) models, which simulated interactions between cod, Gadus morhua, herring, Clupea harengus, capelin, Mallotus villosus, and northern minke whales, Balaenoptera acutorostrata. Adding harp seals, Phoca groenlandica, into this top-down modelling approach resulted in unrealistic model outputs. Another set of models of the Barents Sea fish-fisheries system focused on interactions within and between the three fish populations, fisheries and climate; these model the key processes of the system successfully. Continuing calls to support the SBS models despite their failure suggest a belief that marine mammal predation must be a problem for fisheries. The best available scientific evidence provides no justification for marine mammal culls as a primary component of an ecosystem-based approach to managing the fisheries of the Barents Sea.
Influence of Natural Convection and Thermal Radiation Multi-Component Transport in MOCVD Reactors
NASA Technical Reports Server (NTRS)
Lowry, S.; Krishnan, A.; Clark, I.
1999-01-01
The influence of Grashof and Reynolds numbers in Metal Organic Chemical Vapor Deposition (MOCVD) reactors is being investigated in a combined empirical/numerical study. As part of that research, the deposition of indium phosphide in an MOCVD reactor is modeled using the computational code CFD-ACE. The model includes the effects of convection, conduction, and radiation, as well as multi-component diffusion and multi-step surface/gas-phase chemistry. The predictions are compared with experimental data for a commercial reactor and analyzed with respect to model accuracy.
Theoretical models for coronary vascular biomechanics: Progress & challenges
Waters, Sarah L.; Alastruey, Jordi; Beard, Daniel A.; Bovendeerd, Peter H.M.; Davies, Peter F.; Jayaraman, Girija; Jensen, Oliver E.; Lee, Jack; Parker, Kim H.; Popel, Aleksander S.; Secomb, Timothy W.; Siebes, Maria; Sherwin, Spencer J.; Shipley, Rebecca J.; Smith, Nicolas P.; van de Vosse, Frans N.
2013-01-01
A key aim of the cardiac Physiome Project is to develop theoretical models to simulate the functional behaviour of the heart under physiological and pathophysiological conditions. Heart function is critically dependent on the delivery of an adequate blood supply to the myocardium via the coronary vasculature. Key to this critical function of the coronary vasculature is system dynamics that emerge via the interactions of the numerous constituent components at a range of spatial and temporal scales. Here, we focus on several components for which theoretical approaches can be applied, including vascular structure and mechanics, blood flow and mass transport, flow regulation, angiogenesis and vascular remodelling, and vascular cellular mechanics. For each component, we summarise the current state of the art in model development, and discuss areas requiring further research. We highlight the major challenges associated with integrating the component models to develop a computational tool that can ultimately be used to simulate the responses of the coronary vascular system to changing demands and to diseases and therapies. PMID:21040741
NASA Astrophysics Data System (ADS)
Bai, Hao; Zhang, Xi-wen
2017-06-01
When Chinese is learned as a second language, its characters are taught step by step, from strokes to components and radicals, together with their complex relations. Chinese characters written in digital ink by non-native writers are often seriously deformed, so global recognition approaches perform poorly. A progressive, bottom-up approach based on hierarchical models is therefore presented. The hierarchical information includes strokes and hierarchical components, and each Chinese character is modeled as a hierarchical tree. Strokes in a Chinese character in digital ink are classified with Hidden Markov Models and concatenated into a stroke symbol sequence. The structure of components in the ink character is then extracted. According to the extraction result and the stroke symbol sequence, candidate characters are traversed and scored, and the recognition candidates are listed in descending order of score. The method is validated by testing 19,815 samples of handwritten Chinese characters written by foreign students.
Parent Education within a Relationship-Focused Model.
ERIC Educational Resources Information Center
Kelly, Jean F.; Barnard, Kathryn E.
1999-01-01
This response to Mahoney et al. (EC 623 392) agrees that parent education should be an important component of early intervention programs and proposes that parent education be included in a relationship-focused early-intervention model. This model is illustrated, explained, and compared with the previous child-focused model and the current…
A Bayesian Semiparametric Latent Variable Model for Mixed Responses
ERIC Educational Resources Information Center
Fahrmeir, Ludwig; Raach, Alexander
2007-01-01
In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…
Vector wind profile gust model
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1981-01-01
To enable development of a vector wind gust model suitable for orbital flight test operations and trade studies, hypotheses concerning the distributions of gust component variables were verified. Methods are presented for verifying the hypotheses that observed gust variables, including gust component magnitude, gust length, u range, and L range, are gamma distributed. The observed gust modulus is drawn from a bivariate gamma distribution that can be approximated with a Weibull distribution, and the zonal and meridional gust components are bivariate gamma distributed. An analytical method for testing for bivariate gamma distributed variables is presented. Two distributions for gust modulus are described, and the results of extensive hypothesis testing of one of the distributions are presented. The validity of the gamma distribution for representing gust component variables is established.
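A hedged sketch of the kind of distribution fitting such hypothesis tests start from (synthetic data, not the wind measurements): a gamma distribution fitted by the method of moments, using mean = kθ and variance = kθ².

```python
import numpy as np

# Synthetic "gust magnitude" sample; shape k_true and scale theta_true are
# assumed values for illustration only.
rng = np.random.default_rng(2)
k_true, theta_true = 2.5, 1.8
sample = rng.gamma(k_true, theta_true, size=100_000)

# Method-of-moments estimates: mean = k*theta, variance = k*theta**2
m, v = sample.mean(), sample.var()
theta_hat = v / m                         # scale estimate
k_hat = m / theta_hat                     # shape estimate
print(round(float(k_hat), 2), round(float(theta_hat), 2))
```

A goodness-of-fit test (e.g. chi-square or Kolmogorov-Smirnov against the fitted gamma) would then decide whether the gamma hypothesis is tenable for the observed variable.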
Hay, L.; Knapp, L.
1996-01-01
Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.
NASA Astrophysics Data System (ADS)
Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.
2015-12-01
Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
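The early-detection idea can be sketched with a much simpler local-level dynamic linear model than the paper's trend + Fourier seasonal model (all numbers below are illustrative, not the Amhara case data): a Kalman filter produces a one-step-ahead prediction envelope, and weeks falling outside it are flagged.

```python
import numpy as np

# Minimal local-level DLM: state = latent weekly incidence level.
# q = state-evolution variance, r = observation variance, z = envelope width
# in forecast standard deviations.  All parameter values are assumptions.
def detect_outbreaks(y, q=1.0, r=25.0, z=4.0):
    m, c = y[0], r                        # filtered state mean and variance
    flagged = []
    for t in range(1, len(y)):
        f, qf = m, c + q + r              # one-step-ahead forecast mean/variance
        if abs(y[t] - f) > z * np.sqrt(qf):
            flagged.append(t)             # outside the prediction envelope
        k = (c + q) / qf                  # Kalman gain
        m = m + k * (y[t] - f)            # filtered state update
        c = (1 - k) * (c + q)
    return flagged

rng = np.random.default_rng(3)
cases = 50 + rng.normal(0, 5, 52)         # a year of weekly baseline counts
cases[40:44] += 60                        # injected "outbreak" weeks
print(detect_outbreaks(cases))
```

The early-warning extension described in the abstract would add lagged environmental covariates (TRMM precipitation, MODIS LST and spectral indices) as a regression component of the same state-space model.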
Electric-hybrid-vehicle simulation
NASA Astrophysics Data System (ADS)
Pasma, D. C.
The simulation of electric hybrid vehicles is to be performed using experimental data to model propulsion system components. The performance of an existing ac propulsion system will be used as the baseline for comparative purposes. Hybrid components to be evaluated include electrically and mechanically driven flywheels, and an elastomeric regenerative braking system.
Cut Costs with Thin Client Computing.
ERIC Educational Resources Information Center
Hartley, Patrick H.
2001-01-01
Discusses how school districts can considerably increase the number of administrative computers in their districts without a corresponding increase in costs by using the "Thin Client" component of the Total Cost of Ownership (TCO) model. TCO and Thin Client are described, including the latter's software and hardware components. An example of a…
The Comprehensive Career Education System: System Administrators Component K-12.
ERIC Educational Resources Information Center
Educational Properties Inc., Irvine, CA.
Using the example of a Career Education Model developed by the Orange County, California Consortium, the document provides guidelines for setting up career education programs in local educational agencies. Component levels, a definition of career education, and Consortium program background are discussed. Subsequent chapters include: Program…
Abby, Sophie S.; Néron, Bertrand; Ménager, Hervé; Touchon, Marie; Rocha, Eduardo P. C.
2014-01-01
Motivation Biologists often wish to use their knowledge of a few experimental models of a given molecular system to identify homologs in genomic data. We developed a generic tool for this purpose. Results Macromolecular System Finder (MacSyFinder) provides a flexible framework to model the properties of molecular systems (cellular machinery or pathways), including their components, evolutionary associations with other systems, and genetic architecture. Modelled features also include functional analogs and the multiple uses of the same component by different systems. Models are used to search for molecular systems in complete genomes or in unstructured data such as metagenomes. The components of the systems are searched for by sequence similarity using Hidden Markov model (HMM) protein profiles. The assignment of hits to a given system is decided based on compliance with the content and organization of the system model. A graphical interface, MacSyView, facilitates the analysis of the results by showing overviews of component content and genomic context. To exemplify the use of MacSyFinder we built models to detect and classify CRISPR-Cas systems following a previously established classification. We show that MacSyFinder makes it easy to define an accurate “Cas-finder” using publicly available protein profiles. Availability and Implementation MacSyFinder is a standalone application implemented in Python. It requires Python 2.7, Hmmer and makeblastdb (version 2.2.28 or higher). It is freely available with its source code under a GPLv3 license at https://github.com/gem-pasteur/macsyfinder. It is compatible with all platforms supporting Python and Hmmer/makeblastdb. The “Cas-finder” (models and HMM profiles) is distributed as a compressed tarball archive as Supporting Information. PMID:25330359
Modelling of the Thermo-Physical and Physical Properties for Solidification of Al-Alloys
NASA Astrophysics Data System (ADS)
Saunders, N.; Li, X.; Miodownik, A. P.; Schillé, J.-P.
The thermo-physical and physical properties of the liquid and solid phases are critical components in casting simulations. Such properties include the fraction solid transformed, enthalpy release, thermal conductivity, volume, and density, all as a function of temperature. Due to the difficulty of experimentally determining such properties at solidification temperatures, little information exists for multi-component alloys. As part of the development of a new computer program for modelling materials properties (JMatPro), extensive work has been carried out on the development of sound, physically based models for these properties. Wide-ranging results will be presented for Al-based alloys, including more detailed information concerning the density change of the liquid that intrinsically occurs during solidification due to its change in composition.
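For intuition about one of the properties listed above, here is the classic Scheil relation for fraction solid versus temperature in a binary alloy with a linear liquidus, an illustrative textbook model, not JMatPro's multi-component calculation, and all parameter values below are assumed.

```python
import numpy as np

# Scheil solidification sketch: assumes no diffusion in the solid, complete
# mixing in the liquid, and a linear liquidus.
# T_m = pure-solvent melting point (K), T_liq = alloy liquidus (K),
# k = partition coefficient -- all hypothetical values.
def scheil_fraction_solid(T, T_m=933.0, T_liq=880.0, k=0.14):
    f_liquid = ((T_m - T) / (T_m - T_liq)) ** (1.0 / (k - 1.0))
    return 1.0 - f_liquid

T = np.array([880.0, 860.0, 840.0, 820.0])    # cooling through the mushy zone
print(np.round(scheil_fraction_solid(T), 3))  # fraction solid rises from 0
```

Coupling such a fraction-solid curve with composition-dependent density models is what yields the liquid density change during solidification mentioned in the abstract.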
Free-Piston Stirling Convertor Controller Development at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Regan, Timothy
2004-01-01
The free-piston Stirling convertor end-to-end modeling effort at NASA Glenn Research Center (GRC) has produced a software-based test bed in which free-piston Stirling convertors can be simulated and evaluated. The simulation model includes all the components of the convertor: the Stirling cycle engine, linear alternator, controller, and load. This paper is concerned with controllers and discusses three that have been studied using this model. Case motion has recently been added to the model so that the effects of differences between convertor components can be simulated and ameliorative control engineering techniques can be developed. One concern when applying a system composed of interconnected mass-spring-damper components is preventing operation in any but the intended mode: the design mode is the only desired mode of operation, but all other modes are considered in controller design.
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) induced in space propulsion system components representative of Space Shuttle Main Engine (SSME) parts, such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for dynamic, acoustic, high-pressure, and high-rotational-speed load simulation using statistically varying coefficients. The coefficients are determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is implemented in a CLS computer code. Applications of the code to various components, in conjunction with PSAM (Probabilistic Structural Analysis Method), to perform probabilistic load evaluations and life predictions are also described to illustrate the effectiveness of the coupled-model approach.
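A minimal sketch of the coupled-model idea (illustrative coefficients and load shape, not the CLS code): a deterministic load history whose coefficients vary statistically, sampled by Monte Carlo to yield a composite load spectrum.

```python
import numpy as np

# Hypothetical composite load: a steady (pressure-like) term plus a dynamic
# (vibration-like) 50 Hz term, with statistically varying coefficients.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200, endpoint=False)   # 1 s of load history

n = 5000                                         # Monte Carlo samples
a = rng.normal(1.0, 0.10, (n, 1))                # steady coefficient
b = rng.normal(0.3, 0.05, (n, 1))                # dynamic coefficient
loads = a + b * np.sin(2 * np.pi * 50.0 * t)     # n sampled load histories

peak = loads.max(axis=1)                         # per-history peak load
print(round(float(peak.mean()), 2), round(float(np.percentile(peak, 99)), 2))
```

The resulting peak-load distribution is the kind of probabilistic input a structural analysis method such as PSAM would consume for life prediction.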
At the request of the US EPA Oil Program Center, ERD is developing an oil spill model that focuses on fate and transport of oil components under various response scenarios. This model includes various simulation options, including the use of chemical dispersing agents on oil sli...
NASA Technical Reports Server (NTRS)
Drysdale, Alan; Thomas, Mark; Fresa, Mark; Wheeler, Ray
1992-01-01
Controlled Ecological Life Support System (CELSS) technology is critical to the Space Exploration Initiative. NASA's Kennedy Space Center has been performing CELSS research for several years, developing data related to CELSS design. We have developed OCAM (Object-oriented CELSS Analysis and Modeling), a CELSS modeling tool, and have used this tool to evaluate CELSS concepts, using this data. In using OCAM, a CELSS is broken down into components, and each component is modeled as a combination of containers, converters, and gates which store, process, and exchange carbon, hydrogen, and oxygen on a daily basis. Multiple crops and plant types can be simulated. Resource recovery options modeled include combustion, leaching, enzyme treatment, aerobic or anaerobic digestion, and mushroom and fish growth. Results include printouts and time-history graphs of total system mass, biomass, carbon dioxide, and oxygen quantities; energy consumption; and manpower requirements. The contributions of mass, energy, and manpower to system cost have been analyzed to compare configurations and determine appropriate research directions.
United States Air Force Research Initiation Program for 1987. Volume 1
1989-04-01
complexity for analyzing such models depends upon the repair or replacement time distributions, the repair policy for damaged components and a... distributions, repair policy for various components and a number of other factors. Problems of interest for such models include the determinations of (a... Thus, some more assumption is needed as to the order in which repair is to be made when more than one component is damaged. We will adopt a policy
An Elastic Model of Blebbing in Nuclear Lamin Meshworks
NASA Astrophysics Data System (ADS)
Funkhouser, Chloe; Sknepnek, Rastko; Shimi, Takeshi; Goldman, Anne; Goldman, Robert; Olvera de La Cruz, Monica
2013-03-01
A two-component continuum elastic model is introduced to analyze a nuclear lamin meshwork, a structural element of the lamina of the nuclear envelope. The main component of the lamina is a meshwork of lamin protein filaments providing mechanical support to the nucleus and also playing a role in gene expression. Abnormalities in nuclear shape are associated with a variety of pathologies, including some forms of cancer and Hutchinson-Gilford progeria syndrome, and are often characterized by protruding structures termed nuclear blebs. Nuclear blebs are rich in A-type lamins and may be related to pathological gene expression. We apply the two-dimensional elastic shell model to determine which characteristics of the meshwork could be responsible for blebbing, including heterogeneities in the meshwork thickness and mesh size. We find that if one component of the lamin meshwork, rich in A-type lamins, has a tendency to form a larger mesh size than that rich in B-type lamins, this is sufficient to cause segregation of the lamin components and also to form blebs rich in A-type lamins. The model produces structures with comparable morphologies and mesh size distributions as the lamin meshworks of real, pathological nuclei. Funded by US DoE Award DEFG02-08ER46539 and by the DDR&E and AFOSR under Award FA9550-10-1-0167; simulations performed on NU Quest cluster
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...
2016-08-22
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
A series of interdisciplinary modeling and analysis techniques that were specialized to address three specific hot section components is presented. These techniques will incorporate data as well as theoretical methods from many diverse areas including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air cooled turbine blades, and air cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress and strain histories throughout a complete flight mission.
Kent, Shawn; Wanzek, Jeanne; Petscher, Yaacov; Al Otaiba, Stephanie; Kim, Young-Suk
2013-01-01
In the present study, we examined the influence of kindergarten component skills on writing outcomes, both concurrently and longitudinally to first grade. Using data from 265 students, we investigated a model of writing development including attention regulation along with students’ reading, spelling, handwriting fluency, and oral language component skills. Results from structural equation modeling demonstrated that a model including attention was better fitting than a model with only language and literacy factors. Attention, a higher-order literacy factor related to reading and spelling proficiency, and automaticity in letter-writing were uniquely and positively related to compositional fluency in kindergarten. Attention and the higher-order literacy factor were predictive of both composition quality and fluency in first grade, while oral language showed unique relations with first-grade writing quality. Implications for writing development and instruction are discussed. PMID:25132722
Material Model Evaluation of a Composite Honeycomb Energy Absorber
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.
2012-01-01
A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., including vehicle simulations using industry standard model (need to add name and location of this open.... All such information and data must include assumptions made in their preparation and the range of... any product (vehicle or component) to be produced by or through the project, including relevant data...
Forecast of long term coal supply and mining conditions: Model documentation and results
NASA Technical Reports Server (NTRS)
1980-01-01
A coal industry model was developed to support the Jet Propulsion Laboratory in its investigation of advanced underground coal extraction systems. The model documentation includes the programming for the coal mining cost models and an accompanying users' manual, and a guide to reading model output. The methodology used in assembling the transportation, demand, and coal reserve components of the model is also described. Results presented for 1986 and 2000 include projections of coal production patterns and marginal prices, differentiated by coal sulfur content.
NASA Technical Reports Server (NTRS)
DiStefano, III, Frank James (Inventor); Wobick, Craig A. (Inventor); Chapman, Kirt Auldwin (Inventor); McCloud, Peter L. (Inventor)
2014-01-01
A thermal fluid system modeler including a plurality of individual components. A solution vector is configured and ordered as a function of one or more inlet dependencies of the plurality of individual components. A fluid flow simulator simulates thermal energy being communicated with the flowing fluid and between first and second components of the plurality of individual components. The simulation extends from an initial time to a later time step and bounds heat transfer to be substantially between the flowing fluid, walls of tubes formed in each of the individual components of the plurality, and between adjacent tubes. Component parameters of the solution vector are updated with simulation results for each of the plurality of individual components of the simulation.
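Ordering a solution vector "as a function of inlet dependencies" amounts to a topological sort of the component graph, so each component is solved only after everything feeding its inlets. A minimal sketch under that reading (names are illustrative, not from the patent):

```python
from collections import deque

def order_by_inlet_dependencies(components, inlets):
    """inlets maps a component to the upstream components feeding it.
    Returns the components ordered so every upstream one is solved first."""
    indegree = {c: len(inlets.get(c, [])) for c in components}
    downstream = {c: [] for c in components}
    for c, ups in inlets.items():
        for u in ups:
            downstream[u].append(c)
    ready = deque(c for c in components if indegree[c] == 0)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for d in downstream[c]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(components):
        raise ValueError("flow loop detected; must iterate instead")
    return order

order = order_by_inlet_dependencies(
    ["pump", "heater", "radiator"],
    {"heater": ["pump"], "radiator": ["heater"]})
print(order)  # ['pump', 'heater', 'radiator']
```

Recirculating loops have no such ordering, which is why the error branch matters for real thermal fluid networks.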
Jet Noise Physics and Modeling Using First-principles Simulations
NASA Technical Reports Server (NTRS)
Freund, Jonathan B.
2003-01-01
An extensive analysis of our jet DNS database has provided for the first time the complex correlations that are the core of many statistical jet noise models, including MGBK. We have also for the first time explicitly computed the noise from different components of a commonly used noise source as proposed in many modeling approaches. Key findings are: (1) While two-point (space and time) velocity statistics are well-fitted by decaying exponentials, even for our low-Reynolds-number jet, spatially integrated fourth-order space/retarded-time correlations, which constitute the noise "source" in MGBK, are instead well-fitted by Gaussians. The width of these Gaussians depends (by a factor of 2) on which components are considered. This is counter to current modeling practice, (2) A standard decomposition of the Lighthill source is shown by direct evaluation to be somewhat artificial since the noise from these nominally separate components is in fact highly correlated. We anticipate that the same will be the case for the Lilley source, and (3) The far-field sound is computed in a way that explicitly includes all quadrupole cancellations, yet evaluating the Lighthill integral for only a small part of the jet yields a far-field noise far louder than that from the whole jet due to missing nonquadrupole cancellations. Details of this study are discussed in a draft of a paper included as appendix A.
Deformable known component model-based reconstruction for coronary CT angiography
NASA Astrophysics Data System (ADS)
Zhang, X.; Tilley, S.; Xu, S.; Mathews, A.; McVeigh, E. R.; Stayman, J. W.
2017-03-01
Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component where the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT testbench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to the traditional FBP reconstruction and an interpolation-based metal correction method (FBP-MAR). Results: Metal artifacts presented in standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm-1 with STF-dKCR from 0.0166 mm-1 with FBP and from 0.0301 mm-1 with FBP-MAR, much closer to the expected 0.0414 mm-1. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.
Discrete event simulation tool for analysis of qualitative models of continuous processing systems
NASA Technical Reports Server (NTRS)
Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)
1990-01-01
An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using developed discrete event techniques. Conveniently, the tool is organized in four modules: library design module, model construction module, simulation module, and experimentation and analysis. The library design module supports the building of library knowledge including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events in a manner that includes selective inherency of characteristics through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics developments and includes the ability of log file comparisons.
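The core mechanics described for the simulation module, scheduling delayed effects and executing events until the queue empties, can be sketched generically with the standard-library heapq. This is an illustration of the discrete-event schema, not the patented tool's code; all names are hypothetical.

```python
import heapq

class Simulator:
    """Minimal discrete-event core: pop events in time order until empty."""
    def __init__(self):
        self.queue = []   # entries are (time, seq, action)
        self.seq = 0      # tie-breaker keeps same-time events FIFO
        self.now = 0.0

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.now + delay, self.seq, action))
        self.seq += 1

    def run(self):
        while self.queue:          # run until the event queue is emptied
            self.now, _, action = heapq.heappop(self.queue)
            action(self)

log = []

def sensor_reads(sim):
    log.append(("sensor reads", sim.now))
    sim.schedule(1.5, alarm)       # continuous behavior modeled as a delay

def alarm(sim):
    log.append(("alarm", sim.now))

def valve_opens(sim):
    log.append(("valve opens", sim.now))

sim = Simulator()
sim.schedule(2.0, valve_opens)
sim.schedule(1.0, sensor_reads)
sim.run()
print(log)  # events fire in time order: t = 1.0, 2.0, 2.5
```

Invocation statements, effect statements, and time delays in the tool map naturally onto `schedule` calls whose actions mutate state and schedule further events.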
Regulatory principles and experimental approaches to the circadian control of starch turnover
Seaton, Daniel D.; Ebenhöh, Oliver; Millar, Andrew J.; Pokhilko, Alexandra
2014-01-01
In many plants, starch is synthesized during the day and degraded during the night to avoid carbohydrate starvation in darkness. The circadian clock participates in a dynamic adjustment of starch turnover to changing environmental conditions through unknown mechanisms. We used mathematical modelling to explore the possible scenarios for the control of starch turnover by the molecular components of the plant circadian clock. Several classes of plausible models were capable of describing the starch dynamics observed in a range of clock mutant plants and light conditions, including discriminating circadian protocols. Three example models of these classes are studied in detail, differing in several important ways. First, the clock components directly responsible for regulating starch degradation are different in each model. Second, the intermediate species in the pathway may play either an activating or inhibiting role on starch degradation. Third, the system may include a light-dependent interaction between the clock and downstream processes. Finally, the clock may be involved in the regulation of starch synthesis. We discuss the differences among the models’ predictions for diel starch profiles and the properties of the circadian regulators. These suggest additional experiments to elucidate the pathway structure, avoid confounding results and identify the molecular components involved. PMID:24335560
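One widely discussed scenario of this kind, clock-timed degradation that divides the remaining starch store by the time left until anticipated dawn, can be illustrated with a minimal simulation. The parameter values below are arbitrary, and the model is a generic caricature rather than any of the paper's three specific models.

```python
def simulate_starch(day_len=12.0, night_len=12.0, synth_rate=1.0, dt=0.01):
    """Light: constant synthesis. Dark: the clock sets the degradation
    rate to (remaining starch) / (time left until anticipated dawn)."""
    period = day_len + night_len
    starch, trace = 0.0, []
    for n in range(int(round(period / dt))):
        t = n * dt
        if t < day_len:
            starch += synth_rate * dt
        else:
            starch -= starch / (period - t) * dt
        trace.append(starch)
    return trace

trace = simulate_starch()
# Starch peaks at dusk and declines almost linearly, reaching roughly
# zero exactly at dawn, the anticipatory behavior seen in wild-type plants.
```

Because the rate is starch divided by time-to-dawn, the nighttime profile is linear regardless of how much starch accumulated by dusk, which is one diel signature the models are tested against.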
Impact of multilayered compression bandages on sub-bandage interface pressure: a model.
Al Khaburi, J; Nelson, E A; Hutchinson, J; Dehghani-Sanij, A A
2011-03-01
Multi-component medical compression bandages are widely used to treat venous leg ulcers. The sub-bandage interface pressures induced by individual components of multi-component compression bandage systems are not always simply additive. Current models of compression bandage performance do not take account of the increase in leg circumference as each bandage is applied, and this may account for the difference between predicted and actual pressures. Objective: To calculate the interface pressure when a multi-component compression bandage system is applied to a leg. Method: Thick-wall cylinder theory was used to estimate the sub-bandage pressure over the leg when a multi-component compression bandage is applied; a mathematical model was developed on this basis to include bandage thickness in the calculation of the interface pressure in multi-component compression systems. Results: In multi-component compression systems, the interface pressure corresponds to the sum of the pressures applied by the individual bandage layers. However, the change in limb diameter caused by additional bandage layers should be considered in the calculation. Adding the interface pressures produced by single components without considering bandage thickness will result in an overestimate of the overall interface pressure produced by the multi-component system. At the ankle (circumference 25 cm) this error can be 19.2% or more for four-component bandaging systems. Conclusion: Bandage thickness should be considered when calculating the pressure applied using multi-component compression systems.
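The layer-by-layer correction can be sketched with a simpler thin-wall (Laplace's law) approximation of the effect the paper captures with thick-wall cylinder theory: each applied layer enlarges the effective limb radius by one bandage thickness. The tension and thickness values below are illustrative, so the resulting overestimate differs from the paper's 19.2% figure.

```python
import math

def interface_pressure(tension_n_per_m, circumference_m, thickness_m, n_layers):
    """Sum per-layer Laplace pressures (P = T / r), growing the effective
    limb radius by one bandage thickness per applied layer."""
    radius = circumference_m / (2 * math.pi)
    total_pa = 0.0
    for _ in range(n_layers):
        total_pa += tension_n_per_m / radius
        radius += thickness_m
    return total_pa

# Naive addition ignores the growing circumference and so overestimates.
naive = 4 * 10.0 / (0.25 / (2 * math.pi))          # 4 layers, 10 N/m, 25 cm ankle
corrected = interface_pressure(10.0, 0.25, 0.003, 4)  # 3 mm per layer (assumed)
overestimate_pct = 100 * (naive - corrected) / corrected
print(f"{overestimate_pct:.1f}% overestimate")
```

The error grows with the number of layers and with bandage thickness relative to limb radius, which is why it is largest at the narrow ankle.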
Python scripting in the nengo simulator.
Stewart, Terrence C; Tripp, Bryan; Eliasmith, Chris
2009-01-01
Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
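At its core, the NEF step of determining connection weights from a desired transformation reduces to a regularized least-squares fit of linear decoders against neuron tuning curves. The numpy sketch below illustrates that computation only; it is not Nengo's scripting API, and the tuning-curve parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_eval = 50, 200
x = np.linspace(-1.0, 1.0, n_eval)          # represented scalar variable

# Random rectified-linear tuning curves: a_i(x) = max(0, gain_i*e_i*x + bias_i)
gains = rng.uniform(0.5, 2.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)
biases = rng.uniform(0.1, 1.0, n_neurons)
A = np.maximum(0.0, np.outer(x, gains * encoders) + biases)  # (n_eval, n_neurons)

# Solve for decoders d with A @ d ~= f(x); the ridge term models neural noise.
target = x                                   # decode the identity function
reg = 0.1 * A.max()
G = A.T @ A + reg**2 * n_eval * np.eye(n_neurons)
d = np.linalg.solve(G, A.T @ target)

x_hat = A @ d
rmse = np.sqrt(np.mean((x_hat - x) ** 2))
print(f"decoding RMSE: {rmse:.3f}")
```

Connection weights between two populations then follow as an outer product of these decoders with the downstream encoders, which is what makes large circuit models tractable to specify.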
Mechanical Analysis of W78/88-1 Life Extension Program Warhead Design Options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Nathan
2014-09-01
Life Extension Program (LEP) is a program to repair/replace components of nuclear weapons to ensure the ability to meet military requirements. The W78/88-1 LEP encompasses the modernization of two major nuclear weapon reentry systems into an interoperable warhead. Several design concepts exist to provide different options for robust safety and security themes, maximum non-nuclear commonality, and cost. Simulation is one capability used to evaluate the mechanical performance of the designs in various operational environments, plan for system and component qualification efforts, and provide insight into the survivability of the warhead in environments that are not currently testable. The simulation efforts use several Sandia-developed tools through the Advanced Simulation and Computing program, including Cubit for mesh generation, the DART Model Manager, SIERRA codes running on the HPC TLCC2 platforms, DAKOTA, and ParaView. Several programmatic objectives were met using the simulation capability including: (1) providing early environmental specification estimates that may be used by component designers to understand the severity of the loads their components will need to survive, (2) providing guidance for load levels and configurations for subassembly tests intended to represent operational environments, and (3) recommending design options including modified geometry and material properties. These objectives were accomplished through regular interactions with component, system, and test engineers while using the laboratory's computational infrastructure to effectively perform ensembles of simulations. Because NNSA has decided to defer the LEP program, simulation results are being documented and models are being archived for future reference. However, some advanced and exploratory efforts will continue to mature key technologies, using the results from these and ongoing simulations for design insights, test planning, and model validation.
Design of a nickel-hydrogen battery simulator for the NASA EOS testbed
NASA Technical Reports Server (NTRS)
Gur, Zvi; Mang, Xuesi; Patil, Ashok R.; Sable, Dan M.; Cho, Bo H.; Lee, Fred C.
1992-01-01
The hardware and software design of a nickel-hydrogen (Ni-H2) battery simulator (BS) with application to the NASA Earth Observation System (EOS) satellite is presented. The battery simulator is developed as a part of a complete testbed for the EOS satellite power system. The battery simulator involves both hardware and software components. The hardware component includes the capability of sourcing and sinking current at a constant programmable voltage. The software component includes the capability of monitoring the battery's ampere-hours (Ah) and programming the battery voltage according to an empirical model of the nickel-hydrogen battery stored in a computer.
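The software side described, monitoring ampere-hours and programming the voltage from an empirical battery model, is essentially coulomb counting plus a table lookup. A minimal sketch follows; the capacity, voltage table, and class names are invented for illustration and are not the EOS testbed's actual Ni-H2 characterization.

```python
import bisect

SOC_POINTS = [0.0, 0.2, 0.5, 0.8, 1.0]    # state of charge (fraction)
VOLTS = [1.15, 1.22, 1.26, 1.32, 1.40]    # per-cell volts (illustrative)

def voltage(soc):
    """Piecewise-linear interpolation of the empirical voltage curve."""
    i = min(bisect.bisect_right(SOC_POINTS, soc), len(SOC_POINTS) - 1)
    x0, x1 = SOC_POINTS[i - 1], SOC_POINTS[i]
    y0, y1 = VOLTS[i - 1], VOLTS[i]
    return y0 + (y1 - y0) * (soc - x0) / (x1 - x0)

class BatterySim:
    """Coulomb-counting simulator: track Ah, report programmed voltage."""
    def __init__(self, capacity_ah, soc=1.0):
        self.capacity = capacity_ah
        self.ah = soc * capacity_ah

    def step(self, current_a, dt_hours):
        """current_a > 0 discharges (sink), < 0 charges (source)."""
        self.ah = min(max(self.ah - current_a * dt_hours, 0.0), self.capacity)
        return voltage(self.ah / self.capacity)

bat = BatterySim(capacity_ah=50.0)
v = bat.step(current_a=10.0, dt_hours=1.0)   # discharge 10 A for 1 hour
print(bat.ah, v)
```

In the hardware-in-the-loop setting, `step` would read the measured bus current and drive the programmable source/sink to the returned voltage.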
A Teacher Accountability Model for Overcoming Self-Exclusion of Pupils
ERIC Educational Resources Information Center
Jamal, Abu-Hussain; Tilchin, Oleg; Essawi, Mohammad
2015-01-01
Self-exclusion of pupils is one of the prominent challenges of education. In this paper we propose the TERA model, which shapes the process of creating formative accountability of teachers to overcome the self-exclusion of pupils. Development of the model includes elaboration and integration of interconnected model components. The TERA model…
The Models-3 Community Multi-scale Air Quality (CMAQ) model, first released by the USEPA in 1999 (Byun and Ching. 1999), continues to be developed and evaluated. The principal components of the CMAQ system include a comprehensive emission processor known as the Sparse Matrix O...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.
The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the third year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled) concepts, including the use of multiple coupled reactors at a single site. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor SMR models, ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface (ICHMI) technical area, and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the Modular Dynamic SIMulation (MoDSIM) tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the program, (2) developing a library of baseline component modules that can be assembled into full plant models using existing geometry and thermal-hydraulic data, (3) defining modeling conventions for interconnecting component models, and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.
Covariant spectator theory of np scattering: Deuteron quadrupole moment
Gross, Franz
2015-01-26
The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity. However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions to order N3LO.
A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1991-01-01
In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.
Hou, Jiebin; Chen, Wei; Lu, Hongtao; Zhao, Hongxia; Gao, Songyan; Liu, Wenrui; Dong, Xin; Guo, Zhiyong
2018-01-01
Purpose: As a Chinese medicinal herb, Desmodium styracifolium (Osb.) Merr (DS) has been applied clinically to alleviate crystal-induced kidney injuries, but its effective components and their specific mechanisms still need further exploration. This research first combined the methods of network pharmacology and proteomics to explore the therapeutic protein targets of DS on oxalate crystal-induced kidney injuries to provide a reference for relevant clinical use. Methods: Oxalate-induced kidney injury mouse, rat, and HK-2 cell models were established. Proteins differentially expressed between the oxalate and control groups were respectively screened using iTRAQ combined with MALDI-TOF-MS. The common differential proteins of the three models were further analyzed by molecular docking with DS compounds to acquire differential targets. The inverse docking targets of DS were predicted through the platform of PharmMapper. The protein-protein interaction (PPI) relationship between the inverse docking targets and the differential proteins was established by STRING. Potential targets were further validated by western blot based on a mouse model with DS treatment. The effects of constituent compounds, including luteolin, apigenin, and genistein, were investigated based on an oxalate-stimulated HK-2 cell model. Results: Thirty-six common differentially expressed proteins were identified by proteomic analysis. According to previous research, the 3D structures of 15 major constituents of DS were acquired. Nineteen differential targets, including cathepsin D (CTSD), were found using molecular docking, and the component-differential target network was established. Inverse-docking targets including p38 MAPK and CDK-2 were found, and the network of component-reverse docking target was established. Through PPI analysis, 17 inverse-docking targets were linked to differential proteins. The combined network of component-inverse docking target-differential proteins was then constructed. 
The expressions of CTSD, p-p38 MAPK, and p-CDK-2 were shown to be increased in the oxalate group and decreased in kidney tissue by the DS treatment. Luteolin, apigenin, and genistein could protect oxalate-stimulated tubular cells as active components of DS. Conclusion: The potential targets including the CTSD, p38 MAPK, and CDK2 of DS in oxalate-induced kidney injuries and the active components (luteolin, apigenin, and genistein) of DS were successfully identified in this study by combining proteomics analysis, network pharmacology prediction, and experimental validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiengarten, T.; Fichtner, H.; Kleimann, J.
2016-12-10
We extend a two-component model for the evolution of fluctuations in the solar wind plasma so that it is fully three-dimensional (3D) and also coupled self-consistently to the large-scale magnetohydrodynamic equations describing the background solar wind. The two classes of fluctuations considered are a high-frequency parallel-propagating wave-like piece and a low-frequency quasi-two-dimensional component. For both components, the nonlinear dynamics is dominated by quasi-perpendicular spectral cascades of energy. Driving of the fluctuations by, for example, velocity shear and pickup ions is included. Numerical solutions to the new model are obtained using the Cronos framework and validated against previous simpler models. Comparing results from the new model with spacecraft measurements, we find improved agreement relative to earlier models that employ prescribed background solar wind fields. Finally, the new results for the wave-like and quasi-two-dimensional fluctuations are used to calculate ab initio diffusion mean free paths and drift length scales for the transport of cosmic rays in the turbulent solar wind.
NASA Astrophysics Data System (ADS)
Griffies, Stephen M.; Danabasoglu, Gokhan; Durack, Paul J.; Adcroft, Alistair J.; Balaji, V.; Böning, Claus W.; Chassignet, Eric P.; Curchitser, Enrique; Deshayes, Julie; Drange, Helge; Fox-Kemper, Baylor; Gleckler, Peter J.; Gregory, Jonathan M.; Haak, Helmuth; Hallberg, Robert W.; Heimbach, Patrick; Hewitt, Helene T.; Holland, David M.; Ilyina, Tatiana; Jungclaus, Johann H.; Komuro, Yoshiki; Krasting, John P.; Large, William G.; Marsland, Simon J.; Masina, Simona; McDougall, Trevor J.; Nurser, A. J. George; Orr, James C.; Pirani, Anna; Qiao, Fangli; Stouffer, Ronald J.; Taylor, Karl E.; Treguier, Anne Marie; Tsujino, Hiroyuki; Uotila, Petteri; Valdivieso, Maria; Wang, Qiang; Winton, Michael; Yeager, Stephen G.
2016-09-01
The Ocean Model Intercomparison Project (OMIP) is an endorsed project in the Coupled Model Intercomparison Project Phase 6 (CMIP6). OMIP addresses CMIP6 science questions, investigating the origins and consequences of systematic model biases. It does so by providing a framework for evaluating (including assessment of systematic biases), understanding, and improving ocean, sea-ice, tracer, and biogeochemical components of climate and earth system models contributing to CMIP6. Among the WCRP Grand Challenges in climate science (GCs), OMIP primarily contributes to the regional sea level change and near-term (climate/decadal) prediction GCs. OMIP provides (a) an experimental protocol for global ocean/sea-ice models run with a prescribed atmospheric forcing; and (b) a protocol for ocean diagnostics to be saved as part of CMIP6. We focus here on the physical component of OMIP, with a companion paper (Orr et al., 2016) detailing methods for the inert chemistry and interactive biogeochemistry. The physical portion of the OMIP experimental protocol follows the interannual Coordinated Ocean-ice Reference Experiments (CORE-II). Since 2009, CORE-I (Normal Year Forcing) and CORE-II (Interannual Forcing) have become the standard methods to evaluate global ocean/sea-ice simulations and to examine mechanisms for forced ocean climate variability. The OMIP diagnostic protocol is relevant for any ocean model component of CMIP6, including the DECK (Diagnostic, Evaluation and Characterization of Klima experiments), historical simulations, FAFMIP (Flux Anomaly Forced MIP), C4MIP (Coupled Carbon Cycle Climate MIP), DAMIP (Detection and Attribution MIP), DCPP (Decadal Climate Prediction Project), ScenarioMIP, HighResMIP (High Resolution MIP), as well as the ocean/sea-ice OMIP simulations.
Aucar, I Agustín; Gomez, Sergio S; Giribet, Claudia G; Aucar, Gustavo A
2016-08-24
One of the most influential articles on obtaining absolute values of NMR magnetic shieldings, σ (which are not directly measurable), from both accurate measurements and theoretical calculations was published long ago by Flygare. His model was shown to break down when heavy atoms are involved. This fact motivated the development of new theories of nuclear spin-rotation (SR) tensors that consider electronic relativistic effects, one of which was published recently by some of us. In this article we take a further step and propose three different models that generalize Flygare's model. All of them are written using four-component relativistic expressions, though the two-component relativistic SO-S term also appears in one. The first clues for these developments came from the relationship between σ and the SR tensors within the two-component relativistic LRESC model. In addition, we introduced a few other well-defined assumptions: (i) relativistic corrections must be included in a way that best reproduces the relationship between the (e-e) term (called "paramagnetic" within the non-relativistic domain) of σ and the equivalent part of the SR tensor; (ii) as in Flygare's rule, the shieldings of free atoms are included to improve accuracy. In the most accurate model, a new term known as spin-orbit due to spin, SO-S (a mechanism in which the spin-Zeeman Hamiltonian replaces the orbital-Zeeman Hamiltonian), is included. We show the results of applying these models to halogen-containing linear molecules.
McSherry, Wilfred
2006-07-01
The aim of this study was to generate a deeper understanding of the factors and forces that may inhibit or advance the concepts of spirituality and spiritual care within both nursing and health care. This manuscript presents a model that emerged from a qualitative study using grounded theory. Implementation and use of this model may assist all health care practitioners and organizations to advance the concepts of spirituality and spiritual care within their own sphere of practice. The model has been termed the principal components model because participants identified six components as being crucial to the advancement of spiritual health care. Grounded theory was used, meaning that data collection and analysis were concurrent. Theoretical sampling was used to develop the emerging theory. These processes, along with open, axial and theoretical coding, led to the identification of a core category and the construction of the principal components model. Fifty-three participants (24 men and 29 women) were recruited and all consented to be interviewed. The sample included nurses (n=24), chaplains (n=7), a social worker (n=1), an occupational therapist (n=1), physiotherapists (n=2), patients (n=14) and the public (n=4). The investigation was conducted in three phases to substantiate the emerging theory and the development of the model. The principal components model contained six components: individuality, inclusivity, integrated, inter/intra-disciplinary, innate and institution. A great deal has been written on the concepts of spirituality and spiritual care. However, rhetoric alone will not remove some of the intrinsic and extrinsic barriers that are inhibiting the advancement of the spiritual dimension in terms of theory and practice.
An awareness of and adherence to the principal components model may assist nurses and health care professionals to engage with and overcome some of the structural, organizational, political and social variables that are impacting upon spiritual care.
Computational model of precision grip in Parkinson's disease: a utility based approach
Gupta, Ankur; Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa
2013-01-01
We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Ingvarsson et al., 1997; Fellows et al., 1998). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force, into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between two fingers involved in PG, and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning with the significant difference that the action selection is performed using utility distribution instead of using purely Value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the PG results from normal and PD patients accurately (Ingvarsson et al., 1997; Fellows et al., 1998). To our knowledge the model is the first model of PG in PD conditions. PMID:24348373
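The risk-based action selection described in this abstract can be illustrated with a minimal sketch: a mean-variance utility (expected value penalized by outcome variability) combined with softmax selection over candidate actions. This is an illustrative stand-in, not the authors' Basal Ganglia implementation; the `kappa` risk weight, `beta` temperature, and the two candidate grip-force actions are hypothetical.

```python
import math
import random

def utility(q_mean, q_var, kappa):
    """Risk-sensitive utility: expected value penalized by outcome variance.
    kappa > 0 gives risk-averse selection; kappa < 0, risk-seeking."""
    return q_mean - kappa * math.sqrt(q_var)

def softmax_select(utilities, beta, rng):
    """Select an action index with probability proportional to exp(beta * U)."""
    m = max(utilities)
    weights = [math.exp(beta * (u - m)) for u in utilities]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# Two hypothetical grip-force actions with equal mean value but different
# outcome variance; a risk-averse agent prefers the low-variance one.
actions = [(1.0, 0.01), (1.0, 1.0)]
utils = [utility(m, v, kappa=0.5) for m, v in actions]
rng = random.Random(0)
choice = softmax_select(utils, beta=5.0, rng=rng)
```

Under a purely value-based scheme the two actions would be indistinguishable; the variance penalty is what lets the model capture the increased grip-force variability reported for the OFF-medication condition.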
ERIC Educational Resources Information Center
Andersen, Barbara L.; Golden-Kreutz, Deanna M.; Emery, Charles F.; Thiel, Debora L.
2009-01-01
Trials testing the efficacy of psychological interventions for cancer patients had their beginnings in the 1970s. Since then, hundreds of trials have found interventions to be generally efficacious. In this article, we describe an intervention grounded in a conceptual model that includes psychological, behavioral, and biological components. It is…
WNDCOM: estimating surface winds in mountainous terrain
Bill C. Ryan
1983-01-01
WNDCOM is a mathematical model for estimating surface winds in mountainous terrain. By following the procedures described, the sheltering and diverting effect of terrain, the individual components of the windflow, and the surface wind in remote mountainous areas can be estimated. Components include the contribution from the synoptic scale pressure gradient, the sea...
A Method to Capture Macroslip at Bolted Interfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, Ronald Neil; Heitman, Lili Anne Akin
2015-10-01
Relative motion at bolted connections can occur under large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. A linear model and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded, and the pinning effect when a bolt shank impacts a through-hole inside diameter is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test the adequacy of the method.
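The stick-slip behavior at the heart of macroslip can be sketched with a one-degree-of-freedom toy model: a mass driven through a spring across a Coulomb frictional interface, sticking while the spring force stays below the static friction limit and sliding against the lower dynamic friction force once it is exceeded. This is a schematic illustration, not the ADAMS joint model; all parameter values are hypothetical.

```python
import math

def simulate_stick_slip(mass, k, mu_s, mu_d, normal_force, base_motion, dt, steps):
    """1-DOF sketch of a bolted joint: a mass dragged through a spring by an
    imposed base motion across a Coulomb frictional interface."""
    x, v = 0.0, 0.0
    slipped = False
    for n in range(steps):
        spring = k * (base_motion(n * dt) - x)
        if abs(v) < 1e-9:  # sticking: friction can balance the spring force
            if abs(spring) <= mu_s * normal_force:
                a = 0.0
            else:          # breakaway: macroslip begins
                slipped = True
                a = (spring - math.copysign(mu_d * normal_force, spring)) / mass
        else:              # sliding: dynamic friction opposes the velocity
            slipped = True
            a = (spring - math.copysign(mu_d * normal_force, v)) / mass
        v += a * dt        # semi-implicit Euler integration
        x += v * dt
    return x, slipped

# A large imposed shear displacement overcomes static friction -> macroslip
x_final, slipped = simulate_stick_slip(mass=1.0, k=100.0, mu_s=0.6, mu_d=0.4,
                                       normal_force=10.0,
                                       base_motion=lambda t: 0.5,
                                       dt=1e-3, steps=5000)
```

With a small imposed displacement the spring force never reaches the static limit and the joint behaves linearly; the energy dissipated during sliding is what reduces the response of components above the connection.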
A Method to Capture Macroslip at Bolted Interfaces [PowerPoint]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopkins, Ronald Neil; Heitman, Lili Anne Akin
2016-01-01
Relative motion at bolted connections can occur under large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. A linear model and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded, and the pinning effect when a bolt shank impacts a through-hole inside diameter is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test the adequacy of the method.
Psychological Empowerment Among Urban Youth: Measurement Model and Associations with Youth Outcomes.
Eisman, Andria B; Zimmerman, Marc A; Kruger, Daniel; Reischl, Thomas M; Miller, Alison L; Franzen, Susan P; Morrel-Samuels, Susan
2016-12-01
Empowerment-based strategies have become a widely used method to address health inequities and promote social change. Few researchers, however, have tested theoretical models of empowerment, including multidimensional, higher-order models. We test empirically a multidimensional, higher-order model of psychological empowerment (PE), guided by Zimmerman's conceptual framework including three components of PE: intrapersonal, interactional, and behavioral. We also investigate whether PE is associated with positive and negative outcomes among youth. The sample included 367 middle school youth aged 11-16 (M = 12.71; SD = 0.91); 60% were female, 32% (n = 117) white youth, 46% (n = 170) African-American youth, and 22% (n = 80) identifying as mixed race, Asian-American, Latino, Native American, or other ethnic/racial group; schools reported 61-75% free/reduced lunch students. Our results indicated that each of the latent factors for the three PE components demonstrates a good fit with the data. Our results also indicated that these components loaded on to a higher-order PE factor (χ² = 32.68; df = 22; p = .07; RMSEA: 0.04; 95% CI: .00, .06; CFI: 0.99). We found that the second-order PE factor was negatively associated with aggressive behavior and positively associated with prosocial engagement. Our results suggest that empowerment-focused programs would benefit from incorporating components addressing how youth think about themselves in relation to their social contexts (intrapersonal), understanding social and material resources needed to achieve specific goals (interactional), and actions taken to influence outcomes (behavioral). Our results also suggest that integrating the three components and promoting PE may help increase the likelihood of positive behaviors (e.g., prosocial involvement); we did not find an association between PE and aggressive behavior. Implications and future directions for empowerment research are discussed. © Society for Community Research and Action 2016.
Detailed Post-Soft Impact Progressive Damage Assessment for Hybrid Structure Jet Engines
NASA Technical Reports Server (NTRS)
Siddens, Aaron; Bayandor, Javid; Celestina, Mark L.
2014-01-01
Currently, certification of engine designs for resistance to bird strike relies on physical tests. Predictive modeling of engine structural damage has mostly been limited to evaluating direct bird impact on individual forward-section components, such as fan blades, within a fixed frame of reference. Such models must be extended to include interactions among engine components under operating conditions to evaluate the full extent of engine damage. This paper presents the results of a study aimed at developing a methodology for evaluating bird strike damage in advanced propulsion systems incorporating hybrid composite/metal structures. The initial degradation and failure of individual fan blades struck by a bird were investigated. Subsequent damage to other fan blades and engine components due to the resulting violent fan assembly vibrations and fragmentation was further evaluated. Various modeling parameters for the bird and engine components were investigated to determine guidelines for accurately capturing initial damage and progressive failure of engine components. A novel hybrid structure modeling approach was then investigated and incorporated into the crashworthiness methodology. Such a tool is invaluable to the process of design, development, and certification of future advanced propulsion systems.
A principal components model of soundscape perception.
Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta
2010-11-01
There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: pleasantness, eventfulness, and familiarity, explaining 50%, 18%, and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
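The analysis pipeline in this abstract (average attribute-scale values subjected to principal components analysis, yielding component scores per excerpt and explained-variance fractions) can be sketched as follows. The rating matrix here is a synthetic stand-in for the real 50-excerpt by 116-scale data, generated from three hypothetical latent dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study design: 50 soundscape excerpts rated on
# 116 attribute scales, driven by 3 latent perceptual dimensions plus noise.
n_excerpts, n_scales = 50, 116
latent = rng.normal(size=(n_excerpts, 3))      # e.g. pleasantness, eventfulness, familiarity
loadings = rng.normal(size=(3, n_scales))
ratings = latent @ loadings + 0.1 * rng.normal(size=(n_excerpts, n_scales))

# Principal components analysis via eigendecomposition of the covariance matrix
centered = ratings - ratings.mean(axis=0)
cov = centered.T @ centered / (n_excerpts - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]              # sort components by variance, descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()            # fraction of variance per component
scores = centered @ eigvecs[:, :3]             # component scores per excerpt
```

In the study, these per-excerpt scores are what get correlated with physical soundscape properties such as dominant sound categories and loudness.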
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10⁻⁷ for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
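The model-comparison step described here, a Bayesian evidence ratio between a model with an extra component of unknown amplitude and a null model without it, can be sketched in one dimension with a toy Gaussian likelihood. This illustrates only the evidence arithmetic (marginalizing over a flat prior on the amplitude, analogous to Gμ), not the wavelet-space analysis itself; the data values, grid, and prior range are all hypothetical.

```python
import math

def log_likelihood(data, model_amplitude, sigma):
    """Toy Gaussian log-likelihood of data given a constant model amplitude."""
    return sum(-0.5 * ((d - model_amplitude) / sigma) ** 2
               - 0.5 * math.log(2 * math.pi * sigma ** 2) for d in data)

def log_evidence_string(data, sigma, g_grid):
    """Marginalize the likelihood over a flat prior on the amplitude,
    using trapezoidal integration on a uniform grid (log-sum-exp for stability)."""
    logls = [log_likelihood(data, g, sigma) for g in g_grid]
    m = max(logls)
    weights = [math.exp(l - m) for l in logls]
    dx = g_grid[1] - g_grid[0]
    integral = (sum(weights) - 0.5 * (weights[0] + weights[-1])) * dx
    prior_width = g_grid[-1] - g_grid[0]
    return m + math.log(integral / prior_width)

# Toy data containing a genuine extra component of amplitude 0.5
data = [0.4, 0.6, 0.5, 0.45, 0.55]
sigma = 0.1
g_grid = [i * 0.01 for i in range(101)]          # flat prior on [0, 1]
log_e_string = log_evidence_string(data, sigma, g_grid)
log_e_null = log_likelihood(data, 0.0, sigma)    # null model: no extra component
log_bayes_factor = log_e_string - log_e_null     # > 0 favours the extra component
```

Note the Occam penalty: the marginal evidence sits below the best-fit likelihood because prior volume wasted on poorly fitting amplitudes is averaged in.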
Integrated smart panel and support structure response
NASA Astrophysics Data System (ADS)
DeGiorgi, Virginia G.
1998-06-01
The performance of smart structures is a complex interaction between active and passive components. Active components, even when not activated, can have an impact on structural performance and, conversely, structural characteristics of passive components can have a measurable impact on active component performance. The present work is an evaluation of the structural characteristics of an active panel designed for acoustic quieting. The support structure is included in the panel design as evaluated. Finite element methods are used to determine the active panel-support structure response. Two conditions are considered: a hollow, unfilled support structure and the same structure filled with a polymer compound. Finite element models were defined so that stiffness values corresponding to the center of individual pistons could be determined. Superelement techniques were used to define mass and stiffness values representative of the combined active and support structure at the center of each piston. Results of interest obtained from the analysis include mode shapes, natural frequencies, and equivalent spring stiffnesses for use in structural response models to represent the support structure. The effects of plate motion on piston performance cannot be obtained from this analysis; however, mass and stiffness matrices for use in an integrated system model to determine piston head velocities can be obtained from this work.
ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.
Earthquake source parameters underpin several aspects of nuclear explosion monitoring. Such aspects are: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for and broaden the applicability of estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by grid search over strike, dip, rake and depth, and the seismic moment, or equivalently the moment magnitude MW, is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
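The key step of letting synthetic seismograms shift in time relative to the data can be sketched as a grid search over integer-sample lags that maximizes the cross-correlation between an observed window and its synthetic. This is a generic illustration, not the CAP code itself; the Gaussian pulse waveforms below are hypothetical stand-ins for a windowed arrival.

```python
import math

def best_time_shift(data, synth, max_shift):
    """Return the integer-sample lag (within +/- max_shift) that maximizes
    the zero-padded cross-correlation between data and synthetic."""
    best_lag, best_cc = 0, float("-inf")
    n = len(data)
    for lag in range(-max_shift, max_shift + 1):
        cc = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                cc += data[i] * synth[j]
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag

# Hypothetical windowed arrival: the synthetic pulse arrives 3 samples late,
# mimicking a path-propagation delay not captured by the 1D velocity model.
data = [math.exp(-0.5 * ((i - 20) / 3.0) ** 2) for i in range(64)]
synth = [math.exp(-0.5 * ((i - 23) / 3.0) ** 2) for i in range(64)]
lag = best_time_shift(data, synth, max_shift=10)
```

In CAP, such a shift is estimated independently per window (Pnl, Rayleigh, Love) so that 1D-model timing errors do not bias the fit to amplitudes.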
Landlab: an Open-Source Python Library for Modeling Earth Surface Dynamics
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Adams, J. M.; Hobley, D. E. J.; Hutton, E.; Nudurupati, S. S.; Istanbulluoglu, E.; Tucker, G. E.
2016-12-01
Landlab is an open-source Python modeling library that enables users to easily build unique models to explore earth surface dynamics. The Landlab library provides a number of tools and functionalities that are common to many earth surface models, thus eliminating the need for a user to recode fundamental model elements each time she explores a new problem. For example, Landlab provides a gridding engine so that a user can build a uniform or nonuniform grid in one line of code. The library has tools for setting boundary conditions, adding data to a grid, and performing basic operations on the data, such as calculating gradients and curvature. The library also includes a number of process components, which are numerical implementations of physical processes. To create a model, a user creates a grid and couples together process components that act on grid variables. The current library has components for modeling a diverse range of processes, from overland flow generation to bedrock river incision, from soil wetting and drying to vegetation growth, succession and death. The code is freely available for download (https://github.com/landlab/landlab) or can be installed as a Python package. Landlab models can also be built and run on Hydroshare (www.hydroshare.org), an online collaborative environment for sharing hydrologic data, models, and code. Tutorials illustrating a wide range of Landlab capabilities such as building a grid, setting boundary conditions, reading in data, plotting, using components and building models are also available (https://github.com/landlab/tutorials). The code is also comprehensively documented both online and natively in Python. In this presentation, we illustrate the diverse capabilities of Landlab. We highlight existing functionality by illustrating outcomes from a range of models built with Landlab - including applications that explore landscape evolution and ecohydrology. 
Finally, we describe the range of resources available for new users.
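The gridding-engine idea described above, a grid object that stores named node fields and supplies common operations such as gradient calculation, can be sketched in plain NumPy. The class below is a deliberately minimal stand-in and does not reproduce Landlab's own API; field name, grid size, and spacing are illustrative.

```python
import numpy as np

class RasterGrid:
    """Minimal stand-in for a gridding engine (NOT Landlab's own API):
    a uniform raster grid holding named node fields and computing
    finite-difference gradients on them."""

    def __init__(self, nrows, ncols, spacing=1.0):
        self.shape = (nrows, ncols)
        self.spacing = spacing
        self.fields = {}

    def add_field(self, name, values):
        """Attach a data field to the grid nodes."""
        self.fields[name] = np.asarray(values, dtype=float).reshape(self.shape)

    def gradient(self, name):
        """Return (d/dx, d/dy) of a node field via central differences."""
        gy, gx = np.gradient(self.fields[name], self.spacing)
        return gx, gy

# A 4 x 5 grid with 2 m node spacing and a plane dipping in x: z = 0.5 * x
grid = RasterGrid(4, 5, spacing=2.0)
xcoord = np.arange(5) * 2.0
grid.add_field("topographic__elevation", np.tile(0.5 * xcoord, (4, 1)))
gx, gy = grid.gradient("topographic__elevation")
```

Process components in this design pattern read and write shared grid fields, which is what lets a user couple, say, overland flow and incision components without either knowing the other's internals.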
V-SUIT Model Validation Using PLSS 1.0 Test Results
NASA Technical Reports Server (NTRS)
Olthoff, Claas
2015-01-01
The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universität München since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB™-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and also dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system-level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low-fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data, with special focus on absolute values during the steady-state phases and dynamic behavior during the transitions between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and those that are sufficiently close to the test results.
Finally, lessons learned from the modelling and validation process are given in combination with implications for the future development of other PLSS models in V-SUIT.
Siegrist, Johannes; Li, Jian
2016-04-19
Mainstream psychological stress theory claims that it is important to include information on people's ways of coping with work stress when assessing the impact of stressful psychosocial work environments on health. Yet, some widely used respective theoretical models focus exclusively on extrinsic factors. The model of effort-reward imbalance (ERI) differs from them as it explicitly combines information on extrinsic and intrinsic factors in studying workers' health. As a growing number of studies used the ERI model in recent past, we conducted a systematic review of available evidence, with a special focus on the distinct contribution of its intrinsic component, the coping pattern "over-commitment", towards explaining health. Moreover, we explore whether the interaction of intrinsic and extrinsic components exceeds the size of effects on health attributable to single components. Results based on 51 reports document an independent explanatory role of "over-commitment" in explaining workers' health in a majority of studies. However, support in favour of the interaction hypothesis is limited and requires further exploration. In conclusion, the findings of this review support the usefulness of a work stress model that combines extrinsic and intrinsic components in terms of scientific explanation and of designing more comprehensive worksite stress prevention programs.
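The extrinsic side of the ERI model is commonly summarized as an effort-reward ratio: the sum of effort item scores over the sum of reward item scores, with a correction factor for the unequal number of items. The sketch below follows that commonly reported formula; the Likert item scores and item counts are hypothetical.

```python
def eri_ratio(effort_scores, reward_scores):
    """Effort-reward ratio as commonly computed in ERI research:
    e / (r * c), where c = (number of effort items) / (number of reward
    items). Values > 1 indicate high effort relative to reward."""
    effort = sum(effort_scores)
    reward = sum(reward_scores)
    correction = len(effort_scores) / len(reward_scores)
    return effort / (reward * correction)

# Hypothetical Likert-scale item scores (1-4): 6 effort items, 11 reward items
effort_items = [3, 4, 3, 4, 3, 3]
reward_items = [2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2]
ratio = eri_ratio(effort_items, reward_items)
```

The intrinsic component ("over-commitment") that this review focuses on is scored separately and, per the reviewed studies, contributes to health outcomes independently of this ratio.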
Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel
2015-01-25
In this work, Raman hyperspectral images and multivariate curve resolution-alternating least squares (MCR-ALS) are used to study the distribution of actives and excipients within a pharmaceutical drug product. This article focuses mainly on the distribution of a low-dose constituent. Different approaches are compared, using initially filtered or non-filtered data, or using a column-wise augmented dataset, including appended information on the low-dose component, before starting the MCR-ALS iterative process. In the studied formulation, magnesium stearate is used as a lubricant to improve powder flowability. With a theoretical concentration of 0.5% (w/w) in the drug product, the spectral variance it contributes to the data is weak. When a principal component analysis (PCA) filtered dataset is used as the first step of the MCR-ALS approach, the lubricant information is lost in the unexplained variance and its distribution in the tablet cannot be highlighted. A sufficient number of components has to be used to generate the PCA noise-filtered matrix in order to keep the lubricant variability within the dataset analyzed; otherwise, one must work with the raw non-filtered data. Different models are built using an increasing number of components to perform the PCA reduction. It is shown that the magnesium stearate information can be extracted from a PCA model using a minimum of 20 components. In the last part, a column-wise augmented matrix, including a reference spectrum of the lubricant, is used before starting the MCR-ALS process. The PCA reduction is performed on the augmented matrix, so the magnesium stearate contribution is included within the MCR-ALS calculations. By using an appropriate PCA reduction with a sufficient number of components, or by using an augmented dataset including appended information on the low-dose component, the distributions of the two actives, the two main excipients, and the low-dose lubricant are correctly recovered.
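The interplay between PCA rank and a 0.5% (w/w) component can be sketched numerically. The snippet below is a minimal illustration, not the paper's data or pipeline: the spectra, concentrations, and the 2-versus-20-component comparison are invented, and only the generic point carries over, namely that a richer PCA reconstruction retains more of a minor component's variance.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 200
# Hypothetical pure spectra: two actives, one major excipient, and a
# "lubricant" whose average weight fraction is only about 0.5% (w/w).
pure = rng.random((4, n_channels))
conc = rng.dirichlet([10.0, 10.0, 10.0, 0.15], size=n_samples)
X = conc @ pure + 0.001 * rng.standard_normal((n_samples, n_channels))

def pca_filter(X, k):
    """Noise-filter X by reconstructing it from its first k PCs."""
    pca = PCA(n_components=k).fit(X)
    return pca.inverse_transform(pca.transform(X))

# Too few components discard the minor component's weak variance; a
# larger rank keeps it in the filtered matrix handed to MCR-ALS.
for k in (2, 20):
    err = np.linalg.norm(X - pca_filter(X, k)) / np.linalg.norm(X)
    print(k, round(err, 5))
```

The residual norm shrinks as the PCA rank grows, which is the mechanism the abstract describes: an aggressive noise filter also filters away the lubricant signal.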
Dimensionality and consequences of employee commitment to supervisors: a two-study examination.
Landry, Guylaine; Panaccio, Alexandra; Vandenberghe, Christian
2010-01-01
Research on the 3-component model of organizational commitment--affective, normative, and continuance--has suggested that continuance commitment comprises 2 subcomponents, perceived lack of alternatives and sacrifice (e.g., S. J. Jaros, 1997; G. W. McGee & R. C. Ford, 1987). The authors aimed to extend that research in the context of employees' commitment to their immediate supervisors. Through two studies, they examined the validity and consequences of a 4-factor model of commitment to supervisors including affective, normative, continuance-alternatives, and continuance-sacrifice components. Study 1 (N = 317) revealed that the 4 components of commitment to supervisors were distinguishable from the corresponding components of organizational commitment. Study 2 (N = 240) further showed that the 4 components of commitment to supervisors differentially related to intention to leave the supervisor, supervisor-directed negative affect and emotional exhaustion. The authors discuss the implications of these findings for the management of employee commitment in organizations.
Estimation of Soil Moisture with L-band Multi-polarization Radar
NASA Technical Reports Server (NTRS)
Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.
2004-01-01
Through analyses of a model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor (L-band, multi-polarization, 40° incidence) configuration. The technique involves two steps. First, it decomposes the total backscattering signal into two components: the surface scattering component (the bare-surface backscattering signal attenuated by the overlying vegetation layer) and the sum of the direct volume scattering and surface-volume interaction components at the different polarizations. On the model-simulated database, our decomposition technique works quite well, estimating the surface scattering components with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Second, we use the decomposed surface backscattering signals at all three polarizations to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors.
Analysis of Brown camera distortion model
NASA Astrophysics Data System (ADS)
Nowakowski, Artur; Skarbek, Władysław
2013-10-01
Contemporary image acquisition devices introduce optical distortion into images, which results in pixel displacement and therefore must be compensated for in many computer vision applications. The distortion is usually described by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality of its decentering distortion component with regard to radius. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
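For reference, the Brown model's radial and decentering terms can be written out directly. This is the standard (k1, k2, k3, p1, p2) parameterization, the same one OpenCV's calibration routines use; the coefficient values below are arbitrary illustrations.

```python
def brown_distort(x, y, k=(0.1, 0.01, 0.0), p=(1e-3, 1e-3)):
    """Brown model in normalized image coordinates: radial terms
    k1, k2, k3 and decentering (tangential) terms p1, p2 -- the same
    parameterization used by OpenCV's calibrateCamera. Coefficient
    values here are arbitrary illustrations."""
    r2 = x * x + y * y
    radial = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
    x_d = x * radial + 2.0 * p[0] * x * y + p[1] * (r2 + 2.0 * x * x)
    y_d = y * radial + p[0] * (r2 + 2.0 * y * y) + 2.0 * p[1] * x * y
    return x_d, y_d

# The principal point is a fixed point of the model:
print(brown_distort(0.0, 0.0))
```

With the decentering terms zeroed, the displacement is purely radial, which is the property whose orthogonality the paper analyzes.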
Engine System Model Development for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Nelson, Karl W.; Simpson, Steven P.
2006-01-01
In order to design, analyze, and evaluate conceptual Nuclear Thermal Propulsion (NTP) engine systems, an improved NTP design and analysis tool has been developed. The NTP tool utilizes the Rocket Engine Transient Simulation (ROCETS) system tool and many of the routines from the Enabler reactor model found in Nuclear Engine System Simulation (NESS). Improved non-nuclear component models and an external shield model were added to the tool. With the addition of a nearly complete system reliability model, the tool will provide performance, sizing, and reliability data for NERVA-Derived NTP engine systems. A new detailed reactor model is also being developed and will replace Enabler. The new model will allow more flexibility in reactor geometry and include detailed thermal hydraulics and neutronics models. A description of the reactor, component, and reliability models is provided. Another key feature of the modeling process is the use of comprehensive spreadsheets for each engine case. The spreadsheets include individual worksheets for each subsystem with data, plots, and scaled figures, making the output very useful to each engineering discipline. Sample performance and sizing results with the Enabler reactor model are provided including sensitivities. Before selecting an engine design, all figures of merit must be considered including the overall impacts on the vehicle and mission. Evaluations based on key figures of merit of these results and results with the new reactor model will be performed. The impacts of clustering and external shielding will also be addressed. Over time, the reactor model will be upgraded to design and analyze other NTP concepts with CERMET and carbide fuel cores.
ERIC Educational Resources Information Center
Richards, Debbie
1998-01-01
Describes a set of manipulatives that are used to establish a secure understanding of the concepts related to the environmental factors that affect the activities of enzymes. Includes a description of the model components and procedures for construction of the model. (DDR)
Venus Global Reference Atmospheric Model
NASA Technical Reports Server (NTRS)
Justh, Hilary L.
2017-01-01
Venus Global Reference Atmospheric Model (Venus-GRAM) is an engineering-level atmospheric model developed by MSFC that is widely used for diverse mission applications, including systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. It is not a forecast model. Outputs include density, temperature, pressure, wind components, and chemical composition, with dispersions of thermodynamic parameters, winds, and density; optional trajectory and auxiliary profile input files are supported. Venus-GRAM has been used in multiple studies and proposals, including the NASA Engineering and Safety Center (NESC) Autonomous Aerobraking study and various Discovery proposals. Released in 2005, it is available at: https://software.nasa.gov/software/MFS-32314-1.
Using McStas for modelling complex optics, using simple building bricks
NASA Astrophysics Data System (ADS)
Willendrup, Peter K.; Udby, Linda; Knudsen, Erik; Farhi, Emmanuel; Lefmann, Kim
2011-04-01
The McStas neutron ray-tracing simulation package is a versatile tool for producing accurate neutron simulations, extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. In McStas, component organization and simulation flow are intrinsically linear: the neutron interacts with the beamline components in sequential order, one by one. Historically, a beamline component with several parts had to be implemented with a complete internal description of all those parts; e.g., a guide component had to include all four mirror plates and the logic required to allow scattering between the mirrors. For quite a while, users have requested the ability to allow “components inside components”, or meta-components, combining the functionality of several simple components to achieve more complex behaviour, i.e. four single mirror plates together defining a guide. We show here that it is now possible to define meta-components in McStas, and present a set of detailed, validated examples, including a guide with an embedded, wedged, polarizing mirror system of the Helmholtz-Zentrum Berlin type.
2010-06-01
[Extraction-damaged abstract; only fragments are recoverable. The text refers to propeller data such as the NSMB B-series or hydrodynamic (lifting-line) predictions, and to required power including still-air drag and any margin. The remainder is a flattened function-to-component table, e.g.: Generate Mechanical Energy (Function 1.1) -> Prime Mover component; Provide Fuel (Function 3.6) -> Fuel Oil System component; Provide Lubrication (Function 3.3) -> Lube Oil System component (3.7); Provide Cooling Water (Function 3.4) -> Cooling System component (3.3); plus a Fuel Efficiency requirement (REQ 1.4) and a truncated "Provide Combustion" entry.]
NASA Astrophysics Data System (ADS)
Vasseur, Romain; Lookman, Turab; Shenoy, Subodh R.
2010-09-01
We show how microstructure can arise in first-order ferroelastic structural transitions, in two and three spatial dimensions, through a local mean-field approximation of their pseudospin Hamiltonians that includes anisotropic elastic interactions. Such transitions have symmetry-selected physical strains as their N_OP-component order parameters, with Landau free energies that have a single zero-strain “austenite” minimum at high temperatures and spontaneous-strain “martensite” minima of N_V structural variants at low temperatures. The total free energy also has gradient terms and power-law anisotropic effective interactions, induced by “no-dislocation” St. Venant compatibility constraints. In a reduced description, the strains at the Landau minima induce temperature-dependent, clock-like Z_{N_V+1} Hamiltonians, with N_OP-component strain-pseudospin vectors S pointing to N_V + 1 discrete values (including zero). We study elastic texturing in five such first-order structural transitions through a local mean-field approximation of their pseudospin Hamiltonians, including the power-law interactions. As a prototype, we consider the two-variant square/rectangle transition, with a one-component pseudospin taking N_V + 1 = 3 values S = 0, ±1, as in a generalized Blume-Capel model. We then consider transitions with two-component (N_OP = 2) pseudospins: the equilateral-to-centered-rectangle (N_V = 3), square-to-oblique-polygon (N_V = 4), and triangle-to-oblique (N_V = 6) transitions; and finally the three-dimensional (3D) cubic-to-tetragonal transition (N_V = 3). The local mean-field solutions in 2D and 3D yield oriented domain-wall patterns as from continuous-variable strain dynamics, showing that the discrete-variable models capture the essential ferroelastic texturings. Other related Hamiltonians illustrate that structural transitions in materials science can be a source of interesting spin models in statistical mechanics.
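As a minimal illustration of the prototype case, a single-site mean-field iteration for the spin-1 Blume-Capel model (S = 0, ±1) can be written in a few lines. The Hamiltonian convention H = -J Σ S_i S_j + Δ Σ S_i², the nearest-neighbour coordination number z, and all parameter values are simplifying assumptions for illustration, not the paper's anisotropic long-range model.

```python
import math

def mean_field_magnetization(J=1.0, z=4, delta=0.0, T=1.0, m0=0.5, iters=200):
    """Self-consistent mean-field magnetization for the spin-1
    Blume-Capel model, S in {0, +1, -1}, the three-state pseudospin
    of the square/rectangle transition. Iterates
    m = 2 w sinh(beta J z m) / (1 + 2 w cosh(beta J z m)),
    with w = exp(-beta * delta). Parameters are illustrative."""
    beta = 1.0 / T
    m = m0
    for _ in range(iters):
        h = J * z * m                     # mean field from z neighbours
        w = math.exp(-beta * delta)       # Boltzmann weight of S = +-1
        m = 2 * w * math.sinh(beta * h) / (1 + 2 * w * math.cosh(beta * h))
    return m

# Low T: ordered, martensite-like minimum; high T: m -> 0 (austenite).
print(round(mean_field_magnetization(T=0.5), 3))
print(round(mean_field_magnetization(T=10.0), 3))
```

The iteration flows to a nonzero pseudospin at low temperature and to zero at high temperature, mirroring the austenite/martensite minima described above.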
Parallel Optimization of an Earth System Model (100 Gigaflops and Beyond?)
NASA Technical Reports Server (NTRS)
Drummond, L. A.; Farrara, J. D.; Mechoso, C. R.; Spahr, J. A.; Chao, Y.; Katz, S.; Lou, J. Z.; Wang, P.
1997-01-01
We are developing an Earth System Model (ESM) to be used in research aimed to better understand the interactions between the components of the Earth System and to eventually predict their variations. Currently, our ESM includes models of the atmosphere, oceans and the important chemical tracers therein.
The Vroom and Yetton Normative Leadership Model Applied to Public School Case Examples.
ERIC Educational Resources Information Center
Sample, John
This paper seeks to familiarize school administrators with the Vroom and Yetton Normative Leadership model by presenting its essential components and providing original case studies for its application to school settings. The five decision-making methods of the Vroom and Yetton model, including two "autocratic," two…
Learning Molecular Behaviour May Improve Student Explanatory Models of the Greenhouse Effect
ERIC Educational Resources Information Center
Harris, Sara E.; Gold, Anne U.
2018-01-01
We assessed undergraduates' representations of the greenhouse effect, based on student-generated concept sketches, before and after a 30-min constructivist lesson. Principal component analysis of features in student sketches revealed seven distinct and coherent explanatory models including a new "Molecular Details" model. After the…
Ho, Hsing-Hao; Li, Ya-Hui; Lee, Jih-Chin; Wang, Chih-Wei; Yu, Yi-Lin; Hueng, Dueng-Yuan; Hsu, Hsian-He
2018-01-01
Purpose: We estimated the volume of vestibular schwannomas by an ice cream cone formula using thin-sliced magnetic resonance images (MRI) and compared the estimation accuracy among different estimating formulas and between different models. Methods: The study was approved by a local institutional review board. A total of 100 patients with vestibular schwannomas examined by MRI between January 2011 and November 2015 were enrolled retrospectively. Informed consent was waived. Volumes of vestibular schwannomas were estimated by cuboidal, ellipsoidal, and spherical formulas based on a one-component model, and by cuboidal, ellipsoidal, Linskey’s, and ice cream cone formulas based on a two-component model. The estimated volumes were compared to the volumes measured by planimetry. Intraobserver reproducibility and interobserver agreement were tested. Estimation error, including absolute percentage error (APE) and percentage error (PE), was calculated. Statistical analysis included the intraclass correlation coefficient (ICC), linear regression analysis, one-way analysis of variance, and paired t-tests, with P < 0.05 considered statistically significant. Results: Overall tumor size was 4.80 ± 6.8 mL (mean ± standard deviation). All ICCs were at least 0.992, suggestive of high intraobserver reproducibility and high interobserver agreement. Cuboidal formulas significantly overestimated the tumor volume by a factor of 1.9 to 2.4 (P ≤ 0.001). The one-component ellipsoidal and spherical formulas overestimated the tumor volume with APEs of 20.3% and 29.2%, respectively. The two-component ice cream cone method and the ellipsoidal and Linskey’s formulas significantly reduced the APE to 11.0%, 10.1%, and 12.5%, respectively (all P < 0.001). Conclusion: The ice cream cone method and the other two-component formulas, including the ellipsoidal and Linskey’s formulas, allow estimation of vestibular schwannoma volume more accurately than all one-component formulas. PMID:29438424
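The one-component formulas are simple enough to state in code. The two-component "ice cream cone" split below (half an ellipsoid for the cisternal part plus a cone for the intracanalicular part) is only a guessed decomposition for illustration; the paper's exact formula may differ.

```python
import math

def cuboidal(a, b, c):
    """V = a*b*c from three orthogonal diameters (known to overestimate)."""
    return a * b * c

def ellipsoidal(a, b, c):
    """One-component ellipsoidal formula V = (pi/6) * a * b * c."""
    return math.pi / 6 * a * b * c

def spherical(d):
    """Spherical formula from a single mean diameter."""
    return math.pi / 6 * d ** 3

def ice_cream_cone(a, b, c, r_canal, h_canal):
    """Hypothetical two-component split: half an ellipsoid (cisternal
    part) plus a cone (intracanalicular part). Illustrative only."""
    return 0.5 * ellipsoidal(a, b, c) + math.pi * r_canal**2 * h_canal / 3

# The cuboidal formula exceeds the ellipsoidal one by 6/pi (about 1.91),
# the scale of the overestimation factor the study reports.
print(round(cuboidal(2, 2, 2) / ellipsoidal(2, 2, 2), 2))
```
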
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Boyce, Lola
1995-01-01
The development of methodology for probabilistic material strength degradation is described. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep, and thermal fatigue. Results, in the form of cumulative distribution functions, illustrate the sensitivity of lifetime strength to the current value of an effect. In addition, verification studies comparing predictions of high-cycle mechanical fatigue and high-temperature effects with experiments are presented. Results from this limited verification study strongly supported representing material degradation by randomized multifactor interaction models.
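Multifactor interaction models of this kind typically take a product form, S/S₀ = Π [(A_u - A)/(A_u - A_ref)]^q over the included effects, so strength degrades as each effect's current value A approaches its ultimate value A_u. The sketch below implements that generic form; the variable names, exponent, and numbers are invented illustrations, not PROMISS itself.

```python
def lifetime_strength_ratio(effects):
    """Deterministic multifactor interaction sketch: each effect
    contributes ((A_ult - A_cur) / (A_ult - A_ref)) ** q, so the
    strength ratio is 1 at the reference condition and falls toward 0
    as the current value approaches its ultimate value. `effects` is a
    list of (A_ult, A_cur, A_ref, q) tuples; values are illustrative."""
    ratio = 1.0
    for a_ult, a_cur, a_ref, q in effects:
        ratio *= ((a_ult - a_cur) / (a_ult - a_ref)) ** q
    return ratio

# One hypothetical temperature effect: ultimate 1000 K, current 800 K,
# reference 300 K, exponent 0.5.
print(round(lifetime_strength_ratio([(1000.0, 800.0, 300.0, 0.5)]), 3))
```

In PROMISS the factors are randomized, which turns this ratio into a distribution; sampling the tuple entries and histogramming the output would give the cumulative distribution functions the abstract mentions.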
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nee, K.; Bryan, S.; Levitskaia, T.
2017-12-28
The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models, using both principal component analysis and partial least squares analysis, developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near-infrared (NIR) and Raman spectral data, as well as conductivity, were collected under variable temperature conditions. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through principal component analysis (PCA), we lower the rank of the data set to its most dominant features while retaining the key principal components to be used in the regression analysis. Within the studied data set, the concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprised of complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
Maximum flow-based resilience analysis: From component to system
Jin, Chong; Li, Ruiying; Kang, Rui
2017-01-01
Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, including economic and societal losses. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel’s resilience measure. The two analytic models can be used to quantitatively evaluate and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of a parallel system increases with the number of components, while the resilience of a series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and a resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without. However, not all redundant component capacity improves system resilience; the effectiveness of capacity redundancy depends on where the redundant capacity is located. PMID:28545135
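A minimal version of the component-to-system analysis can be run with a hand-rolled Edmonds-Karp maximum flow. The four-node "road network", its capacities, and the retained-performance ratio used as a resilience proxy are all illustrative; Zobel's actual measure also involves recovery time, which is omitted here.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap is a dict {u: {v: capacity}}."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}   # residual graph
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)     # reverse edges
    flow = 0
    while True:
        parent = {s: None}                             # BFS for a path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                                # trace the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)         # bottleneck
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

# Toy road network with illustrative capacities.
cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2, "b": 1}, "b": {"t": 3}}
before = max_flow(cap, "s", "t")   # nominal throughput
cap["s"]["a"] = 1                  # disrupt one road segment
after = max_flow(cap, "s", "t")    # degraded throughput
print(before, after, after / before)
```

Comparing `after / before` across candidate topologies reproduces, in miniature, the paper's observation that where redundant capacity sits determines whether it helps.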
Daucourt, Mia C.; Schatschneider, Christopher; Connor, Carol M.; Al Otaiba, Stephanie; Hart, Sara A.
2018-01-01
Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79–10.40 years). At the time their EF was assessed, using the parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47–16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF’s predictive power for RD classification was highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined; results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the hybrid model of RD. In total, all EF components were significant and equally effective predictors of RD when RD was operationalized using the hybrid model. PMID:29662458
Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.
Islam, R; Weir, C; Del Fiol, G
2016-01-01
Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
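Inter-rater reliability of the coded complexity attributes was computed with Cohen's kappa, which is easy to reproduce from first principles: kappa = (p_o - p_e) / (1 - p_e), observed agreement corrected for chance agreement. The two raters' label sequences below are invented, not the study's transcripts.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two coders' label sequences."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n   # observed
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n ** 2           # by chance
    return (p_o - p_e) / (1 - p_e)

# Illustrative complexity codes from two hypothetical raters.
a = ["task", "task", "patient", "task", "patient", "patient"]
b = ["task", "task", "patient", "patient", "patient", "patient"]
print(round(cohens_kappa(a, b), 3))
```
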
NASA Astrophysics Data System (ADS)
Christensen, Niels B.; Lawrie, Ken
2015-06-01
We analyse and compare the resolution improvement obtained by including x-component data in the inversion of AEM data from the SkyTEM and TEMPEST systems. Except for the resistivity of the bottom layer, the SkyTEM system, even without x-component data, resolves the parameters of the analysed models better.
Simulink models for performance analysis of high speed DQPSK modulated optical link
NASA Astrophysics Data System (ADS)
Sharan, Lucky; Rupanshi; Chaubey, V. K.
2016-03-01
This paper attempts to present the design approach for the development of simulation models to study and analyze the transmission of a 10 Gbps DQPSK signal over a single-channel peer-to-peer link using Matlab Simulink. The simulation model considers the different optical components used in the link design, with their behavior first represented by theoretical interpretation, including the transmitter topology, the Mach-Zehnder Modulator (MZM) module, and the propagation model for optical fibers, thus allowing scope for direct realization in experimental configurations. It provides the flexibility to incorporate the various photonic components as either user-defined or fixed, and components can be enhanced or removed from the model as the design requires. We describe the detailed operation and need of every component model and its representation in Simulink blocksets. Moreover, the developed model can be extended in the future to support Dense Wavelength Division Multiplexing (DWDM) systems, thereby allowing high-speed transmission with N × 40 Gbps systems. The various compensation techniques and their influence on system performance can easily be investigated using such models.
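The DQPSK principle, encoding bit pairs as phase increments so the receiver only needs successive phase differences, can be sketched in a few lines of baseband code. The symbol mapping and π/4-offset increments below are common conventions assumed for illustration; the paper's Simulink blocks are not reproduced.

```python
import numpy as np

def dqpsk_modulate(bits):
    """Differentially encode bit pairs as phase increments
    {0,1,2,3} -> {pi/4, 3pi/4, 5pi/4, 7pi/4} (an assumed mapping)."""
    pairs = bits.reshape(-1, 2)
    symbols = pairs[:, 0] * 2 + pairs[:, 1]
    increments = (np.pi / 4) * (2 * symbols + 1)
    phase = np.cumsum(increments)     # information lives in phase changes
    return np.exp(1j * phase)

def dqpsk_demodulate(field):
    """Recover symbols from successive phase differences; no absolute
    phase reference is needed, which is the point of the 'D'."""
    ref = np.concatenate(([1.0 + 0j], field[:-1]))
    dphi = np.angle(field * np.conj(ref)) % (2 * np.pi)
    return ((dphi / (np.pi / 4) - 1) / 2).round().astype(int) % 4

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 40)
rx_symbols = dqpsk_demodulate(dqpsk_modulate(bits))
print(rx_symbols[:5])
```

A channel model (fiber dispersion, the MZM transfer function) would sit between the two calls; here the back-to-back round trip simply recovers every symbol.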
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
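The modularization idea, computing module reliabilities separately and then combining them according to the system architecture, reduces in the simplest case to series and parallel composition rules. The component names and numbers below are illustrative, not REST/RML syntax.

```python
from math import prod

def series(reliabilities):
    """All modules must work: R = product of module reliabilities."""
    return prod(reliabilities)

def parallel(reliabilities):
    """Redundant modules: the group fails only if every module fails."""
    return 1 - prod(1 - r for r in reliabilities)

# Modularization sketch: model components separately, then aggregate.
cpu, bus = 0.99, 0.995            # hypothetical module reliabilities
redundant_cpus = parallel([cpu, cpu])
system = series([redundant_cpus, bus])
print(round(system, 6))
```

Tools like REST go further, using message passing to simulate the system-wide effects of a module's failure modes, but the aggregation step above is the structural core.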
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Kopasakis, George
2010-01-01
This paper covers the propulsion system component modeling and controls development of an integrated mixed-compression inlet and turbojet engine that will be used for an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. Using previously created nonlinear component-level propulsion system models, a linear integrated propulsion system model and a loop-shaping control design have been developed. The design includes both inlet normal shock position control and jet engine rotor speed control for a potential supersonic commercial transport. A preliminary investigation of the impacts of aero-elastic effects on the incoming flow field to the propulsion system is discussed; however, the focus here is on developing a methodology for the propulsion controls design that prevents unstart in the inlet and minimizes the thrust oscillation experienced by the vehicle. Quantitative Feedback Theory (QFT) specifications and bounds, and aspects of classical loop shaping, are used in the control design process. Model uncertainty is incorporated in the design to address possible error in the system identification mapping of the nonlinear component models into the integrated linear model.
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
A Raster Based Approach To Solar Pressure Modeling
NASA Technical Reports Server (NTRS)
Wright, Theodore
2014-01-01
The impact of photons upon a spacecraft introduces small forces and moments. The magnitude and direction of the forces depend on the material properties of the spacecraft components being illuminated. Which components are being lit depends on the orientation of the craft with respect to the Sun as well as the gimbal angles for any significant moving external parts (solar arrays, typically). Some components may shield others from the Sun. To determine solar pressure in the presence of overlapping components, a 3D model can be used to determine which components are illuminated. A view (image) of the model as seen from the Sun shows the only contributors to solar pressure. This image can be decomposed into pixels, each of which can be treated as a non-overlapping flat plate as far as solar pressure calculations are concerned. The sums of the pressures and moments on these plates approximate the solar pressure and moments on the entire vehicle. The image rasterization technique can also be used to compute other spacecraft attributes that are dependent on attitude and geometry, including solar array power generation capability and free molecular flow drag.
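The per-pixel flat-plate summation can be sketched as follows (an illustrative Python sketch, not the flight code; the pressure constant, pixel area, view buffer, and reflectivity coefficients are all assumed):

```python
import numpy as np

# Illustrative sketch of the rasterization idea: render the spacecraft as seen
# from the Sun, then treat each lit pixel as a small flat plate and sum the
# per-pixel forces. All numbers here are assumptions for illustration.

P0 = 4.56e-6        # N/m^2, solar radiation pressure at 1 AU (assumed)
pixel_area = 0.01   # m^2 of spacecraft surface represented by one pixel

# A tiny Sun-aligned "image": NaN = background, value = visible component id.
view = np.array([[0.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0],
                 [np.nan, 1.0, 1.0]])

# Per-component reflectivity factor (1 = fully absorptive .. 2 = fully reflective).
coeff = {0: 1.3, 1: 1.9}

force = 0.0
for comp_id, c in coeff.items():
    n_pixels = np.count_nonzero(view == comp_id)  # NaN background never matches
    force += c * P0 * pixel_area * n_pixels       # force along the Sun line, N
```

Only components visible in the Sun-aligned view contribute, so shadowing by other components is handled automatically by the depth-buffered rendering.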
Biochemical Kinetics Model of DSB Repair and GammaH2AX FOCI by Non-homologous End Joining
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Pluth, Janice M.; Anderson, Jennifer A.; Harper, Jane V.; O'Neill, Peter
2007-01-01
We developed a biochemical kinetics approach to describe the repair of double strand breaks (DSB) produced by low LET radiation by modeling molecular events associated with the mechanisms of non-homologous end-joining (NHEJ). A system of coupled non-linear ordinary differential equations describes the induction of DSB and activation pathways for major NHEJ components including Ku70/80, DNA-PKcs, and the Ligase IV-XRCC4 hetero-dimer. The autophosphorylation of DNA-PKcs and subsequent induction of gamma-H2AX foci observed after ionizing radiation exposure were modeled. A two-step model of DNA-PKcs regulation of repair was developed, with the initial step allowing access of other NHEJ components to breaks, and a second step limiting access to Ligase IV-XRCC4. Our model assumes that the transition from the first to the second step depends on DSB complexity, with a much slower rate for complex DSB. The model faithfully reproduced several experimental data sets, including DSB rejoining as measured by pulsed-field gel electrophoresis (PFGE), quantification of the induction of gamma-H2AX foci, and live cell imaging of the induction of Ku70/80. Predictions are made for the behaviors of NHEJ components at low doses and dose-rates, where a steady-state is found at dose-rates of 0.1 Gy/hr or lower.
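The two-step kinetics described above can be sketched as a small ODE system (an illustrative Python/SciPy sketch with assumed rate constants, not the paper's fitted model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal two-step repair sketch: breaks B enter a first NHEJ complex C1,
# transition (rate k2) to a Ligase IV-XRCC4-limited complex C2, and are then
# rejoined R. All rate constants are illustrative assumptions (units 1/h).
k1, k2_simple, k2_complex, k3 = 2.0, 1.0, 0.05, 0.5

def rhs(t, y, k2):
    B, C1, C2, R = y
    return [-k1 * B,
            k1 * B - k2 * C1,
            k2 * C1 - k3 * C2,
            k3 * C2]

t_eval = np.linspace(0.0, 24.0, 49)
simple = solve_ivp(rhs, (0.0, 24.0), [1, 0, 0, 0], args=(k2_simple,), t_eval=t_eval)
complex_ = solve_ivp(rhs, (0.0, 24.0), [1, 0, 0, 0], args=(k2_complex,), t_eval=t_eval)

# A much slower first-to-second-step transition for complex DSB leaves a
# smaller rejoined fraction at 24 h, mirroring the slow PFGE rejoining phase.
repaired_simple = simple.y[3, -1]
repaired_complex = complex_.y[3, -1]
```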
NASA Technical Reports Server (NTRS)
Santare, Michael H.; Pipes, R. Byron; Beaussart, A. J.; Coffin, D. W.; O'Toole, B. J.; Shuler, S. F.
1993-01-01
Flexible manufacturing methods are needed to reduce the cost of using advanced composites in structural applications. One method that allows for this is the stretch forming of long discontinuous fiber materials with thermoplastic matrices. In order to exploit this flexibility in an economical way, a thorough understanding of the relationship between manufacturing and component performance must be developed. This paper reviews some of the recent work geared toward establishing this understanding. Micromechanics models have been developed to predict the formability of the material during processing. The latest improvement of these models includes the viscoelastic nature of the matrix and comparison with experimental data. A finite element scheme is described which can be used to model the forming process. This model uses equivalent anisotropic viscosities from the micromechanics models and predicts the microstructure in the formed part. In addition, structural models have been built to account for the material property gradients that can result from the manufacturing procedures. Recent developments in this area include the analysis of stress concentrations and a failure model each accounting for the heterogeneous material fields.
Host Model Uncertainty in Aerosol Radiative Effects: the AeroCom Prescribed Experiment and Beyond
NASA Astrophysics Data System (ADS)
Stier, Philip; Schutgens, Nick; Bian, Huisheng; Boucher, Olivier; Chin, Mian; Ghan, Steven; Huneeus, Nicolas; Kinne, Stefan; Lin, Guangxing; Myhre, Gunnar; Penner, Joyce; Randles, Cynthia; Samset, Bjorn; Schulz, Michael; Yu, Hongbin; Zhou, Cheng; Bellouin, Nicolas; Ma, Xiaoyan; Yu, Fangqun; Takemura, Toshihiko
2013-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. Multi-model "diversity" in estimates of the aerosol radiative effect is often perceived as a measure of the uncertainty in modelling aerosol itself. However, current aerosol models vary considerably in model components relevant for the calculation of aerosol radiative forcings and feedbacks and the associated "host-model uncertainties" are generally convoluted with the actual uncertainty in aerosol modelling. In the AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in eleven participating models. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention. However, uncertainties in aerosol radiative effects also include short-term and long-term feedback processes that will be systematically explored in future intercomparison studies. Here we will present an overview of the proposals for discussion and results from early scoping studies.
Single-Trial Analysis of V1 Responses Suggests Two Transmission States
NASA Technical Reports Server (NTRS)
Shah, A. S.; Knuth, K. H.; Truccolo, W. A.; Mehta, A. D.; McGinnis, T.; O'Connell, N.; Ding, M.; Bressler, S. L.; Schroeder, C. E.
2002-01-01
Sensory processing in the visual, auditory, and somatosensory systems is often studied by recording electrical activity in response to a stimulus of interest. Typically, multiple trial responses to the stimulus are averaged to isolate the stereotypic response from noise. However, averaging ignores dynamic variability in the neuronal response, which is potentially critical to understanding stimulus-processing schemes. Thus, we developed the multiple-component Event-Related Potential (mcERP) model. This model asserts that multiple components, defined as stereotypic waveforms, comprise the stimulus-evoked response and that these components may vary in amplitude and latency from trial to trial. Application of this model to data recorded simultaneously from all six laminae of V1 in an awake, behaving monkey performing a visual discrimination task yielded three components. The first component localized to granular V1, the second was located in supragranular V1, and the final component displayed a multi-laminar distribution. These modeling results, which take into account single-trial response dynamics, illustrated that the initial activation of V1 occurs in the granular layer followed by activation in the supragranular layers. This finding is expected because the average response in those layers demonstrates the same progression and because anatomical evidence suggests that the feedforward input in V1 enters the granular layer and progresses to supragranular layers. In addition to these findings, the granular component of the model displayed several interesting trial-to-trial characteristics including (1) a bimodal latency distribution, (2) a latency-related variation in response amplitude, (3) a latency correlation with the supragranular component, and (4) an amplitude and latency association with the multi-laminar component. Direct analyses of the single-trial data were consistent with these model predictions.
These findings suggest that V1 has at least 2 transmission states, which may be modulated by various effects such as attention, dynamics in local EEG rhythm, or variation in sensory inputs.
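The trial-varying component idea behind mcERP can be illustrated with synthetic data (a hedged Python sketch; the waveform shape, jitter magnitudes, and noise level are all assumed):

```python
import numpy as np

# Synthetic illustration: each trial is a stereotypic component waveform whose
# amplitude and latency vary from trial to trial, plus noise. Trial averaging
# then blurs (widens and attenuates) the underlying single-trial waveform.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.3, 300)                    # 300 ms epoch, ~1 ms samples

def component(t, latency, width=0.02):
    """Stereotypic component waveform (assumed Gaussian shape)."""
    return np.exp(-0.5 * ((t - latency) / width) ** 2)

n_trials = 200
trials = np.empty((n_trials, t.size))
for i in range(n_trials):
    amp = 1.0 + 0.2 * rng.standard_normal()       # trial-varying amplitude
    lat = 0.10 + 0.01 * rng.standard_normal()     # trial-varying latency (s)
    trials[i] = amp * component(t, lat) + 0.05 * rng.standard_normal(t.size)

avg = trials.mean(axis=0)
# Latency jitter widens the averaged response relative to the true waveform.
width_avg = int(np.sum(avg > 0.5 * avg.max()))
width_single = int(np.sum(component(t, 0.10) > 0.5))
```

This is exactly the distortion the abstract warns about: the average misrepresents the component that is present on every single trial, which is why single-trial modeling recovers dynamics that averaging hides.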
MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package
Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...
2015-11-28
MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object oriented programming concepts for representing neutron scatterers and detector systems, and recursive algorithms for implementing multiple scattering. Combining these features together in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.
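The recursive treatment of multiple scattering can be illustrated with a toy one-dimensional slab (a hedged Python sketch, not MCViNE's API; the geometry, mean free path, and truncation depth are assumed):

```python
import random

# Toy sketch of recursive multiple-scattering bookkeeping: a neutron inside a
# unit slab either escapes or scatters; each scattering event recurses with an
# incremented order, so single and multiple scattering can be tallied apart.
random.seed(42)

def trace(x, direction, mfp, order=0, max_order=10):
    """Follow one neutron through the slab [0, 1] recursively.
    Returns the scattering order at escape, or None if truncated."""
    step = random.expovariate(1.0 / mfp)      # free flight, mean = mfp
    x_new = x + direction * step
    if x_new < 0.0 or x_new > 1.0:
        return order                          # escaped the slab
    if order >= max_order:
        return None                           # truncate very deep histories
    new_dir = random.choice((-1.0, 1.0))      # isotropic scatter (1-D)
    return trace(x_new, new_dir, mfp, order + 1, max_order)

orders = [trace(0.0, 1.0, mfp=0.5) for _ in range(5000)]
unscattered = sum(1 for o in orders if o == 0)
single = sum(1 for o in orders if o == 1)
multiple = sum(1 for o in orders if o is not None and o >= 2)
```

Because each history records its scattering order, the "turn mechanisms on and off" analysis mentioned above amounts to filtering tallies by order or by kernel.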
Lithium Circuit Test Section Design and Fabrication
NASA Technical Reports Server (NTRS)
Godfroy, Thomas; Garber, Anne
2006-01-01
The Early Flight Fission - Test Facilities (EFF-TF) team has designed and built an actively pumped lithium flow circuit. Modifications were made to a circuit originally designed for NaK to enable the use of lithium that included application specific instrumentation and hardware. Component scale freeze/thaw tests were conducted to both gain experience with handling and behavior of lithium in solid and liquid form and to supply anchor data for a Generalized Fluid System Simulation Program (GFSSP) model that was modified to include the physics for freeze/thaw transitions. Void formation was investigated. The basic circuit components include: reactor segment, lithium to gas heat exchanger, electromagnetic (EM) liquid metal pump, load/drain reservoir, expansion reservoir, instrumentation, and trace heaters. This paper will discuss the overall system design and build and the component testing findings.
SMART Structures User's Guide - Version 3.0
NASA Technical Reports Server (NTRS)
Spangler, Jan L.
1996-01-01
Version 3.0 of the Solid Modeling Aerospace Research Tool (SMART Structures) is used to generate structural models for conceptual and preliminary-level aerospace designs. Features include the generation of structural elements for wings and fuselages, the integration of wing and fuselage structural assemblies, and the integration of fuselage and tail structural assemblies. The highly interactive nature of this software allows the structural engineer to move quickly from a geometry that defines a vehicle's external shape to one that has both external components and internal components which may include ribs, spars, longerons, variable depth ringframes, a floor, a keel, and fuel tanks. The geometry that is output is consistent with FEA requirements and includes integrated wing and empennage carry-through and frame attachments. This report provides a comprehensive description of SMART Structures and how to use it.
Code System to Calculate Tornado-Induced Flow Material Transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ANDRAE, R. W.
1999-11-18
Version: 00 TORAC models tornado-induced flows, pressures, and material transport within structures. Its use is directed toward nuclear fuel cycle facilities and their primary release pathway, the ventilation system. However, it is applicable to other structures and can model other airflow pathways within a facility. In a nuclear facility, this network system could include process cells, canyons, laboratory offices, corridors, and offgas systems. TORAC predicts flow through a network system that also includes ventilation system components such as filters, dampers, ducts, and blowers. These ventilation system components are connected to the rooms and corridors of the facility to form a complete network for moving air through the structure and, perhaps, maintaining pressure levels in certain areas. The material transport capability in TORAC is very basic and includes convection, depletion, entrainment, and filtration of material.
Hanchaiphiboolkul, Suchat; Suwanwela, Nijasri Charnnarong; Poungvarin, Niphon; Nidhinandana, Samart; Puthkhao, Pimchanok; Towanabut, Somchai; Tantirittisak, Tasanee; Suwantamee, Jithanorm; Samsen, Maiyadhaj
2013-11-01
Limited information is available on the association between the metabolic syndrome (MetS) and stroke. Whether or not MetS confers a risk greater than the sum of its components is controversial. This study aimed to assess the association of MetS with stroke, and to evaluate whether the risk of MetS is greater than the sum of its components. The Thai Epidemiologic Stroke (TES) study is a community-based cohort study with 19,997 participants, aged 45-80 years, recruited from the general population from 5 regions of Thailand. Baseline survey data were analyzed in cross-sectional analyses. MetS was defined according to criteria from the National Cholesterol Education Program (NCEP) Adult Treatment Panel III, the American Heart Association/National Heart, Lung, and Blood Institute (revised NCEP), and International Diabetes Federation (IDF). Logistic regression analysis was used to estimate association of MetS and its components with stroke. Using c statistics and the likelihood ratio test we compared the capability of discriminating participants with and without stroke of a logistic model containing all components of MetS and potential confounders and a model also including the MetS variable. We found that among the MetS components, high blood pressure and hypertriglyceridemia were independently and significantly related to stroke. MetS defined by the NCEP (odds ratio [OR], 1.64; 95% confidence interval [CI], 1.32-2.04), revised NCEP (OR, 2.27; 95% CI, 1.80-2.87), and IDF definitions (OR, 1.70; 95% CI, 1.37-2.13) was significantly associated with stroke after adjustment for age, sex, geographical area, education level, occupation, smoking status, alcohol consumption, and low-density lipoprotein cholesterol. After additional adjustment for all MetS components, these associations were not significant. 
There was no statistically significant difference (P=.723-.901) in c statistics between the model containing all MetS components and potential confounders and the model also including the MetS variable. The likelihood ratio test also showed no statistically significant (P=.166-.718) difference between these 2 models. Our findings suggest that MetS is associated with stroke, but not to a greater degree than the sum of its components. Thus, the focus should be on identification and appropriate control of its individual components, particularly high blood pressure and hypertriglyceridemia, rather than of MetS itself.
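The model-comparison logic, a likelihood ratio test between nested logistic models with and without the MetS indicator, can be sketched as follows (an illustrative Python/SciPy sketch; the log-likelihood values are made up for illustration and are not the TES study's numbers):

```python
from scipy.stats import chi2

# Compare a logistic model with all MetS components (reduced) against a nested
# model that also includes a MetS indicator (full, one extra parameter).
# The two log-likelihoods below are assumed fitted values for illustration.
ll_components_only = -512.4
ll_with_mets = -511.9

# Likelihood ratio statistic: 2 * (ll_full - ll_reduced), chi-square with
# df = difference in number of parameters (here 1).
lr_stat = 2.0 * (ll_with_mets - ll_components_only)
p_value = chi2.sf(lr_stat, df=1)
```

A large p-value here means the MetS variable adds no discriminating information beyond its components, which is the pattern the study reports.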
LOSCAR: Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir Model
NASA Astrophysics Data System (ADS)
Zeebe, R. E.
2011-06-01
The LOSCAR model is designed to efficiently compute the partitioning of carbon between ocean, atmosphere, and sediments on time scales ranging from centuries to millions of years. While a variety of computationally inexpensive carbon cycle models are already available, many are missing a critical sediment component, which is indispensable for long-term integrations. One of LOSCAR's strengths is the coupling of ocean-atmosphere routines to a computationally efficient sediment module. This allows, for instance, adequate computation of CaCO3 dissolution, calcite compensation, and long-term carbon cycle fluxes, including weathering of carbonate and silicate rocks. The ocean component includes various biogeochemical tracers such as total carbon, alkalinity, phosphate, oxygen, and stable carbon isotopes. We have previously published applications of the model tackling future projections of ocean chemistry and weathering, pCO2 sensitivity to carbon cycle perturbations throughout the Cenozoic, and carbon/calcium cycling during the Paleocene-Eocene Thermal Maximum. The focus of the present contribution is the detailed description of the model including numerical architecture, processes and parameterizations, tuning, and examples of input and output. Typical CPU integration times of LOSCAR are of order seconds for several thousand model years on current standard desktop machines. The LOSCAR source code in C can be obtained from the author by sending a request to loscar.model@gmail.com.
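The reservoir-partitioning idea can be illustrated with a drastically simplified two-box version (a hedged Python/SciPy sketch; the exchange coefficients and reservoir sizes are rough assumptions, and LOSCAR itself resolves many more reservoirs, tracers, and sediment processes):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-reservoir sketch of ocean-atmosphere carbon partitioning: a carbon pulse
# added to the atmosphere relaxes back toward equilibrium with the ocean.
k_ao = 0.1                         # 1/yr atmosphere -> ocean uptake (assumed)
k_oa = k_ao * 600.0 / 38000.0      # chosen so 600/38000 GtC is equilibrium

def rhs(t, y):
    atm, ocn = y
    flux = k_ao * atm - k_oa * ocn   # net atmosphere-to-ocean flux, GtC/yr
    return [-flux, flux]             # carbon is conserved between the boxes

y0 = [600.0 + 1000.0, 38000.0]       # 1000 GtC pulse into the atmosphere
sol = solve_ivp(rhs, (0.0, 200.0), y0, t_eval=np.linspace(0.0, 200.0, 201))

atm_final = sol.y[0, -1]             # atmosphere after 200 yr of equilibration
total_carbon = sol.y.sum(axis=0)     # should stay constant (mass balance)
```

In LOSCAR the analogous partitioning additionally feeds the sediment module, which is what makes calcite compensation and the long weathering tail computable.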
Landlab: A numerical modeling framework for evolving Earth surfaces from mountains to the coast
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Adams, J. M.; Tucker, G. E.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.
2016-02-01
Landlab is an open-source, user-friendly, component-based modeling framework for exploring the evolution of Earth's surface. Landlab itself is not a model. Instead, it is a computational framework that facilitates the development of numerical models of coupled earth surface processes. The Landlab Python library includes a gridding engine and process components, along with support functions for tasks such as reading in DEM data and input variables, setting boundary conditions, and plotting and outputting data. Each user of Landlab builds his or her own unique model. The first step in building a Landlab model is generally initializing a grid, either regular (raster) or irregular (e.g., Delaunay or radial), and process components. This initialization process involves reading in relevant parameter values and data. The process components act on the grid to alter grid properties over time. For example, a component exists that can track the growth, death, and succession of vegetation over time. There are also several components that evolve surface elevation, through processes such as fluvial sediment transport and linear diffusion, among others. Users can also build their own process components, taking advantage of existing functions in Landlab such as those that identify grid connectivity and calculate gradients and flux divergence. The general nature of the framework makes it applicable to diverse environments - from bedrock rivers to a pile of sand - and processes acting over a range of spatial and temporal scales. In this poster we illustrate how a user builds a model using Landlab and propose a number of ways in which Landlab can be applied in coastal environments - from dune migration to channelization of barrier islands. We seek input from the coastal community as to how the process component library can be expanded to explore the diverse phenomena that act to shape coastal environments.
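The grid-plus-component pattern described above can be sketched with a minimal stand-in (a hypothetical Python sketch mimicking the pattern, not the real Landlab API; the class names, field name, and `run_one_step` convention are modeled on Landlab's style but everything here is a simplified assumption):

```python
import numpy as np

# Minimal stand-in for the grid + process-component pattern: a grid object
# holds named fields, and each component exposes run_one_step(dt) that
# updates those fields in place.

class Grid1D:
    def __init__(self, n, dx=1.0):
        self.dx = dx
        self.fields = {"topographic__elevation": np.zeros(n)}

class LinearDiffuser:
    """Hillslope linear diffusion, dz/dt = D * d2z/dx2, explicit scheme."""
    def __init__(self, grid, diffusivity=0.01):
        self.grid, self.D = grid, diffusivity

    def run_one_step(self, dt):
        z = self.grid.fields["topographic__elevation"]
        lap = (np.roll(z, 1) - 2.0 * z + np.roll(z, -1)) / self.grid.dx ** 2
        z[1:-1] += self.D * dt * lap[1:-1]    # fixed-value boundary nodes

grid = Grid1D(50)
z = grid.fields["topographic__elevation"]
z[25] = 10.0                                  # a sharp initial bump
diffuser = LinearDiffuser(grid, diffusivity=0.25)
for _ in range(100):                          # D*dt/dx^2 = 0.25 <= 0.5, stable
    diffuser.run_one_step(dt=1.0)
```

Because every component follows the same `run_one_step` contract on shared grid fields, coupling two processes is just calling both components inside one time loop.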
The Community Climate System Model.
NASA Astrophysics Data System (ADS)
Blackmon, Maurice; Boville, Byron; Bryan, Frank; Dickinson, Robert; Gent, Peter; Kiehl, Jeffrey; Moritz, Richard; Randall, David; Shukla, Jagadish; Solomon, Susan; Bonan, Gordon; Doney, Scott; Fung, Inez; Hack, James; Hunke, Elizabeth; Hurrell, James; Kutzbach, John; Meehl, Jerry; Otto-Bliesner, Bette; Saravanan, R.; Schneider, Edwin K.; Sloan, Lisa; Spall, Michael; Taylor, Karl; Tribbia, Joseph; Washington, Warren
2001-11-01
The Community Climate System Model (CCSM) has been created to represent the principal components of the climate system and their interactions. Development and applications of the model are carried out by the U.S. climate research community, thus taking advantage of both wide intellectual participation and computing capabilities beyond those available to most individual U.S. institutions. This article outlines the history of the CCSM, its current capabilities, and plans for its future development and applications, with the goal of providing a summary useful to present and future users. The initial version of the CCSM included atmosphere and ocean general circulation models, a land surface model that was grafted onto the atmosphere model, a sea-ice model, and a flux coupler that facilitates information exchanges among the component models with their differing grids. This version of the model produced a successful 300-yr simulation of the current climate without artificial flux adjustments. The model was then used to perform a coupled simulation in which the atmospheric CO2 concentration increased by 1% per year. In this version of the coupled model, the ocean salinity and deep-ocean temperature slowly drifted away from observed values. A subsequent correction to the roughness length used for sea ice significantly reduced these errors. An updated version of the CCSM was used to perform three simulations of the twentieth century's climate, and several projections of the climate of the twenty-first century. The CCSM's simulation of the tropical ocean circulation has been significantly improved by reducing the background vertical diffusivity and incorporating an anisotropic horizontal viscosity tensor. The meridional resolution of the ocean model was also refined near the equator. These changes have resulted in a greatly improved simulation of both the Pacific equatorial undercurrent and the surface countercurrents.
The interannual variability of the sea surface temperature in the central and eastern tropical Pacific is also more realistic in simulations with the updated model. Scientific challenges to be addressed with future versions of the CCSM include realistic simulation of the whole atmosphere, including the middle and upper atmosphere, as well as the troposphere; simulation of changes in the chemical composition of the atmosphere through the incorporation of an integrated chemistry model; inclusion of global, prognostic biogeochemical components for land, ocean, and atmosphere; simulations of past climates, including times of extensive continental glaciation as well as times with little or no ice; studies of natural climate variability on seasonal-to-centennial timescales; and investigations of anthropogenic climate change. In order to make such studies possible, work is under way to improve all components of the model. Plans call for a new version of the CCSM to be released in 2002. Planned studies with the CCSM will require much more computer power than is currently available.
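As a side check on the transient forcing scenario mentioned above, a CO2 concentration compounding at 1% per year doubles in roughly 70 years:

```python
import math

# Doubling time for a quantity growing 1% per year: solve 1.01**t == 2.
years_to_double = math.log(2.0) / math.log(1.01)
```

This is why 1%-per-year CO2 runs are commonly integrated for about 70 model years to reach the doubled-CO2 state.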
Optimal plant nitrogen use improves model representation of vegetation response to elevated CO2
NASA Astrophysics Data System (ADS)
Caldararu, Silvia; Kern, Melanie; Engel, Jan; Zaehle, Sönke
2017-04-01
Existing global vegetation models often cannot accurately represent observed ecosystem behaviour under transient conditions such as elevated atmospheric CO2, a problem that can be attributed to an inflexibility in model representation of plant responses. Plant optimality concepts have been proposed as a solution to this problem as they offer a way to represent plastic plant responses in complex models. Here we present a novel, next generation vegetation model which includes optimal nitrogen allocation to and within the canopy as well as optimal biomass allocation between above- and belowground components in response to nutrient and water availability. The underlying hypothesis is that plants adjust their use of nitrogen in response to environmental conditions and nutrient availability in order to maximise biomass growth. We show that for two FACE (Free Air CO2 enrichment) experiments, the Duke forest and Oak Ridge forest sites, the model can better predict vegetation responses over the duration of the experiment when optimal processes are included. Specifically, under elevated CO2 conditions, the model predicts a lower optimal leaf N concentration as well as increased biomass allocation to fine roots, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry predict a quick onset of N limitation.
Crash location correction for freeway interchange modeling : final report.
DOT National Transportation Integrated Search
2016-06-01
AASHTO released a supplement to the Highway Safety Manual (HSM) in 2014 that includes models for freeway : interchanges composed of segments, speed-change lanes and terminals. A necessary component to the use of HSM is : having the appropriate safety...
A REVIEW OF BIOACCUMULATION MODELING APPROACHES FOR PERSISTENT ORGANIC POLLUTANTS
Persistent organic pollutants and mercury are likely to bioaccumulate in biological components of the environment, including fish and wildlife. The complex and long-term dynamics involved with bioaccumulation are often represented with models. Current scientific developments in t...
NASA Technical Reports Server (NTRS)
Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen
2005-01-01
Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Tropospheric Emission Spectrometer (TES), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, super fast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the Principal Component (PC) scores of these quantities. This prediction ability leads to significant savings in computational time. The parameterization of the PCRTM model is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for super fast one-dimensional physical retrievals and for Numerical Weather Prediction (NWP) large volume radiance data assimilation applications. The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
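The core PC-score idea can be sketched with synthetic spectra (an illustrative Python/NumPy sketch, not the PCRTM parameterization; the training set, mode shapes, and noise level are fabricated for illustration):

```python
import numpy as np

# Represent high-resolution radiance spectra by a few principal-component
# scores, so a fast model only needs to predict a handful of numbers per
# spectrum instead of every channel radiance.
rng = np.random.default_rng(7)
n_channels, n_train = 500, 200

# Synthetic training spectra: a few smooth spectral modes plus small noise.
wn = np.linspace(0.0, 1.0, n_channels)
modes = np.stack([np.sin((k + 1) * np.pi * wn) for k in range(3)])
coefs = rng.standard_normal((n_train, 3))
spectra = coefs @ modes + 0.001 * rng.standard_normal((n_train, n_channels))

mean = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:3]                              # leading principal components

# Compress one spectrum to 3 scores, then reconstruct all 500 channels.
scores = (spectra[0] - mean) @ pcs.T      # 500 channels -> 3 numbers
recon = mean + scores @ pcs
rms_error = np.sqrt(np.mean((recon - spectra[0]) ** 2))
```

A fast forward model that predicts `scores` directly inherits this compression: the expensive channel-by-channel radiative transfer is replaced by a few score predictions plus a cheap matrix reconstruction.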
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.
Prediction of Sliding Friction Coefficient Based on a Novel Hybrid Molecular-Mechanical Model.
Zhang, Xiaogang; Zhang, Yali; Wang, Jianmei; Sheng, Chenxing; Li, Zhixiong
2018-08-01
Sliding friction is a complex phenomenon which arises from the mechanical and molecular interactions of asperities when examined at the microscale. To reveal and further understand the effects of the microscale mechanical and molecular components of the friction coefficient on overall frictional behavior, a hybrid molecular-mechanical model is developed to investigate the effects of main factors, including different loads and surface roughness values, on the sliding friction coefficient under boundary lubrication conditions. Numerical modelling was conducted using a deterministic contact model based on the molecular-mechanical theory of friction. In the contact model, with given external loads and surface topographies, the pressure distribution, real contact area, and elastic/plastic deformation of each single asperity contact were calculated. The asperity friction coefficient was then predicted as the sum of the mechanical and molecular components of the friction coefficient. The mechanical component was mainly determined by the contact width and elastic/plastic deformation, and the molecular component was estimated as a function of the contact area and interfacial shear stress. Numerical results were compared with experimental results and good agreement was obtained. The model was then used to predict friction coefficients under different operating and surface conditions. Numerical results explain why applied load has a minimal effect on the friction coefficients. They also provide insight into the effect of surface roughness on the mechanical and molecular components of the friction coefficient. It is revealed that the mechanical component dominates the friction coefficient when the surface roughness is large (Rq > 0.2 μm), while the friction coefficient is mainly determined by the molecular component when the surface is relatively smooth (Rq < 0.2 μm). Furthermore, optimal roughness values for minimizing the friction coefficient are recommended.
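The two-component decomposition can be illustrated with a deliberately simplified sketch (all coefficients below are invented, chosen only so the molecular/mechanical crossover lands near the reported Rq ≈ 0.2 μm; this is not the paper's contact model):

```python
# Hypothetical illustration of mu = mu_mechanical + mu_molecular.
# Toy assumptions: real contact area shrinks as roughness grows, and the
# mechanical (ploughing) term grows with asperity slope ~ roughness.
def friction_components(rq_um, load_n=10.0, tau0=2.5e6, area_coef=4.0e-7):
    a_real = area_coef / (1.0 + rq_um)   # m^2, toy dependence on Rq
    mu_mol = tau0 * a_real / load_n      # shear stress x real area / load
    mu_mech = 0.5 * rq_um                # ploughing term, grows with Rq
    return mu_mech, mu_mol

for rq in (0.05, 0.5):
    mech, mol = friction_components(rq)
    dominant = "mechanical" if mech > mol else "molecular"
    print(f"Rq={rq:.2f} um: mu_mech={mech:.3f}, mu_mol={mol:.3f} -> {dominant} dominates")
```

With these toy numbers the molecular term dominates for smooth surfaces and the mechanical term for rough ones, mirroring the qualitative finding of the abstract.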
J. G. Isebrands; G. E. Host; K. Lenz; G. Wu; H. W. Stech
2000-01-01
Process models are powerful research tools for assessing the effects of multiple environmental stresses on forest plantations. These models are driven by interacting environmental variables and often include genetic factors necessary for assessing forest plantation growth over a range of different site, climate, and silvicultural conditions. However, process models are...
Development of a high resolution interstellar dust engineering model - overview of the project
NASA Astrophysics Data System (ADS)
Sterken, V. J.; Strub, P.; Soja, R. H.; Srama, R.; Krüger, H.; Grün, E.
2013-09-01
Beyond 3 AU heliocentric distance, the flow of interstellar dust through the solar system is a dominant component of the total dust population. The modulation of this flux with the solar cycle and the position in the solar system has been predicted by theoretical studies since the seventies. The modulation was proven to exist by matching dust trajectory simulations with real spacecraft data from Ulysses in 1998. The modulations were further analyzed and studied in detail in 2012. The current ESA interplanetary meteoroid model IMEM includes an interstellar dust component, but this component was modelled only with straight-line trajectories through the solar system. For the new ESA IMEX model, a high-resolution interstellar dust component is implemented separately from a dust streams module. The dust streams module focuses on dust streams released from comets (cf. Abstract R. Soja). Parallel processing techniques are used to improve computation time (cf. Abstract P. Strub). The goal is to make predictions for the interstellar dust flux as close to the Sun as 1 AU or closer, for future space mission design.
ERIC Educational Resources Information Center
Fitzhugh, Shannon Leigh
2012-01-01
The study reported here tests a model that includes several factors thought to contribute to the comprehension of static multimedia learning materials (i.e. background knowledge, working memory, attention to components as measured with eye movement measures). The model examines the effects of working memory capacity, domain specific (biology) and…
Hyper-Book: A Formal Model for Electronic Books.
ERIC Educational Resources Information Center
Catenazzi, Nadia; Sommaruga, Lorenzo
1994-01-01
Presents a model for electronic books based on the paper book metaphor. Discussion includes how the book evolves under the effects of its functional components; the use and impact of the model for organizing and presenting electronic documents in the context of electronic publishing; and the possible applications of a system based on the model.…
Modeling Force Transfer around Openings in Wood-Frame Shear Walls
Minghao Li; Frank Lam; Borjen Yeh; Tom Skaggs; Doug Rammer; James Wacker
2012-01-01
This paper presented a modeling study on force transfer around openings (FTAO) in wood-frame shear walls detailed for FTAO. To understand the load transfer in the walls, this study used a finite-element model WALL2D, which is able to model individual wall components, including framing members, sheathing panels, oriented panel-frame nailed connections, framing...
WMT: The CSDMS Web Modeling Tool
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.
2015-12-01
The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can design a model from a set of components, edit component parameters, save models to a web-accessible server, share saved models with the community, submit runs to an HPC system, and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse, and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python with SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation.
Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.
Siettos, Constantinos; Starke, Jens
2016-09-01
The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single-neuron dynamics over the behavior of groups of neurons to neuronal network activity. Thus, the connection from the microscopic scale (single-neuron activity) to macroscopic behavior (emergent behavior of the collective dynamics), and vice versa, is a key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, ranging from the modeling of single-neuron dynamics to machine learning. The models include biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), integrate-and-fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
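One of the single-neuron models listed above, FitzHugh-Nagumo, is small enough to integrate in a few lines. A forward-Euler sketch (parameter values are the standard textbook choice for tonic spiking, not taken from the review):

```python
import numpy as np

# FitzHugh-Nagumo: a two-variable reduction of Hodgkin-Huxley.
#   dv/dt = v - v^3/3 - w + I   (fast membrane-potential variable)
#   dw/dt = eps*(v + a - b*w)   (slow recovery variable)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # I = 0.5 puts the model in the spiking regime
dt, steps = 0.01, 20000
v, w = -1.0, 1.0
v_hist = np.empty(steps)
for i in range(steps):
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw
    v_hist[i] = v

print(f"v range over the run: [{v_hist.min():.2f}, {v_hist.max():.2f}]")
```

With this drive current the fixed point sits on the unstable middle branch of the cubic nullcline, so the trajectory settles onto a relaxation limit cycle: repetitive spiking.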
Multi-scale modeling of urban air pollution: development of a Street-in-Grid model
NASA Astrophysics Data System (ADS)
Kim, Youngseob; Wu, You; Seigneur, Christian; Roustan, Yelva
2016-04-01
A new multi-scale model of urban air pollution is presented. This model combines a chemical-transport model (CTM) that includes a comprehensive treatment of atmospheric chemistry and transport at spatial scales greater than 1 km and a street-network model that describes the atmospheric concentrations of pollutants in an urban street network. The street-network model is based on the general formulation of the SIRANE model and consists of two main components: a street-canyon component and a street-intersection component. The street-canyon component calculates the mass transfer velocity at the top of the street canyon (roof top) and the mean wind velocity within the street canyon. The estimation of the mass transfer velocity depends on the intensity of the standard deviation of the vertical velocity at roof top. The effect of various formulations of this mass transfer velocity on the pollutant transport at roof-top level is examined. The street-intersection component calculates the mass transfer from a given street to other streets across the intersection. These mass transfer rates among the streets are calculated using the mean wind velocity calculated for each street and are balanced so that the total incoming flow rate is equal to the total outgoing flow rate from the intersection including the flow between the intersection and the overlying atmosphere at roof top. In the default option, the Leighton photostationary cycle among ozone (O3) and nitrogen oxides (NO and NO2) is used to represent the chemical reactions within the street network. However, the influence of volatile organic compounds (VOC) on the pollutant concentrations increases when the nitrogen oxides (NOx) concentrations are low. To account for the possible VOC influence on street-canyon chemistry, the CB05 chemical kinetic mechanism, which includes 35 VOC model species, is implemented in this street-network model. 
A sensitivity study is conducted to assess the uncertainties associated with the use of the Leighton cycle chemistry. The street-network model is coupled to the CTM Polair3D of the Polyphemus air quality modeling platform to constitute a Street-in-Grid (SinG) model. The street-network model is used to simulate the concentrations of the chemical species in the lowest layer in the urban area and the simulation for the upper layers is then performed by Polair3D. Interactions between the street-network model and the host CTM occur at roof-top and depend on the vertical mass transfer described above. The SinG model is used to simulate the concentrations of gas-phase pollutants (O3 and NOx) in a Paris suburb. The emission data for each street that are needed for the street-network model were obtained from a dynamic traffic model. Topographic data, such as street length/width and building height, were obtained from a geographic database (BD TOPO). Simulated concentrations are compared to concentrations measured at two monitoring stations that were located on each side of a large avenue.
Christensen, Jette; Stryhn, Henrik; Vallières, André; El Allaki, Farouk
2011-05-01
In 2008, Canada designed and implemented the Canadian Notifiable Avian Influenza Surveillance System (CanNAISS) with six surveillance activities in a phased-in approach. CanNAISS qualified as a surveillance system because it comprised more than one surveillance activity or component in 2008: passive surveillance; pre-slaughter surveillance; and voluntary enhanced notifiable avian influenza surveillance. Our objectives were to give a short overview of two active surveillance components in CanNAISS, and to describe the CanNAISS scenario tree model and its application to estimating the probability of populations being free of NAI virus infection and to sample size determination. Our data from the pre-slaughter surveillance component included diagnostic test results from 6296 serum samples representing 601 commercial chicken and turkey farms, collected from 25 August 2008 to 29 January 2009. In addition, we included data from a sub-population of farms with high biosecurity standards: 36,164 samples from 55 farms sampled repeatedly over the 24-month study period from January 2007 to December 2008. All submissions were negative for Notifiable Avian Influenza (NAI) virus infection. We developed the CanNAISS scenario tree model to estimate the surveillance component sensitivity and the probability of a population being free of NAI at a farm-level prevalence of 0.01 and a within-farm prevalence of 0.3. We propose that a general model, such as the CanNAISS scenario tree model, may have a broader application than more detailed models that require disease-specific input parameters, such as relative risk estimates. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
Effect of Virtual Analytical Chemistry Laboratory on Enhancing Student Research Skills and Practices
ERIC Educational Resources Information Center
Bortnik, Boris; Stozhko, Natalia; Pervukhina, Irina; Tchernysheva, Albina; Belysheva, Galina
2017-01-01
This article aims to determine the effect of a virtual chemistry laboratory on university student achievement. The article describes a model of a laboratory course that includes a virtual component. This virtual component is viewed as a tool of student pre-lab autonomous learning. It presents electronic resources designed for a virtual laboratory…
Yang, Chenghu; Liu, Yangzhi; Cen, Qiulin; Zhu, Yaxian; Zhang, Yong
2018-02-01
The heterogeneous adsorption behavior of commercial humic acid (HA) on pristine and functionalized multi-walled carbon nanotubes (MWCNTs) was investigated by fluorescence excitation-emission matrix and parallel factor (EEM- PARAFAC) analysis. The kinetics, isotherms, thermodynamics and mechanisms of adsorption of HA fluorescent components onto MWCNTs were the focus of the present study. Three humic-like fluorescent components were distinguished, including one carboxylic-like fluorophore C1 (λ ex /λ em = (250, 310) nm/428nm), and two phenolic-like fluorophores, C2 (λ ex /λ em = (300, 460) nm/552nm) and C3 (λ ex /λ em = (270, 375) nm/520nm). The Lagergren pseudo-second-order model can be used to describe the adsorption kinetics of the HA fluorescent components. In addition, both the Freundlich and Langmuir models can be suitably employed to describe the adsorption of the HA fluorescent components onto MWCNTs with significantly high correlation coefficients (R 2 > 0.94, P< 0.05). The dissimilarity in the adsorption affinity (K d ) and nonlinear adsorption degree from the HA fluorescent components to MWCNTs was clearly observed. The adsorption mechanism suggested that the π-π electron donor-acceptor (EDA) interaction played an important role in the interaction between HA fluorescent components and the three MWCNTs. Furthermore, the values of the thermodynamic parameters, including the Gibbs free energy change (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), showed that the adsorption of the HA fluorescent components on MWCNTs was spontaneous and exothermic. Copyright © 2017 Elsevier Inc. All rights reserved.
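The Lagergren pseudo-second-order fit mentioned above is usually done through its linearized form, t/q_t = 1/(k2·qe²) + t/qe. A sketch with synthetic data (qe and k2 below are invented, not the paper's values; the point is recovering them from the linearization):

```python
import numpy as np

# Pseudo-second-order kinetics: q_t = k2*qe^2*t / (1 + k2*qe*t)
# Linearized: t/q_t is linear in t, with slope 1/qe and
# intercept 1/(k2*qe^2).
qe_true, k2_true = 25.0, 0.012               # mg/g and g/(mg min), illustrative
t = np.linspace(1.0, 120.0, 40)              # contact time, min
qt = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)

slope, intercept = np.polyfit(t, t / qt, 1)  # straight-line fit of t/q_t vs t
qe_fit = 1.0 / slope                         # slope     = 1/qe
k2_fit = slope**2 / intercept                # intercept = 1/(k2*qe^2)
print(f"qe = {qe_fit:.3f} mg/g, k2 = {k2_fit:.5f} g/(mg min)")
```

On noise-free synthetic data the fit recovers the generating parameters essentially exactly; with real data the R² of this straight line is what the abstract's correlation coefficients refer to.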
Taylor, John; Hall, Deborah A.; Walker, Dawn-Marie; McMurran, Mary; Casey, Amanda; Stockdale, David; Featherstone, Debbie; Hoare, Derek J.
2018-01-01
Objectives: The aim of this study was to determine which components of psychological therapies are most important and appropriate to inform audiologists’ usual care for people with tinnitus. Design: A 39-member panel of patients, audiologists, hearing therapists, and psychologists completed a three-round Delphi survey to reach consensus on essential components of audiologist-delivered psychologically informed care for tinnitus. Results: Consensus (≥80% agreement) was reached on including 76 of 160 components. No components reached consensus for exclusion. The components reaching consensus were predominantly common therapeutic skills such as Socratic questioning and active listening, rather than specific techniques, for example, graded exposure therapy or cognitive restructuring. Consensus on educational components to include largely concerned psychological models of tinnitus rather than neurophysiological information. Conclusions: The results of this Delphi survey provide a tool to develop audiologists’ usual tinnitus care using components that both patients and clinicians agree are important and appropriate to be delivered by an audiologist for adults with tinnitus-related distress. Research is now necessary to test the added effects of these components when delivered by audiologists. PMID:28930785
NASA Technical Reports Server (NTRS)
Bregman, Joel N.; Hogg, David E.; Roberts, Morton S.
1992-01-01
Interstellar components of early-type galaxies are established by galactic type and luminosity in order to search for relationships between the different interstellar components and to test the predictions of theoretical models. Some of the data include observations of neutral hydrogen, carbon monoxide, and radio continuum emission. An alternative distance model, which yields LX ∝ LB^2.45, a relation in conflict with simple cooling flow models, is discussed. The dispersion of the X-ray luminosity about this regression line is unlikely to result from stripping. The striking lack of clear correlations between hot and cold interstellar components, taken together with their morphologies, suggests that the cold gas is a disk phenomenon while the hot gas is a bulge phenomenon, with little interaction between the two. The progression of galaxy type from E to Sa is not only a sequence of decreasing stellar bulge-to-disk ratio, but also of hot-to-cold-gas ratio.
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation scales with the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
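The two scaling laws stated above can be illustrated numerically (the prefactor c below is arbitrary; only the square-root and inverse-growth-rate dependences come from the abstract):

```python
import math

# Illustrative scaling: lag ~ c * sqrt(t_starvation) / mu_max.
def lag_time(t_starv_h, mu_max_per_h, c=0.5):
    return c * math.sqrt(t_starv_h) / mu_max_per_h

base = lag_time(4.0, 1.0)
print(lag_time(16.0, 1.0) / base)   # 4x the starvation time -> 2x the lag
print(lag_time(4.0, 2.0) / base)    # 2x the max growth rate -> half the lag
```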
NASA Technical Reports Server (NTRS)
Humphreys, B. T.; Thompson, W. K.; Lewandowski, B. E.; Cadwell, E. E.; Newby, N. J.; Fincke, R. S.; Sheehan, C.; Mulugeta, L.
2012-01-01
NASA's Digital Astronaut Project (DAP) implements well-vetted computational models to predict and assess spaceflight health and performance risks, and enhance countermeasure development. DAP provides expertise and computation tools to its research customers for model development, integration, or analysis. DAP is currently supporting the NASA Exercise Physiology and Countermeasures (ExPC) project by integrating their biomechanical models of specific exercise movements with dynamic models of the devices on which the exercises were performed. This presentation focuses on the development of a high-fidelity dynamic module of the Advanced Resistive Exercise Device (ARED) on board the ISS. The ARED module, illustrated in the figure below, was developed using the Adams (MSC Software, Santa Ana, California) simulation package. The Adams package provides the capabilities to perform multi-rigid-body, flexible-body, and mixed dynamic analyses of complex mechanisms. These capabilities were applied to accurately simulate: inertial and mass properties of the device, such as the vibration isolation system (VIS) effects and other ARED components; non-linear joint friction effects; the gas-law dynamics of the vacuum cylinders and VIS components, using custom-written differential state equations; and the ARED flywheel dynamics, including the torque-limiting clutch. Design data from the JSC ARED Engineering team were utilized in developing the model. This included solid-modeling geometry files, component/system specifications, engineering reports, and available data sets. The Adams ARED module is importable into LifeMOD (Life Modeler, Inc., San Clemente, CA) for biomechanical analyses of different resistive exercises such as the squat and dead-lift. Using motion capture data from ground test subjects, the ExPC developed biomechanical exercise models in LifeMOD. The Adams ARED device module was then integrated with the exercise subject model into one integrated dynamic model. 
This presentation will describe the development of the Adams ARED module including its capabilities, limitations, and assumptions. Preliminary results, validation activities, and a practical application of the module to inform the relative effect of the flywheels on exercise will be discussed.
Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif
2014-11-01
Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m=0.17, and TD50=72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. 
Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
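The Lyman-Kutcher-Burman comparison model referenced above has a compact closed form: gEUD = (Σ v_i D_i^{1/n})^n and NTCP = Φ((gEUD − TD50)/(m·TD50)). A sketch using the fitted parameters reported in the abstract (n=0.12, m=0.17, TD50=72.6 Gy); the differential DVH below is invented for illustration:

```python
import math

def geud(doses_gy, volumes, n):
    # Generalized equivalent uniform dose from a differential DVH
    # (volumes are fractional and sum to 1).
    a = 1.0 / n
    return sum(v * d**a for d, v in zip(doses_gy, volumes)) ** (1.0 / a)

def ntcp_lkb(doses_gy, volumes, n=0.12, m=0.17, td50=72.6):
    # LKB probit model: NTCP = standard normal CDF of the scaled gEUD excess
    t = (geud(doses_gy, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical differential DVH: fractional volumes per dose bin
doses = [20.0, 40.0, 60.0, 70.0, 75.0]
vols = [0.30, 0.25, 0.20, 0.15, 0.10]
print(f"gEUD = {geud(doses, vols, 0.12):.1f} Gy, NTCP = {ntcp_lkb(doses, vols):.3f}")
```

The small volume parameter n makes gEUD heavily weighted toward the high-dose bins, which is why the intermediate-to-high-dose region dominates the complication risk, consistent with the PCA/FLR findings.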
Modeling the impact behavior of high strength ceramics. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajendran, A.M.
1993-12-01
An advanced constitutive model is used to describe the shock and high-strain-rate behaviors of silicon carbide (SiC), boron carbide (B4C), and titanium diboride (TiB2) under impact loading conditions. The model's governing equations utilize a set of microphysically-based constitutive relationships to model the deformation and damage processes in a ceramic. The total strain is decomposed into elastic, plastic, and microcracking components. The plastic strain component was calculated using conventional viscoplastic equations. The strain components due to microcracking utilized relationships derived for a penny-shaped crack containing elastic solids. The main features of the model include degradation of strength and stiffness under both compressive and tensile loading conditions. When loaded above the Hugoniot elastic limit (HEL), the strength is limited by the strain-rate-dependent strength equation. However, below the HEL, the strength variation with respect to strain rate and pressure is modeled through microcracking relationships assuming no plastic flow. The ceramic model parameters were determined using a set of VISAR data from the plate impact experiments.
Olivares, Pedro R; García-Rubio, Javier
2016-01-01
To analyze the associations between different components of fitness and fatness with academic performance, adjusting the analysis by sex, age, socio-economic status, region and school type in a Chilean sample. Data on fitness, fatness and academic performance were obtained from the Chilean System for the Assessment of Educational Quality test for eighth grade in 2011 and include a sample of 18,746 subjects (49% females). Partial correlations adjusted by confounders were computed to explore associations between fitness and fatness components, and between the academic scores. Three unadjusted and adjusted linear regression models were fitted in order to analyze the associations of variables. Fatness has a negative association with academic performance when Body Mass Index (BMI) and Waist to Height Ratio (WHR) are assessed independently. When BMI and WHR are assessed jointly and adjusted by confounders, WHR is more associated with academic performance than BMI, and only the association of WHR is positive. For fitness components, strength was the variable most associated with academic performance. Cardiorespiratory capacity was not associated with academic performance if fatness and other fitness components are included in the model. Fitness and fatness are associated with academic performance. WHR and strength are more related with academic performance than BMI and cardiorespiratory capacity.
2016-01-01
Objectives To analyze the associations between different components of fitness and fatness with academic performance, adjusting the analysis by sex, age, socio-economic status, region and school type in a Chilean sample. Methods Data on fitness, fatness and academic performance were obtained from the Chilean System for the Assessment of Educational Quality test for eighth grade in 2011 and include a sample of 18,746 subjects (49% females). Partial correlations adjusted by confounders were computed to explore associations between fitness and fatness components, and between the academic scores. Three unadjusted and adjusted linear regression models were fitted in order to analyze the associations of variables. Results Fatness has a negative association with academic performance when Body Mass Index (BMI) and Waist to Height Ratio (WHR) are assessed independently. When BMI and WHR are assessed jointly and adjusted by confounders, WHR is more associated with academic performance than BMI, and only the association of WHR is positive. For fitness components, strength was the variable most associated with academic performance. Cardiorespiratory capacity was not associated with academic performance if fatness and other fitness components are included in the model. Conclusions Fitness and fatness are associated with academic performance. WHR and strength are more related with academic performance than BMI and cardiorespiratory capacity. PMID:27761345
NASA Astrophysics Data System (ADS)
Sin, Kuek Jia; Cheong, Chin Wen; Hooi, Tan Siow
2017-04-01
This study investigates crude oil volatility using a two-component autoregressive conditional heteroscedasticity (ARCH) model with an abrupt-jump feature. The model is able to capture abrupt jumps, news impact, volatility clustering, long-persistence volatility and the heavy-tailed error distribution commonly observed in crude oil time series. For the empirical study, we selected the WTI crude oil index from 2000 to 2016. The results show that including multiple abrupt jumps in the ARCH model yields significant improvements in estimation performance compared with standard ARCH models. The outcomes of this study can provide useful information for risk management and portfolio analysis in the crude oil markets.
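The qualitative effect of combining an ARCH recursion with rare jumps can be shown in a generic simulation sketch (this is a plain ARCH(1) plus an additive jump term, not the paper's two-component specification; all parameter values are invented):

```python
import numpy as np

# ARCH(1) with an additive jump component: conditional variance reacts to
# past squared returns (clustering), and rare jumps add heavy tails.
rng = np.random.default_rng(42)
n, omega, alpha = 5000, 0.05, 0.30       # ARCH(1) parameters (illustrative)
p_jump, jump_scale = 0.01, 5.0           # 1% chance of an abrupt jump

r = np.zeros(n)
sigma2 = np.full(n, omega / (1.0 - alpha))   # start at unconditional variance
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2
    jump = jump_scale * rng.standard_normal() if rng.random() < p_jump else 0.0
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal() + jump

# Excess kurtosis > 0: clustering plus jumps produce the heavy tails the
# abstract describes for crude oil returns.
kurt = np.mean((r - r.mean()) ** 4) / np.var(r) ** 2 - 3.0
print(f"excess kurtosis = {kurt:.2f}")
```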
NASA Technical Reports Server (NTRS)
1976-01-01
The design, fabrication, tests, and engineering model components of a 10.6 μm wideband transceiver system are reported. The effort emphasized the transmitter subsystem, including the development of the laser and the modulator driver, and included productization of both the transmitter and local oscillator lasers. The transmitter subsystem is functionally compatible with the receiver engineering model terminal, and has undergone high-data-rate communication system testing against that terminal.
James E. Smith; Linda S. Heath
2015-01-01
Our approach is based on a collection of models that convert or augment the USDA Forest Inventory and Analysis program survey data to estimate all forest carbon component stocks, including live and standing dead tree aboveground and belowground biomass, forest floor (litter), down deadwood, and soil organic carbon, for each inventory plot. The data, which include...
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.
2013-12-01
Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrography Dataset) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network.
Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.
THE 2006 CMAQ RELEASE AND PLANS FOR 2007
The 2006 release of the Community Multiscale Air Quality (CMAQ) model (Version 4.6) includes upgrades to several model components as well as new modules for gas-phase chemistry and boundary layer mixing. Capabilities for simulation of hazardous air pollutants have been expanded ...
A STUDY OF GAS-PHASE MERCURY SPECIATION USING DETAILED CHEMICAL KINETICS
Mercury (Hg) speciation in combustion-generated flue gas is modeled using a detailed chemical mechanism consisting of 60 reactions and 21 species. This speciation model accounts for chlorination and oxidation of key flue-gas components, including elemental mercury. Results indica...
Classical least squares multivariate spectral analysis
Haaland, David M.
2002-01-01
An improved classical least squares (CLS) multivariate spectral analysis method adds spectral shapes describing non-calibrated components and system effects (other than baseline corrections) present in the analyzed mixture to the prediction phase of the method. These improvements decrease or eliminate many of the restrictions of CLS-type methods and greatly extend their capabilities, accuracy, and precision. One new application of this prediction-augmented CLS (PACLS) method is the ability to accurately predict unknown sample concentrations when new unmodeled spectral components are present in the unknown samples. Other applications of PACLS include the incorporation of spectrometer drift into the quantitative multivariate model and the maintenance of a calibration on a drifting spectrometer. Finally, the ability of PACLS to transfer a multivariate model between spectrometers is demonstrated.
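The prediction-phase augmentation can be illustrated by appending an interferent's spectral shape to the pure-component matrix before the least-squares concentration estimate. This is a synthetic sketch, not the patented implementation; the band shapes and concentrations are invented.

```python
import numpy as np

wl = np.linspace(0.0, 1.0, 50)                     # wavelength axis
# Two calibrated pure-component spectra (Gaussian bands)
K = np.vstack([np.exp(-((wl - 0.3) ** 2) / 0.01),
               np.exp(-((wl - 0.7) ** 2) / 0.01)])
true_c = np.array([0.8, 0.4])
# Unmodeled interferent present in the unknown sample
interferent = 0.5 * np.exp(-((wl - 0.5) ** 2) / 0.02)
spectrum = true_c @ K + interferent

# Plain CLS prediction: biased because the interferent is unmodeled
c_cls = np.linalg.lstsq(K.T, spectrum, rcond=None)[0]

# Augmented prediction: add the interferent's shape to the model
K_aug = np.vstack([K, interferent])
c_aug = np.linalg.lstsq(K_aug.T, spectrum, rcond=None)[0][:2]
```

With the interferent's shape included, the concentration estimate for the calibrated components is recovered essentially exactly, while the plain CLS estimate carries a visible bias.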
[Computer aided design and rapid manufacturing of removable partial denture frameworks].
Han, Jing; Lü, Pei-jun; Wang, Yong
2010-08-01
To introduce a method of digitally modeling and fabricating removable partial denture (RPD) frameworks using self-developed software for RPD design and a rapid manufacturing system. The three-dimensional data of two partially dentate dental casts were obtained using a three-dimensional cross-section scanner. A self-developed software package for RPD design was used to decide the path of insertion and to design the different components of the RPD frameworks. The components included the occlusal rest, clasp, lingual bar, polymeric retention framework and maxillary major connector. The design procedure for the components was as follows: first, determine the outline of the component. Second, build the tissue surface of the component using the scanned data within the outline. Third, use a preset cross section to produce the polished surface. Finally, the different RPD components were modeled respectively and connected by minor connectors to form an integrated RPD framework. The finished data were imported into a self-developed selective laser melting (SLM) machine and metal frameworks were fabricated directly. RPD frameworks for the two scanned dental casts were modeled with this self-developed program and metal RPD frameworks were successfully fabricated using the SLM method. The finished metal frameworks fit well on the plaster models. The self-developed computer aided design and computer aided manufacture (CAD-CAM) system for RPD design and fabrication has completely independent intellectual property rights. It provides a new method of manufacturing metal RPD frameworks.
Advanced nozzle and engine components test facility
NASA Technical Reports Server (NTRS)
Beltran, Luis R.; Delroso, Richard L.; Delrosario, Ruben
1992-01-01
A test facility for conducting scaled advanced nozzle and engine component research is described. The CE-22 test facility, located in the Engine Research Building of the NASA Lewis Research Center, contains many systems for the economical testing of advanced scale-model nozzles and engine components. The combustion air and altitude exhaust systems are described. Combustion air can be supplied to a model up to 40 psig for primary air flow, and 40, 125, and 450 psig for secondary air flow. Altitude exhaust can be simulated up to 48,000 ft, or the exhaust can be atmospheric. The multiaxis thrust stand, a color schlieren flow visualization system used for qualitative flow analysis, a labyrinth flow measurement system, a data acquisition system, and auxiliary systems are also described. Recommended model design information and temperature and pressure instrumentation recommendations are included.
A Model for Evaluating Programs for the Gifted under Non-Experimental Conditions.
ERIC Educational Resources Information Center
Carter, Kyle R.
1992-01-01
The article presents and illustrates use of an evaluation model for assessing programs for the gifted where tight experimental control is not possible. The model consists of four components: ex post facto designs including intact groups; comparative evaluation; strength of treatment; and multiple outcome assessment from flexible data sources. (DB)
Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production
ERIC Educational Resources Information Center
Guenther, Frank H.; Ghosh, Satrajit S.; Tourville, Jason A.
2006-01-01
This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and…
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Learning Tasks, Peer Interaction, and Cognition Process: An Online Collaborative Design Model
ERIC Educational Resources Information Center
Du, Jianxia; Durrington, Vance A.
2013-01-01
This paper illustrates a model for Online Group Collaborative Learning. The authors based the foundation of the Online Collaborative Design Model upon Piaget's concepts of assimilation and accommodation, and Vygotsky's theory of social interaction. The four components of online collaborative learning include: individual processes, the task(s)…
Alegado, Rosanna A; Campbell, Marianne C; Chen, Will C; Slutz, Sandra S; Tan, Man-Wah
2003-07-01
The soil-borne nematode Caenorhabditis elegans is emerging as a versatile model in which to study host-pathogen interactions. The worm model has been shown to be particularly effective in elucidating both microbial and animal genes involved in toxin-mediated killing. In addition, recent work on worm infection by a variety of bacterial pathogens has shown that a number of virulence regulatory genes mediate worm susceptibility. Many of these regulatory genes, including the PhoP/Q two-component regulators in Salmonella and LasR in Pseudomonas aeruginosa, have also been implicated in mammalian models, suggesting that findings in the worm model will be relevant to other systems. In keeping with this concept, experiments aimed at identifying host innate immunity genes have also implicated pathways suggested to play a role in plants and animals, such as the p38 MAP kinase pathway. Despite rapid forward progress using this model, much work remains to be done, including the design of more sensitive methods to find effector molecules and further characterization of the exact interaction between invading pathogens and C. elegans' cellular components.
The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.
Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot
2006-05-01
Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model, the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, P.; Eurek, K.; Margolis, R.
2014-07-01
Because solar power is a rapidly growing component of the electricity system, robust representations of solar technologies should be included in capacity-expansion models. This is a challenge because modeling the electricity system--and, in particular, modeling solar integration within that system--is a complex endeavor. This report highlights the major challenges of incorporating solar technologies into capacity-expansion models and shows examples of how specific models address those challenges. These challenges include modeling non-dispatchable technologies, determining which solar technologies to model, choosing a spatial resolution, incorporating a solar resource assessment, and accounting for solar generation variability and uncertainty.
NASA Astrophysics Data System (ADS)
Spanoudaki, Katerina
2016-04-01
Oil biodegradation by native bacteria is one of the most important natural processes that can attenuate the environmental impacts of marine oil spills. However, very few numerical models of oil spill fate and transport include biodegradation kinetics of spilled oil. Furthermore, in models where biodegradation is included amongst the oil transformation processes simulated, it is mostly represented as a first-order decay process, neglecting the effect of several important parameters that can limit the biodegradation rate, such as oil composition and the oil droplet-water interface. To this end, the open source numerical model MEDSLIK-II, which simulates oil spill fate and transport in the marine environment, has been modified to include biodegradation kinetics of oil droplets dispersed in the water column. MEDSLIK-II predicts the transport and weathering of oil spills following a Lagrangian approach for the solution of the advection-diffusion equation. Transport is governed by the 3D sea currents and wave field provided by ocean circulation models. In addition to advective and diffusive displacements, the model simulates several physical and chemical processes that transform the oil (evaporation, emulsification, dispersion in the water column, adhesion to coast). The fate algorithms employed in MEDSLIK-II consider the oil as a uniform substance whose properties change as the slick weathers, an approach that can lead to reduced accuracy, especially in the estimation of oil evaporation and biodegradation. Therefore, MEDSLIK-II has been modified by adopting the "pseudo-component" approach for simulating weathering processes. Spilled oil is modelled as a relatively small number of discrete, non-interacting components (pseudo-components). Chemicals in the oil mixture are grouped by physical-chemical properties and the resulting pseudo-component behaves as if it were a single substance with characteristics typical of the chemical group.
The fate (evaporation, dispersion, biodegradation) of each component is tracked separately. Biodegradation of oil droplets is modelled by Monod kinetics. The kinetics of oil-particle size reduction due to microbe-mediated degradation at the water-oil particle interface is represented by the shrinking core model. In order to test the performance of the modified MEDSLIK-II model, it has been applied to a test case built into the original code. The fate of the oil spill is simulated both with and without biodegradation kinetics for comparison. Several parameters that control the biodegradation rate, including initial oil concentration and composition, size distribution of oil droplets and initial microbial concentration, have been investigated. This upgraded version of MEDSLIK-II can be useful not only for predicting the transport and fate of spilled oil in the short term but also for evaluating different bioremediation strategies and risk assessment for the mid and long term. Acknowledgements: The financial support by the EU project DECATASTROPHIZE: Use of SDSS and MCDA to Prepare for Disasters or Plan for Multiple Hazards, GA no. ECHO/SUB/2015/713788/PREP02, is greatly acknowledged.
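The Monod biodegradation kinetics mentioned above can be sketched as a forward-Euler integration of substrate (oil) and biomass; the rate constants below are hypothetical, and the shrinking-core droplet geometry is omitted.

```python
import numpy as np

def biodegrade(oil0, biomass0, mu_max, Ks, Y, dt, steps):
    """Forward-Euler Monod kinetics: specific growth rate
    mu = mu_max * S / (Ks + S), with substrate S the oil concentration
    and yield Y the biomass produced per unit oil consumed."""
    S, X = oil0, biomass0
    series = [S]
    for _ in range(steps):
        mu = mu_max * S / (Ks + S)
        dX = mu * X * dt          # biomass growth over the step
        S = max(S - dX / Y, 0.0)  # oil consumed to fuel that growth
        X += dX
        series.append(S)
    return np.array(series)

oil = biodegrade(oil0=10.0, biomass0=0.1, mu_max=0.5,
                 Ks=2.0, Y=0.4, dt=0.1, steps=500)
```

Unlike a first-order decay, the depletion rate here depends on both the remaining oil and the growing microbial population, giving the characteristic lag-then-rapid-decline curve.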
NASA Astrophysics Data System (ADS)
Li, Z.
2003-12-01
Application of GIS and visualization technology significantly contributes to the efficiency and success of developing ground-water models in the Twentynine Palms and San Jose areas, California. Visualizations from GIS and other tools can help to formulate the conceptual model by quickly revealing the basinwide geohydrologic characteristics and changes of a ground-water flow system, and by identifying the most influential components of system dynamics. In addition, 3-D visualizations and animations can help validate the conceptual formulation and the numerical calibration of the model by checking for model-input data errors, revealing cause-and-effect relationships, and identifying hidden design flaws in model layering and other critical flow components. Two case studies will be presented: the first is a desert basin (near the town of Twentynine Palms) characterized by a fault-controlled ground-water flow system. The second is a coastal basin (Santa Clara Valley, including the city of San Jose) characterized by complex, temporally variable flow components, including artificial recharge through a large system of ponds and stream channels, dynamically changing inter-layer flow from hundreds of multi-aquifer wells, pumping-driven subsidence and recovery, and climatically variable natural recharge. For the Twentynine Palms area, more than 10,000 historical ground-water level and water-quality measurements were retrieved from the USGS databases. The combined use of GIS and visualization tools allowed these data to be swiftly organized and interpreted, and depicted by water-level and water-quality maps with a variety of themes for different uses.
Overlaying and cross-correlating these maps with other hydrological, geological, geophysical, and geochemical data not only helped to quickly identify the major geohydrologic characteristics controlling the natural variation of hydraulic head in space, such as faults, basin-bottom altitude, and aquifer stratigraphies, but also helped to identify the temporal changes induced by human activities, such as pumping. For the San Jose area, a regional-scale ground-water/surface-water flow model was developed with 6 model layers, 360 monthly stress periods, and complex flow components. The model was visualized by creating animations for both hydraulic head and land subsidence. Cell-by-cell flow of individual flow components was also animated. These included simulated infiltration from climatically variable natural recharge, interlayer flow through multi-aquifer well bores, flow gains and losses along stream channels, and storage change in response to system recharge and discharge. These animations were used to examine consistency with other independent observations, such as measured water-level distribution, mapped gaining and losing stream reaches, and InSAR-interpreted subsidence and uplift. In addition, they revealed enormous detail on the spatial and temporal variation of individual flow components as well as of the entire flow system, and thus significantly increased understanding of system dynamics and improved the accuracy of model simulations.
NASA Technical Reports Server (NTRS)
Fields, Christina M.
2013-01-01
The Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI) is responsible for providing simulations to support test and verification of SCCS hardware and software. The Universal Coolant Transporter System (UCTS) was a Space Shuttle Orbiter support piece of the Ground Servicing Equipment (GSE). The initial purpose of the UCTS was to provide two support services to the Space Shuttle Orbiter immediately after landing at the Shuttle Landing Facility. The UCTS is designed with the capability of servicing future space vehicles, including all Space Station requirements necessary for the MPLM modules. The simulation uses GSE models to stand in for the actual systems to support testing of SCCS systems during their development. As an intern at Kennedy Space Center (KSC), my assignment was to develop a model component for the UCTS. I was given a fluid component (dryer) to model in Simulink. I completed training for UNIX and Simulink. The dryer is a Catch-All replaceable-core type filter-dryer. The filter-dryer provides maximum protection for the thermostatic expansion valve and solenoid valve from dirt that may be in the system. The filter-dryer also protects the valves from freezing up. I researched fluid dynamics to understand the function of my component. The filter-dryer was modeled by determining the effects it has on the pressure and velocity of the system. I used Bernoulli's equation to calculate the pressure and velocity differential through the dryer. I created my filter-dryer model in Simulink and wrote the test script to test the component. I completed component testing and captured test data. The finalized model was sent for peer review for any improvements. I participated in simulation meetings and was involved in the subsystem design process and team collaborations. I gained valuable work experience and insight into a career path as an engineer.
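The pressure and velocity differential across a component like the filter-dryer can be sketched from continuity and Bernoulli's equation with a minor-loss term. The diameters, loss coefficient, and fluid properties below are illustrative assumptions, not UCTS hardware values, and the actual model was built in Simulink rather than Python.

```python
import math

def bernoulli_outlet(p_in, v_in, rho, d_in, d_out, loss_coeff=0.0):
    """Outlet velocity from continuity and outlet pressure from
    Bernoulli's equation with a minor-loss term k*0.5*rho*v_out**2
    (steady, incompressible, horizontal flow; SI units)."""
    a_in = math.pi * d_in ** 2 / 4.0
    a_out = math.pi * d_out ** 2 / 4.0
    v_out = v_in * a_in / a_out                       # continuity
    p_out = (p_in + 0.5 * rho * (v_in ** 2 - v_out ** 2)
             - loss_coeff * 0.5 * rho * v_out ** 2)   # Bernoulli + loss
    return p_out, v_out

# Hypothetical refrigerant-line numbers for illustration only
p_out, v_out = bernoulli_outlet(p_in=300e3, v_in=2.0, rho=1200.0,
                                d_in=0.02, d_out=0.015, loss_coeff=1.5)
```

The contraction accelerates the flow, and the combined dynamic-pressure and loss terms give an outlet pressure below the inlet pressure.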
Improvement and extension of a radar forest backscattering model
NASA Technical Reports Server (NTRS)
Simonett, David S.; Wang, Yong
1989-01-01
A radar backscatter model of mangal (mangrove) forest stands in the Sundarbans area of southern Bangladesh was developed. The modeling employs radar system parameters such as wavelength, polarization, and incidence angle, together with forest data on tree height, spacing, biomass, species combinations, and water content (including slightly conductive water) both in leaves and trunks of the mangal. For Sundri and Gewa tropical mangal forests, five model components are proposed, which are required to explain the contributions of various forest species combinations to the attenuation and scattering of mangal-vegetated nonflooded or flooded surfaces. Statistical data from simulated images (HH components only) were compared with those of SIR-B images both to refine the modeling procedures and to appropriately characterize the model output. The possibility of delineating flooded or non-flooded boundaries is discussed.
NASA Astrophysics Data System (ADS)
Goteti, G.; Kaheil, Y. H.; Katz, B. G.; Li, S.; Lohmann, D.
2011-12-01
In the United States, government agencies as well as the National Flood Insurance Program (NFIP) use flood inundation maps associated with the 100-year return period (base flood elevation, BFE), produced by the Federal Emergency Management Agency (FEMA), as the basis for flood insurance. A credibility check of the flood risk hydraulic models, often employed by insurance companies, is their ability to reasonably reproduce FEMA's BFE maps. We present results from the implementation of a flood modeling methodology aimed towards reproducing FEMA's BFE maps at a very fine spatial resolution using a computationally parsimonious, yet robust, hydraulic model. The hydraulic model used in this study has two components: one for simulating flooding of the river channel and adjacent floodplain, and the other for simulating flooding in the remainder of the catchment. The first component is based on a 1-D wave propagation model, while the second component is based on a 2-D diffusive wave model. The 1-D component captures the flooding from large-scale river transport (including upstream effects), while the 2-D component captures the flooding from local rainfall. The study domain consists of the contiguous United States, hydrologically subdivided into catchments averaging about 500 km2 in area, at a spatial resolution of 30 meters. Using historical daily precipitation data from the Climate Prediction Center (CPC), the precipitation associated with the 100-year return period event was computed for each catchment and was input to the hydraulic model. Flood extent from the FEMA BFE maps is reasonably replicated by the 1-D component of the model (riverine flooding). FEMA's BFE maps only represent the riverine flooding component and are unavailable for many regions of the USA. However, this modeling methodology (1-D and 2-D components together) covers the entire contiguous USA. 
This study is part of a larger modeling effort from Risk Management Solutions (RMS) to estimate flood risk associated with extreme precipitation events in the USA. Towards this greater objective, state-of-the-art models of flood hazard and stochastic precipitation are being implemented over the contiguous United States. Results from the successful implementation of the modeling methodology will be presented.
NASA Technical Reports Server (NTRS)
Allen, Jerry M.
2005-01-01
An experimental study has been performed to develop a large force and moment aerodynamic data set on a slender axisymmetric missile configuration having cruciform strakes and in-line control tail fins. The data include six-component balance measurements of the configuration aerodynamics and three-component measurements on all four tail fins. The test variables include angle of attack, roll angle, Mach number, model buildup, strake length, nose size, and tail fin deflection angles to provide pitch, yaw, and roll control. Test Mach numbers ranged from 0.60 to 4.63. The entire data set is presented on a CD-ROM that is attached to this paper. The CD-ROM also includes extensive plots of both the six-component configuration data and the three-component tail fin data. Selected samples of these plots are presented in this paper to illustrate the features of the data and to investigate the effects of the test variables.
NASA Astrophysics Data System (ADS)
Adams, Jordan M.; Gasparini, Nicole M.; Hobley, Daniel E. J.; Tucker, Gregory E.; Hutton, Eric W. H.; Nudurupati, Sai S.; Istanbulluoglu, Erkan
2017-04-01
Representation of flowing water in landscape evolution models (LEMs) is often simplified compared to hydrodynamic models, as LEMs make assumptions reducing physical complexity in favor of computational efficiency. The Landlab modeling framework can be used to bridge the divide between complex runoff models and more traditional LEMs, creating a new type of framework not commonly used in the geomorphology or hydrology communities. Landlab is a Python-language library that includes tools and process components that can be used to create models of Earth-surface dynamics over a range of temporal and spatial scales. The Landlab OverlandFlow component is based on a simplified inertial approximation of the shallow water equations, following the solution of de Almeida et al. (2012). This explicit two-dimensional hydrodynamic algorithm simulates a flood wave across a model domain, where water discharge and flow depth are calculated at all locations within a structured (raster) grid. Here, we illustrate how the OverlandFlow component contained within Landlab can be applied as a simplified event-based runoff model and how to couple the runoff model with an incision model operating on decadal timescales. Examples of flow routing on both real and synthetic landscapes are shown. Hydrographs from a single storm at multiple locations in the Spring Creek watershed, Colorado, USA, are illustrated, along with a map of shear stress applied on the land surface by flowing water. The OverlandFlow component can also be coupled with the Landlab DetachmentLtdErosion component to illustrate how the non-steady flow routing regime impacts incision across a watershed. The hydrograph and incision results are compared to simulations driven by steady-state runoff.
Results from the coupled runoff and incision model indicate that runoff dynamics can impact landscape relief and channel concavity, suggesting that, on landscape evolution timescales, the OverlandFlow model may lead to differences in simulated topography in comparison with traditional methods. The exploratory test cases described within demonstrate how the OverlandFlow component can be used in both hydrologic and geomorphic applications.
Carbon Storage in Urban Areas in the USA
NASA Astrophysics Data System (ADS)
Churkina, G.; Brown, D.; Keoleian, G.
2007-12-01
It is widely accepted that human settlements occupy a small proportion of the landmass and therefore play a relatively small role in the dynamics of the global carbon cycle. Most modeling studies focusing on the land carbon cycle use models of varying complexity to estimate carbon fluxes through forests, grasses, and croplands, but completely omit urban areas from their scope. Here, we estimate carbon storage in urban areas within the United States, defined to encompass a range of observed settlement densities, and its changes from 1950 to 2000. We show that this storage is not negligible and has been continuously increasing. We include natural and human-related components of urban areas in our estimates. The natural component includes carbon storage in urban soil and vegetation. The human-related component encompasses carbon stored long term in buildings, furniture, cars, and waste. The study suggests that urban areas should receive continued attention in efforts to accurately account for carbon uptake and storage in terrestrial systems.
Snyder, Daniel T.; Brownell, Dorie L.
1996-01-01
Suggestions for further study include (1) evaluation of the surface-runoff component of inflow to the lake; (2) use of a cross-sectional ground-water flow model to estimate ground-water inflow, outflow, and storage; (3) additional data collection to reduce the uncertainties of the hydrologic components that have large relative uncertainties; and (4) determination of long-term trends for a wide range of climatic and hydrologic conditions.
Wall, Michael E; Van Benschoten, Andrew H; Sauter, Nicholas K; Adams, Paul D; Fraser, James S; Terwilliger, Thomas C
2014-12-16
X-ray diffraction from protein crystals includes both sharply peaked Bragg reflections and diffuse intensity between the peaks. The information in Bragg scattering is limited to what is available in the mean electron density. The diffuse scattering arises from correlations in the electron density variations and therefore contains information about collective motions in proteins. Previous studies using molecular-dynamics (MD) simulations to model diffuse scattering have been hindered by insufficient sampling of the conformational ensemble. To overcome this issue, we have performed a 1.1-μs MD simulation of crystalline staphylococcal nuclease, providing 100-fold more sampling than previous studies. This simulation enables reproducible calculations of the diffuse intensity and predicts functionally important motions, including transitions among at least eight metastable states with different active-site geometries. The total diffuse intensity calculated using the MD model is highly correlated with the experimental data. In particular, there is excellent agreement for the isotropic component of the diffuse intensity, and substantial but weaker agreement for the anisotropic component. Decomposition of the MD model into protein and solvent components indicates that protein-solvent interactions contribute substantially to the overall diffuse intensity. We conclude that diffuse scattering can be used to validate predictions from MD simulations and can provide information to improve MD models of protein motions.
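The isotropic/anisotropic split of the diffuse intensity mentioned above can be sketched as a radial (shell) average over |q| plus its residual. The synthetic intensities below are invented for illustration, not the staphylococcal nuclease data.

```python
import numpy as np

def radial_split(q_mag, intensity, nbins=20):
    """Split diffuse intensity into an isotropic part (the mean over
    shells of |q|) and the anisotropic residual."""
    edges = np.linspace(q_mag.min(), q_mag.max(), nbins + 1)
    idx = np.clip(np.digitize(q_mag, edges) - 1, 0, nbins - 1)
    shell_mean = np.array([intensity[idx == i].mean()
                           for i in range(nbins)])
    iso = shell_mean[idx]          # isotropic value at each sample point
    return iso, intensity - iso

rng = np.random.default_rng(3)
q_mag = rng.uniform(0.0, 2.0, 5000)            # |q| of sampled points
mu = rng.uniform(-1.0, 1.0, 5000)              # direction-dependent factor
intensity = np.exp(-q_mag) * (1.0 + 0.2 * mu)  # radial decay + anisotropy
iso, aniso = radial_split(q_mag, intensity)
```

The shell average captures the radial decay, so the anisotropic residual has zero mean within each shell and much smaller variance than the raw intensity.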
Culbertson, Heather; Kuchenbecker, Katherine J
2017-01-01
Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
A novel energy recovery system for parallel hybrid hydraulic excavator.
Li, Wei; Cao, Baoyu; Zhu, Zhencai; Chen, Guoan
2014-01-01
Energy saving in hydraulic excavators is important for relieving resource shortages and protecting the environment. This paper mainly discusses energy saving for the hybrid hydraulic excavator. By analyzing the excess energy of the three hydraulic cylinders in a conventional hydraulic excavator, a new boom potential-energy recovery system is proposed. Mathematical models of the main components, including the boom cylinder, hydraulic motor, and hydraulic accumulator, are built. The natural frequency of the proposed energy recovery system is calculated based on these mathematical models. Meanwhile, simulation models of the proposed system and a conventional energy recovery system are built in AMESim. The results show that the proposed system is more effective than the conventional energy-saving system. Finally, the main components of the proposed energy recovery system, including the accumulator and hydraulic motor, are analyzed with a view to improving energy recovery efficiency, and measures to improve the energy recovery efficiency of the proposed system are presented.
Muir, W M; Howard, R D
2001-07-01
Any release of transgenic organisms into nature is a concern because ecological relationships between genetically engineered organisms and other organisms (including their wild-type conspecifics) are unknown. To address this concern, we developed a method to evaluate risk in which we input estimates of fitness parameters from a founder population into a recurrence model to predict changes in transgene frequency after a simulated transgenic release. With this method, we grouped various aspects of an organism's life cycle into six net fitness components: juvenile viability, adult viability, age at sexual maturity, female fecundity, male fertility, and mating advantage. We estimated these components for wild-type and transgenic individuals using the Japanese medaka (Oryzias latipes). We generalized our model's predictions using various combinations of fitness component values in addition to our experimentally derived estimates. Our model predicted that, for a wide range of parameter values, transgenes could spread in populations despite high juvenile viability costs if transgenes also have sufficiently high positive effects on other fitness components. Sensitivity analyses indicated that transgene effects on age at sexual maturity should have the greatest impact on transgene frequency, followed by juvenile viability, mating advantage, female fecundity, and male fertility, with changes in adult viability having the least impact.
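The recurrence approach described above can be illustrated with a deliberately simplified one-locus sketch: the six net fitness components are collapsed into a single aggregate fitness multiplier per class, and the parameter values are illustrative, not the medaka estimates.

```python
def next_freq(p, w_trans, w_wild):
    """One-generation update of transgene frequency p under selection,
    with each class's fitness components collapsed into one net multiplier."""
    wbar = p * w_trans + (1.0 - p) * w_wild   # population mean fitness
    return p * w_trans / wbar

def simulate(p0, w_trans, w_wild, generations):
    """Trajectory of transgene frequency after a simulated release at p0."""
    traj = [p0]
    for _ in range(generations):
        traj.append(next_freq(traj[-1], w_trans, w_wild))
    return traj

# A net fitness advantage (w_trans > w_wild), e.g. from mating advantage
# outweighing a juvenile-viability cost, spreads the transgene even from a
# rare release, mirroring the paper's qualitative prediction.
traj = simulate(p0=0.01, w_trans=1.2, w_wild=1.0, generations=50)
print(round(traj[0], 3), round(traj[-1], 3))
```

With equal fitnesses (w_trans = w_wild) the frequency stays at p0, as expected for neutral dynamics.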
Ke, A; Barter, Z; Rowland‐Yeo, K
2016-01-01
In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug‐drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0–1.7‐fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752
Siegel, Jason T; Alvaro, Eusebio M; Tan, Cara N; Navarro, Mario A; Garner, Lori R; Jones, Sara Pace
2016-06-01
Approximately 22 people die each day in the United States as a result of the shortage of transplantable organs. This is particularly problematic among Spanish-dominant Hispanics. Increasing the number of registered organ donors can reduce this deficit. The goal of the current set of studies was to conceptually replicate a prior study indicating the lack of utility of a lone, immediate and complete registration opportunity (ICRO). The study, a quasi-experimental design involving a total of 4 waves of data collection, was conducted in 2 different Mexican consulates in the United States. Guided by the IIFF Model (i.e., an ICRO, information, focused engagement, and favorable activation), each wave compared a lone ICRO to a condition that likewise included an ICRO but also included the 3 additional intervention components recommended by the model (i.e., information, focused engagement, and favorable activation). Visitors to the Mexican consulates in Tucson, Arizona, and Albuquerque, New Mexico, constituted the participant pool. New organ donor registrations represented the dependent variable. When all 4 components of the IIFF Model were present, approximately 4 registrations per day were recorded; the lone ICRO resulted in approximately 1 registration every 15 days. An ICRO, without the other components of the IIFF Model, is of minimal use in regard to garnering organ donor registrations. Future studies should use the IIFF Model to consider how the utility of ICROs can be maximized. © 2016, NATCO.
Walss-Bass, Consuelo; Suchting, Robert; Olvera, Rene L; Williamson, Douglas E
2018-07-01
Immune system abnormalities have been repeatedly observed in several psychiatric disorders, including severe depression and anxiety. However, whether specific immune mediators play an early role in the etiopathogenesis of these disorders remains unknown. In a longitudinal design, component-wise gradient boosting was used to build models of depression, assessed by the Mood-Feelings Questionnaire-Child (MFQC), and anxiety, assessed by the Screen for Child Anxiety Related Emotional Disorders (SCARED), in 254 adolescents from a large set of candidate predictors, including sex, race, 39 inflammatory proteins, and the interactions between those proteins and time. Each model was reduced via backward elimination to maximize parsimony and generalizability. Component-wise gradient boosting and model reduction found that female sex, growth-regulated oncogene (GRO), and transforming growth factor alpha (TGF-alpha) predicted depression, while female sex predicted anxiety. Differential onset of puberty as well as a lack of control for menstrual cycle may also have been responsible for differences between males and females in the present study. In addition, investigation of all possible nonlinear relationships between the predictors and the outcomes was beyond the computational capacity and scope of the present research. This study highlights the need for novel statistical modeling to identify reliable biological predictors of aberrant psychological behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.
2017-12-01
Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation steps, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane but critical part of the application workflows. Translation steps can introduce errors and misrepresentations of data, slow execution, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the Parflow integrated hydrologic model to demonstrate the impact of translation-tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community-approved collection of data transformation components. The components should be self-contained, composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement.
Each component will typically have a low arithmetic intensity, meaning that it requires about the same number of bytes to be read as the number of computations it performs. When several components' executions are coordinated the overall arithmetic intensity increases, leading to increased efficiency.
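The component design proposed above can be sketched in Python: each translation step is a self-contained generator-based unit, and composition chains them into a single streaming process so records are read once and never staged to intermediate files. The record layout and component names here are hypothetical, not the authors' framework.

```python
from typing import Callable, Iterable, Iterator

# Hypothetical record type: (timestamp, value) pairs from a forcing dataset.
Record = tuple[float, float]
Component = Callable[[Iterable[Record]], Iterator[Record]]

def subset_time(t0: float, t1: float) -> Component:
    """Keep only records whose timestamp falls in [t0, t1)."""
    def run(stream):
        for t, v in stream:
            if t0 <= t < t1:
                yield t, v
    return run

def convert_units(factor: float, offset: float = 0.0) -> Component:
    """Linear unit conversion, e.g. Kelvin -> Celsius with factor=1, offset=-273.15."""
    def run(stream):
        for t, v in stream:
            yield t, v * factor + offset
    return run

def compose(*components: Component) -> Component:
    """Chain components into one streaming pipeline (minimal data movement)."""
    def run(stream):
        for c in components:
            stream = c(stream)
        return stream
    return run

raw = [(0.0, 273.15), (1.0, 283.15), (5.0, 293.15)]
pipeline = compose(subset_time(0.0, 2.0), convert_units(1.0, -273.15))
print(list(pipeline(raw)))  # -> [(0.0, 0.0), (1.0, 10.0)]
```

Because each unit is tiny and self-contained, it can be unit-tested in isolation, matching the reliability argument above.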
NASA Technical Reports Server (NTRS)
Mennell, R. C.
1973-01-01
Experimental aerodynamic investigations were conducted in a low speed wind tunnel on a 0.0405-scale representation of the 89A lightweight Space Shuttle Orbiter to obtain pressure loads data in the presence of the ground for orbiter structural strength analysis. The model and the facility are described, and data reduction is outlined. Tables are included for data set/run number collation, data set/component collation, model component description, and pressure tap locations by series number. Tabulated force and pressure source data are presented.
Quadratic integrand double-hybrid made spin-component-scaled
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brémond, Éric, E-mail: eric.bremond@iit.it; Savarese, Marika; Sancho-García, Juan C.
2016-03-28
We propose two analytical expressions aiming to rationalize the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) schemes for double-hybrid exchange-correlation density functionals. Their performance is extensively tested within the framework of the nonempirical quadratic integrand double-hybrid (QIDH) model on energetic properties included in the very large GMTKN30 benchmark database, and on structural properties of semirigid medium-sized organic compounds. The SOS variant emerges as a less computationally demanding alternative that reaches the accuracy of the original QIDH model without sacrificing its theoretical foundations.
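For readers unfamiliar with the idea: spin-component scaling splits the correlation energy into opposite-spin and same-spin parts and rescales each by a coefficient (SOS keeps only the opposite-spin part). The sketch below uses Grimme's original SCS-MP2 coefficients (6/5 and 1/3) purely for illustration; the QIDH-specific coefficients are derived analytically in the paper, and the energy values here are made-up numbers in hartree.

```python
def scaled_correlation(e_os, e_ss, c_os, c_ss):
    """Spin-component-scaled correlation energy: E_c = c_os*E_os + c_ss*E_ss."""
    return c_os * e_os + c_ss * e_ss

# Illustrative spin-resolved correlation energies (hartree), arbitrary values:
e_os, e_ss = -0.30, -0.10

scs = scaled_correlation(e_os, e_ss, c_os=6/5, c_ss=1/3)   # SCS-MP2-style
sos = scaled_correlation(e_os, e_ss, c_os=1.3, c_ss=0.0)   # SOS-MP2-style
print(round(scs, 4), round(sos, 4))
```

Setting c_ss = 0, as in the SOS variant, is what allows lower-scaling algorithms, hence the reduced computational demand noted above.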
Multipartite interacting scalar dark matter in the light of updated LUX data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Subhaditya; Ghosh, Purusottam; Poulose, Poulose, E-mail: subhab@iitg.ernet.in, E-mail: p.ghosh@iitg.ernet.in, E-mail: poulose@iitg.ernet.in
2017-04-01
We explore constraints on a multipartite dark matter (DM) framework composed of singlet scalar DM interacting with the Standard Model (SM) through a Higgs portal coupling. We compute relic density and direct search constraints, including the updated LUX bound, for a two-component scenario with non-zero interactions between the two DM components in a Z_2 × Z_2' framework, in comparison with one having an O(2) symmetry. We point out the availability of a significantly large region of parameter space for such a multipartite model with DM-DM interactions.
Xu, Z C; Zhu, J
2000-01-01
Based on the double-cross mating design and the principles of Cockerham's general genetic model, a genetic model with additive, dominance and epistatic effects (ADAA model) was proposed for the analysis of agronomic traits. Components of genetic effects were derived for different generations. Monte Carlo simulation was conducted for analyzing the ADAA model and its reduced AD model by using different generations. It was indicated that genetic variance components could be estimated without bias by the MINQUE(1) method and genetic effects could be predicted effectively by the AUP method; at least three generations (including parent, F1 of single cross and F1 of double cross) were necessary for analyzing the ADAA model, and only two generations (including parent and F1 of double cross) were enough for the reduced AD model. When epistatic effects were taken into account, a new approach for predicting the heterosis of agronomic traits of double crosses was given on the basis of unbiased prediction of genotypic merits of parents and their crosses. In addition, genotype × environment interaction effects and interaction heterosis due to G × E interaction were discussed briefly.
A Process Model of Principal Selection.
ERIC Educational Resources Information Center
Flanigan, J. L.; And Others
A process model to assist school district superintendents in the selection of principals is presented in this paper. Components of the process are described, which include developing an action plan, formulating an explicit job description, advertising, assessing candidates' philosophy, conducting interview analyses, evaluating response to stress,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William E.; Siirola, John Daniel
We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
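As background (this is not Pyomo code): a complementarity condition 0 ≤ x ⟂ F(x) ≥ 0 requires both quantities to be nonnegative with zero elementwise product. The standard "min" natural residual, sketched below, is zero exactly at points satisfying the condition, which is how such conditions are typically checked numerically.

```python
def complementarity_residual(x, fx):
    """Natural residual of 0 <= x_i  perp  fx_i >= 0:
    max_i |min(x_i, fx_i)|, which is 0 iff both vectors are nonnegative
    and complementary (x_i * fx_i = 0 for every i)."""
    res = 0.0
    for xi, fi in zip(x, fx):
        res = max(res, abs(min(xi, fi)))
    return res

# Scalar example with F(x) = x - 1: the condition holds at x = 1
# (x >= 0, F(x) = 0), but not at x = 0.5 (F(x) = -0.5 < 0).
F = lambda xs: [xi - 1.0 for xi in xs]
print(complementarity_residual([1.0], F([1.0])))  # -> 0.0
print(complementarity_residual([0.5], F([0.5])))  # -> 0.5
```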
Understanding electrostatic charge behaviour in aircraft fuel systems
NASA Astrophysics Data System (ADS)
Ogilvy, Jill A.; Hooker, Phil; Bennett, Darrell
2015-10-01
This paper presents work on the simulation of electrostatic charge build-up and decay in aircraft fuel systems. A model (EC-Flow) has been developed by BAE Systems under contract to Airbus, to allow the user to assess the effects of changes in design or in refuel conditions. Some of the principles behind the model are outlined. The model allows for a range of system components, including metallic and non-metallic pipes, valves, filters, junctions, bends and orifices. A purpose-built experimental rig was built at the Health and Safety Laboratory in Buxton, UK, to provide comparison data. The rig comprises a fuel delivery system, a test section where different components may be introduced into the system, and a Faraday Pail for measuring generated charge. Diagnostics include wall currents, charge densities and pressure losses. This paper shows sample results from the fitting of model predictions to measurement data and shows how analysis may be used to explain some of the observed trends.
Why involve families in acute mental healthcare? A collaborative conceptual review
Sandhu, Sima; Giacco, Domenico; Barrett, Katherine; Bennison, Gerry; Collinson, Sue; Priebe, Stefan
2017-01-01
Objectives Family involvement is strongly recommended in clinical guidelines but suffers from poor implementation. To explore this topic at a conceptual level, a multidisciplinary review team including academics, clinicians and individuals with lived experience undertook a review to explore the theoretical background of family involvement models in acute mental health treatment and how this relates to their delivery. Design A conceptual review was undertaken, including a systematic search and narrative synthesis. Included family models were mapped onto the most commonly referenced underlying theories: the diathesis–stress model, systems theories and postmodern theories of mental health. Common components of the models were summarised and compared. Lastly, a thematic analysis was undertaken to explore the role of patients and families in the delivery of the approaches. Setting General adult acute mental health treatment. Results Six distinct family involvement models were identified: Calgary Family Assessment and Intervention Models, ERIC (Equipe Rapide d’Intervention de Crise), Family Psychoeducation Models, Family Systems Approach, Open Dialogue and the Somerset Model. Findings indicated that despite wide variation in the theoretical models underlying family involvement models, there were many commonalities in their components, such as a focus on communication, language use and joint decision-making. Thematic analysis of the role of patients and families identified several issues for implementation. This included potential harms that could emerge during delivery of the models, such as imposing linear ‘patient–carer’ relationships and the risk of perceived coercion. Conclusions We conclude that future staff training may benefit from discussing the chosen family involvement model within the context of other theories of mental health. 
This may help to clarify the underlying purpose of family involvement and address the diverse needs and world views of patients, families and professionals in acute settings. PMID:28963308
ERIC Educational Resources Information Center
Rahayu, Sri; Sugiarto, Teguh; Madu, Ludiro; Holiawati; Subagyo, Ahmad
2017-01-01
This study aims to apply principal component analysis to reduce multicollinearity among the exchange rates of eight Asian currencies against the US Dollar, including the Yen (Japan), Won (South Korea), Dollar (Hong Kong), Yuan (China), Baht (Thailand), Rupiah (Indonesia), Ringgit (Malaysia), and Dollar (Singapore). It looks at yield…
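A minimal sketch of the idea, with synthetic data standing in for the eight exchange-rate series: principal-component scores computed from the correlation matrix are mutually orthogonal, so regressing on a retained subset of them removes multicollinearity. The two-factor structure and the 95% variance cutoff are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the eight exchange-rate series (n observations x 8 currencies);
# two latent factors induce strong multicollinearity, as in the study's setting.
n, p = 200, 8
X = rng.normal(size=(n, 2)) @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))

# Principal components of the correlation matrix (standardized series).
Z = (X - X.mean(0)) / X.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]                 # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain enough components to explain 95% of total variance; their scores
# are orthogonal, so a regression on them is free of multicollinearity.
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1)
scores = Z @ eigvecs[:, :k]
print("retained components:", k)
```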
RL10A-3-3A Rocket Engine Modeling Project
NASA Technical Reports Server (NTRS)
Binder, Michael; Tomsik, Thomas; Veres, Joseph P.
1997-01-01
Two RL10A-3-3A rocket engines comprise the main propulsion system for the Centaur upper stage vehicle. Centaur is used with both Titan and Atlas launch vehicles, carrying military and civilian payloads from high altitudes into orbit and beyond. The RL10 has delivered highly reliable service for the past 30 years. Recently, however, there have been two in-flight failures which have refocused attention on the RL10. This heightened interest has sparked a desire for an independent RL10 modeling capability within NASA and the Air Force. Pratt & Whitney, which presently has the most detailed model of the RL10, also sees merit in having an independent model which could be used as a cross-check with their own simulations. The Space Propulsion Technology Division (SPTD) at the NASA Lewis Research Center has developed a computer model of the RL10A-3-3A. A project team was formed, consisting of experts in the areas of turbomachinery, combustion, and heat transfer. The overall goal of the project was to provide a model of the entire RL10 rocket engine for government use. In the course of the project, the major engine components have been modeled using a combination of simple correlations and detailed component analysis tools (computer codes). The results of these component analyses were verified with data provided by Pratt & Whitney. Select modeling results and test data curves were then integrated to form the RL10 engine system model. The purpose of this report is to introduce the reader to the RL10 rocket engine and to describe the engine system model. The RL10 engine and its application to U.S. launch vehicles are described first, followed by a summary of the SPTD project organization, goals, and accomplishments. Simulated output from the system model is shown in comparison with test and flight data for start transient, steady state, and shut-down transient operations.
Detailed descriptions of all component analyses, including those not selected for integration with the system model, are included as appendices.
Verification of Ares I Liftoff Acoustic Environments via the Ares I Scale Model Acoustic Test
NASA Technical Reports Server (NTRS)
Counter, Douglas D.; Houston, Janice D.
2012-01-01
Launch environments, such as Liftoff Acoustic (LOA) and Ignition Overpressure (IOP), are important design factors for any vehicle and are dependent upon the design of both the vehicle and the ground systems. The NASA Constellation Program had several risks to the development of the Ares I vehicle linked to LOA which are used in the development of the vibro-acoustic environments. The risks included cost, schedule and technical impacts for component qualification due to high predicted vibro-acoustic environments. One solution is to mitigate the environment at the component level. However, where the environment is too severe to mitigate at the component level, reduction of the launch environments is required. The Ares I Scale Model Acoustic Test (ASMAT) program was implemented to verify the predicted Ares I launch environments and to determine the acoustic reduction for the LOA environment with an above deck water sound suppression system. The test article included a 5% scale Ares I vehicle model, tower and Mobile Launcher. Acoustic and pressure data were measured by approximately 200 instruments. The ASMAT results are compared to the Ares I LOA predictions and water suppression effectiveness results are presented.
Models of Workplace Incivility: The Relationships to Instigated Incivility and Negative Outcomes.
Holm, Kristoffer; Torkelson, Eva; Bäckström, Martin
2015-01-01
The aim of the study was to investigate workplace incivility as a social process, examining its components and relationships to both instigated incivility and negative outcomes in the form of well-being, job satisfaction, turnover intentions, and sleeping problems. The different components of incivility that were examined were experienced and witnessed incivility from coworkers as well as supervisors. In addition, the organizational factors, social support, control, and job demands, were included in the models. A total of 2871 (2058 women and 813 men) employees who were connected to the Swedish Hotel and Restaurant Workers Union completed an online questionnaire. Overall, the results from structural equation modelling indicate that whereas instigated incivility to a large extent was explained by witnessing coworker incivility, negative outcomes were to a high degree explained by experienced supervisor incivility via mediation through perceived low social support, low control, and high job demands. Unexpectedly, the relationships between incivility (experienced coworker and supervisor incivility, as well as witnessed supervisor incivility) and instigated incivility were moderated by perceived high control and high social support. The results highlight the importance of including different components of workplace incivility and organizational factors in future studies of the area.
Inter-comparison of isotropic and anisotropic sea ice rheology in a fully coupled model
NASA Astrophysics Data System (ADS)
Roberts, A.; Cassano, J. J.; Maslowski, W.; Osinski, R.; Seefeldt, M. W.; Hughes, M.; Duvivier, A.; Nijssen, B.; Hamman, J.; Hutchings, J. K.; Hunke, E. C.
2015-12-01
We present the sea ice climate of the Regional Arctic System Model (RASM), using a suite of new physics available in the Los Alamos Sea Ice Model (CICE5). RASM is a high-resolution fully coupled pan-Arctic model that also includes the Parallel Ocean Program (POP), the Weather Research and Forecasting Model (WRF) and the Variable Infiltration Capacity (VIC) land model. The model domain extends from ~45˚N to the North Pole and is configured to run at ~9 km resolution for the ice and ocean components, coupled to 50 km resolution atmosphere and land models. The baseline sea ice model configuration includes mushy-layer sea ice thermodynamics and level-ice melt ponds. Using this configuration, we compare the use of isotropic and anisotropic sea ice mechanics, and evaluate model performance using these two variants against observations including Arctic buoy drift and deformation, satellite-derived drift and deformation, and sea ice volume estimates from ICESat. We find that the isotropic rheology better approximates spatial patterns of thickness observed across the Arctic, but that both rheologies closely approximate scaling laws observed in the pack using buoys and RGPS data. Both ice mechanics variants, the so-called Elastic-Viscous-Plastic (EVP) and Elastic-Anisotropic-Plastic (EAP) rheologies, are highly sensitive to the timestep used for elastic sub-cycling in an inertia-resolving coupled framework, and this has a significant effect on surface fluxes in the fully coupled framework.
Effects of changes along the risk chain on flood risk
NASA Astrophysics Data System (ADS)
Duha Metin, Ayse; Apel, Heiko; Viet Dung, Nguyen; Guse, Björn; Kreibich, Heidi; Schröter, Kai; Vorogushyn, Sergiy; Merz, Bruno
2017-04-01
Interactions of hydrological and socio-economic factors shape flood disaster risk. For this reason, assessment of flood risk ideally takes into account the whole flood risk chain, from atmospheric processes through catchment and river system processes to the damage mechanisms in the affected areas. Since very different processes at various scales interact along this chain, the impact of the individual components is rather unclear; for flood risk management, however, it is necessary to know which factors control flood damage. The present study, using the flood-prone Mulde catchment in Germany, examines the sensitivity of flood risk to disturbances along the risk chain: How do disturbances propagate through the chain? How do different disturbances combine or conflict and affect flood risk? The sensitivity analysis includes five components of flood risk change: climate, catchment, river system, exposure and vulnerability. A model framework representing the complete risk chain is combined with observational data to understand how the sensitivities evolve along the chain, considering three plausible change scenarios for each of the five components. Flood risk is calculated with the Regional Flood Model (RFM), which is based on a continuous simulation approach comprising rainfall-runoff, 1D river network, 2D hinterland inundation and damage estimation models. The sensitivity analysis covers more than 240 scenarios with different combinations of the five components, and it is investigated how changes in the different components affect risk indicators such as the risk curve and the expected annual damage (EAD). In conclusion, changes in exposure and vulnerability appear to outweigh changes in hazard.
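The expected annual damage used as a risk indicator above is, in general, the integral of damage over annual exceedance probability (the area under the risk curve). A minimal sketch with an illustrative risk curve (invented numbers, not Mulde data):

```python
def expected_annual_damage(prob, damage):
    """Trapezoidal estimate of EAD = integral of damage over exceedance
    probability. prob must be increasing; damage[i] corresponds to prob[i]."""
    ead = 0.0
    for i in range(len(prob) - 1):
        ead += 0.5 * (damage[i] + damage[i + 1]) * (prob[i + 1] - prob[i])
    return ead

# Illustrative risk curve: rarer floods cause larger damage.
p = [0.002, 0.01, 0.05, 0.2]      # 1/500, 1/100, 1/20, 1/5 year events
d = [800e6, 300e6, 50e6, 5e6]     # damage in EUR

baseline = expected_annual_damage(p, d)
# Example scenario: exposure growth scales all damages by 20%,
# which scales the EAD by exactly the same factor.
scenario = expected_annual_damage(p, [x * 1.2 for x in d])
print(f"EAD baseline {baseline/1e6:.3f} M EUR, +20% exposure {scenario/1e6:.3f} M EUR")
```

Changes in hazard instead shift the probabilities p, which alters the EAD in a less transparent, nonlinear way; this is why a scenario ensemble such as the 240 combinations above is needed.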
NASA Astrophysics Data System (ADS)
Matveev, A.; Matthews, H. D.
2009-04-01
Carbon fluxes from land conversion are among the most uncertain variables in our understanding of the contemporary carbon cycle, which limits our ability to estimate both the total human contribution to current climate forcing and the net effect of terrestrial biosphere changes on atmospheric CO2 increases. The current generation of coupled climate-carbon models has made significant progress in simulating the coupled climate and carbon cycle response to anthropogenic CO2 emissions, but does not typically include land-use change as a dynamic component of the simulation. In this work we have incorporated a book-keeping land-use carbon accounting model into the University of Victoria Earth System Climate Model (UVic ESCM), an intermediate-complexity coupled climate-carbon model. The terrestrial component of the UVic ESCM allows areal competition of five plant functional types (PFTs) in response to climatic conditions and area availability, and tracks the associated changes in affected carbon pools. In order to model CO2 emissions from land conversion in the terrestrial component of the model, we calculate the allocation of carbon to short- and long-lived wood products following specified land-cover change, and use varying decay timescales to estimate CO2 emissions. We use recently available spatial datasets of both crop and pasture distributions to drive a series of transient simulations and estimate the net contribution of human land-use change to historical carbon emissions and climate change.
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero-inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero-inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimation. We also evaluate the ability of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero-inflated models have well-controlled type I errors, higher power, better goodness-of-fit measures, and are more accurate and efficient in parameter estimation. Moreover, the hurdle models have goodness of fit and parameter estimates for the count component similar to those of their corresponding zero-inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero-inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
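The model comparison described above, fitting a plain count model against a hurdle model and ranking them by AIC, can be sketched on simulated zero-inflated counts. The sample size, rate and zero-inflation probability below are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)
n = 2000
# Zero-inflated counts: structural zero with probability 0.4, else Poisson(3).
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(3.0, n))

def nll_poisson(params):
    """Negative log-likelihood of a plain Poisson model."""
    lam = np.exp(params[0])
    return -poisson.logpmf(y, lam).sum()

def nll_hurdle(params):
    """Hurdle model: Bernoulli zero part + zero-truncated Poisson count part."""
    logit_p0, log_lam = params
    p0 = 1.0 / (1.0 + np.exp(-logit_p0))   # probability of observing a zero
    lam = np.exp(log_lam)
    zeros = y == 0
    ll = zeros.sum() * np.log(p0) + (~zeros).sum() * np.log1p(-p0)
    yp = y[~zeros]
    # Zero-truncated Poisson log-likelihood for the positive counts.
    ll += (poisson.logpmf(yp, lam) - np.log1p(-np.exp(-lam))).sum()
    return -ll

fit_p = minimize(nll_poisson, [0.0], method="Nelder-Mead")
fit_h = minimize(nll_hurdle, [0.0, 0.0], method="Nelder-Mead")
aic_p = 2 * 1 + 2 * fit_p.fun   # AIC = 2k + 2 * NLL
aic_h = 2 * 2 + 2 * fit_h.fun
p0_hat = 1.0 / (1.0 + np.exp(-fit_h.x[0]))
```

With heavy zero inflation the hurdle model attains a much lower AIC than the plain Poisson, mirroring the paper's finding; the hurdle zero-part MLE also equals the empirical zero fraction, which makes its zero component directly interpretable.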
NASA Astrophysics Data System (ADS)
Jadhav, J. R.; Mantha, S. S.; Rane, S. B.
2015-06-01
Demand for automobiles has increased drastically in India over the last two and a half decades. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to meet the demands of these customers. The United Nations Industrial Development Organization (UNIDO), in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, has taken a proactive approach since 1999 to assist Indian SMEs in various clusters and make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, including research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the fields of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages, whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through its high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework provides greater insight into the implementation process at a more micro level than the UNIDO-ACMA Model.
Modifications of Hinge Mechanisms for the Mobile Launcher
NASA Technical Reports Server (NTRS)
Ganzak, Jacob D.
2018-01-01
The further development and modification of the upper and lower hinge assemblies for the Exploration Upper Stage umbilical are presented. Investigative work is included to show the process of applying updated NASA standards within component and assembly drawings for selected manufacturers. Component modifications, with the addition of drawings, are created to precisely display part geometries and geometric tolerances, along with proper methods of fabrication. Comparison of newly updated components with original Apollo-era components is essential to correctly model part characteristics and parameters, i.e., mass properties, material selection, weldments, and tolerances. Three-dimensional modeling software is used to demonstrate the necessary improvements. To share and corroborate these changes, a document management system is used to store the various components and associated drawings. These efforts will contribute to the Mobile Launcher for Exploration Mission 2, providing proper rotation of the Exploration Upper Stage umbilical, necessary for cryogenic fill and drain capabilities.
Specialized data analysis of SSME and advanced propulsion system vibration measurements
NASA Technical Reports Server (NTRS)
Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi
1993-01-01
The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.
Nonlinear seismic analysis of a reactor structure with impact between core components
NASA Technical Reports Server (NTRS)
Hill, R. G.
1975-01-01
The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Base Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF, which includes small clearances between core components, is used as a "driver" for a fine-mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.
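The clearance-plus-impact nonlinearity described above can be illustrated with a single lumped mass driven toward a stop. The stiffness, clearance and base acceleration below are hypothetical, and a simple semi-implicit Euler scheme stands in for the implicit transient integration used in the paper:

```python
import numpy as np

def step(x, v, dt, m, k_contact, gap, a_base):
    """One semi-implicit Euler step for a mass under base acceleration,
    with a one-sided contact spring that engages beyond a clearance `gap`."""
    f_contact = -k_contact * max(0.0, x - gap)   # acts only after the gap closes
    a = a_base + f_contact / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Hypothetical values: drive the mass toward the stop at constant acceleration.
m, k, gap, dt = 1.0, 1e4, 0.01, 1e-4
x = v = 0.0
max_x = 0.0
for _ in range(20000):
    x, v = step(x, v, dt, m, k, gap, a_base=1.0)
    max_x = max(max_x, x)
```

The mass repeatedly closes the clearance and bounces off the contact spring; the peak excursion slightly exceeds the gap by the dynamic penetration depth, which is the quantity an impact analysis of core components must resolve.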
A reduced order, test verified component mode synthesis approach for system modeling applications
NASA Astrophysics Data System (ADS)
Butland, Adam; Avitabile, Peter
2010-05-01
Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; as a result, constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data are then used in the proposed approach. Because typical measurement contaminants are present in any test, the measured data are further processed to remove them before use. The final case, using improved data with the reduced order, test verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. The strengths and weaknesses of the technique are discussed.
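A minimal Craig-Bampton reduction, constraint modes plus fixed-interface normal modes, can be demonstrated on a three-DOF spring-mass chain. The stiffness and mass values are illustrative; keeping every interior mode makes the reduction exact, so the reduced model reproduces the full eigenfrequencies:

```python
import numpy as np
from scipy.linalg import eigh

# 3-DOF chain: ground—m1—m2—m3; the last node is the boundary (connection) DOF.
k, m = 1000.0, 1.0
K = k * np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])
M = m * np.eye(3)
i, b = [0, 1], [2]          # interior / boundary partition

Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
Mii = M[np.ix_(i, i)]
# Constraint modes: static response of interior DOFs to unit boundary motion.
Psi = -np.linalg.solve(Kii, Kib)
# Fixed-interface normal modes of the interior partition.
w2, Phi = eigh(Kii, Mii)
# Craig-Bampton transformation (keeping every interior mode -> exact reduction).
T = np.zeros((3, 3))
T[np.ix_(i, [0, 1])] = Phi
T[np.ix_(i, [2])] = Psi
T[np.ix_(b, [2])] = np.eye(1)
Kr, Mr = T.T @ K @ T, T.T @ M @ T

f_full = np.sqrt(eigh(K, M, eigvals_only=True))
f_red = np.sqrt(eigh(Kr, Mr, eigvals_only=True))
```

In practice only the lowest fixed-interface modes are kept, which is where the reduction (and the approximation) comes from; the test-verified variant in the paper replaces the analytical interior modes with experimentally updated ones.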
NASA Astrophysics Data System (ADS)
Ichinose, G. A.; Saikia, C. K.
2007-12-01
We applied the moment tensor (MT) analysis scheme to identify seismic sources using regional seismograms, based on the representation theorem for the elastic wave displacement field. This method is applied to estimate the isotropic (ISO) and deviatoric MT components of earthquake, volcanic, and isotropic sources within the Basin and Range Province (BRP) and the western US. The ISO components from Hoya, Bexar, Montello and Junction were compared to recent, well-recorded earthquakes near Little Skull Mountain, Scotty's Junction, Eureka Valley, and Fish Lake Valley within southern Nevada. We also examined "dilatational" sources near Mammoth Lakes Caldera and two mine collapses, including the August 2007 event in Utah recorded by USArray. Using our formulation we first implemented the full MT inversion method on long-period filtered regional data. We also applied a grid-search technique to solve for the percent deviatoric and percent ISO moments. With the grid-search technique, high-frequency waveforms are used with calibrated velocity models. We modeled the ISO and deviatoric components (spall and tectonic release) as separate events delayed in time or offset in space. Calibrated velocity models helped the resolution of the ISO components and decreased the variance relative to the average, initial or background velocity models. The centroid location and time shifts are velocity-model dependent. Models can be improved, as was done in previously published work in which we used an iterative waveform inversion method with regional seismograms from four well-recorded and constrained earthquakes. The resulting velocity models reduced the variance between predicted synthetics and data by about 50 to 80% for frequencies up to 0.5 Hz. Tests indicate that the individual path-specific models perform better at recovering the earthquake MT solutions, even with a sparser distribution of stations, than the average or initial models.
Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.
NASA Technical Reports Server (NTRS)
Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.
2015-01-01
The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat in four contrasting environments: Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth measurements, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models over 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-of-century scenario.
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem in the calibration of hydrological models is the equifinality of different parameter sets derived from calibration against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation in the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, which are interpreted as base flow, interflow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation and tackling equifinality. We developed a model-dependent approach (MDA) calibrating the model runoff components against the FSD components, and a model-independent approach (MIA) comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA identifies and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA reduce the parameter ranges by a factor of up to 3 in comparison with standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest for MDA and shows only minor reductions for MIA.
Besides further validation of FSD, the next steps include an extension of the study to different catchments and other hydrological models with a similar structure.
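The GLUE step described above, Monte Carlo sampling of parameter sets and retention of "behavioural" ones above a likelihood threshold, can be sketched with a toy recession model standing in for SWIM. The model, priors, threshold and synthetic data are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(params, t):
    """Toy 'hydrological' model: exponential recession with rate a, scale b."""
    a, b = params
    return b * np.exp(-a * t)

t = np.linspace(0, 10, 50)
obs = model((0.3, 5.0), t) + rng.normal(0, 0.1, t.size)  # synthetic observations

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood surrogate."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Monte Carlo sampling of the parameter space (uniform priors, assumed ranges).
samples = np.column_stack([rng.uniform(0.05, 1.0, 5000),
                           rng.uniform(1.0, 10.0, 5000)])
scores = np.array([nse(model(p, t), obs) for p in samples])
behavioural = samples[scores > 0.8]   # retain 'behavioural' parameter sets
```

Calibrating against disaggregated flow components, as in the paper, amounts to adding further likelihood criteria that each behavioural set must satisfy, which shrinks the retained parameter ranges.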
Conceptual model of sediment processes in the upper Yuba River watershed, Sierra Nevada, CA
Curtis, J.A.; Flint, L.E.; Alpers, Charles N.; Yarnell, S.M.
2005-01-01
This study develops a conceptual model of sediment processes in the upper Yuba River watershed and hypothesizes how components of the conceptual model may be spatially distributed using a geographic information system (GIS). The conceptual model illustrates key processes controlling sediment dynamics in the upper Yuba River watershed and was tested and revised using field measurements, aerial photography, and low-elevation videography. Field reconnaissance included mass wasting and channel storage inventories, assessment of annual channel change in upland tributaries, and evaluation of the relative importance of sediment sources and transport processes. Hillslope erosion rates throughout the study area are relatively low compared to more rapidly eroding landscapes such as the Pacific Northwest, and notable hillslope sediment sources include highly erodible andesitic mudflows, serpentinized ultramafics, and unvegetated hydraulic mine pits. Mass wasting dominates surface erosion on the hillslopes; however, erosion of stored channel sediment is the primary contributor to annual sediment yield. We used GIS to spatially distribute the components of the conceptual model and created hillslope erosion potential and channel storage models. The GIS models exemplify the conceptual model in that landscapes with low potential evapotranspiration, sparse vegetation, steep slopes, erodible geology and soils, and high road densities display the greatest hillslope erosion potential, and channel storage increases with increasing stream order. In-channel storage in upland tributaries impacted by hydraulic mining is an exception. Reworking of stored hydraulic mining sediment in low-order tributaries continues to elevate upper Yuba River sediment yields.
Finally, we propose that spatially distributing the components of a conceptual model in a GIS framework provides a guide for developing more detailed sediment budgets or numerical models making it an inexpensive way to develop a roadmap for understanding sediment dynamics at a watershed scale.
Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S
2016-12-01
Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components: one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data.
Improved Cook-off Modeling of Multi-component Cast Explosives
NASA Astrophysics Data System (ADS)
Nichols, Albert
2017-06-01
In order to understand the hazards associated with energetic materials, it is important to understand their behavior in adverse thermal environments. These processes are relatively well understood for solid explosives; however, the same cannot be said for multi-component melt-cast explosives. Here we describe the continued development of ALE3D, a coupled thermal/chemical/mechanical code, to improve its description of fluid explosives. The improved physics models include: 1) a chemical-potential-driven species segregation model, which allows us to model the complex flow fields associated with melting and decomposing Comp-B, where the denser RDX tends to settle and the decomposition gases rise; 2) an automatically scaled stream-wise diffusion model for thermal, species, and momentum diffusion, which adds sufficient numerical diffusion in the direction of flow to maintain numerical stability when the system is under-resolved, as occurs for large systems; and 3) a slurry viscosity model, required to properly define the flow characteristics of the multi-component fluidized system. These models will be demonstrated on a simple Comp-B system. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
A dermatotoxicokinetic model of human exposures to jet fuel.
Kim, David; Andersen, Melvin E; Nylander-French, Leena A
2006-09-01
Workers, both in the military and the commercial airline industry, are exposed to jet fuel by inhalation and dermal contact. We present a dermatotoxicokinetic (DTK) model that quantifies the absorption, distribution, and elimination of aromatic and aliphatic components of jet fuel following dermal exposures in humans. Kinetic data were obtained from 10 healthy volunteers following a single dose of JP-8 to the forearm over a surface area of 20 cm². Blood samples were taken before exposure (t = 0 h), after exposure (t = 0.5 h), and every 0.5 h for up to 3.5 h postexposure. The DTK model that best fit the data included five compartments: (1) surface, (2) stratum corneum (SC), (3) viable epidermis, (4) blood, and (5) storage. The DTK model was used to predict blood concentrations of the components of JP-8 based on dermal-exposure measurements made in occupational-exposure settings in order to better understand the toxicokinetic behavior of these compounds. Monte Carlo simulations of dermal exposure and cumulative internal dose demonstrated no overlap among the low-, medium-, and high-exposure groups. The DTK model provides a quantitative understanding of the relationship between the mass of JP-8 components in the SC and the concentrations of each component in the systemic circulation. The model may be used for the development of a toxicokinetic modeling strategy for multiroute exposure to jet fuel.
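A linear five-compartment structure of the kind described (surface, stratum corneum, viable epidermis, blood, storage) can be written as a system of first-order ODEs. The rate constants below are placeholders, not the fitted values from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/h); not fitted to the JP-8 data.
k_surf_sc, k_sc_ve, k_ve_blood, k_blood_store, k_elim = 0.5, 0.8, 1.0, 0.2, 0.6

def dtk(t, y):
    surf, sc, ve, blood, store = y
    return [
        -k_surf_sc * surf,                                   # surface -> SC
        k_surf_sc * surf - k_sc_ve * sc,                     # SC -> viable epidermis
        k_sc_ve * sc - k_ve_blood * ve,                      # VE -> blood
        k_ve_blood * ve - (k_blood_store + k_elim) * blood,  # blood -> storage / elimination
        k_blood_store * blood,                               # storage (terminal here)
    ]

# 100 mass units applied to the skin surface at t = 0; simulate 8 hours.
sol = solve_ivp(dtk, (0, 8), [100.0, 0, 0, 0, 0], dense_output=True)
blood = sol.sol(np.linspace(0, 8, 100))[3]
total = float(sol.sol(8.0).sum())
```

The blood compartment shows the rise-and-fall profile such kinetic data exhibit, and the mass missing from the five compartments at the end is the eliminated fraction.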
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Pitsianis, N
Purpose: To address and lift the limited degrees of freedom (DoF) of globally bilinear motion components, such as those based on principal component analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries, hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated.
Conclusion: Multi-linear decomposition can enable decoupling of distinct motion factors in high-rank DVF measurements. This may improve motion model expressiveness and adaptability to on-board deformation, aiding model-based image reconstruction for target verification. NIH Grant No. R01-184173.
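The bilinear (space x time) decomposition that PCA performs on DVF snapshots can be illustrated on synthetic data: with P phases, the number of extractable components can never exceed P, and a two-factor DVF yields exactly two temporal components. The sizes and motion factors below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_phase = 300, 10
t = np.linspace(0, 1, n_phase)

# Two incoherent temporal factors: respiratory and a faster cardiac-like motion.
resp = np.sin(2 * np.pi * t)
card = np.sin(2 * np.pi * 3.7 * t)
A = rng.normal(size=(n_vox, 1)) @ resp[None, :]   # spatial pattern x time course
B = rng.normal(size=(n_vox, 1)) @ card[None, :]
dvf = A + B   # snapshot matrix: voxels x phases

# PCA via SVD of the mean-removed snapshot matrix.
X = dvf - dvf.mean(axis=1, keepdims=True)
s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
```

Here the two factors are recovered as two components; with real patient data containing hysteresis and relative motion, the phase-count ceiling is what forces a bilinear model to mix incoherent factors, which is the limitation MLD is meant to lift.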
ERIC Educational Resources Information Center
Li, Fangzheng; Liu, Chunying; Song, Xuexiong; Huan, Yanjun; Gao, Shansong; Jiang, Zhongling
2018-01-01
Access to adequate anatomical specimens can be an important aspect in learning the anatomy of domestic animals. In this study, the authors utilized a structured light scanner and fused deposition modeling (FDM) printer to produce highly accurate animal skeletal models. First, various components of the bovine skeleton, including the femur, the…
OTLA: A New Model for Online Teaching, Learning and Assessment in Higher Education
ERIC Educational Resources Information Center
Ghilay, Yaron; Ghilay, Ruth
2013-01-01
The study examined a new asynchronous model for online teaching, learning and assessment, called OTLA. It is designed for higher-education institutions and is based on LMS (Learning Management System) as well as other relevant IT tools. The new model includes six digital basic components: text, hypertext, text reading, lectures (voice/video),…
ERIC Educational Resources Information Center
Phillips, Karen E. S.; Grose-Fifer, Jilliam
2011-01-01
In this study, the authors describe a Performance Enhanced Interactive Learning (PEIL) workshop model as a supplement for organic chemistry instruction. This workshop model differs from many others in that it includes public presentations by students and other whole-class-discussion components that have not been thoroughly investigated in the…
Missing Data Treatments at the Second Level of Hierarchical Linear Models
ERIC Educational Resources Information Center
St. Clair, Suzanne W.
2011-01-01
The current study evaluated the performance of traditional versus modern MDTs in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included (a) number of Level-2 variables with missing…
Technological Effects on Interpersonal Communication: A Classroom Activity.
ERIC Educational Resources Information Center
Vandehaar, Debb
Noting that few scholars have examined specifically how technology is affecting basic communication processes, students in interpersonal, small group, and advanced presentational forms classes studied the systems model of interpersonal communication. The systems model described by P. Emmert and W.C. Donaghy includes the following components:…
An automatic chip structure optical inspection system for electronic components
NASA Astrophysics Data System (ADS)
Song, Zhichao; Xue, Bindang; Liang, Jiyuan; Wang, Ke; Chen, Junzhang; Liu, Yunhe
2018-01-01
An automatic chip structure inspection system based on machine vision is presented to ensure the reliability of electronic components. It consists of four major modules: a metallographic microscope, a Gigabit Ethernet high-resolution camera, a control system and a high-performance computer. An auto-focusing technique is presented to solve the problem that the chip surface does not lie on a single focal plane under the high magnification of the microscope. A panoramic high-resolution image stitching algorithm is adopted to deal with the trade-off between resolution and field of view caused by the different sizes of electronic components. In addition, we establish a database to store and recall appropriate parameters, ensuring the consistency of chip images for electronic components of the same model. We use image change detection to inspect the chip images of electronic components. The system achieves high-resolution imaging for chips of electronic components of various sizes, clear imaging of chip surfaces at different heights, standardized imaging for components of the same model, and recognition of chip defects.
An apodized Kepler periodogram for separating planetary and stellar activity signals
Gregory, Philip C.
2016-01-01
A new apodized Keplerian (AK) model is proposed for the analysis of precision radial velocity (RV) data to model both planetary and stellar activity (SA) induced RV signals. A symmetrical Gaussian apodization function with unknown width and centre can distinguish planetary signals from SA signals on the basis of the span of the apodization window. The general model for m AK signals includes a linear regression term between RV and the SA diagnostic log(R′HK), as well as an extra Gaussian noise term with unknown standard deviation. The model parameters are explored using a Bayesian fusion Markov chain Monte Carlo code. A differential version of the generalized Lomb–Scargle periodogram that employs a control diagnostic provides an additional way of distinguishing SA signals and helps guide the choice of new periods. Results are reported for a recent international RV blind challenge, which included multiple state-of-the-art simulated data sets supported by a variety of SA diagnostics. In the current implementation, the AK method achieved a reduction in SA noise by a factor of approximately 6. Final parameter estimates for the planetary candidates are derived from fits that include AK signals to model the SA components and simple Keplerians to model the planetary candidates. Preliminary results are also reported for AK models augmented by a moving-average component that allows for correlations in the residuals. PMID:27346979
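The apodization idea, a Gaussian window multiplying the orbital signal, can be sketched with a circular-orbit sinusoid standing in for the full eccentric Keplerian; the amplitudes, periods and window width below are hypothetical:

```python
import numpy as np

def apodized_rv(t, K, period, phase, t0, width):
    """Circular-orbit RV signal times a Gaussian apodization window.
    A planetary signal corresponds to a window much wider than the data span
    (window ~ 1 everywhere); a stellar-activity signal has a finite window.
    Simplified sketch: the full AK model uses an eccentric Keplerian."""
    window = np.exp(-0.5 * ((t - t0) / width) ** 2)
    return window * K * np.sin(2 * np.pi * t / period + phase)

t = np.linspace(0, 200, 400)   # observation times in days (hypothetical)
# Same period and amplitude; only the apodization window width differs.
planet = apodized_rv(t, K=3.0, period=20.0, phase=0.0, t0=100.0, width=1e6)
activity = apodized_rv(t, K=3.0, period=20.0, phase=0.0, t0=100.0, width=15.0)
```

The planetary signal keeps its full amplitude across the whole time span, while the activity signal dies away outside its window, which is precisely the property the AK fit uses to separate the two.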
Diabetes: Models, Signals and control
NASA Astrophysics Data System (ADS)
Cobelli, C.
2010-07-01
Diabetes and its complications impose significant economic consequences on individuals, families, health systems, and countries. The control of diabetes is an interdisciplinary endeavor, which includes significant components of modeling, signal processing and control. Models: first, I will discuss the minimal (coarse) models which describe the key components of the system functionality and are capable of measuring crucial processes of glucose metabolism and insulin control in health and diabetes; then, the maximal (fine-grain) models which include comprehensively all available knowledge about system functionality and are capable of simulating the glucose-insulin system in diabetes, thus making it possible to create simulation scenarios whereby cost-effective experiments can be conducted in silico to assess the efficacy of various treatment strategies - in particular I will focus on the first in silico simulation model accepted by FDA as a substitute for animal trials in the quest for optimal diabetes control. Signals: I will review metabolic monitoring, with a particular emphasis on the new continuous glucose sensors, on the crucial role of models to enhance the interpretation of their time-series signals, and on the opportunities that they present for automation of diabetes control. Control: I will review control strategies that have been successfully employed in vivo or in silico, presenting a promise for the development of a future artificial pancreas and, in particular, I will discuss a modular architecture for building closed-loop control systems, including insulin delivery and patient safety supervision layers.
A classical model for closed-loop diagrams of binary liquid mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnitzler, J.v.; Prausnitz, J.M.
1994-03-01
A classical lattice model for closed-loop temperature-composition phase diagrams has been developed. It considers the effect of specific interactions, such as hydrogen bonding, between dissimilar components. This van Laar-type model includes a Flory-Huggins term for the excess entropy of mixing. It is applied to several liquid-liquid equilibria of nonelectrolytes, where the molecules of the two components differ in size. The model is able to represent the observed data semi-quantitatively, but in most cases it is not flexible enough to predict all parts of the closed loop quantitatively. The ability of the model to represent different binary systems is discussed. Finally, attention is given to a correction term concerning the effect of concentration fluctuations near the upper critical solution temperature.
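The abstract above describes a van Laar-type enthalpic term combined with a Flory-Huggins entropic term. A generic excess-Gibbs-energy form consistent with that description (the paper's exact parameterization may differ) is:

```latex
\frac{G^{E}}{RT}
  = \underbrace{x_1 \ln\frac{\phi_1}{x_1} + x_2 \ln\frac{\phi_2}{x_2}}_{\text{Flory--Huggins entropic term}}
  + \underbrace{\chi(T)\,\bigl(x_1 + r\,x_2\bigr)\,\phi_1 \phi_2}_{\text{van Laar--type enthalpic term}},
\qquad
\phi_1 = \frac{x_1}{x_1 + r\,x_2},\quad
\phi_2 = \frac{r\,x_2}{x_1 + r\,x_2}
```

Here r is the size ratio of the two molecules and χ(T) an interaction parameter; a closed loop requires χ(T) to be non-monotonic in temperature, which a temperature-dependent specific-interaction (e.g. hydrogen-bonding) contribution can supply.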
Propulsion System Models for Rotorcraft Conceptual Design
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2014-01-01
The conceptual design code NDARC (NASA Design and Analysis of Rotorcraft) was initially implemented to model conventional rotorcraft propulsion systems, consisting of turboshaft engines burning jet fuel, connected to one or more rotors through a mechanical transmission. The NDARC propulsion system representation has been extended to cover additional propulsion concepts, including electric motors and generators, rotor reaction drive, turbojet and turbofan engines, fuel cells and solar cells, batteries, and fuel (energy) used without weight change. The paper describes these propulsion system components, the architecture of their implementation in NDARC, and the form of the models for performance and weight. Requirements are defined for improved performance and weight models of the new propulsion system components. With these new propulsion models, NDARC can be used to develop environmentally-friendly rotorcraft designs.
Matsuura, Tomoaki; Tanimura, Naoki; Hosoda, Kazufumi; Yomo, Tetsuya; Shimizu, Yoshihiro
2017-01-01
To elucidate the dynamic features of a biologically relevant large-scale reaction network, we constructed a computational model of minimal protein synthesis consisting of 241 components and 968 reactions that synthesize the Met-Gly-Gly (MGG) peptide based on an Escherichia coli-based reconstituted in vitro protein synthesis system. We performed a simulation using parameters collected primarily from the literature and found that the rate of MGG peptide synthesis becomes nearly constant in minutes, thus achieving a steady state similar to experimental observations. In addition, the concentrations of 70% of the components, including intermediates, reached a plateau in a few minutes. However, the concentration change of each component exhibits several temporal plateaus, or a quasi-stationary state (QSS), before reaching the final plateau. To understand these complex dynamics, we focused on whether the components reached a QSS, mapped the arrangement of components in a QSS in the entire reaction network structure, and investigated time-dependent changes. We found that components in a QSS form clusters that grow over time but not in a linear fashion, and that this process involves the collapse and regrowth of clusters before the formation of a final large single cluster. These observations might commonly occur in other large-scale biological reaction networks. The analysis developed here might be useful for understanding large-scale biological reactions by visualizing complex dynamics, thereby extracting the characteristics of the reaction network, including phase transitions. PMID:28167777
Controlled cooling of an electronic system based on projected conditions
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
2016-05-17
Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.
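The control loop described above can be sketched as follows. This is an illustrative toy, not the patented method: the thermal model, the setting grid, and all numbers are invented stand-ins for the experimentally obtained model the abstract mentions.

```python
# Toy sketch: pick the lowest-power setting of an adjustable cooling
# component whose modeled target temperature stays within range, given
# projected IT power and projected heat-sink (ambient) temperature.
def predict_temp(setting, it_power_w, ambient_c):
    # hypothetical fitted model: a higher setting lowers the thermal
    # resistance from the heat sink to ambient (values are made up)
    r_thermal = 0.02 / setting          # K per watt
    return ambient_c + r_thermal * it_power_w

def choose_setting(it_power_w, ambient_c, t_max_c=45.0,
                   settings=(0.2, 0.4, 0.6, 0.8, 1.0)):
    for s in settings:                  # ascending cooling effort (and power)
        if predict_temp(s, it_power_w, ambient_c) <= t_max_c:
            return s
    return settings[-1]                 # saturate at maximum cooling

# projected load and heat-sink temperature at a future time
print(choose_setting(it_power_w=400.0, ambient_c=30.0))
```

A light projected load lets the controller drop to the lowest setting, which is where the energy saving comes from.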
Controlled cooling of an electronic system based on projected conditions
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
2015-08-18
Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.
NASA Technical Reports Server (NTRS)
Bencze, D. P.
1976-01-01
Detailed interference force and pressure data were obtained on a representative wing-body nacelle combination at Mach numbers of 0.9 to 1.4. The model consisted of a delta wing-body aerodynamic force model with four independently supported nacelles located beneath the wing-body combination. The model was mounted on a six component force balance, and the left hand wing was pressure instrumented. Each of the two right hand nacelles was mounted on a six component force balance housed in the thickness of the nacelle, while each of the left hand nacelles was pressure instrumented. The primary variables examined included Mach number, angle of attack, nacelle position, and nacelle mass flow ratio. Nacelle axial location, relative to both the wing-body combination and to each other, was the most important variable in determining the net interference among the components.
Kasper, Joseph M; Lestrange, Patrick J; Stetina, Torin F; Li, Xiaosong
2018-04-10
X-ray absorption spectroscopy is a powerful technique to probe local electronic and nuclear structure. There has been extensive theoretical work modeling K-edge spectra from first principles. However, modeling L-edge spectra directly with density functional theory poses a unique challenge requiring further study. Spin-orbit coupling must be included in the model, and a noncollinear density functional theory is required. Using the real-time exact two-component method, we are able to variationally include one-electron spin-orbit coupling terms when calculating the absorption spectrum. The abilities of different basis sets and density functionals to model spectra for both closed- and open-shell systems are investigated using SiCl4 and three transition metal complexes, TiCl4, CrO2Cl2, and [FeCl6]3-. Although we are working in the real-time framework, individual molecular orbital transitions can still be recovered by projecting the density onto the ground state molecular orbital space and separating contributions to the time evolving dipole moment.
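In the real-time framework described above, an absorption spectrum is obtained from the Fourier transform of the time-evolving dipole moment. A minimal sketch with a synthetic dipole signal follows; the frequencies, damping, and time grid are invented for illustration, and this is not the authors' code:

```python
import numpy as np

# synthetic time-evolving dipole: two "transitions" plus artificial damping
dt, n = 0.05, 2**14
t = np.arange(n) * dt
omegas = (1.2, 2.7)                       # made-up transition frequencies (a.u.)
gamma = 0.005                             # damping so the signal decays in the window
mu = sum(np.cos(w * t) for w in omegas) * np.exp(-gamma * t)

# Fourier transform of the dipole gives the spectral response
freq = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
spectrum = np.abs(np.fft.rfft(mu))        # dipole strength vs. angular frequency

# locate the peak in each spectral window
p1 = freq[np.argmax(spectrum * (freq < 2.0))]
p2 = freq[np.argmax(spectrum * (freq >= 2.0))]
print(round(p1, 2), round(p2, 2))
```

The recovered peak positions match the frequencies driving the dipole, which is the basic mechanism by which a real-time propagation yields an absorption spectrum.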
GEOS S2S-2_1: GMAO's New High Resolution Seasonal Prediction System
NASA Technical Reports Server (NTRS)
Molod, Andrea; Akella, Santha; Andrews, Lauren; Barahona, Donifan; Borovikov, Anna; Chang, Yehui; Cullather, Richard; Hackert, Eric; Kovach, Robin; Koster, Randal;
2017-01-01
A new version of the modeling and analysis system used to produce sub-seasonal to seasonal forecasts has just been released by the NASA Goddard Global Modeling and Assimilation Office. The new version runs at higher atmospheric resolution (approximately 1/2 degree globally), contains a substantially improved model description of the cryosphere, and includes additional interactive earth system model components (aerosol model). In addition, the ocean data assimilation system has been replaced with a Local Ensemble Transform Kalman Filter. Here we describe the new system, along with plans for the future version (GEOS S2S-3_0), which will include a higher resolution ocean model and more interactive earth system model components (interactive vegetation, biomass burning from fires). We will also present results from a free-running coupled simulation with the new system and results from a series of retrospective seasonal forecasts. Results from retrospective forecasts show significant improvements in surface temperatures over much of the northern hemisphere and a much improved prediction of sea ice extent in both hemispheres. The precipitation forecast skill is comparable to previous S2S systems, and the only trade-off is an increased double ITCZ, which is expected as we go to higher atmospheric resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. H. Titus, S. Avasaralla, A. Brooks, R. Hatcher
2010-09-22
The National Spherical Torus Experiment (NSTX) project is planning upgrades to the toroidal field, plasma current and pulse length. This involves the replacement of the center-stack, including the inner legs of the TF, OH, and inner PF coils. A second neutral beam will also be added. The increased performance of the upgrade requires qualification of the remaining components including the vessel, passive plates, and divertor for higher disruption loads. The hardware needing qualification is more complex than is typically accessible by large scale electromagnetic (EM) simulations of the plasma disruptions. The usual method is to include simplified representations of components in the large EM models and attempt to extract forces to apply to more detailed models. This paper describes a more efficient approach of combining comprehensive modeling of the plasma and tokamak conducting structures, using the 2D OPERA code, with much more detailed treatment of individual components using ANSYS electromagnetic (EM) and mechanical analysis. This approach captures local eddy currents and the resulting loads in complex details, and allows efficient non-linear and dynamic structural analyses.
NASA Astrophysics Data System (ADS)
Hobley, Daniel E. J.; Adams, Jordan M.; Nudurupati, Sai Siddhartha; Hutton, Eric W. H.; Gasparini, Nicole M.; Istanbulluoglu, Erkan; Tucker, Gregory E.
2017-01-01
The ability to model surface processes and to couple them to both subsurface and atmospheric regimes has proven invaluable to research in the Earth and planetary sciences. However, creating a new model typically demands a very large investment of time, and modifying an existing model to address a new problem typically means the new work is constrained to its detriment by model adaptations for a different problem. Landlab is an open-source software framework explicitly designed to accelerate the development of new process models by providing (1) a set of tools and existing grid structures - including both regular and irregular grids - to make it faster and easier to develop new process components, or numerical implementations of physical processes; (2) a suite of stable, modular, and interoperable process components that can be combined to create an integrated model; and (3) a set of tools for data input, output, manipulation, and visualization. A set of example models built with these components is also provided. Landlab's structure makes it ideal not only for fully developed modelling applications but also for model prototyping and classroom use. Because of its modular nature, it can also act as a platform for model intercomparison and epistemic uncertainty and sensitivity analyses. Landlab exposes a standardized model interoperability interface, and is able to couple to third-party models and software. Landlab also offers tools to allow the creation of cellular automata, and allows native coupling of such models to more traditional continuous differential equation-based modules. We illustrate the principles of component coupling in Landlab using a model of landform evolution, a cellular ecohydrologic model, and a flood-wave routing model.
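Landlab's coupling pattern, in which components share state through named fields on a common grid and each advances via a `run_one_step(dt)` call, can be illustrated with a stripped-down stand-in. The classes below are toy analogues written for this sketch, not Landlab's actual API:

```python
import numpy as np

class Grid:
    """Minimal 1-D grid holding shared, named fields (Landlab-style)."""
    def __init__(self, n_nodes):
        self.fields = {"topographic__elevation": np.zeros(n_nodes)}

class Uplift:
    """Raises interior nodes at a constant rate; boundaries stay fixed."""
    def __init__(self, grid, rate=1e-3):
        self.grid, self.rate = grid, rate
    def run_one_step(self, dt):
        self.grid.fields["topographic__elevation"][1:-1] += self.rate * dt

class LinearDiffuser:
    """Explicit hillslope diffusion acting on the shared elevation field."""
    def __init__(self, grid, kappa=0.1, dx=1.0):
        self.grid, self.kappa, self.dx = grid, kappa, dx
    def run_one_step(self, dt):
        z = self.grid.fields["topographic__elevation"]
        lap = np.zeros_like(z)
        lap[1:-1] = (z[2:] - 2 * z[1:-1] + z[:-2]) / self.dx**2
        z += self.kappa * lap * dt

# components are interchangeable because they only touch shared grid fields
grid = Grid(50)
components = [Uplift(grid), LinearDiffuser(grid)]
for _ in range(1000):
    for comp in components:
        comp.run_one_step(dt=1.0)
print(grid.fields["topographic__elevation"].max())
```

Because coupling happens only through the grid's field dictionary and a uniform stepping interface, components can be swapped, reordered, or added without changing each other's code, which is the design point the paper makes.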
Object-oriented approach for gas turbine engine simulation
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Felder, James L.
1995-01-01
An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
Systems engineering interfaces: A model based approach
NASA Astrophysics Data System (ADS)
Fosse, E.; Delp, C. L.
The engineering of interfaces is a critical function of the discipline of Systems Engineering. Included in interface engineering are instances of interaction. Interfaces provide the specifications of the relevant properties of a system or component that can be connected to other systems or components while instances of interaction are identified in order to specify the actual integration to other systems or components. Current Systems Engineering practices rely on a variety of documents and diagrams to describe interface specifications and instances of interaction. The SysML[1] specification provides a precise model based representation for interfaces and interface instance integration. This paper will describe interface engineering as implemented by the Operations Revitalization Task using SysML, starting with a generic case and culminating with a focus on a Flight System to Ground Interaction. The reusability of the interface engineering approach presented as well as its extensibility to more complex interfaces and interactions will be shown. Model-derived tables will support the case studies shown and are examples of model-based documentation products.
NDARC-NASA Design and Analysis of Rotorcraft Theoretical Basis and Architecture
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2010-01-01
The theoretical basis and architecture of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are described. The principal tasks of NDARC are to design (or size) a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated. The aircraft attributes are obtained from the sum of the component attributes. NDARC provides a capability to model general rotorcraft configurations, and estimate the performance and attributes of advanced rotor concepts. The software has been implemented with low-fidelity models, typical of the conceptual design environment. Incorporation of higher-fidelity models will be possible, as the architecture of the code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis and optimization.
Muller, Erik B; Nisbet, Roger M
2014-06-01
Ocean acidification is likely to impact the calcification potential of marine organisms. In part due to the covarying nature of the ocean carbonate system components, including pH and CO2 and CO3(2-) levels, it remains largely unclear how each of these components may affect calcification rates quantitatively. We develop a process-based bioenergetic model that explains how several components of the ocean carbonate system collectively affect growth and calcification rates in Emiliania huxleyi, which plays a major role in marine primary production and biogeochemical carbon cycling. The model predicts that under the IPCC A2 emission scenario, its growth and calcification potential will have decreased by the end of the century, although those reductions are relatively modest. We anticipate that our model will be relevant for many other marine calcifying organisms, and that it can be used to improve our understanding of the impact of climate change on marine systems. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Arnold, J.; Gutmann, E. D.; Clark, M. P.; Nijssen, B.; Vano, J. A.; Addor, N.; Wood, A.; Newman, A. J.; Mizukami, N.; Brekke, L. D.; Rasmussen, R.; Mendoza, P. A.
2016-12-01
Climate change narratives for water-resource applications must represent the change signals contextualized by hydroclimatic process variability and uncertainty at multiple scales. Building narratives of plausible change includes assessing uncertainties across GCM structure, internal climate variability, climate downscaling methods, and hydrologic models. Work with this linked modeling chain has dealt mostly with GCM sampling directed separately to either model fidelity (does the model correctly reproduce the physical processes in the world?) or sensitivity (of different model responses to CO2 forcings) or diversity (of model type, structure, and complexity). This leaves unaddressed any interactions among those measures and with other components in the modeling chain used to identify water-resource vulnerabilities to specific climate threats. However, time-sensitive, real-world vulnerability studies typically cannot accommodate a full uncertainty ensemble across the whole modeling chain, so a gap has opened between current scientific knowledge and most routine applications for climate-changed hydrology. To close that gap, the US Army Corps of Engineers, the Bureau of Reclamation, and the National Center for Atmospheric Research are working on techniques to subsample uncertainties objectively across modeling chain components and to integrate results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components. Rather, they stem from the more comprehensive characterization of the full uncertainty space for each component. Equally important from the perspective of water-resource practitioners, these quantitative hydrologic storylines are anchored in actual design and operations decisions potentially affected by climate change. 
This talk will describe part of our work characterizing variability and uncertainty across modeling chain components and their interactions using newly developed observational data, models and model outputs, and post-processing tools for making the resulting quantitative storylines most useful in practical hydrology applications.
Fundamental Technology Development for Gas-Turbine Engine Health Management
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Simon, Donald L.; Hunter, Gary W.; Arnold, Steven M.; Reveley, Mary S.; Anderson, Lynn M.
2007-01-01
Integrated vehicle health management technologies promise to dramatically improve the safety of commercial aircraft by reducing system and component failures as causal and contributing factors in aircraft accidents. To realize this promise, fundamental technology development is needed to produce reliable health management components. These components include diagnostic and prognostic algorithms, physics-based and data-driven lifing and failure models, sensors, and a sensor infrastructure including wireless communications, power scavenging, and electronics. In addition, system assessment methods are needed to effectively prioritize development efforts. Development work is needed throughout the vehicle, but particular challenges are presented by the hot, rotating environment of the propulsion system. This presentation describes current work in the field of health management technologies for propulsion systems for commercial aviation.
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
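One simple way to combine per-mode failure-time distributions with mitigation factors into a loss-of-mission (LOM) probability, in the spirit of the PRA modeling described above, is Monte Carlo sampling. All distributions, mode names, and numbers below are hypothetical illustrations, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000   # Monte Carlo trials

# hypothetical failure modes: (Weibull shape, scale in hours, P(propagates to LOM))
# the propagation probability plays the role of a mitigation factor
modes = {
    "turbine_blade_fatigue":  (3.0,  8000.0, 0.20),
    "blade_creep":            (4.0, 12000.0, 0.10),
    "controller_electronics": (1.0, 20000.0, 0.50),
}

mission_hours = 5000.0
lom = np.zeros(N, dtype=bool)
for shape, scale, p_prop in modes.values():
    t_fail = scale * rng.weibull(shape, size=N)   # time-to-failure samples
    propagates = rng.random(N) < p_prop           # mitigation may contain the fault
    lom |= (t_fail < mission_hours) & propagates  # any propagating failure loses the mission

print("P(loss of mission) ~", lom.mean())
```

The same sampling loop also yields each mode's contribution to overall mission success (by tallying which mode triggered LOM first), which is the ranking output the abstract describes.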
Separation of presampling and postsampling modulation transfer functions in infrared sensor systems
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Olson, Jeffrey T.; O'Shea, Patrick D.; Hodgkin, Van A.; Jacobs, Eddie L.
2006-05-01
New methods of measuring the modulation transfer function (MTF) of electro-optical sensor systems are investigated. These methods are designed to allow the separation and extraction of presampling and postsampling components from the total system MTF. The presampling MTF includes all the effects prior to the sampling stage of the imaging process, such as optical blur and detector shape. The postsampling MTF includes all the effects after sampling, such as interpolation filters and display characteristics. Simulation and laboratory measurements are used to assess the utility of these techniques. Knowledge of these components and inclusion into sensor models, such as the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's NVThermIP, will allow more accurate modeling and complete characterization of sensor performance.
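The cascade-and-divide idea behind separating presampling and postsampling MTFs can be sketched as follows. The component MTF models here are generic illustrative choices (Gaussian optics, sinc detector and display), not NVThermIP's models:

```python
import numpy as np

f = np.linspace(0.0, 1.0, 101)            # spatial frequency (cycles/sample)

# assumed component models (illustrative only):
mtf_optics   = np.exp(-(f / 0.6)**2)      # Gaussian optical blur
mtf_detector = np.abs(np.sinc(f))         # rectangular detector footprint
mtf_display  = np.abs(np.sinc(0.5 * f))   # display/interpolation (postsampling)

mtf_pre   = mtf_optics * mtf_detector     # everything before the sampling stage
mtf_total = mtf_pre * mtf_display         # linear-system cascade of all stages

# separation: divide the measured total MTF by the known postsampling part
recovered_pre = np.divide(mtf_total, mtf_display,
                          out=np.zeros_like(f), where=mtf_display > 1e-6)
print(np.allclose(recovered_pre, mtf_pre))
```

In practice the measured total MTF is noisy and the division is ill-conditioned near zeros of the postsampling MTF, which is why the guarded divide above masks small denominators.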
High-order above-threshold ionization beyond the electric dipole approximation
NASA Astrophysics Data System (ADS)
Brennecke, Simon; Lein, Manfred
2018-05-01
Photoelectron momentum distributions from strong-field ionization are calculated by numerical solution of the one-electron time-dependent Schrödinger equation for a model atom including effects beyond the electric dipole approximation. We focus on the high-energy electrons from rescattering and analyze their momentum component along the field propagation direction. We show that the boundary of the calculated momentum distribution is deformed in accordance with the classical three-step model including the beyond-dipole Lorentz force. In addition, the momentum distribution exhibits an asymmetry in the signal strengths of electrons emitted in the forward/backward directions. Taken together, the two non-dipole effects give rise to a considerable average forward momentum component of the order of 0.1 a.u. for realistic laser parameters.
The GPRIME approach to finite element modeling
NASA Technical Reports Server (NTRS)
Wallace, D. R.; Mckee, J. H.; Hurwitz, M. M.
1983-01-01
GPRIME, an interactive modeling system, runs on the CDC 6000 computers and the DEC VAX 11/780 minicomputer. This system includes three components: (1) GPRIME, a user-friendly geometric language and a processor to translate that language into geometric entities; (2) GGEN, an interactive data generator for 2-D models; and (3) SOLIDGEN, a 3-D solid modeling program. Each component offers a user interface built on an extensive command set. All of these programs make use of a comprehensive B-spline mathematics subroutine library, which can be used for a wide variety of interpolation problems and other geometric calculations. Many other user aids, such as automatic saving of the geometric and finite element data bases and hidden line removal, are available. This interactive finite element modeling capability can produce a complete finite element model, written to an output file of grid and element data.
Study on fast discrimination of varieties of yogurt using Vis/NIR-spectroscopy
NASA Astrophysics Data System (ADS)
He, Yong; Feng, Shuijuan; Deng, Xunfei; Li, Xiaoli
2006-09-01
A new approach for discriminating varieties of yogurt by means of Vis/NIR spectroscopy is presented in this paper. Firstly, principal component analysis (PCA) of the spectral curves of 5 typical kinds of yogurt was used to cluster the yogurt varieties. The analysis showed that the cumulative reliability of PC1 and PC2 (the first two principal components) was more than 98.956%, and the cumulative reliability of PC1 through PC7 (the first seven principal components) was 99.97%. Secondly, an artificial neural network (ANN-BP) discrimination model was set up. The first seven principal components of the samples were used as ANN-BP inputs and the yogurt variety labels as outputs, and a three-layer ANN-BP model was built. Each variety comprised 27 samples, for a total of 135, of which 25 were held out as the prediction set. The results showed that the recognition rate for the five yogurt varieties was 100%, indicating that the model is reliable and practicable. A new approach for the rapid and non-destructive discrimination of yogurt varieties is thus put forward.
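The pipeline in this abstract, PCA on spectra followed by classification on the first seven principal components, can be sketched as below. Synthetic data and a nearest-centroid classifier stand in for the measured spectra and the three-layer BP network; all sizes mirror the paper's 5 varieties x 27 samples:

```python
import numpy as np

def pca_fit(X, n_components):
    # center the spectra and find principal axes via SVD
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def pca_transform(X, mu, axes):
    return (X - mu) @ axes.T

# synthetic "spectra": 5 varieties x 27 samples, 200 wavelength bins
rng = np.random.default_rng(0)
n_var, n_per, n_bins = 5, 27, 200
centers = rng.normal(size=(n_var, n_bins))
X = np.repeat(centers, n_per, axis=0) + 0.1 * rng.normal(size=(n_var * n_per, n_bins))
y = np.repeat(np.arange(n_var), n_per)

mu, axes = pca_fit(X, 7)                # first seven PCs, as in the paper
Z = pca_transform(X, mu, axes)

# nearest-centroid classifier in PC space (stand-in for the BP network)
centroids = np.array([Z[y == k].mean(axis=0) for k in range(n_var)])
pred = np.argmin(((Z[:, None, :] - centroids[None])**2).sum(-1), axis=1)
print("accuracy:", (pred == y).mean())
```

With well-separated varieties the seven-component representation preserves nearly all discriminative variance, which is why the reduced inputs suffice for the downstream classifier.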
The 'overflow tap' theory: linking GPP to forest soil carbon dynamics and the mycorrhizal component
NASA Astrophysics Data System (ADS)
Heinemeyer, Andreas; Willkinson, Matthew; Subke, Jens-Arne; Casella, Eric; Vargas, Rodrigo; Morison, James; Ineson, Phil
2010-05-01
Quantifying soil organic carbon (SOC) dynamics accurately is crucial to underpin better predictions of future climate change feedbacks within the atmosphere-vegetation-soil system. Measuring the components of ecosystem carbon fluxes has become a central point of the research focus during the last decade, not least because of the large SOC stocks, potentially vulnerable to climate change. However, our basic understanding of the composition and environmental responses of the soil CO2 efflux is still under debate and limited by the available field methodologies. For example, only recently did we separate successfully root (R), mycorrhizal fungal (F) and soil animal/microbial (H) respiration based on a mesh-bag/collar methodology and described their unique environmental responses. Yet it might be these differences which are crucial for understanding C-cycle feedbacks and observed limitations in plant biomass increase under elevated carbon dioxide (e.g. FACE) studies. It is becoming clear that these flux components and their environmental responses must be incorporated in models that link but also treat the heterotrophic and autotrophic fluxes separately. However, owing to a scarcity of experimental data, separation of fluxes and environmental drivers has been ignored in current models. We are now in a position to parameterize realistic soil C turnover models that include both, decomposition and plant-derived fluxes. Such models will allow (1) a direct comparison of model output to field data for all flux components, (2) include the potential to link plant C allocation to the rhizosphere with increased decomposition activity through soil C priming, and (3) to explore the potential of plant biomass C sequestration limitations under increased C assimilation. 
These mechanisms are fundamental in describing the stability of future SOC stocks due to elevated temperatures and carbon dioxide, altering SOC decomposition directly and indirectly through changes in plant productivity. The work presented here focuses on three critical areas: (1) We present annual fluxes at hourly intervals for the three soil CO2 efflux components (R, F and H) from a 75 year-old deciduous oak forest in SE England. We investigate the individual environmental responses of the three flux components, and compare them to soil decomposition modelled by CENTURY and its latest version (i.e. DAYCENT), which separately models root-derived respiration in addition to the soil decomposition output. (2) Using estimates of gross primary productivity (GPP) based on eddy covariance measurements from the same site, we explore linkages between GPP and soil respiration component fluxes using basic regression and wavelet analyses. We show a distinctly different time lag signal between GPP and root vs. mycorrhizal fungal respiration. We then discuss how models might need to be improved to accurately predict total soil CO2 efflux, including root-derived respiration. (3) We finally discuss the ‘overflow tap' theory, that during periods of high assimilation (e.g. optimum environmental conditions or elevated CO2) surplus non-structural C is allocated belowground to the mycorrhizal network; this additional C could then be used and released by the associated fungal partners, causing soil priming through stimulating decomposition.
Faint Object Camera imaging and spectroscopy of NGC 4151
NASA Technical Reports Server (NTRS)
Boksenberg, A.; Catchpole, R. M.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.
1995-01-01
We describe ultraviolet and optical imaging and spectroscopy within the central few arcseconds of the Seyfert galaxy NGC 4151, obtained with the Faint Object Camera on the Hubble Space Telescope. A narrowband image including (O III) lambda(5007) shows a bright nucleus centered on a complex biconical structure having apparent opening angle approximately 65 deg and axis at a position angle along 65 deg-245 deg; images in bands including Lyman-alpha and C IV lambda(1550) and in the optical continuum near 5500 A, show only the bright nucleus. In an off-nuclear optical long-slit spectrum we find a high and a low radial velocity component within the narrow emission lines. We identify the low-velocity component with the bright, extended, knotty structure within the cones, and the high-velocity component with more confined diffuse emission. Also present are strong continuum emission and broad Balmer emission line components, which we attribute to the extended point spread function arising from the intense nuclear emission. Adopting the geometry pointed out by Pedlar et al. (1993) to explain the observed misalignment of the radio jets and the main optical structure we model an ionizing radiation bicone, originating within a galactic disk, with apex at the active nucleus and axis centered on the extended radio jets. We confirm that through density bounding the gross spatial structure of the emission line region can be reproduced with a wide opening angle that includes the line of sight, consistent with the presence of a simple opaque torus allowing direct view of the nucleus. In particular, our modelling reproduces the observed decrease in position angle with distance from the nucleus, progressing initially from the direction of the extended radio jet, through our optical structure, and on to the extended narrow-line region. We explore the kinematics of the narrow-line low- and high-velocity components on the basis of our spectroscopy and adopted model structure.
Five challenges for spatial epidemic models
Riley, Steven; Eames, Ken; Isham, Valerie; Mollison, Denis; Trapman, Pieter
2015-01-01
Infectious disease incidence data are increasingly available at the level of the individual and include high-resolution spatial components. Therefore, we are now better able to challenge models that explicitly represent space. Here, we consider five topics within spatial disease dynamics: the construction of network models; characterising threshold behaviour; modelling long-distance interactions; the appropriate scale for interventions; and the representation of population heterogeneity. PMID:25843387
Rocketdyne/Westinghouse nuclear thermal rocket engine modeling
NASA Technical Reports Server (NTRS)
Glass, James F.
1993-01-01
The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; Rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.
Space shuttle phase B wind tunnel model and test information. Volume 3: Launch configuration
NASA Technical Reports Server (NTRS)
Glynn, J. L.; Poucher, D. E.
1988-01-01
Archived wind tunnel test data are available for flyback boosters or other alternate recoverable configurations, as well as the reusable orbiters studied during initial development (Phase B) of the Space Shuttle, including contractor data for an extensive variety of configurations with an array of wing and body planforms. The test data have been compiled into a database and are available for application to current winged flyback or recoverable booster aerodynamic studies. The Space Shuttle Phase B Wind Tunnel Database is structured by vehicle component and configuration. Basic components include booster, orbiter, and launch vehicle. Booster configuration types include straight and delta wings, canard, cylindrical, retroglide and twin body. Orbiter configurations include straight and delta wings, lifting body, drop tanks and double delta wings. Launch configurations include booster and orbiter components in various stacked and tandem combinations. The digital database consists of 220 files containing basic tunnel data. Database structure is documented in a series of reports which include configuration sketches for the various planforms tested. This is Volume 3 -- launch configurations.
Anatomy of health care reform proposals.
Soffel, D; Luft, H S
1993-01-01
The current proliferation of proposals for health care reform makes it difficult to sort out the differences among plans and the likely outcome of different approaches to reform. The current health care system has two basic features. The first, enrollment and eligibility functions, includes how people get into the system and gain coverage for health care services. We describe 4 models, ranging from an individual, voluntary approach to a universal, tax-based model. The second, the provision of health care, includes how physician services are organized, how they are paid for, what mechanisms are in place for quality assurance, and the degree of organization and oversight of the health care system. We describe 7 models of the organization component, including the current fee-for-service system with no national health budget, managed care, salaried providers under a budget, and managed competition with and without a national health budget. These 2 components provide the building blocks for health care plans, presented as a matrix. We also evaluate several reform proposals by how they combine these 2 elements. PMID:8273344
Predicting the Magnetic Properties of ICMEs: A Pragmatic View
NASA Astrophysics Data System (ADS)
Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.
2017-12-01
The southward component of the interplanetary magnetic field plays a crucial role in the successful prediction of space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU, driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or more sophisticated modeling. Our analysis suggests that current observations/modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.
A system for environmental model coupling and code reuse: The Great Rivers Project
NASA Astrophysics Data System (ADS)
Eckman, B.; Rice, J.; Treinish, L.; Barford, C.
2008-12-01
As part of the Great Rivers Project, IBM is collaborating with The Nature Conservancy and the Center for Sustainability and the Global Environment (SAGE) at the University of Wisconsin, Madison to build a Modeling Framework and Decision Support System (DSS) designed to help policy makers and a variety of stakeholders (farmers, fish & wildlife managers, hydropower operators, et al.) to assess, come to consensus, and act on land use decisions representing effective compromises between human use and ecosystem preservation/restoration. Initially focused on Brazil's Paraguay-Parana, China's Yangtze, and the Mississippi Basin in the US, the DSS integrates data and models from a wide variety of environmental sectors, including water balance, water quality, carbon balance, crop production, hydropower, and biodiversity. In this presentation we focus on the modeling framework aspect of this project. In our approach to these and other environmental modeling projects, we see a flexible, extensible modeling framework infrastructure for defining and running multi-step analytic simulations as critical. In this framework, we divide monolithic models into atomic components with clearly defined semantics encoded via rich metadata representation. Once models and their semantics and composition rules have been registered with the system by their authors or other experts, non-expert users may construct simulations as workflows of these atomic model components. A model composition engine enforces rules/constraints for composing model components into simulations, to avoid the creation of "Frankenmodels": models that execute but produce scientifically invalid results. A common software environment and common representations of data and models are required, as well as an adapter strategy for code written in, e.g., Fortran or Python, that still enables efficient simulation runs, including parallelization.
Since each new simulation, as a new composition of model components, requires calibration of parameters (fudge factors) to produce scientifically valid results, we are also developing an autocalibration engine. Finally, visualization is a key element of this modeling framework strategy, both to convey complex scientific data effectively, and also to enable non-expert users to make full use of the relevant features of the framework. We are developing a visualization environment with a strong data model, to enable visualizations, model results, and data all to be handled similarly.
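The composition-rule check that such an engine enforces can be sketched as a simple metadata compatibility test; the component names, variables, and units below are hypothetical illustrations, not taken from the actual DSS:

```python
# Sketch of a metadata-driven composition check (hypothetical components):
# a connection is legal only if the upstream output and the downstream input
# agree on variable name and units, mirroring the "no Frankenmodels" rule.

class Component:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = inputs      # dict: variable name -> units
        self.outputs = outputs    # dict: variable name -> units

def can_connect(upstream, downstream, variable):
    """True if `variable` may legally flow from upstream to downstream."""
    return (variable in upstream.outputs
            and variable in downstream.inputs
            and upstream.outputs[variable] == downstream.inputs[variable])

water_balance = Component("water_balance",
                          inputs={"precip": "mm/day"},
                          outputs={"runoff": "m3/s"})
water_quality = Component("water_quality",
                          inputs={"runoff": "m3/s"},
                          outputs={"nitrate": "mg/L"})

assert can_connect(water_balance, water_quality, "runoff")       # units match
assert not can_connect(water_quality, water_balance, "nitrate")  # no such input
```

A real engine would add richer semantics (grids, time steps, coordinate systems), but the shape of the rule check is the same.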
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Bianconi, Roberto; Pirovano, Guido; Matthias, Volker; Vautard, Robert; Moran, Michael D.; Wyat Appel, K.; Bessagnet, Bertrand; Brandt, Jørgen; Christensen, Jesper H.; Chemel, Charles; Coll, Isabelle; Ferreira, Joana; Forkel, Renate; Francis, Xavier V.; Grell, Georg; Grossi, Paola; Hansen, Ayoe B.; Miranda, Ana Isabel; Nopmongcol, Uarporn; Prank, Marje; Sartelet, Karine N.; Schaap, Martijn; Silver, Jeremy D.; Sokhi, Ranjeet S.; Vira, Julius; Werhahn, Johannes; Wolke, Ralf; Yarwood, Greg; Zhang, Junhua; Rao, S. Trivikrama; Galmarini, Stefano
2012-06-01
Ten state-of-the-science regional air quality (AQ) modeling systems have been applied to continental-scale domains in North America and Europe for full-year simulations of 2006 in the context of the Air Quality Model Evaluation International Initiative (AQMEII), whose main goals are model inter-comparison and evaluation. Standardised modeling outputs from each group have been shared on the web-distributed ENSEMBLE system, which allows statistical and ensemble analyses to be performed. In this study, the one-year model simulations are inter-compared and evaluated with a large set of observations for ground-level particulate matter (PM10 and PM2.5) and its chemical components. Modeled concentrations of gaseous PM precursors, SO2 and NO2, have also been evaluated against observational data for both continents. Furthermore, modeled deposition (dry and wet) and emissions of several species relevant to PM are also inter-compared. The unprecedented scale of the exercise (two continents, one full year, fifteen modeling groups) allows for a detailed description of AQ model skill and uncertainty with respect to PM. Analyses of PM10 yearly time series and mean diurnal cycle show a large underestimation throughout the year for the AQ models included in AQMEII. The possible causes of PM bias, including errors in the emissions and meteorological inputs (e.g., wind speed and precipitation), and the calculated deposition are investigated. Further analyses of the coarse PM fraction, PM2.5, and its major components (SO4, NH4, NO3, elemental carbon) have also been performed, and the model performance for each component evaluated against measurements. Finally, the ability of the models to capture high PM concentrations has been evaluated by examining two separate PM2.5 episodes in Europe and North America. A large variability among models in predicting emissions, deposition, and concentration of PM and its precursors during the episodes has been found.
Major challenges still remain with regards to identifying and eliminating the sources of PM bias in the models. Although PM2.5 was found to be much better estimated by the models than PM10, no model was found to consistently match the observations for all locations throughout the entire year.
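As a minimal illustration of the kind of skill scores used in such an evaluation, the sketch below computes mean bias and RMSE for one model against observations; the numbers are invented, not AQMEII data:

```python
import math

# Hypothetical paired daily PM10 values: model predictions vs. observations.
# Mean bias and RMSE are typical skill scores for comparing an ensemble
# member against monitoring data (values here are illustrative only).

def mean_bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

obs     = [30.0, 45.0, 38.0, 52.0]   # ug/m3, observed PM10
model_a = [25.0, 40.0, 33.0, 47.0]   # a model that systematically underestimates

print(mean_bias(model_a, obs))   # -5.0: negative bias -> underestimation
print(rmse(model_a, obs))        # 5.0
```

A negative mean bias of the kind shown is exactly the signature of the year-round PM10 underestimation reported above.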
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
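The compensation mechanism described above can be illustrated with a deliberately tiny linear example (all values hypothetical): fixing one parameter at a wrong value during least-squares history matching forces the adjustable parameter to compensate, and a prediction sensitive to both inherits the bias:

```python
# Minimal sketch of fixed-parameter compensation in history matching.
# True system: y = p1*x1 + p2*x2 with p1 = 1.0, p2 = 2.0 (hypothetical).
# We "fix" p2 at a wrong value and estimate p1 by least squares; the
# calibrated p1 absorbs the error, and a prediction then inherits the bias.

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.0, 1.0, 1.0, 1.0]
p1_true, p2_true = 1.0, 2.0
y = [p1_true * a + p2_true * b for a, b in zip(x1, x2)]   # "observations"

p2_fixed = 1.5                       # subjective (wrong) fixed value
# Closed-form least squares for the single adjustable parameter p1:
residual = [yi - p2_fixed * b for yi, b in zip(y, x2)]
p1_est = sum(a * r for a, r in zip(x1, residual)) / sum(a * a for a in x1)

prediction_true = p1_true * 5.0 + p2_true * 1.0    # predict at x1=5, x2=1
prediction_est  = p1_est  * 5.0 + p2_fixed * 1.0

print(p1_est)                            # > 1.0: compensates for low p2_fixed
print(prediction_est - prediction_true)  # nonzero: predictive bias
```

The fit to the calibration data is good, yet the prediction is biased: this is the second error term of the paper in miniature.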
HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee
2012-01-01
Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model-based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/subsystem models) and linking them through shared variables/parameters.
The component model is expressed as operating modes of the component and conditions for transitions between these various modes. Faults are modeled as transitions whose triggering conditions are unknown (and have to be inferred through the reasoning process). Finally, the behavior of the components is expressed as a set of variables/parameters and relations governing the interaction between the variables. The hybrid nature of the systems being modeled is captured by a combination of the above transitional model and behavioral model. Stochasticity is captured as probabilities associated with transitions (indicating the likelihood of that transition being taken), as well as noise on the sensed variables.
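A minimal sketch of this candidate-maintenance loop, assuming a single hypothetical component with invented modes (this is not HyDE's actual API):

```python
# Sketch of consistency-based candidate maintenance for one component
# (hypothetical modes and behaviour). A candidate is a mode assignment;
# at each time step, consistent candidates survive and inconsistent ones
# are replaced by successors hypothesising a fault transition.

MODES = {
    "nominal": lambda u: u,        # output follows the command
    "stuck":   lambda u: 0.0,      # fault mode: output stuck at zero
}

def step(candidates, command, observation, tol=1e-6):
    survivors = []
    for mode in candidates:
        predicted = MODES[mode](command)
        if abs(predicted - observation) <= tol:
            survivors.append(mode)            # consistent: keep candidate
        else:
            # inconsistent: discard, and generate fault successors
            survivors.extend(m for m in MODES
                             if m != mode and m not in survivors)
    return survivors

candidates = ["nominal"]
candidates = step(candidates, command=1.0, observation=1.0)
print(candidates)    # ['nominal'] - still consistent with the data
candidates = step(candidates, command=1.0, observation=0.0)
print(candidates)    # ['stuck'] - the fault hypothesis explains the data
```

HyDE additionally tracks transition probabilities and continuous dynamics; the loop above shows only the discrete consistency skeleton.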
Defining pharmacy and its practice: a conceptual model for an international audience
Scahill, SL; Atif, M; Babar, ZU
2017-01-01
Background There is much fragmentation and little consensus in the use of descriptors for the different disciplines that make up the pharmacy sector. Globalization, reprofessionalization and the influx of other disciplines means there is a requirement for a greater degree of standardization. This has not been well addressed in the pharmacy practice research and education literature. Objectives To identify and define the various subdisciplines of the pharmacy sector and integrate them into an internationally relevant conceptual model based on narrative synthesis of the literature. Methods A literature review was undertaken to understand the fragmentation in dialogue surrounding definitions relating to concepts and practices in the context of the pharmacy sector. From a synthesis of this literature, the need for this model was justified. Key assumptions of the model were identified, and an organic process of development took place with the three authors engaging in a process of sense-making to theorize the model. Results The model is “fit for purpose” across multiple countries and includes two components making up the umbrella term “pharmaceutical practice”. The first component is the four conceptual dimensions, which outline the disciplines including social and administrative sciences, community pharmacy, clinical pharmacy and pharmaceutical sciences. The second component of the model describes the “acts of practice”: teaching, research and professional advocacy; service and academic enterprise. Conclusions This model aims to expose issues relating to defining pharmacy and its practice and to create dialogue. No model is perfect, but there are implications for what is posited in the areas of policy, education and practice and future research. The main point is the need for increased clarity, or at least beginning the discussion to increase the clarity of definition and consistency of meaning in-and-across the pharmacy sector locally, nationally and internationally. 
PMID:29354558
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, Diana H.
2013-03-31
The National Risk Assessment Partnership (NRAP) consists of 5 U.S. DOE national laboratories collaborating to develop a framework for predicting the risks associated with carbon sequestration. The approach taken by NRAP is to divide the system into components, including injection target reservoirs, wellbores, natural pathways (including faults and fractures), groundwater and the atmosphere; to develop a detailed, physics- and chemistry-based model of each component; to use the results of the detailed models to develop efficient, simplified models, termed reduced order models (ROMs), for each component; and finally to integrate the component ROMs into a system model that calculates risk profiles for the site. This report details the development of the Groundwater Geochemistry ROM for the Edwards Aquifer at PNNL. The Groundwater Geochemistry ROM for the Edwards Aquifer uses a Wellbore Leakage ROM developed at LANL as input. The detailed model, using the STOMP simulator, covers a 5x8 km area of the Edwards Aquifer near San Antonio, Texas. The model includes heterogeneous hydraulic properties, and equilibrium, kinetic and sorption reactions between groundwater, leaked CO2 gas, brine, and the aquifer carbonate and clay minerals. Latin Hypercube sampling was used to generate 1024 samples of input parameters. For each of these input samples, the STOMP simulator was used to predict the flux of CO2 to the atmosphere, and the volume, length and width of the aquifer where pH was less than the MCL standard, and TDS, arsenic, cadmium and lead exceeded MCL standards. In order to decouple the Wellbore Leakage ROM from the Groundwater Geochemistry ROM, the response surface was transformed to replace Wellbore Leakage ROM input parameters with instantaneous and cumulative CO2 and brine leakage rates.
The most sensitive parameters proved to be the CO2 and brine leakage rates from the well, with equilibrium coefficients for calcite and dolomite, as well as the number of illite and kaolinite sorption sites proving to be of secondary importance. The Groundwater Geochemistry ROM was developed using nonlinear regression to fit the response surface with a quadratic polynomial. The goodness of fit was excellent for the CO2 flux to the atmosphere, and very good for predicting the volumes of groundwater exceeding the pH, TDS, As, Cd and Pb threshold values.
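Latin Hypercube sampling itself is straightforward to sketch; the implementation below is a generic stdlib version with hypothetical parameter names and ranges, not the sampler used in the study:

```python
import random

# Sketch of Latin Hypercube sampling over hypothetical ROM input parameters.
# Each parameter range is split into n equal strata; each stratum is sampled
# exactly once, and the per-parameter draws are shuffled independently, so
# the n samples jointly cover the space more evenly than plain random draws.

def latin_hypercube(param_ranges, n, rng=random.Random(42)):
    columns = {}
    for name, (lo, hi) in param_ranges.items():
        width = (hi - lo) / n
        draws = [lo + (i + rng.random()) * width for i in range(n)]
        rng.shuffle(draws)
        columns[name] = draws
    # transpose the per-parameter columns into one dict per sample
    return [{name: columns[name][i] for name in param_ranges}
            for i in range(n)]

ranges = {"co2_leak_rate": (0.0, 1.0),      # hypothetical units and bounds
          "brine_leak_rate": (0.0, 0.5)}
samples = latin_hypercube(ranges, n=8)
print(len(samples))    # 8: exactly one draw per stratum of each parameter
```

Fitting the resulting outputs with a quadratic polynomial, as the report describes, is then an ordinary regression over the sampled inputs.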
NASA Astrophysics Data System (ADS)
Harper, E. B.; Stella, J. C.; Fremier, A. K.
2009-12-01
Fremont cottonwood (Populus fremontii) is an important component of semi-arid riparian ecosystems throughout western North America, but its populations are in decline due to flow regulation. Achieving a balance between human resource needs and riparian ecosystem function requires a mechanistic understanding of the multiple geomorphic and biological factors affecting tree recruitment and survival, including the timing and magnitude of river flows, and the concomitant influence on suitable habitat creation and mortality from scour and sedimentation burial. Despite a great deal of empirical research on some components of the system, such as factors affecting cottonwood recruitment, other key components are less studied. Yet understanding the relative influence of the full suite of physical and life-history drivers is critical to modeling whole-population dynamics under changing environmental conditions. We addressed these issues for the Fremont cottonwood population along the Sacramento River, CA, using a sensitivity analysis approach to quantify the effect of parameter uncertainty on the outcomes of a patch-based, dynamic population model. Using a broad range of plausible values for 15 model parameters that represent key physical, biological and climatic components of the ecosystem, we ran 1,000 population simulations, a subset of the 14.3 million possible combinations of parameter estimates, to predict the frequency of patch colonization and the total forest habitat occurring under current hydrologic conditions after 175 years. Results indicate that Fremont cottonwood populations are highly sensitive to the interactions among flow regime, sedimentation rate and the depth of the capillary fringe (Fig. 1). Estimates of long-term floodplain sedimentation rate would substantially improve model accuracy.
Spatial variation in sediment texture was also important to the extent that it determines the depth of the capillary fringe, which regulates the availability of water for germination and adult tree growth. Our sensitivity analyses suggest that models of future scenarios should incorporate regional climate change projections because changes in temperature and the timing and volume of precipitation affects sensitive aspects of the system, including the timing of seed release and spring snowmelt runoff. Figure 1. The relative effects on model predictions of uncertainty around each parameter included in the patch-based population model for Fremont cottonwood.
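The sensitivity-screening idea used in such analyses can be sketched generically: sample parameter sets, run a (here, toy) model, and rank parameters by the strength of their association with the output. The model and parameter names below are invented stand-ins, not the cottonwood model:

```python
import random

# Sketch of a Monte Carlo sensitivity screen (toy model, hypothetical
# parameters): draw parameter sets, run the model for each, and rank
# parameters by how strongly each one co-varies with the output.

rng = random.Random(0)

def toy_habitat_model(p):
    # output responds strongly to sedimentation, weakly to capillary fringe
    return 10.0 * p["sedimentation"] + 1.0 * p["capillary_fringe"]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

names = ["sedimentation", "capillary_fringe"]
draws = [{k: rng.random() for k in names} for _ in range(500)]
outputs = [toy_habitat_model(p) for p in draws]
sens = {k: abs(correlation([p[k] for p in draws], outputs)) for k in names}
print(max(sens, key=sens.get))   # 'sedimentation' dominates the output
```

In the study itself the model is far richer, but the ranking logic, identifying which parameters most deserve better field estimates, is the same.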
Emergence of a Common Modeling Architecture for Earth System Science (Invited)
NASA Astrophysics Data System (ADS)
Deluca, C.
2010-12-01
Common modeling architecture can be viewed as a natural outcome of common modeling infrastructure. The development of model utility and coupling packages (ESMF, MCT, OpenMI, etc.) over the last decade represents the realization of a community vision for common model infrastructure. The adoption of these packages has led to increased technical communication among modeling centers and newly coupled modeling systems. However, adoption has also exposed aspects of interoperability that must be addressed before easy exchange of model components among different groups can be achieved. These aspects include common physical architecture (how a model is divided into components) and model metadata and usage conventions. The National Unified Operational Prediction Capability (NUOPC), an operational weather prediction consortium, is collaborating with weather and climate researchers to define a common model architecture that encompasses these advanced aspects of interoperability and looks to future needs. The nature and structure of the emergent common modeling architecture will be discussed along with its implications for future model development.
A Model for Teaching Information Design
ERIC Educational Resources Information Center
Pettersson, Rune
2011-01-01
The author presents his views on the teaching of information design. The starting point includes some general aspects of teaching and learning. The multidisciplinary structure and content of information design as well as the combined practical and theoretical components influence studies of the discipline. Experiences from working with a model for…
Conceptualizations of Creativity: Comparing Theories and Models of Giftedness
ERIC Educational Resources Information Center
Miller, Angie L.
2012-01-01
This article reviews seven different theories of giftedness that include creativity as a component, comparing and contrasting how each one conceptualizes creativity as a part of giftedness. The functions of creativity vary across the models, suggesting that while the field of gifted education often cites the importance of creativity, the…
An Evaluation Research Model for System-Wide Textbook Selection.
ERIC Educational Resources Information Center
Talmage, Harriet; Walberg, Herbert T.
One component of an evaluation research model for system-wide selection of curriculum materials is reported: implementation of an evaluation design for obtaining data that permits professional and lay persons to base curriculum materials decisions on a "best fit" principle. The design includes teacher characteristics, learning environment…
Resource Manual for Teacher Training Programs in Economics.
ERIC Educational Resources Information Center
Saunders, Phillip, Ed.; And Others
This resource manual uses a general systems model for educational planning, instruction, and evaluation to describe a college introductory economics course. The goal of the manual is to help beginning or experienced instructors teach more effectively. The model components include needs, goals, objectives, constraints, planning and strategy,…
Bilingual Vocational Training for Health Care Workers: A Guide for Practitioners.
ERIC Educational Resources Information Center
Career Resources Development Center, Inc., San Francisco, CA.
A model for bilingual vocational training of health care workers, designed for immigrants and refugees with limited English skills, is presented. The model's seven components include: recruitment; intake assessment; adapted vocational instruction; Vocational English as a Second Language (VESL); counseling and support services; job development and…
NWTC Helps Guide U.S. Offshore R&D; NREL (National Renewable Energy Laboratory)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-07-01
The National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL) is helping guide our nation's research-and-development effort in offshore renewable energy, which includes: Design, modeling, and analysis tools; Device and component testing; Resource characterization; Economic modeling and analysis; Grid integration.
Museum Accessibility: Combining Audience Research and Staff Training
ERIC Educational Resources Information Center
Levent, Nina; Reich, Christine
2013-01-01
This article discusses an audience-informed professional development model that combines audience research focus groups and staff training that includes interaction and direct feedback from visitors, in this case, visitors with low vision. There are two critical components to this model: one is that museums' programming decisions are informed by…
ERIC Educational Resources Information Center
Briukhanov, V. M.; Kiselev, V. I.; Timchenko, N. S.; Vdovin, V. M.
2010-01-01
The intensive process observed in the past few years, in which higher professional education is coming to be included in the system of market relations, is setting new target guidelines for the activity of institutions of higher learning, as well as for the management models of educational institutions. The marketing component is becoming more and more…
The impact of ARM on climate modeling
Randall, David A.; Del Genio, Anthony D.; Donner, Lee J.; ...
2016-07-15
Climate models are among humanity’s most ambitious and elaborate creations. They are designed to simulate the interactions of the atmosphere, ocean, land surface, and cryosphere on time scales far beyond the limits of deterministic predictability and including the effects of time-dependent external forcings. The processes involved include radiative transfer, fluid dynamics, microphysics, and some aspects of geochemistry, biology, and ecology. The models explicitly simulate processes on spatial scales ranging from the circumference of Earth down to 100 km or smaller and implicitly include the effects of processes on even smaller scales, down to a micron or so. In addition, the atmospheric component of a climate model can be called an atmospheric general circulation model (AGCM).
NASA Astrophysics Data System (ADS)
Wichmann, Volker
2017-09-01
The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
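As a minimal illustration of the process-path component described above, the sketch below traces a steepest-descent (D8-style) path across a toy DTM grid. It is not the SAGA GPP implementation; the grid values and the function name are invented for the example.

```python
import numpy as np

def steepest_descent_path(dtm, start, max_steps=1000):
    """Trace a process path downslope from `start` by always moving to the
    lowest of the 8 neighbouring cells (D8 routing). Stops at a sink
    (no lower neighbour) or after max_steps."""
    rows, cols = dtm.shape
    path = [start]
    r, c = start
    for _ in range(max_steps):
        # Collect all valid 8-neighbours of the current cell.
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < rows and 0 <= c + dc < cols]
        nr, nc = min(neighbours, key=lambda p: dtm[p])
        if dtm[nr, nc] >= dtm[r, c]:   # sink: no lower neighbour
            break
        r, c = nr, nc
        path.append((r, c))
    return path

# Tiny synthetic DTM: elevation decreases towards the lower-right corner.
dtm = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 5.0],
                [7.0, 5.0, 3.0]])
path = steepest_descent_path(dtm, (0, 0))
print(path)  # -> [(0, 0), (1, 1), (2, 2)]
```

The actual GPP tool combines such a path component with run-out length, sink filling, and deposition models, each with several selectable approaches.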
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors. The performance measures include error analysis, coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify if the source of uncertainty in the simulated data is due to the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify if mismatches between observed and simulated data result from magnitude or sequence related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
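The goodness-of-fit measure and the magnitude/sequence separation described above can be sketched in a few lines. The Nash-Sutcliffe formula is standard; the `magnitude_sequence_split` helper is a hypothetical illustration of the separation idea, not MPESA's actual code.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def magnitude_sequence_split(series):
    """Separate a time series into a magnitude component (sorted values)
    and a sequence component (the permutation that restores the order)."""
    order = np.argsort(series)
    magnitude = np.asarray(series)[order]
    sequence = np.argsort(order)          # inverse permutation
    return magnitude, sequence

obs = [1.0, 3.0, 2.0, 5.0, 4.0]
sim = [1.1, 2.8, 2.2, 4.7, 4.1]
print(round(nash_sutcliffe(obs, sim), 3))  # -> 0.981

mag, seq = magnitude_sequence_split(obs)
print(mag[seq])  # reconstructs the original series
```

Comparing the magnitude components of observed and simulated series isolates magnitude errors; comparing the sequence components isolates timing errors.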
NASA Astrophysics Data System (ADS)
Farjoud, Alireza; Taylor, Russell; Schumann, Eric; Schlangen, Timothy
2014-02-01
This paper is focused on modelling, design, and testing of semi-active magneto-rheological (MR) engine and transmission mounts used in the automotive industry. The purpose is to develop a complete analysis, synthesis, design, and tuning tool that reduces the need for expensive and time-consuming laboratory and field tests. A detailed mathematical model of such devices is developed using multi-physics modelling techniques for physical systems with various energy domains. The model includes all major features of an MR mount including fluid dynamics, fluid track, elastic components, decoupler, rate-dip, gas-charged chamber, MR fluid rheology, magnetic circuit, electronic driver, and control algorithm. Conventional passive hydraulic mounts can also be studied using the same mathematical model. The model is validated using standard experimental procedures. It is used for design and parametric study of mounts; effects of various geometric and material parameters on dynamic response of mounts can be studied. Additionally, this model can be used to test various control strategies to obtain best vibration isolation performance by tuning control parameters. Another benefit of this work is that nonlinear interactions between sub-components of the mount can be observed and investigated. This is not possible by using simplified linear models currently available.
Spatio-temporal modelling of rainfall in the Murray-Darling Basin
NASA Astrophysics Data System (ADS)
Nowak, Gen; Welsh, A. H.; O'Neill, T. J.; Feng, Lingbing
2018-02-01
The Murray-Darling Basin (MDB) is a large geographical region in southeastern Australia that contains many rivers and creeks, including Australia's three longest rivers, the Murray, the Murrumbidgee and the Darling. Understanding rainfall patterns in the MDB is very important due to the significant impact major events such as droughts and floods have on agricultural and resource productivity. We propose a model for modelling a set of monthly rainfall data obtained from stations in the MDB and for producing predictions in both the spatial and temporal dimensions. The model is a hierarchical spatio-temporal model fitted to geographical data that utilises both deterministic and data-derived components. Specifically, rainfall data at a given location are modelled as a linear combination of these deterministic and data-derived components. A key advantage of the model is that it is fitted in a step-by-step fashion, enabling appropriate empirical choices to be made at each step.
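The step-by-step fitting idea can be sketched as follows: a deterministic component is fitted first, a data-derived component is then estimated from the residuals, and the prediction is their linear combination. The synthetic data and the simple monthly-climatology residual model below are stand-ins for the paper's richer spatio-temporal terms.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(120)                       # 10 years of monthly data
# Synthetic rainfall: seasonal cycle plus noise (stand-in for station data).
rain = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 120)

# Step 1: deterministic component -- fit an annual harmonic by least squares.
X = np.column_stack([np.ones_like(months, dtype=float),
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
beta, *_ = np.linalg.lstsq(X, rain, rcond=None)
deterministic = X @ beta

# Step 2: data-derived component -- model the residuals (here by their
# monthly climatology, chosen only for illustration).
residual = rain - deterministic
monthly_resid = np.array([residual[months % 12 == m].mean() for m in range(12)])
data_derived = monthly_resid[months % 12]

# Prediction is the linear combination of the two components.
prediction = deterministic + data_derived
rmse = np.sqrt(np.mean((rain - prediction) ** 2))
print(round(rmse, 2))
```

Fitting in stages like this lets each component be checked, and replaced, independently, which is the practical advantage the abstract highlights.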
Multiscale sagebrush rangeland habitat modeling in southwest Wyoming
Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Coan, Michael J.; Bowen, Zachary H.
2009-01-01
Sagebrush-steppe ecosystems in North America have experienced dramatic elimination and degradation since European settlement. As a result, sagebrush-steppe dependent species have experienced drastic range contractions and population declines. Coordinated ecosystem-wide research, integrated with monitoring and management activities, would improve the ability to maintain existing sagebrush habitats. However, current data only identify resource availability locally, with rigorous spatial tools and models that accurately model and map sagebrush habitats over large areas still unavailable. Here we report on an effort to produce a rigorous large-area sagebrush-habitat classification and inventory with statistically validated products and estimates of precision in the State of Wyoming. This research employs a combination of significant new tools, including (1) modeling sagebrush rangeland as a series of independent continuous field components that can be combined and customized by any user at multiple spatial scales; (2) collecting ground-measured plot data on 2.4-meter imagery in the same season the satellite imagery is acquired; (3) effective modeling of ground-measured data on 2.4-meter imagery to maximize subsequent extrapolation; (4) acquiring multiple seasons (spring, summer, and fall) of an additional two spatial scales of imagery (30 meter and 56 meter) for optimal large-area modeling; (5) using regression tree classification technology that optimizes data mining of multiple image dates, ratios, and bands with ancillary data to extrapolate ground training data to coarser resolution sensors; and (6) employing rigorous accuracy assessment of model predictions to enable users to understand the inherent uncertainties. First-phase results modeled eight rangeland components (four primary targets and four secondary targets) as continuous field predictions. The primary targets included percent bare ground, percent herbaceousness, percent shrub, and percent litter. 
The four secondary targets included percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata wyomingensis), and sagebrush height (centimeters). Results were validated by an independent accuracy assessment with root mean square error (RMSE) values ranging from 6.38 percent for bare ground to 2.99 percent for sagebrush at the QuickBird scale and RMSE values ranging from 12.07 percent for bare ground to 6.34 percent for sagebrush at the full Landsat scale. Subsequent project phases are now in progress, with plans to deliver products that improve accuracies of existing components, model new components, complete models over larger areas, track changes over time (from 1988 to 2007), and ultimately model wildlife population trends against these changes. We believe these results offer significant improvement in sagebrush rangeland quantification at multiple scales and offer users products that have been rigorously validated.
NASA Astrophysics Data System (ADS)
Lu, Guoping; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.
2008-12-01
The standard dual-component and two-member linear mixing model is often used to quantify water mixing of different sources. However, it is no longer applicable whenever actual mixture concentrations are not exactly known because of dilution. For example, low-water-content (low-porosity) rock samples are leached for pore-water chemical compositions, which therefore are diluted in the leachates. A multicomponent, two-member mixing model of dilution has been developed to quantify mixing of water sources and multiple chemical components experiencing dilution in leaching. This extended mixing model was used to quantify fracture-matrix interaction in construction-water migration tests along the Exploratory Studies Facility (ESF) tunnel at Yucca Mountain, Nevada, USA. The model effectively recovers the spatial distribution of water and chemical compositions released from the construction water, and provides invaluable data on fracture-matrix interaction. The methodology and formulations described here are applicable to many sorts of mixing-dilution problems, including dilution in petroleum reservoirs, hydrospheres, chemical constituents in rocks and minerals, monitoring of drilling fluids, and leaching, as well as to environmental science studies.
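The basic two-member mixing idea, extended by a dilution factor, can be sketched as below. The tracer values and the dilution factor are invented for illustration, and the least-squares fraction estimator over multiple tracers is an assumption, not the authors' exact formulation.

```python
import numpy as np

def mixing_fraction(c_mix, c_a, c_b):
    """Two-member mixing: solve c_mix = f*c_a + (1-f)*c_b for the fraction
    f of end-member A, combining several tracers by least squares."""
    c_mix, c_a, c_b = map(np.asarray, (c_mix, c_a, c_b))
    # (c_mix - c_b) = f * (c_a - c_b)  ->  least-squares f over all tracers
    num = np.dot(c_a - c_b, c_mix - c_b)
    den = np.dot(c_a - c_b, c_a - c_b)
    return num / den

# Hypothetical tracer concentrations for two end-members (e.g. in mg/L).
construction_water = np.array([100.0, 10.0, 50.0])
pore_water         = np.array([ 20.0,  2.0, 10.0])

# A leachate is a diluted mixture: dilution scales every tracer by the
# same factor (assumed known here purely for illustration).
dilution = 0.25
leachate = dilution * (0.6 * construction_water + 0.4 * pore_water)

f = mixing_fraction(leachate / dilution, construction_water, pore_water)
print(round(f, 3))  # -> 0.6
```

The point of the extended model is that the dilution factor must be estimated jointly with the mixing fraction when it is not known; the sketch above only shows the mixing step once dilution has been accounted for.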
Near Critical Preferential Attachment Networks have Small Giant Components
NASA Astrophysics Data System (ADS)
Eckhoff, Maren; Mörters, Peter; Ortgiese, Marcel
2018-05-01
Preferential attachment networks with power law exponent τ > 3 are known to exhibit a phase transition. There is a value ρ_c > 0 such that, for small edge densities ρ ≤ ρ_c, every component of the graph comprises an asymptotically vanishing proportion of vertices, while for large edge densities ρ > ρ_c there is a unique giant component comprising an asymptotically positive proportion of vertices. In this paper we study the decay in the size of the giant component as the critical edge density is approached from above. We show that the size decays very rapidly, like exp(−c/√(ρ − ρ_c)) for an explicit constant c > 0 depending on the model implementation. This result is in contrast to the behaviour of the class of rank-one models of scale-free networks, including the configuration model, where the decay is polynomial. Our proofs rely on the local neighbourhood approximations of Dereich and Mörters (Ann Probab 41(1):329-384, 2013) and recent progress in the theory of branching random walks (Gantert et al. in Ann Inst Henri Poincaré Probab Stat 47(1):111-129, 2011).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotlin, J.J.; Dunteman, N.R.; Scott, D.I.
1983-01-01
The current Electro-Motive Division 645 Series turbocharged engines are the Model FB and EC. The FB engine combines the highest thermal efficiency with the highest specific output of any EMD engine to date. The FB Series incorporates 16:1 compression ratio with a fire ring piston and an improved turbocharger design. Engine components included in the FB engine provide very high output levels with exceptional reliability. This paper also describes the performance of the lower rated Model EC engine series, which features high thermal efficiency and utilizes many engine components well proven in service and basic to the Model FB Series.
Space shuttle phase B wind tunnel model and test information. Volume 2: Orbiter configuration
NASA Technical Reports Server (NTRS)
Glynn, J. L.; Poucher, D. E.
1988-01-01
Archived wind tunnel test data are available for flyback booster or other alternative recoverable configurations as well as reusable orbiters studied during initial development (Phase B) of the Space Shuttle. Considerable wind tunnel data was acquired by the competing contractors and the NASA centers for an extensive variety of configurations with an array of wing and body planforms. All contractor and NASA wind tunnel test data acquired in the Phase B development have been compiled into a data base and are available for application to current winged flyback or recoverable booster aerodynamic studies. The Space Shuttle Phase B Wind Tunnel Data Base is structured by vehicle component and configuration type. Basic components include the booster, the orbiter, and the launch vehicle. Booster configuration types include straight and delta wings, canard, cylindrical, retro-glide and twin body. Orbiter configuration types include straight and delta wings, lifting body, drop tanks, and double delta wings. Launch configuration types include booster and orbiter components in various stacked and tandem combinations.
Space shuttle phase B wind tunnel model and test information. Volume 3: Launch configuration
NASA Technical Reports Server (NTRS)
Glynn, J. L.; Poucher, D. E.
1988-01-01
Archived wind tunnel data are available for flyback booster or other alternative recoverable configurations as well as reusable orbiters studied during initial development (Phase B) of the Space Shuttle. Considerable wind tunnel data was acquired by the competing contractors and the NASA Centers for an extensive variety of configurations with an array of wing and body planforms. All contractor and NASA wind tunnel data acquired in the Phase B development have been compiled into a data base and are available for application to current winged flyback or recoverable booster aerodynamic studies. The Space Shuttle Phase B Wind Tunnel Database is structured by vehicle component and configuration type. Basic components include booster, orbiter and launch vehicle. Booster configuration types include straight and delta wings, canard, cylindrical, retroglide and twin body. Orbiter configuration types include straight and delta wings, lifting body, drop tanks and double delta wings. This is Volume 3 (Part 2) of the report -- Launch Configuration -- which includes booster and orbiter components in various stacked and tandem combinations.
Hierarchical control and performance evaluation of multi-vehicle autonomous systems
NASA Astrophysics Data System (ADS)
Balakirsky, Stephen; Scrapper, Chris; Messina, Elena
2005-05-01
This paper will describe how the Mobility Open Architecture Tools and Simulation (MOAST) framework can facilitate performance evaluations of RCS compliant multi-vehicle autonomous systems. This framework provides an environment that allows for simulated and real architectural components to function seamlessly together. By providing repeatable environmental conditions, this framework allows for the development of individual components as well as component performance metrics. MOAST is composed of high-fidelity and low-fidelity simulation systems, a detailed model of real-world terrain, actual hardware components, a central knowledge repository, and architectural glue to tie all of the components together. This paper will describe the framework's components in detail and provide an example that illustrates how the framework can be utilized to develop and evaluate a single architectural component through the use of repeatable trials and experimentation that includes both virtual and real components functioning together.
Analysis tool and methodology design for electronic vibration stress understanding and prediction
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Jen; Crane, Robert L.; Sathish, Shamachary
2005-03-01
The objectives of this research were to (1) understand the impact of vibration on electronic components under ultrasound excitation; (2) model the thermal profile presented under vibration stress; and (3) predict stress level given a thermal profile of an electronic component. Research tasks included: (1) retrofit of current ultrasonic/infrared nondestructive testing system with sensory devices for temperature readings; (2) design of software tool to process images acquired from the ultrasonic/infrared system; (3) developing hypotheses and conducting experiments; and (4) modeling and evaluation of electronic vibration stress levels using a neural network model. Results suggest that (1) an ultrasonic/infrared system can be used to mimic short burst high vibration loads for electronics components; (2) temperature readings for electronic components under vibration stress are consistent and repeatable; (3) as stress load and excitation time increase, temperature differences also increase; (4) components that are subjected to a relatively high pre-stress load, followed by a normal operating load, have a higher heating rate and lower cooling rate. These findings are based on grayscale changes in images captured during experimentation. Discriminating variables and a neural network model were designed to predict stress levels given temperature and/or grayscale readings. Preliminary results suggest a 15.3% error when using grayscale change rate and 12.8% error when using average heating rate within the neural network model. Data were obtained from a high stress point (the corner) of the chip.
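The final modeling step, predicting stress level from thermal features, can be sketched with a minimal one-hidden-layer regression network in plain NumPy. The synthetic data, architecture, and hyperparameters below are assumptions for illustration, not the study's actual model or measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data (stand-in for the study's measurements):
# features = [average heating rate, grayscale change rate],
# target = stress level, here an assumed monotone function plus noise.
X = rng.uniform(0, 1, (200, 2))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0, 0.05, 200)

# One hidden layer of tanh units, linear output, full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (h @ W2 + b2).ravel()        # predicted stress level
    err = pred - y
    # Backpropagate the mean squared error.
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    gh = err[:, None] @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean(err ** 2))
print(round(mse, 4))
```

The reported 12.8% and 15.3% prediction errors came from the study's own discriminating variables and data; the sketch only shows the shape of the approach.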
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stauffer, Philip H.; Levitt, Daniel G.; Miller, Terry Ann
2017-02-09
This report consists of four major sections, including this introductory section. Section 2 provides an overview of previous investigations related to the development of the current sitescale model. The methods and data used to develop the 3-D groundwater model and the techniques used to distill that model into a form suitable for use in the GoldSim models are discussed in Section 3. Section 4 presents the results of the model development effort and discusses some of the uncertainties involved. Three attachments that provide details about the components and data used in this groundwater pathway model are also included with this report.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
NASA Astrophysics Data System (ADS)
Frederiksen, Carsten; Grainger, Simon; Zheng, Xiaogu; Sisson, Janice
2013-04-01
ENSO variability is an important driver of the Southern Hemisphere (SH) atmospheric circulation. Understanding the observed and projected changes in ENSO variability is therefore important to understanding changes in Australian surface climate. Using a recently developed methodology (Zheng et al., 2009), the coherent patterns, or modes, of ENSO-related variability in the SH atmospheric circulation can be separated from modes that are related to intraseasonal variability or to changes in radiative forcings. Under this methodology, the seasonal mean SH 500 hPa geopotential height is considered to consist of three components. These are: (1) an intraseasonal component related to internal dynamics on intraseasonal time scales; (2) a slow-internal component related to internal dynamics on slowly varying (interannual or longer) time scales, including ENSO; and (3) a slow-external component related to external (i.e. radiative) forcings. Empirical Orthogonal Functions (EOFs) are used to represent the modes of variability of the interannual covariance of the three components. An assessment is first made of the modes in models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) dataset for the SH summer and winter seasons in the 20th century. In reanalysis data, two EOFs of the slow component (which includes the slow-internal and slow-external components) have been found to be related to ENSO variability (Frederiksen and Zheng, 2007). In SH summer, the CMIP5 models reproduce the leading ENSO mode very well when the structures of the EOF and the associated SST, and associated variance are considered. There is substantial improvement in this mode when compared with the CMIP3 models shown in Grainger et al. (2012). However, the second ENSO mode in SH summer has a poorly reproduced EOF structure in the CMIP5 models, and the associated variance is generally underestimated. 
In SH winter, the performance of the CMIP5 models in reproducing the structure and variance is similar for both ENSO modes, with the associated variance being generally underestimated. Projected changes in the modes in the 21st century are then investigated using ensembles of CMIP5 models that reproduce well the 20th century slow modes. The slow-internal and slow-external components are examined separately, allowing the projected changes in the response to ENSO variability to be separated from the response to changes in greenhouse gas concentrations. By using several ensembles, the model-dependency of the projected changes in the ENSO-related slow-internal modes is examined. Frederiksen, C. S., and X. Zheng, 2007: Variability of seasonal-mean fields arising from intraseasonal variability. Part 3: Application to SH winter and summer circulations. Climate Dyn., 28, 849-866. Grainger, S., C. S. Frederiksen, and X. Zheng, 2012: Modes of interannual variability of Southern Hemisphere atmospheric circulation in CMIP3 models: Assessment and Projections. Climate Dyn., in press. Zheng, X., D. M. Straus, C. S. Frederiksen, and S. Grainger, 2009: Potentially predictable patterns of extratropical tropospheric circulation in an ensemble of climate simulations with the COLA AGCM. Quart. J. Roy. Meteor. Soc., 135, 1816-1829.
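The EOF analysis underlying the modes discussed above can be sketched via the singular value decomposition of an anomaly field; the synthetic "height" field below is invented for illustration and has no relation to the CMIP5 data.

```python
import numpy as np

def eof_analysis(field):
    """EOFs of a (time, space) anomaly field via SVD. Returns spatial
    patterns (EOFs), principal-component time series, and the fraction
    of variance explained by each mode."""
    anomalies = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return vt, u * s, explained          # EOFs, PCs, variance fractions

# Synthetic field: one dominant standing pattern plus noise.
rng = np.random.default_rng(0)
t = np.arange(50)
pattern = np.sin(np.linspace(0, np.pi, 30))
field = np.outer(np.sin(0.3 * t), pattern) + rng.normal(0, 0.1, (50, 30))

eofs, pcs, var = eof_analysis(field)
print(round(var[0], 2))  # leading mode dominates the variance
```

In the study itself, EOFs are computed separately for the interannual covariances of the intraseasonal, slow-internal, and slow-external components, so that ENSO-related modes can be isolated from the response to radiative forcings.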
NASA Astrophysics Data System (ADS)
Barton, N. P.; Metzger, E. J.; Smedstad, O. M.; Ruston, B. C.; Wallcraft, A. J.; Whitcomb, T.; Ridout, J. A.; Zamudio, L.; Posey, P.; Reynolds, C. A.; Richman, J. G.; Phelps, M.
2017-12-01
The Naval Research Laboratory is developing an Earth System Model (NESM) to provide global environmental information to meet Navy and Department of Defense (DoD) operations and planning needs from the upper atmosphere to under the sea. This system consists of global atmosphere, ocean, ice, wave, and land prediction models, and the individual models include: atmosphere - NAVy Global Environmental Model (NAVGEM); ocean - HYbrid Coordinate Ocean Model (HYCOM); sea ice - Community Ice CodE (CICE); WAVEWATCH III™; and land - NAVGEM Land Surface Model (LSM). Data assimilation is currently loosely coupled between the atmosphere component using a 6-hour update cycle in the Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System - Accelerated Representer (NAVDAS-AR) and the ocean/ice components using a 24-hour update cycle in the Navy Coupled Ocean Data Assimilation (NCODA) with 3 hours of incremental updating. This presentation will describe the US Navy's coupled forecast model, the loosely coupled data assimilation, and compare results against stand-alone atmosphere and ocean/ice models. In particular, we will focus on the unique aspects of this modeling system, which includes an eddy resolving ocean model, and challenges associated with different update windows and solvers for the data assimilation in the atmosphere and ocean. Results will focus on typical operational diagnostics for atmosphere, ocean, and ice analyses including 500 hPa atmospheric height anomalies, low-level winds, temperature/salinity ocean depth profiles, ocean acoustical proxies, sea ice edge, and sea ice drift. Overall, the global coupled system is performing with comparable skill to the stand-alone systems.
Update on ORNL TRANSFORM Tool: Simulating Multi-Module Advanced Reactor with End-to-End I&C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, Richard Edward; Fugate, David L.; Cetiner, Sacit M.
2015-05-01
The Small Modular Reactor (SMR) Dynamic System Modeling Tool project is in the fourth year of development. The project is designed to support collaborative modeling and study of various advanced SMR (non-light water cooled reactor) concepts, including the use of multiple coupled reactors at a single site. The focus of this report is the development of a steam generator and drum system model that includes the complex dynamics of typical steam drum systems, the development of instrumentation and controls for the steam generator with drum system model, and the development of multi-reactor module models that reflect the full power reactor innovative small module design concept. The objective of the project is to provide a common simulation environment and baseline modeling resources to facilitate rapid development of dynamic advanced reactor models; ensure consistency among research products within the Instrumentation, Controls, and Human-Machine Interface technical area; and leverage cross-cutting capabilities while minimizing duplication of effort. The combined simulation environment and suite of models are identified as the TRANSFORM tool. The critical elements of this effort include (1) defining a standardized, common simulation environment that can be applied throughout the Advanced Reactors Technology program; (2) developing a library of baseline component modules that can be assembled into full plant models using available geometry, design, and thermal-hydraulic data; (3) defining modeling conventions for interconnecting component models; and (4) establishing user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.
Field, Edward; Milner, Kevin R.; Hardebeck, Jeanne L.; Page, Morgan T.; van der Elst, Nicholas; Jordan, Thomas H.; Michael, Andrew J.; Shaw, Bruce E.; Werner, Maximillan J.
2017-01-01
We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event are considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M≥2.5 events, conditioned on any prior M≥2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. 
Because UCERF3‐ETAS has many sources of uncertainty, as will any subsequent version or competing model, potential usefulness needs to be considered in the context of actual applications.
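The ETAS component's conditional intensity has a standard form that can be sketched as below; the parameter values are illustrative textbook defaults, not those calibrated for UCERF3-ETAS, and the event catalog is hypothetical.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.1,
                   K=0.05, alpha=1.0, c=0.01, p=1.2, m0=2.5):
    """ETAS conditional intensity at time t (events/day): background rate
    mu plus Omori-law aftershock contributions from all prior events,
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p.
    All parameter values here are illustrative, not UCERF3-ETAS values."""
    rate = mu
    for ti, mi in zip(event_times, event_mags):
        if ti < t:
            rate += K * np.exp(alpha * (mi - m0)) / (t - ti + c) ** p
    return rate

# A hypothetical M 6.0 mainshock at t=0 followed by an M 4.0 aftershock.
times, mags = [0.0, 1.0], [6.0, 4.0]
print(round(etas_intensity(2.0, times, mags), 3))   # elevated rate
print(round(etas_intensity(30.0, times, mags), 3))  # decays towards mu
```

Simulating a synthetic catalog then amounts to thinning or cascading draws from this intensity; UCERF3-ETAS additionally conditions triggering on fault geometry, elastic rebound, and near-fault magnitude-frequency distributions, which the plain formula above does not capture.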
Mohr, David C; Schueller, Stephen M; Montague, Enid; Burns, Michelle Nicole; Rashidi, Parisa
2014-06-05
A growing number of investigators have commented on the lack of models to inform the design of behavioral intervention technologies (BITs). BITs, which include a subset of mHealth and eHealth interventions, employ a broad range of technologies, such as mobile phones, the Web, and sensors, to support users in changing behaviors and cognitions related to health, mental health, and wellness. We propose a model that conceptually defines BITs, from the clinical aim to the technological delivery framework. The BIT model defines both the conceptual and technological architecture of a BIT. Conceptually, a BIT model should answer the questions why, what, how (conceptual and technical), and when. While BITs generally have a larger treatment goal, such goals generally consist of smaller intervention aims (the "why") such as promotion or reduction of specific behaviors, and behavior change strategies (the conceptual "how"), such as education, goal setting, and monitoring. Behavior change strategies are instantiated with specific intervention components or "elements" (the "what"). The characteristics of intervention elements may be further defined or modified (the technical "how") to meet the needs, capabilities, and preferences of a user. Finally, many BITs require specification of a workflow that defines when an intervention component will be delivered. The BIT model includes a technological framework (BIT-Tech) that can integrate and implement the intervention elements, characteristics, and workflow to deliver the entire BIT to users over time. This implementation may be either predefined or include adaptive systems that can tailor the intervention based on data from the user and the user's environment. The BIT model provides a step towards formalizing the translation of developer aims into intervention components, larger treatments, and methods of delivery in a manner that supports research and communication between investigators on how to design, develop, and deploy BITs.
PMID:24905070
DRAINMOD-GIS: a lumped parameter watershed scale drainage and water quality model
G.P. Fernandez; G.M. Chescheir; R.W. Skaggs; D.M. Amatya
2006-01-01
A watershed scale lumped parameter hydrology and water quality model that includes an uncertainty analysis component was developed and tested on a lower coastal plain watershed in North Carolina. Uncertainty analysis was used to determine the impacts of uncertainty in field and network parameters of the model on the predicted outflows and nitrate-nitrogen loads at the...
GEOS S2S-2_1: The GMAO new high resolution Seasonal Prediction System
NASA Astrophysics Data System (ADS)
Molod, A.; Vikhliaev, Y. V.; Hackert, E. C.; Kovach, R. M.; Zhao, B.; Cullather, R. I.; Marshak, J.; Borovikov, A.; Li, Z.; Barahona, D.; Andrews, L. C.; Chang, Y.; Schubert, S. D.; Koster, R. D.; Suarez, M.; Akella, S.
2017-12-01
A new version of the modeling and analysis system used to produce subseasonal to seasonal forecasts has just been released by the NASA/Goddard Global Modeling and Assimilation Office. The new version runs at higher atmospheric resolution (approximately 1/2 degree globally), contains a substantially improved model description of the cryosphere, and includes additional interactive earth system model components (aerosol model). In addition, the ocean data assimilation system has been replaced with a Local Ensemble Transform Kalman Filter. Here we describe the new system, along with plans for the future version (GEOS S2S-3_0), which will include a higher resolution ocean model and more interactive earth system model components (interactive vegetation, biomass burning from fires). We will also present results from a free-running coupled simulation with the new system and results from a series of retrospective seasonal forecasts. Results from retrospective forecasts show significant improvements in surface temperatures over much of the northern hemisphere and a much improved prediction of sea ice extent in both hemispheres. The precipitation forecast skill is comparable to previous S2S systems, and the only tradeoff is an increased "double ITCZ", which is expected as we go to higher atmospheric resolution.
Real-time simulation of an F110/STOVL turbofan engine
NASA Technical Reports Server (NTRS)
Drummond, Colin K.; Ouzts, Peter J.
1989-01-01
A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach to component performance maps, the low pressure turbine exit mixing region, and the tailpipe dynamic approximation. The simulation was validated by comparing output from the ADSIM simulation with output from a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated basic engine component characteristics through factory testing and full scale ejector data.
McStas-model of the delft SESANS
NASA Astrophysics Data System (ADS)
Knudsen, E.; Udby, L.; Willendrup, P. K.; Lefmann, K.; Bouwman, W. G.
2011-06-01
We present simulation results from a model of the Spin-Echo Small Angle Scattering (SESANS) instrument situated in Delft, built in the framework of the McStas Monte Carlo software package. The main focus has been on modelling the Delft SESANS instrument, and we can now present the first virtual data from it, using a refracting prism-like sample model. As a consequence of this work, polarisation instrumentation is now included natively in the McStas kernel, including options for magnetic fields and a number of utility components. This development has brought us to a point where realistic models of polarisation-enabled instrumentation can be built.
Wall, Michael E.; Van Benschoten, Andrew H.; Sauter, Nicholas K.; ...
2014-12-01
X-ray diffraction from protein crystals includes both sharply peaked Bragg reflections and diffuse intensity between the peaks. The information in Bragg scattering is limited to what is available in the mean electron density. The diffuse scattering arises from correlations in the electron density variations and therefore contains information about collective motions in proteins. Previous studies using molecular-dynamics (MD) simulations to model diffuse scattering have been hindered by insufficient sampling of the conformational ensemble. To overcome this issue, we have performed a 1.1-μs MD simulation of crystalline staphylococcal nuclease, providing 100-fold more sampling than previous studies. This simulation enables reproducible calculations of the diffuse intensity and predicts functionally important motions, including transitions among at least eight metastable states with different active-site geometries. The total diffuse intensity calculated using the MD model is highly correlated with the experimental data. In particular, there is excellent agreement for the isotropic component of the diffuse intensity, and substantial but weaker agreement for the anisotropic component. The decomposition of the MD model into protein and solvent components indicates that protein–solvent interactions contribute substantially to the overall diffuse intensity. In conclusion, diffuse scattering can be used to validate predictions from MD simulations and can provide information to improve MD models of protein motions.
Wall, Michael E.; Van Benschoten, Andrew H.; Sauter, Nicholas K.; Adams, Paul D.; Fraser, James S.; Terwilliger, Thomas C.
2014-01-01
X-ray diffraction from protein crystals includes both sharply peaked Bragg reflections and diffuse intensity between the peaks. The information in Bragg scattering is limited to what is available in the mean electron density. The diffuse scattering arises from correlations in the electron density variations and therefore contains information about collective motions in proteins. Previous studies using molecular-dynamics (MD) simulations to model diffuse scattering have been hindered by insufficient sampling of the conformational ensemble. To overcome this issue, we have performed a 1.1-μs MD simulation of crystalline staphylococcal nuclease, providing 100-fold more sampling than previous studies. This simulation enables reproducible calculations of the diffuse intensity and predicts functionally important motions, including transitions among at least eight metastable states with different active-site geometries. The total diffuse intensity calculated using the MD model is highly correlated with the experimental data. In particular, there is excellent agreement for the isotropic component of the diffuse intensity, and substantial but weaker agreement for the anisotropic component. Decomposition of the MD model into protein and solvent components indicates that protein–solvent interactions contribute substantially to the overall diffuse intensity. We conclude that diffuse scattering can be used to validate predictions from MD simulations and can provide information to improve MD models of protein motions. PMID:25453071
From scenarios to domain models: processes and representations
NASA Astrophysics Data System (ADS)
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
NASA Astrophysics Data System (ADS)
Shi, Ming F.; Zhang, Li; Zhu, Xinhai
2016-08-01
The Yoshida nonlinear isotropic/kinematic hardening material model is often selected in forming simulations where an accurate springback prediction is required. Many successful application cases in industrial scale automotive components using advanced high strength steels (AHSS) have been reported to give better springback predictions. Several issues have been raised recently in the use of the model for higher strength AHSS, including the use of two C vs. one C material parameters in the Armstrong and Frederick model (AF model), the original Yoshida model vs. the original Yoshida model with a modified hardening law, and a constant Young's Modulus vs. a decayed Young's Modulus as a function of plastic strain. In this paper, an industrial scale automotive component using 980 MPa strength materials is selected to study the effect of two C and one C material parameters in the AF model on both forming and springback prediction using the Yoshida model with and without the modified hardening law. The effect of decayed Young's Modulus on the springback prediction for AHSS is also evaluated. In addition, the limitations of the material parameters determined from tension and compression tests without multiple cycle tests are also discussed for components undergoing several bending and unbending deformations.
High-Dimensional Sparse Factor Modeling: Applications in Gene Expression Genomics
Carvalho, Carlos M.; Chang, Jeffrey; Lucas, Joseph E.; Nevins, Joseph R.; Wang, Quanli; West, Mike
2010-01-01
We describe studies in molecular profiling and biological pathway analysis that use sparse latent factor and regression models for microarray gene expression data. We discuss breast cancer applications and key aspects of the modeling and computational methodology. Our case studies aim to investigate and characterize heterogeneity of structure related to specific oncogenic pathways, as well as links between aggregate patterns in gene expression profiles and clinical biomarkers. Based on the metaphor of statistically derived “factors” as representing biological “subpathway” structure, we explore the decomposition of fitted sparse factor models into pathway subcomponents and investigate how these components overlay multiple aspects of known biological activity. Our methodology is based on sparsity modeling of multivariate regression, ANOVA, and latent factor models, as well as a class of models that combines all components. Hierarchical sparsity priors address questions of dimension reduction and multiple comparisons, as well as scalability of the methodology. The models include practically relevant non-Gaussian/nonparametric components for latent structure, underlying often quite complex non-Gaussianity in multivariate expression patterns. Model search and fitting are addressed through stochastic simulation and evolutionary stochastic search methods that are exemplified in the oncogenic pathway studies. Supplementary supporting material provides more details of the applications, as well as examples of the use of freely available software tools for implementing the methodology. PMID:21218139
Tremblay, Marlène; Crim, Stacy M; Cole, Dana J; Hoekstra, Robert M; Henao, Olga L; Döpfer, Dörte
2017-10-01
The Foodborne Diseases Active Surveillance Network (FoodNet) is currently using a negative binomial (NB) regression model to estimate temporal changes in the incidence of Campylobacter infection. FoodNet active surveillance in 483 counties collected data on 40,212 Campylobacter cases between years 2004 and 2011. We explored models that disaggregated these data to allow us to account for demographic, geographic, and seasonal factors when examining changes in incidence of Campylobacter infection. We hypothesized that modeling structural zeros and including demographic variables would increase the fit of FoodNet's Campylobacter incidence regression models. Five different models were compared: NB without demographic covariates, NB with demographic covariates, hurdle NB with covariates in the count component only, hurdle NB with covariates in both zero and count components, and zero-inflated NB with covariates in the count component only. Of the models evaluated, the nonzero-augmented NB model with demographic variables provided the best fit. Results suggest that even though zero inflation was not present at this level, individualizing the level of aggregation and using different model structures and predictors per site might be required to correctly distinguish between structural and observational zeros and account for risk factors that vary geographically.
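The distinction between structural zeros (counties where cases cannot occur or be reported) and observational zeros (counties where a nonzero-rate process happened to produce zero cases) that hurdle and zero-inflated models exploit can be illustrated with a small simulation. This is a reader's sketch, not FoodNet's model; the parameter values (`pi_zero`, `r`, `p`) are arbitrary illustrations:

```python
import numpy as np

def zinb_sample(n, pi_zero, r, p, rng=None):
    """Draw zero-inflated negative binomial counts: with probability
    pi_zero the observation is a structural zero; otherwise it is an
    NB(r, p) count, which can itself be an observational zero."""
    rng = rng or np.random.default_rng(42)
    structural = rng.random(n) < pi_zero
    counts = rng.negative_binomial(r, p, size=n)
    counts[structural] = 0
    return counts

rng = np.random.default_rng(1)
nb = rng.negative_binomial(2, 0.4, size=100_000)          # plain NB counts
zinb = zinb_sample(100_000, pi_zero=0.3, r=2, p=0.4)      # zero-inflated NB

# Plain NB(2, 0.4) has P(0) = 0.4**2 = 0.16; zero inflation raises the
# zero fraction to roughly 0.3 + 0.7 * 0.16 = 0.412.
nb_zero_frac, zinb_zero_frac = (nb == 0).mean(), (zinb == 0).mean()
```

Fitting such models to real surveillance data would use a package such as statsmodels rather than this hand-rolled sampler; the point here is only the excess-zeros mechanism the abstract describes.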
Multi-Body Analysis of a Tiltrotor Configuration
NASA Technical Reports Server (NTRS)
Ghiringhelli, G. L.; Masarati, P.; Mantegazza, P.; Nixon, M. W.
1997-01-01
The paper describes the aeroelastic analysis of a tiltrotor configuration. The 1/5 scale wind tunnel semispan model of the V-22 tiltrotor aircraft is considered. The analysis is performed by means of a multi-body code, based on an original formulation. The differential equilibrium problem is stated in terms of first order differential equations. The equilibrium equations of every rigid body are written, together with the definitions of the momenta. The bodies are connected by kinematic constraints, applied in the form of Lagrangian multipliers. Deformable components are mainly modelled by means of beam elements, based on an original finite volume formulation. Multi-disciplinary problems can be solved by adding user-defined differential equations. In the presented analysis the equations related to the control of the swash-plate of the model are considered. Advantages of a multi-body aeroelastic code over existing comprehensive rotorcraft codes include the exact modelling of the kinematics of the hub, the detailed modelling of the flexibility of critical hub components, and the possibility to simulate steady flight conditions as well as wind-up and maneuvers. The simulations described in the paper include: 1) the analysis of the aeroelastic stability, with particular regard to the proprotor/pylon instability that is peculiar to tiltrotors, 2) the determination of the dynamic behavior of the system and of the loads due to typical maneuvers, with particular regard to the conversion from helicopter to airplane mode, and 3) the stress evaluation in critical components, such as the pitch links and the conversion downstop spring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiongwen; Post, Wilfred M; Norby, Richard J
2011-01-01
Soil respiration is an important component of the global carbon cycle and is highly responsive to changes in soil temperature and moisture. Accurate prediction of soil respiration and its changes under future climatic conditions requires a clear understanding of the processes involved. In spite of this, most current empirical soil respiration models incorporate just a few of the underlying mechanisms that may influence its response. In this study, a new partial process-based component model built on source components of soil respiration was tested using data collected from a multi-factor climate change experiment that manipulates CO2 concentrations, temperature and precipitation. These results were then compared to results generated using several other established models. The component model we tested performed well across different treatments of global climate change. In contrast, some other models, which worked well when predicting under ambient environmental conditions, were unable to predict the changes under different climate change treatments. Based on the component model, the relative proportions of heterotrophic respiration (Rh) in the total soil respiration at different treatments varied from 0.33 to 0.85. There is a significant increase in the proportion of Rh under the elevated atmospheric CO2 concentration in comparison to ambient conditions. The dry treatment resulted in a higher proportion of Rh at elevated CO2 and ambient T than under elevated CO2 and elevated T. Also, the ratios between root growth and root maintenance respiration varied across different treatments. Neither increased temperature nor elevated atmospheric CO2 changed Q10 values significantly, while the average Q10 value at wet sites was significantly higher than that at dry sites.
There was a higher possibility of increased soil respiration under drying relative to wetting conditions across all treatments based on monthly data, indicating that soil respiration may also be related to soil moisture at previous time periods. Our results reveal that the extent, time delay and contribution of different source components need to be included in mechanistic/process-based soil respiration models at the corresponding scale.
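The Q10 relationship discussed above has a simple closed form: respiration multiplies by Q10 for every 10 degree C rise above a reference temperature. A minimal sketch, with invented parameter values, of computing that response and recovering Q10 from paired temperature/respiration observations:

```python
import numpy as np

def q10_respiration(temp_c, r_ref=1.0, t_ref=10.0, q10=2.0):
    """Empirical Q10 temperature response: R(T) = R_ref * Q10**((T - T_ref)/10).
    Parameter values are illustrative placeholders, not from the study."""
    return r_ref * q10 ** ((np.asarray(temp_c, dtype=float) - t_ref) / 10.0)

def fit_q10(temp_c, resp, t_ref=10.0):
    """Recover Q10 and the reference-temperature rate by linear
    regression on log-rates: log R is linear in T with slope log(Q10)/10."""
    t = np.asarray(temp_c, dtype=float)
    slope, intercept = np.polyfit(t, np.log(resp), 1)
    return np.exp(10.0 * slope), np.exp(intercept + slope * t_ref)

temps = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
resp = q10_respiration(temps, r_ref=2.0, q10=2.4)   # synthetic, noiseless data
q10_hat, r_ref_hat = fit_q10(temps, resp)           # recovers 2.4 and 2.0
```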
NASA Astrophysics Data System (ADS)
Caldararu, S.; Kern, M.; Engel, J.; Zaehle, S.
2016-12-01
Despite recent advances in global vegetation models, we still lack the capacity to predict observed vegetation responses to experimental environmental changes such as elevated CO2, increased temperature or nutrient additions. In particular for elevated CO2 (FACE) experiments, studies have shown that this is related in part to the models' inability to represent plastic changes in nutrient use and biomass allocation. We present a newly developed vegetation model which aims to overcome these problems by including optimality processes to describe nitrogen (N) and carbon allocation within the plant. We represent nitrogen allocation to the canopy, and within the canopy between photosynthetic components, as an optimal process that aims to maximize net primary production (NPP) of the plant. We also represent biomass investment into aboveground and belowground components (root nitrogen uptake, biological N fixation) as an optimal process that maximizes plant growth by considering plant carbon and nutrient demands as well as acquisition costs. The model can now represent plastic changes in canopy N content and chlorophyll and Rubisco concentrations as well as in belowground allocation, both on seasonal and inter-annual time scales. Specifically, we show that under elevated CO2 conditions, the model predicts a lower optimal leaf N concentration, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry would predict a quick onset of N limitation. In general, our model aims to include physiologically-based plant processes and avoid arbitrarily imposed parameters and thresholds in order to improve our predictive capability of vegetation responses under changing environmental conditions.
Polyelectrolyte Structure and Interactions in Model Cystic Fibrosis Sputum
NASA Astrophysics Data System (ADS)
Slimmer, Scott; Angelini, Thomas; Liang, Hongjun; Butler, John; Wong, Gerard C. L.
2002-03-01
Cystic fibrosis sputum is a complex fluid consisting of a number of components, including mucin (a glycoprotein), lysozyme (a cationic polypeptide), water, salt, as well as a high concentration of a number of anionic biological polyelectrolytes such as DNA and F-actin. The interactions governing these components are poorly understood, but may have important clinical consequences. For example, the formation of these biological polyelectrolytes into ordered gel phases may contribute significantly to the observed high viscosity of CF sputum. In this work, a number of model systems were created to simulate CF sputum in vitro, in order to elucidate the contributions of the different components. Preliminary results will be presented. This work was supported by NSF DMR-0071761, DOE DEFG02-91ER45439, the Beckman Young Investigator Program, and the Cystic Fibrosis Foundation.
Automatic discovery of the communication network topology for building a supercomputer model
NASA Astrophysics Data System (ADS)
Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim
2016-10-01
The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
The diffusion decision model: theory and data for two-choice decision tasks.
Ratcliff, Roger; McKoon, Gail
2008-04-01
The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral data (accuracy, mean response times, and response time distributions) into components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.
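The core mechanism of the diffusion decision model (noisy evidence accumulating toward one of two boundaries) can be sketched with a short Euler-Maruyama simulation. This is an illustrative toy, not the authors' fitting procedure, and all parameter values below are invented:

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, start=0.5, noise=1.0, dt=0.002,
                 n_trials=1000, max_steps=5000, rng=None):
    """Euler-Maruyama simulation of a two-boundary diffusion process.
    Evidence starts at start*boundary and drifts until it hits 0 (error)
    or boundary (correct). Returns (accuracy, mean RT of correct trials)."""
    rng = rng or np.random.default_rng(0)
    x = np.full(n_trials, start * boundary)   # accumulated evidence
    t = np.zeros(n_trials)                    # decision time per trial
    active = np.ones(n_trials, dtype=bool)
    for _ in range(max_steps):
        if not active.any():
            break
        n_act = int(active.sum())
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_act)
        t[active] += dt
        active &= (x > 0.0) & (x < boundary)  # deactivate absorbed trials
    correct = x >= boundary
    return float(correct.mean()), float(t[correct].mean())

# Stimulus difficulty maps onto drift rate: an easier stimulus (larger
# drift) should produce higher accuracy and faster correct responses.
easy_acc, easy_rt = simulate_ddm(drift=2.0)
hard_acc, hard_rt = simulate_ddm(drift=0.5)
```

Manipulating `boundary` instead of `drift` mimics the speed/accuracy instruction effect the abstract describes, and shifting `start` away from 0.5 mimics a starting-point bias.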
Merrill, E A; Gearhart, J M; Sterner, T R; Robinson, P J
2008-07-01
n-Decane is considered a major component of various fuels and industrial solvents. These hydrocarbon products are complex mixtures of hundreds of components, including straight-chain alkanes, branched chain alkanes, cycloalkanes, diaromatics, and naphthalenes. Human exposures to the jet fuel, JP-8, or to industrial solvents in vapor, aerosol, and liquid forms all have the potential to produce health effects, including immune suppression and/or neurological deficits. A physiologically based pharmacokinetic (PBPK) model has previously been developed for n-decane, in which partition coefficients (PC), fitted to 4-h exposure kinetic data, were used in preference to measured values. The greatest discrepancy between fitted and measured values was for fat, where PC values were changed from 250-328 (measured) to 25 (fitted). Such a large change in a critical parameter, without any physiological basis, greatly impedes the model's extrapolative abilities, as well as its applicability for assessing the interactions of n-decane or similar alkanes with other compounds in a mixture model. Due to these limitations, the model was revised. Our approach emphasized the use of experimentally determined PCs because many tissues had not approached steady-state concentrations by the end of the 4-h exposures. Diffusion limitation was used to describe n-decane kinetics for the brain, perirenal fat, skin, and liver. Flow limitation was used to describe the remaining rapidly and slowly perfused tissues. As expected from the high lipophilicity of this semivolatile compound (log K(ow) = 5.25), sensitivity analyses showed that parameters describing fat uptake were next to blood:air partitioning and pulmonary ventilation as critical in determining overall systemic circulation and uptake in other tissues. 
In our revised model, partitioning into fat took multiple days to reach steady state, which differed considerably from the previous model that assumed steady-state conditions in fat at 4 h post dosing with 1200 ppm. Due to these improvements, and particularly the reconciliation between measured and fitted partition coefficients, especially fat, we have greater confidence in using the proposed model for dose, species, and route of exposure extrapolations and as a harmonized model approach for other hydrocarbon components of mixtures.
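The contrast between flow-limited and diffusion-limited tissue descriptions, and why a high-partition fat compartment equilibrates so slowly, can be illustrated with a toy two-tissue uptake sketch. This is not the published PBPK model; the arterial concentration is held constant and every parameter value below is invented for illustration:

```python
def simulate_tissues(hours=96.0, dt=0.01):
    """Toy tissue-uptake comparison under a constant arterial concentration.
    The flow-limited tissue exchanges at a rate set by blood flow; the
    diffusion-limited fat compartment exchanges at a rate set by a small
    permeability-area product, and its large partition coefficient makes
    its approach to steady state very slow. Returns the fraction of
    steady state reached by each tissue. All parameters are invented."""
    c_art = 1.0                                   # arterial concentration
    v_fat, p_fat, pa_fat = 2.0, 300.0, 0.5        # volume, partition, permeability-area
    q_liv, v_liv, p_liv = 0.25, 1.5, 5.0          # flow, volume, partition
    c_fat = c_liv = 0.0
    for _ in range(int(hours / dt)):
        # diffusion-limited: uptake driven by the permeability-area product
        c_fat += dt * pa_fat * (c_art - c_fat / p_fat) / v_fat
        # flow-limited: uptake driven by blood flow through the tissue
        c_liv += dt * q_liv * (c_art - c_liv / p_liv) / v_liv
    return c_fat / (c_art * p_fat), c_liv / (c_art * p_liv)

# After 96 h the flow-limited tissue is essentially at steady state,
# while the diffusion-limited fat compartment is still far from it.
fat_frac, liv_frac = simulate_tissues()
```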
The lift-fan aircraft: Lessons learned
NASA Technical Reports Server (NTRS)
Deckert, Wallace H.
1995-01-01
This report summarizes the highlights and results of a workshop held at NASA Ames Research Center in October 1992. The objective of the workshop was a thorough review of the lessons learned from past research on lift fans, and lift-fan aircraft, models, designs, and components. The scope included conceptual design studies, wind tunnel investigations, propulsion systems components, piloted simulation, flight of aircraft such as the SV-5A and SV-5B, and a recent lift-fan aircraft development project. The report includes a brief summary of five technical presentations that addressed the subject "The Lift-Fan Aircraft: Lessons Learned."
NASA Astrophysics Data System (ADS)
Kowalski, A. F.; Hawley, S. L.; Holtzman, J. A.; Wisniewski, J. P.; Hilton, E. J.
2012-03-01
The white light during M dwarf flares has long been known to exhibit the broadband shape of a T≈10 000 K blackbody, and the white light in solar flares is thought to arise primarily from hydrogen recombination. Yet, a current lack of broad-wavelength coverage solar flare spectra in the optical/near-UV region prohibits a direct comparison of the continuum properties to determine if they are indeed so different. New spectroscopic observations of a secondary flare during the decay of a megaflare on the dM4.5e star YZ CMi have revealed multiple components in the white-light continuum of stellar flares, including both a blackbody-like spectrum and a hydrogen-recombination spectrum. One of the most surprising findings is that these two components are anti-correlated in their temporal evolution. We combine initial phenomenological modeling of the continuum components with spectra from radiative hydrodynamic models to show that continuum veiling causes the measured anti-correlation. This modeling allows us to use the components' inferred properties to predict how a similar spatially resolved, multiple-component, white-light continuum might appear using analogies to several solar-flare phenomena. We also compare the properties of the optical stellar flare white light to Ellerman bombs on the Sun.
Orthology for comparative genomics in the mouse genome database.
Dolan, Mary E; Baldarelli, Richard M; Bello, Susan M; Ni, Li; McAndrews, Monica S; Bult, Carol J; Kadin, James A; Richardson, Joel E; Ringwald, Martin; Eppig, Janan T; Blake, Judith A
2015-08-01
The mouse genome database (MGD) is the model organism database component of the mouse genome informatics system at The Jackson Laboratory. MGD is the international data resource for the laboratory mouse and facilitates the use of mice in the study of human health and disease. Since its beginnings, MGD has included comparative genomics data with a particular focus on human-mouse orthology, an essential component of the use of mouse as a model organism. Over the past 25 years, novel algorithms and addition of orthologs from other model organisms have enriched comparative genomics in MGD data, extending the use of orthology data to support the laboratory mouse as a model of human biology. Here, we describe current comparative data in MGD and review the history and refinement of orthology representation in this resource.
Thresholds for conservation and management: structured decision making as a conceptual framework
Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.
2014-01-01
Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.
Why are you telling me that? A conceptual model of the social function of autobiographical memory.
Alea, Nicole; Bluck, Susan
2003-03-01
In an effort to stimulate and guide empirical work within a functional framework, this paper provides a conceptual model of the social functions of autobiographical memory (AM) across the lifespan. The model delineates the processes and variables involved when AMs are shared to serve social functions. Components of the model include: lifespan contextual influences, the qualitative characteristics of memory (emotionality and level of detail recalled), the speaker's characteristics (age, gender, and personality), the familiarity and similarity of the listener to the speaker, the level of responsiveness during the memory-sharing process, and the nature of the social relationship in which the memory sharing occurs (valence and length of the relationship). These components are shown to influence the type of social function served and/or the extent to which social functions are served. Directions for future empirical work to substantiate the model and hypotheses derived from the model are provided.
Equilibrium Conformations of Concentric-tube Continuum Robots
Rucker, D. Caleb; Webster, Robert J.; Chirikjian, Gregory S.; Cowan, Noah J.
2013-01-01
Robots consisting of several concentric, preshaped, elastic tubes can work dexterously in narrow, constrained, and/or winding spaces, as are commonly found in minimally invasive surgery. Previous models of these “active cannulas” assume piecewise constant precurvature of component tubes and neglect torsion in curved sections of the device. In this paper we develop a new coordinate-free energy formulation that accounts for general preshaping of an arbitrary number of component tubes, and which explicitly includes both bending and torsion throughout the device. We show that previously reported models are special cases of our formulation, and then explore in detail the implications of torsional flexibility for the special case of two tubes. Experiments demonstrate that this framework is more descriptive of physical prototype behavior than previous models; it reduces model prediction error by 82% over the calibrated bending-only model, and 17% over the calibrated transmissional torsion model in a set of experiments. PMID:25125773
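The bending-only special case that the abstract cites as prior work has a simple closed form: with torsion neglected and piecewise-constant precurvatures, the equilibrium curvature of overlapping tubes is the bending-stiffness-weighted average of the component precurvatures. A minimal numeric sketch of that prior-model special case (stiffness and curvature values here are hypothetical, and this is not the paper's full energy formulation):

```python
import numpy as np

# Bending-only equilibrium for two overlapping concentric tubes:
# the combined curvature is the stiffness-weighted average of the
# individual precurvatures. All numbers below are illustrative.
k = np.array([1.0, 0.4])                 # bending stiffnesses (EI) of the tubes
u = np.array([[10.0, 0.0],               # precurvature vector of tube 1 (1/m)
              [0.0, 6.0]])               # precurvature vector of tube 2 (1/m)

# Equilibrium curvature vector: sum_i k_i * u_i / sum_i k_i
u_eq = (k[:, None] * u).sum(axis=0) / k.sum()
```

The stiffer tube dominates the resultant curvature, which is why prior bending-only models break down when torsional windup between tubes becomes significant, the regime the paper's energy formulation addresses.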
Model of human visual-motion sensing
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.
1985-01-01
A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
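The spectral constraint the abstract describes can be stated numerically: for a rigidly translating image, the temporal frequency at any spatial frequency equals the dot product of that spatial frequency with the image velocity. A small sketch of that relationship and of how a sensor's temporal response encodes the velocity component along its preferred direction (all numeric values are hypothetical):

```python
import numpy as np

# Motion-plane constraint: for an image translating at velocity (vx, vy),
# spectral support lies on the plane w = fx*vx + fy*vy.
fx, fy = 2.0, 1.0        # spatial frequency a sensor is tuned to (cycles/deg)
vx, vy = 3.0, -1.0       # image velocity (deg/s)

w = fx * vx + fy * vy    # temporal frequency of that sensor's response (Hz)

# The sensor's temporal frequency encodes the component of image velocity
# along the sensor's preferred direction (the unit vector along (fx, fy)):
f_mag = np.hypot(fx, fy)
v_component = w / f_mag
```

Resolving these one-dimensional velocity components across sensors tuned to different directions is the model's second stage, which recovers the full two-dimensional velocity.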
Nishino, Ko; Lombardi, Stephen
2011-01-01
We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.
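The mixture structure described above can be sketched schematically: each lobe is a function of the cosine of an angle on the hemisphere with a concentration and a shape parameter, and the BRDF value is a weighted sum of lobes. The lobe form below is an illustrative exponential-power-style stand-in, not the paper's exact parameterization or normalization, and all parameter values are hypothetical:

```python
import numpy as np

def lobe(cos_theta, kappa, gamma):
    # Schematic hemispherical exponential-power-style lobe: sharper for larger
    # kappa, with gamma controlling the falloff shape. expm1 keeps the lobe
    # zero when the exponent is zero.
    return np.expm1(kappa * np.clip(cos_theta, 0.0, 1.0) ** gamma)

def mixture_brdf(cos_theta, weights, kappas, gammas):
    # BRDF value as a weighted sum of K lobes. The number of components K
    # and the per-lobe parameters would be estimated from measured data.
    return sum(w * lobe(cos_theta, k, g)
               for w, k, g in zip(weights, kappas, gammas))

# Two-lobe example: a broad diffuse-like lobe plus a sharp specular-like lobe.
val = mixture_brdf(0.9,
                   weights=[0.7, 0.3],
                   kappas=[5.0, 50.0],
                   gammas=[1.0, 2.0])
```

Because each lobe is a separate additive term, decompositions like the reflection component separation mentioned in the abstract amount to evaluating subsets of the mixture.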
SBML Level 3 package: Groups, Version 1 Release 1
Hucka, Michael; Smith, Lucian P.
2017-01-01
Biological models often contain components that have relationships with each other, or that modelers want to treat as belonging to groups with common characteristics or shared metadata. The SBML Level 3 Version 1 Core specification does not provide an explicit mechanism for expressing such relationships, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Groups package for SBML Level 3 adds the necessary features to SBML to allow grouping of model components to be expressed. Such groups do not affect the mathematical interpretation of a model, but they do provide a way to add information that can be useful for modelers and software tools. The SBML Groups package enables a modeler to include definitions of groups and nested groups, each of which may be annotated to convey why that group was created, and what it represents. PMID:28187406
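A minimal sketch of what a Groups-package fragment looks like and how a tool might read it without affecting the model's mathematics. Element and attribute names follow the SBML Level 3 Groups specification; the group id and the species ids "ATP" and "ADP" are hypothetical:

```python
import xml.etree.ElementTree as ET

# Namespace URI of the SBML Level 3 Groups package, Version 1.
GROUPS_NS = "http://www.sbml.org/sbml/level3/version1/groups/version1"

# A minimal listOfGroups fragment: one group of kind "classification"
# collecting two (hypothetical) species by reference.
fragment = f"""
<groups:listOfGroups xmlns:groups="{GROUPS_NS}">
  <groups:group groups:id="energy_carriers" groups:kind="classification">
    <groups:listOfMembers>
      <groups:member groups:idRef="ATP"/>
      <groups:member groups:idRef="ADP"/>
    </groups:listOfMembers>
  </groups:group>
</groups:listOfGroups>
"""

# A consumer tool can enumerate members; doing so changes nothing about
# the model's mathematical interpretation, per the Groups specification.
root = ET.fromstring(fragment)
members = root.findall(f".//{{{GROUPS_NS}}}member")
ids = [m.get(f"{{{GROUPS_NS}}}idRef") for m in members]
```

In practice one would use a library such as libSBML rather than raw XML handling, but the fragment shows the grouping construct the abstract describes.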