Sample records for process model results

  1. Dynamic Modeling of Process Technologies for Closed-Loop Water Recovery Systems

    NASA Technical Reports Server (NTRS)

    Allada, Rama Kumar; Lange, Kevin; Anderson, Molly

    2011-01-01

    Detailed chemical process simulations are a useful tool in designing and optimizing complex systems and architectures for human life support. Dynamic and steady-state models of these systems help contrast the interactions of various operating parameters and hardware designs, which become extremely useful in trade-study analyses. NASA's Exploration Life Support technology development project recently made use of such models to complement a series of tests on different wastewater distillation systems. This paper presents dynamic chemical process simulations for primary processor technologies including the Cascade Distillation System (CDS), the Vapor Compression Distillation (VCD) system, the Wiped-Film Rotating Disk (WFRD), and post-distillation water polishing processes such as the Volatiles Removal Assembly (VRA), developed using the Aspen Custom Modeler and Aspen Plus process simulation tools. The results expand upon previous work for water recovery technology models and emphasize dynamic process modeling and results. The paper discusses system design, modeling details, and model results for each technology and presents some comparisons between the model results and available test data. Following these initial comparisons, some general conclusions and forward work are discussed.

  2. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-05-01

    This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of additively manufactured parts produced by directed energy deposition with blown powder as well as by other additive manufacturing processes. Critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  3. Predictive modeling capabilities from incident powder and laser to mechanical properties for laser directed energy deposition

    NASA Astrophysics Data System (ADS)

    Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda

    2018-01-01

    This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of additively manufactured parts produced by directed energy deposition with blown powder as well as by other additive manufacturing processes. Critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results along with corresponding experimental validation are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.

  4. Dynamic Modeling of Process Technologies for Closed-Loop Water Recovery Systems

    NASA Technical Reports Server (NTRS)

    Allada, Rama Kumar; Lange, Kevin E.; Anderson, Molly S.

    2012-01-01

    Detailed chemical process simulations are a useful tool in designing and optimizing complex systems and architectures for human life support. Dynamic and steady-state models of these systems help contrast the interactions of various operating parameters and hardware designs, which become extremely useful in trade-study analyses. NASA's Exploration Life Support technology development project recently made use of such models to complement a series of tests on different wastewater distillation systems. This paper presents dynamic chemical process simulations for primary processor technologies including the Cascade Distillation System (CDS), the Vapor Compression Distillation (VCD) system, the Wiped-Film Rotating Disk (WFRD), and post-distillation water polishing processes such as the Volatiles Removal Assembly (VRA). These dynamic models were developed using the Aspen Custom Modeler® and Aspen Plus® process simulation tools. The results expand upon previous work for water recovery technology models and emphasize dynamic process modeling and results. The paper discusses system design, modeling details, and model results for each technology and presents some comparisons between the model results and available test data. Following these initial comparisons, some general conclusions and forward work are discussed.

  5. Orthogonal Gaussian process models

    DOE PAGES

    Plumlee, Matthew; Joseph, V. Roshan

    2017-01-01

    Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.
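
    As a rough illustration of the orthogonality idea, the sketch below builds a covariance whose sample paths are numerically orthogonal to a polynomial mean basis by subtracting the kernel's projection onto that basis. The RBF kernel, the quadrature grid, and all names are assumptions for illustration, not the authors' construction.

```python
import numpy as np

# Hedged sketch: make a GP covariance numerically orthogonal to a mean basis
# by subtracting the kernel's projection onto the basis (a Schur complement).
# Kernel choice, quadrature grid, and names are illustrative assumptions.

def rbf(x, y, ls=0.3):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ls ** 2)

basis = [lambda u: np.ones_like(u), lambda u: u]   # constant + linear mean terms

u = np.linspace(0.0, 1.0, 200)            # quadrature nodes on [0, 1]
w = np.full_like(u, 1.0 / u.size)         # uniform quadrature weights
A = np.stack([f(u) for f in basis]) * w   # (p, m): weighted basis at the nodes

def k_orth(x, y, ls=0.3):
    """Covariance whose sample paths are ~orthogonal to the mean basis."""
    h_x = rbf(x, u, ls) @ A.T             # h_j(x) ~ integral of k(x, t) f_j(t) dt
    h_y = rbf(y, u, ls) @ A.T
    G = A @ rbf(u, u, ls) @ A.T           # G_jl ~ double integral of f_j k f_l
    return rbf(x, y, ls) - h_x @ np.linalg.solve(G, h_y.T)

x = np.linspace(0, 1, 5)
print(np.round(k_orth(x, x), 4))          # symmetric, positive semidefinite
```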

  6. Orthogonal Gaussian process models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plumlee, Matthew; Joseph, V. Roshan

    Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.

  7. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine performance tests readily accessible will help advance a more transparent model evaluation process.

  8. The human body metabolism process mathematical simulation based on Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Oliynyk, Andriy; Oliynyk, Eugene; Pyptiuk, Olexandr; Dzierżak, Róża; Szatkowska, Małgorzata; Uvaysova, Svetlana; Kozbekova, Ainur

    2017-08-01

    A mathematical model of the metabolism process in the human organism based on the Lotka-Volterra model has been proposed, considering the healing regime, the nutrition system, and the features of the insulin and sugar fragmentation process in the organism. A numerical algorithm for the model using the fourth-order Runge-Kutta method has been implemented. Based on the calculation results, conclusions are drawn, recommendations on using the modeling results are given, and directions for further research are defined.
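
    For concreteness, here is a minimal sketch of the numerical scheme the abstract names: classical fourth-order Runge-Kutta integration of a Lotka-Volterra system. The state variables and rate constants are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Hedged sketch: RK4 integration of a Lotka-Volterra system, standing in for
# the paper's metabolism model. The roles assigned to the states (sugar s,
# insulin i) and the constants a, b, c, d are illustrative assumptions.

def lotka_volterra(t, y, a=1.0, b=0.4, c=0.3, d=1.2):
    s, i = y                                 # "prey" (sugar), "predator" (insulin)
    return np.array([a * s - b * s * i,
                     c * s * i - d * i])

def rk4(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, np.asarray(y0, float)
    out = [y.copy()]
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        out.append(y.copy())
    return np.array(out)

traj = rk4(lotka_volterra, y0=[2.0, 1.0], t0=0.0, t1=20.0, n=2000)
print(traj[-1])   # state after 20 time units
```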

  9. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  10. Conceptual-level workflow modeling of scientific experiments using NMR as a case study

    PubMed Central

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-01

    Background: Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. Results: We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Conclusion: Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy experiment. PMID:17263870

  11. Granularity as a Cognitive Factor in the Effectiveness of Business Process Model Reuse

    NASA Astrophysics Data System (ADS)

    Holschke, Oliver; Rake, Jannis; Levina, Olga

    Reusing design models is an attractive approach in business process modeling, as modeling efficiency and the quality of design outcomes may be significantly improved. However, reusing conceptual models is not cost-free but has to be carefully designed. While factors such as psychological anchoring and task adequacy in reuse-based modeling tasks have been investigated, information granularity as a cognitive concept has not yet been at the center of empirical research. We hypothesize that business process granularity, as a factor in design tasks under reuse, has a significant impact on the effectiveness of the resulting business process models. We test our hypothesis in a comparative study employing high and low granularities. The reusable processes provided were taken from widely accessible reference models for the telecommunication industry (enhanced Telecom Operations Map). First experimental results show that Recall in tasks involving coarser granularity is lower than in tasks involving finer granularity. These findings suggest that decision makers in business process management should be considerate with regard to the implementation of reuse mechanisms of different granularities. We realize that due to our small sample size the results are not statistically significant, but this preliminary run shows that the experiment is ready to be run on a larger scale.

  12. A Modified Isotropic-Kinematic Hardening Model to Predict the Defects in Tube Hydroforming Process

    NASA Astrophysics Data System (ADS)

    Jin, Kai; Guo, Qun; Tao, Jie; Guo, Xun-zhong

    2017-11-01

    Numerical simulations of the tube hydroforming process for hollow crankshafts were conducted using the finite element analysis method. Moreover, a modified model integrating an isotropic-kinematic hardening model with a ductile criterion model was used to more accurately optimize process parameters such as internal pressure, feed distance and friction coefficient. Subsequently, hydroforming experiments were performed based on the simulation results. The comparison between experimental and simulation results indicated that the prediction of tube deformation, cracking and wrinkling was quite accurate for the tube hydroforming process. Finally, hollow crankshafts with high thickness uniformity were obtained, and the thickness distributions from the numerical and experimental results were in good agreement.

  13. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs, and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be estimated. Further, PIA improved the model results, showing itself to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
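
    One common identifiability screen works from the sensitivity matrix: columns that are small or nearly collinear flag parameters that cannot be estimated jointly. The sketch below uses a deliberately redundant toy model, not the paper's fermentation model; all names and values are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a sensitivity-based identifiability screen. The toy model
# (exponential growth where k1 and k2 enter only as a sum) is an illustrative
# assumption, chosen so that two sensitivity columns are exactly collinear.

def model(t, p):
    k1, k2, ymax = p
    return ymax * (1 - np.exp(-(k1 + k2) * t))

t = np.linspace(0.1, 10, 50)
p0 = np.array([0.3, 0.2, 5.0])

# Central-difference sensitivity matrix S[i, j] = d y(t_i) / d p_j
S = np.empty((t.size, p0.size))
for j in range(p0.size):
    dp = np.zeros_like(p0)
    dp[j] = 1e-6 * max(abs(p0[j]), 1.0)
    S[:, j] = (model(t, p0 + dp) - model(t, p0 - dp)) / (2 * dp[j])

# Normalize columns and inspect singular values: near-zero values indicate
# parameter combinations that the data cannot resolve (here, k1 vs. k2).
Sn = S / np.linalg.norm(S, axis=0)
print("singular values:", np.linalg.svd(Sn, compute_uv=False))
```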

  14. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    PubMed

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model achieves a CV R^2 of 0.81, reflecting higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
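
    The cross-validated comparison the abstract reports can be mimicked on synthetic data. The sketch below contrasts CV R^2 for a Gaussian process regressor and a plain linear model; the data generator and kernel are illustrative assumptions, and the sklearn fit is a simplified stand-in for the paper's full Bayesian hierarchical model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hedged sketch: GP vs. linear model on synthetic AOD-like data with a
# smooth spatial effect. All quantities below are illustrative assumptions.
rng = np.random.default_rng(0)
n = 300
X = rng.uniform(0, 1, size=(n, 3))                       # toy: AOD, lon, lat
y = (30 * X[:, 0]                                        # linear AOD effect
     + 10 * np.sin(4 * X[:, 1]) * np.cos(4 * X[:, 2])    # spatial effect
     + rng.normal(0, 1.0, n))                            # noise

gp = GaussianProcessRegressor(kernel=1.0 * RBF([0.5, 0.5, 0.5]) + WhiteKernel(),
                              normalize_y=True)
lm = LinearRegression()

print("GP CV R^2:", cross_val_score(gp, X, y, cv=5, scoring="r2").mean())
print("LM CV R^2:", cross_val_score(lm, X, y, cv=5, scoring="r2").mean())
```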

  15. Event-based hydrological modeling for detecting dominant hydrological process and suitable model strategy for semi-arid catchments

    NASA Astrophysics Data System (ADS)

    Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng

    2016-11-01

    Properly simulating the hydrological processes of semi-arid areas is still challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel rather than in series. In addition, the results show that parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process over semi-arid catchments is infiltration-excess surface runoff, a nonlinear component.
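
    As a toy illustration of the structural finding (parallel nonlinear components combined by multiplication), consider the sketch below. The component forms and parameter values are illustrative assumptions, not the paper's model structures.

```python
# Hedged sketch: two nonlinear runoff components in parallel, combined by
# multiplication rather than addition, as the study's experiments favored.
# Functional forms and parameters are illustrative assumptions.

def infiltration_excess(rain, f_cap=8.0):
    """Nonlinear threshold: runoff only when rainfall intensity exceeds capacity."""
    return max(0.0, rain - f_cap)

def storage_nonlinearity(soil_moisture, s_max=100.0, beta=2.0):
    """Nonlinear scaling of the runoff response with soil wetness."""
    return (min(soil_moisture, s_max) / s_max) ** beta

def runoff(rain, soil_moisture):
    # Parallel nonlinear components combined multiplicatively.
    return infiltration_excess(rain) * storage_nonlinearity(soil_moisture)

for rain in (5.0, 12.0, 25.0):
    print(rain, "->", round(runoff(rain, soil_moisture=60.0), 3))
```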

  16. Modelling of peak temperature during friction stir processing of magnesium alloy AZ91

    NASA Astrophysics Data System (ADS)

    Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir processing (FSP) is a solid-state processing technique with the potential to modify the properties of a material through microstructural modification. The study of heat transfer in FSP aids in the identification of defects such as flash, inadequate heat input, and poor material flow and mixing. In this paper, the transient temperature distribution during FSP of magnesium alloy AZ91 was simulated using finite element modelling. The numerical model results were validated against experimental results from the published literature. The model was used to predict the peak temperature obtained during FSP for various process parameter combinations. The simulated peak temperature results were used to develop a statistical model. The effects of the process parameters, namely tool rotation speed, tool traverse speed and tool shoulder diameter, on the peak temperature were investigated using the developed statistical model. It was found that the peak temperature was directly proportional to the tool rotation speed and shoulder diameter and inversely proportional to the tool traverse speed.
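
    A statistical model with those proportionalities can be fit as a power law by least squares in log space. The sketch below uses synthetic data generated with known exponents (so the fit should recover positive exponents for rotation speed and shoulder diameter and a negative one for traverse speed); the model form and all numbers are illustrative assumptions, not the paper's fit.

```python
import numpy as np

# Hedged sketch: fit T = c * N^a * D^b * v^d (rotation speed N, shoulder
# diameter D, traverse speed v) by linear least squares on log-transformed
# synthetic data. Exponents and noise level are illustrative assumptions.
rng = np.random.default_rng(1)
N = rng.uniform(800, 1400, 30)            # tool rotation speed, rpm
D = rng.uniform(12, 21, 30)               # shoulder diameter, mm
v = rng.uniform(20, 90, 30)               # traverse speed, mm/min
T = 30 * N**0.35 * D**0.40 * v**-0.15 * rng.lognormal(0.0, 0.01, 30)

X = np.column_stack([np.ones(30), np.log(N), np.log(D), np.log(v)])
(logc, a, b, d), *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
print(f"T ~ {np.exp(logc):.1f} * N^{a:.2f} * D^{b:.2f} * v^{d:.2f}")
# Expect a > 0, b > 0, d < 0, matching the reported trends.
```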

  17. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are a combination of visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The results of the evaluation show that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  18. Modeling microbiological and chemical processes in municipal solid waste bioreactor, Part II: Application of numerical model BIOKEMOD-3P.

    PubMed

    Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh

    2010-02-01

    Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires knowledge of the various process reactions and the corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and is often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user-defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in applying this model to full-scale landfill operation.

  19. Urban Expansion Modeling Approach Based on Multi-Agent System and Cellular Automata

    NASA Astrophysics Data System (ADS)

    Zeng, Y. N.; Yu, M. M.; Li, S. N.

    2018-04-01

    Urban expansion is a land-use change process that transforms non-urban land into urban land. This process results in the loss of natural vegetation and an increase in impervious surfaces. Urban expansion also alters hydrologic cycling, atmospheric circulation, and nutrient cycling processes and generates enormous environmental and social impacts. Urban expansion monitoring and modeling are crucial to understanding the urban expansion process, its mechanism, and its environmental impacts, and to predicting urban expansion in future scenarios. Therefore, it is important to study urban expansion monitoring and modeling approaches. We propose to simulate urban expansion by combining cellular automata (CA) and a multi-agent system (MAS). The proposed urban expansion model based on MAS and CA was applied to a case study area of the Changsha-Zhuzhou-Xiangtan urban agglomeration, China. The results show that this model can capture urban expansion with good adaptability. The Kappa coefficient of the simulation results is 0.75, which indicates that the combination of MAS and CA offers a better simulation result.
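
    The Kappa coefficient used to score the simulated map is Cohen's kappa between the simulated and observed rasters. A minimal sketch follows; the two toy maps are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hedged sketch: Cohen's kappa between a simulated and an observed land-use
# raster, the agreement measure the study reports (kappa = 0.75).

def kappa(map_a, map_b):
    a, b = map_a.ravel(), map_b.ravel()
    classes = np.union1d(a, b)
    po = np.mean(a == b)                            # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)      # chance agreement
             for c in classes)
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(0)
observed = rng.integers(0, 2, size=(100, 100))      # 0 = non-urban, 1 = urban
simulated = observed.copy()
flip = rng.random(observed.shape) < 0.10            # 10% disagreement (toy)
simulated[flip] = 1 - simulated[flip]
print("kappa =", round(kappa(observed, simulated), 3))
```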

  20. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
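
    For reference, a minimal sketch of the equal input model in the standard Felsenstein-style parameterization, with the stationary distribution pi fixed (this particular rate normalization is an assumption for illustration):

```latex
% Equal input model on k states with stationary distribution \pi:
% off-diagonal rates depend only on the target state.
Q_{ij} = \pi_j \quad (i \neq j), \qquad Q_{ii} = -(1 - \pi_i).
% Since Q = \Pi - I (rows of \Pi equal \pi), P(t) = e^{Qt} is:
P_{ij}(t) =
\begin{cases}
e^{-t} + (1 - e^{-t})\,\pi_i, & i = j,\\
(1 - e^{-t})\,\pi_j, & i \neq j.
\end{cases}
```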

  1. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    PubMed

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  2. Multi-model comparison on the effects of climate change on tree species in the eastern U.S.: results from an enhanced niche model and process-based ecosystem and landscape models

    Treesearch

    Louis R. Iverson; Frank R. Thompson; Stephen Matthews; Matthew Peters; Anantha Prasad; William D. Dijak; Jacob Fraser; Wen J. Wang; Brice Hanberry; Hong He; Maria Janowiak; Patricia Butler; Leslie Brandt; Chris Swanston

    2016-01-01

    Context. Species distribution models (SDM) establish statistical relationships between the current distribution of species and key attributes whereas process-based models simulate ecosystem and tree species dynamics based on representations of physical and biological processes. TreeAtlas, which uses DISTRIB SDM, and Linkages and LANDIS PRO, process...

  3. Green Pea and Garlic Puree Model Food Development for Thermal Pasteurization Process Quality Evaluation.

    PubMed

    Bornhorst, Ellen R; Tang, Juming; Sablani, Shyam S; Barbosa-Cánovas, Gustavo V; Liu, Fang

    2017-07-01

    Development and selection of model foods is a critical part of microwave thermal process development, simulation validation, and optimization. Previously developed model foods for pasteurization process evaluation utilized Maillard reaction products as the time-temperature integrators, which resulted in similar temperature sensitivity among the models. The aim of this research was to develop additional model foods based on different time-temperature integrators, determine their dielectric properties and color change kinetics, and validate the optimal model food in hot water and microwave-assisted pasteurization processes. Color, quantified using the a* value, was selected as the time-temperature indicator for the green pea and garlic puree model foods. Results showed that 915 MHz microwaves had a greater penetration depth into the green pea model food than into the garlic. a* value reaction rates for the green pea model were approximately 4 times slower than in the garlic model food; slower reaction rates were preferred for the application of the model food in this study, that is, quality evaluation for a target process of 90 °C for 10 min at the cold spot. Pasteurization validation used the green pea model food, and the results showed quantifiable color differences between the unheated control, hot water pasteurization, and the microwave-assisted thermal pasteurization system. Both model foods developed in this research could be utilized for quality assessment and optimization of various thermal pasteurization processes. © 2017 Institute of Food Technologists®.
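
    Color-change kinetics of this kind are often fit with a first-order model. The sketch below estimates a rate constant for an a* trajectory; the kinetic form and all data points are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: first-order kinetics for the a* color indicator,
# a(t) = a_inf + (a0 - a_inf) * exp(-k t). Data are fabricated placeholders.

def first_order(t, a0, a_inf, k):
    return a_inf + (a0 - a_inf) * np.exp(-k * t)

t = np.array([0, 2, 4, 6, 8, 10, 15, 20], float)        # minutes at 90 C (toy)
a_star = np.array([-12.0, -10.1, -8.7, -7.6, -6.8, -6.2, -5.3, -4.9])

popt, _ = curve_fit(first_order, t, a_star, p0=(-12, -4, 0.2))
print("a0=%.2f  a_inf=%.2f  k=%.3f 1/min" % tuple(popt))
```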

  4. Informations in Models of Evolutionary Dynamics

    NASA Astrophysics Data System (ADS)

    Rivoire, Olivier

    2016-03-01

    Biological organisms adapt to changes by processing informations from different sources, most notably from their ancestors and from their environment. We review an approach to quantify these informations by analyzing mathematical models of evolutionary dynamics and show how explicit results are obtained for a solvable subclass of these models. In several limits, the results coincide with those obtained in studies of information processing for communication, gambling or thermodynamics. In the most general case, however, information processing by biological populations shows unique features that motivate the analysis of specific models.

  5. Modeling the curing process of thermosetting resin matrix composites

    NASA Technical Reports Server (NTRS)

    Loos, A. C.

    1986-01-01

    A model is presented for simulating the curing process of a thermosetting resin matrix composite. The model relates the cure temperature, the cure pressure, and the properties of the prepreg to the thermal, chemical, and rheological processes occurring in the composite during cure. The results calculated with the computer code developed on the basis of the model were compared with experimental data obtained from autoclave-cured composite laminates. Good agreement between the two sets of results was obtained.

  6. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

    PubMed Central

    Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun

    2016-01-01

    Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem whereby the accuracy of the Yaogan-24 remote sensing satellite’s on-board attitude data processing is not high enough and thus cannot meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and digital orthophoto (DOM) and the digital elevation model (DEM) verification of a geometric calibration field. The approach focuses on three modules: on-ground processing based on bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, which ensures the reliability of the observation data quality, convergence and stability of the parameter estimation model. In addition, both the Euler angle and quaternion could be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameter. Furthermore, compared to the image geometric processing results based on on-board attitude data, the image uncontrolled and relative geometric positioning result accuracy can be increased by about 50%. PMID:27483287

  7. Comparison of complex and parsimonious model structures by means of a modular hydrological model concept

    NASA Astrophysics Data System (ADS)

    Holzmann, Hubert; Massmann, Carolina

    2015-04-01

    A variety of hydrological model types have been developed during the past decades. Most of them use a fixed design to describe the variable hydrological processes, assumed to be representative for the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that the runoff formation process is driven by dominant processes which can vary among different basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only with discharge data. It can therefore be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced, which allows hydrological sub-processes to be included or neglected depending on the catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow) and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity, estimated by a Markov chain Monte Carlo approach. Furthermore, the identification of dominant processes by means of Sobol's method is introduced. It could be shown that a flexible model design, even a simple one, can reach performance comparable to that of the standard model type (HBV-type). The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability, and the option of a parsimonious model design.

  8. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    NASA Astrophysics Data System (ADS)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-05-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  9. The impact of working memory and the “process of process modelling” on model quality: Investigating experienced versus inexperienced modellers

    PubMed Central

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R.; Weber, Barbara

    2016-01-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling. PMID:27157858

  10. Simulation based analysis of laser beam brazing

    NASA Astrophysics Data System (ADS)

    Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael

    2016-03-01

    Laser beam brazing is a well-established joining technology in car body manufacturing, with main applications in the joining of divided tailgates and the joining of roof and side panels. A key advantage of laser-brazed joints is the seam's visual quality, which satisfies the highest requirements. However, the laser beam brazing process is very complex, and the process dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account the energy input, heat transfer, and the fluid and wetting dynamics that lead to the formation of the brazing seam. The simulation model is validated by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model not only can be used to gain insight into the laser brazing process but also offers the possibility of process optimization in industrial applications. The model's capabilities in determining optimal process parameters are shown exemplarily for the laser power. Small deviations in the energy input can affect the brazing results significantly. Therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.

  11. Identification of AR(I)MA processes for modelling temporal correlations of GPS observations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In many geodetic applications, observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters and realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products consists in neglecting temporal correlations of GPS observations. In practice, knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing first requires the determination of the order parameters of AR(I)MA processes. In the case of GPS, the identification of AR(I)MA processes can be affected by various factors impacting GPS positioning results, e.g. baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large number of representative residual time series resulting from differential GPS post-processing using 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the results of modelling temporal correlations using high-order AR(I)MA processes are compared with those obtained using first-order autoregressive (AR(1)) processes and empirically estimated autocorrelation functions.
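
    One practicable route to the identification step is order selection by an information criterion. The sketch below selects ARMA orders by AIC on a synthetic AR(2) series standing in for the residual time series; the statsmodels-based fit and all values are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hedged sketch: AR(I)MA order identification by AIC over a small grid.
# The synthetic AR(2) series is a stand-in for 1-Hz residual time series.
rng = np.random.default_rng(42)
n = 2000
e = rng.normal(0, 1, n)
r = np.zeros(n)
for t in range(2, n):                     # AR(2): r_t = 1.2 r_{t-1} - 0.5 r_{t-2} + e_t
    r[t] = 1.2 * r[t - 1] - 0.5 * r[t - 2] + e[t]

best = None
for p in range(4):
    for q in range(3):
        aic = ARIMA(r, order=(p, 0, q)).fit().aic
        if best is None or aic < best[0]:
            best = (aic, p, q)
print("selected ARMA(p, q) = (%d, %d), AIC = %.1f" % (best[1], best[2], best[0]))
```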

  12. One-dimensional biomass fast pyrolysis model with reaction kinetics integrated in an Aspen Plus Biorefinery Process Model

    DOE PAGES

    Humbird, David; Trendewicz, Anna; Braun, Robert; ...

    2017-01-12

    A biomass fast pyrolysis reactor model with detailed reaction kinetics and one-dimensional fluid dynamics was implemented in an equation-oriented modeling environment (Aspen Custom Modeler). Portions of this work were detailed in previous publications; further modifications have been made here to improve stability and reduce execution time of the model to make it compatible for use in large process flowsheets. The detailed reactor model was integrated into a larger process simulation in Aspen Plus and was stable for different feedstocks over a range of reactor temperatures. Sample results are presented that indicate general agreement with experimental results, but with higher gas losses caused by stripping of the bio-oil by the fluidizing gas in the simulated absorber/condenser. Lastly, this integrated modeling approach can be extended to other well-defined, predictive reactor models for fast pyrolysis, catalytic fast pyrolysis, as well as other processes.

  13. One-dimensional biomass fast pyrolysis model with reaction kinetics integrated in an Aspen Plus Biorefinery Process Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbird, David; Trendewicz, Anna; Braun, Robert

    A biomass fast pyrolysis reactor model with detailed reaction kinetics and one-dimensional fluid dynamics was implemented in an equation-oriented modeling environment (Aspen Custom Modeler). Portions of this work were detailed in previous publications; further modifications have been made here to improve stability and reduce execution time of the model to make it compatible for use in large process flowsheets. The detailed reactor model was integrated into a larger process simulation in Aspen Plus and was stable for different feedstocks over a range of reactor temperatures. Sample results are presented that indicate general agreement with experimental results, but with higher gas losses caused by stripping of the bio-oil by the fluidizing gas in the simulated absorber/condenser. Lastly, this integrated modeling approach can be extended to other well-defined, predictive reactor models for fast pyrolysis, catalytic fast pyrolysis, as well as other processes.

  14. Conceptual-level workflow modeling of scientific experiments using NMR as a case study.

    PubMed

    Verdi, Kacy K; Ellis, Heidi JC; Gryk, Michael R

    2007-01-30

    Scientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration. We propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy. Using the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy experiment.

  15. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
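
    The core iteration UCODE-style codes perform can be sketched in a few lines: minimize a weighted least-squares objective with Gauss-Newton steps, using central-difference sensitivities of a black-box process model. The toy "process model" and all names below are illustrative assumptions standing in for an external simulator.

```python
import numpy as np

# Hedged sketch: weighted least-squares parameter estimation by a plain
# Gauss-Newton iteration with central-difference sensitivities.

def process_model(p, x):
    return p[0] * (1 - np.exp(-p[1] * x))          # toy response curve

def gauss_newton(obs, x, w, p, n_iter=20):
    W = np.diag(w)
    for _ in range(n_iter):
        r = obs - process_model(p, x)
        # central-difference sensitivity matrix J[i, j] = d sim_i / d p_j
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (process_model(p + dp, x)
                       - process_model(p - dp, x)) / (2 * dp[j])
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + step
        if np.max(np.abs(step / p)) < 1e-8:
            break
    return p, r @ W @ r                            # parameters, objective value

x = np.linspace(0.5, 10, 12)
rng = np.random.default_rng(3)
obs = process_model(np.array([5.0, 0.7]), x) + rng.normal(0, 0.05, x.size)
p_hat, ssr = gauss_newton(obs, x, w=np.full(x.size, 1 / 0.05**2),
                          p=np.array([1.0, 0.1]))
print("estimated parameters:", np.round(p_hat, 3), " objective:", round(ssr, 2))
```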

  16. Simulation Modeling of Software Development Processes

    NASA Technical Reports Server (NTRS)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  17. Probabilistic modeling of the fate of Listeria monocytogenes in diced bacon during the manufacturing process.

    PubMed

    Billoir, Elise; Denis, Jean-Baptiste; Cammeau, Natalie; Cornu, Marie; Zuliani, Veronique

    2011-02-01

    To assess the impact of the manufacturing process on the fate of Listeria monocytogenes, we built a generic probabilistic model intended to simulate the successive steps in the process. Contamination evolution was modeled in the appropriate units (breasts, dice, and then packaging units through the successive steps in the process). To calibrate the model, parameter values were estimated from industrial data, from the literature, and based on expert opinion. By means of simulations, the model was explored using a baseline calibration and alternative scenarios, in order to assess the impact of changes in the process and of accidental events. The results are reported as contamination distributions and as the probability that the product will be acceptable with regards to the European regulatory safety criterion. Our results are consistent with data provided by industrial partners and highlight that tumbling is a key step for the distribution of the contamination at the end of the process. Process chain models could provide an important added value for risk assessment models that basically consider only the outputs of the process in their risk mitigation strategies. Moreover, a model calibrated to correspond to a specific plant could be used to optimize surveillance. © 2010 Society for Risk Analysis.
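
    A process-chain model of this kind is naturally run as a Monte Carlo simulation: draw an initial contamination, apply a stochastic effect per step, and score compliance at the end. The sketch below does this against the EU 100 CFU/g criterion; every distribution and step effect is an illustrative assumption, not the paper's calibration.

```python
import numpy as np

# Hedged sketch: Monte Carlo propagation of a lognormal contamination level
# through successive manufacturing steps, then a compliance check.
rng = np.random.default_rng(7)
n = 100_000                                        # simulated packaging units

log10_c = rng.normal(-1.0, 0.8, size=n)            # initial log10 CFU/g (toy)
log10_c += rng.normal(0.5, 0.3, size=n)            # tumbling: redistribution/growth
log10_c += rng.normal(-0.2, 0.2, size=n)           # dicing + chilling: slight decline
log10_c += rng.normal(0.3, 0.4, size=n)            # storage before packing

p_accept = np.mean(log10_c <= 2.0)                 # 100 CFU/g == 2 log10 CFU/g
print(f"P(unit meets the 100 CFU/g criterion) = {p_accept:.3f}")
```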

  18. [The dual process model of addiction. Towards an integrated model?].

    PubMed

    Vandermeeren, R; Hebbrecht, M

    2012-01-01

    Neurobiology and cognitive psychology have provided us with a dual process model of addiction. According to this model, behavior is considered to be the dynamic result of a combination of automatic and controlling processes. In cases of addiction the balance between these two processes is severely disturbed. Automated processes will continue to produce impulses that ensure the continuance of addictive behavior. Weak reflective or controlling processes are both the reason for and the result of the inability to forgo addiction. The aim of this review is to identify features that are common to current neurocognitive insights into addiction and psychodynamic views on addiction. The picture that emerges from research is not clear. There is some evidence that attentional bias has a causal effect on addiction. There is no evidence that automatic associations have a causal effect, but there is some evidence that automatic action-tendencies do have a causal effect. Current neurocognitive views on the dual process model of addiction can be integrated with an evidence-based approach to addiction and with psychodynamic views on addiction.

  19. Process dissociation and mixture signal detection theory.

    PubMed

    DeCarlo, Lawrence T

    2008-11-01

    The process dissociation procedure was developed in an attempt to separate different processes involved in memory tasks. The procedure naturally lends itself to a formulation within a class of mixture signal detection models. The dual process model is shown to be a special case. The mixture signal detection model is applied to data from a widely analyzed study. The results suggest that a process other than recollection may be involved in the process dissociation procedure.
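
    For orientation, the classic process dissociation estimates that mixture signal detection analysis generalizes can be computed from inclusion/exclusion performance in two lines (following Jacoby's standard equations; the two probabilities below are illustrative numbers, not the study's data):

```python
# Hedged sketch: classic process dissociation estimates. Inclusion/exclusion
# response rates yield a recollection estimate R and an automatic estimate A.
p_inclusion = 0.75   # P(respond "old" | inclusion instructions) -- toy value
p_exclusion = 0.30   # P(respond "old" | exclusion instructions) -- toy value

R = p_inclusion - p_exclusion        # controlled (recollection) component
A = p_exclusion / (1 - R)            # automatic component, assuming independence
print(f"R = {R:.2f}, A = {A:.2f}")
```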

  20. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
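
    A minimal sketch of the deterministic adaptation step, assuming a toy process in which a known linear term stands in for the analytic model and a small neural network is fitted to the residual; the scaled equation error technique and the likelihood qualification step are not reproduced here.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)

    # Toy process: a known linear part plus an "unknown" nonlinear part.
    x = rng.uniform(0, 3, (400, 1))
    y_true = 2.0 * x.ravel() + 0.4 * np.sin(5 * x.ravel())

    def analytic(x):
        """Analytic model of the known process characteristics (assumed)."""
        return 2.0 * x.ravel()

    # The neural network captures what the analytic model misses.
    residual = y_true - analytic(x)
    nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
    nn.fit(x, residual)

    x_new = np.linspace(0, 3, 5).reshape(-1, 1)
    y_hat = analytic(x_new) + nn.predict(x_new)   # neuro-analytic prediction
    print(y_hat)
    ```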

  1. New Computer Simulation Procedure of Heading Face Mining Process with Transverse Cutting Heads for Roadheader Automation

    NASA Astrophysics Data System (ADS)

    Dolipski, Marian; Cheluszka, Piotr; Sobota, Piotr; Remiorz, Eryk

    2017-03-01

    The key working process carried out by roadheaders is rock mining. For this reason, the mathematical modelling of the mining process underlies the prediction of the dynamic load on the main components of a roadheader, the prediction of the power demand for cutting rock with given properties, and the prediction of the energy consumption of this process. The theoretical and experimental investigations conducted point out - especially in relation to the technical parameters of roadheaders used these days in underground mining and their operating conditions - that the mathematical models of the process employed to date have many limitations, and in many cases the results obtained using such models deviate largely from reality. This is because certain factors strongly influencing the progress of the cutting process have not been considered at the modelling stage, or have been approached in an oversimplified fashion. The article presents a new model of the rock cutting process using conical picks on the cutting heads of boom-type roadheaders. An important novelty with respect to the models applied to date is, firstly, that the actual shape of the cuts has been modelled, with this shape resulting from the geometry of the currently used conical picks, and, secondly, that variations in the depth of cuts along the cutting path of individual picks have been considered, with these variations resulting from the picks' kinematics during the advancement of transverse cutting heads parallel to the floor surface. The work presents examples of simulation results for mining with a roadheader's transverse head equipped with 80 conical picks and compares them with the outcomes obtained using the existing model.

  2. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To reduce the high updating frequency of the model, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.
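
    A minimal sketch of the time-difference PLS idea, using scikit-learn on synthetic drifting data; the variables and dynamics are illustrative, and the moving-window recursion and confidence-based updating of the paper are omitted.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)

    # Hypothetical process data: X holds input variables, y a quality variable
    # with a slow drift term that the time-difference transform suppresses.
    t = np.arange(500)
    X = np.column_stack([np.sin(t / 30) + 0.1 * rng.standard_normal(500),
                         np.cos(t / 45) + 0.1 * rng.standard_normal(500)])
    y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.002 * t + 0.05 * rng.standard_normal(500)

    # Model the differences dy from dX instead of y from X.
    dX, dy = np.diff(X, axis=0), np.diff(y)
    pls = PLSRegression(n_components=2).fit(dX[:400], dy[:400])

    # Predict y(t) as the last measured y plus the predicted difference.
    dy_hat = pls.predict(dX[400:]).ravel()
    y_hat = y[400:-1] + dy_hat
    print("RMSE:", np.sqrt(np.mean((y_hat - y[401:]) ** 2)))
    ```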

  3. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  4. Dynamics Modelling of Biolistic Gene Guns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, M.; Tao, W.; Pianetta, P.A.

    2009-06-04

    The gene transfer process using biolistic gene guns is a highly dynamic process. To achieve good performance, the process needs to be well understood and controlled. Unfortunately, no dynamic model is available in the open literature for analysing and controlling the process. This paper proposes such a model. Relationships of the penetration depth with the helium pressure, the penetration depth with the acceleration distance, and the penetration depth with the micro-carrier radius are presented. Simulations have also been conducted. The results agree well with experimental results in the open literature. The contribution of this paper includes a dynamic model for improving and manipulating performance of the biolistic gene gun.

  5. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    NASA Astrophysics Data System (ADS)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused on carrying out a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier refers to the weighted scoring model, while the second tier focuses on the development of SWOT matrices, on the basis of the findings of the weighted scoring model, for selecting an appropriate requirements negotiation model. Finally, the results are simulated with the help of statistical pie charts. On the basis of the simulated results for the prevalent models and approaches to negotiation, a unified approach for requirements negotiation and stakeholder collaboration is proposed, in which the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters such as MBTI and opportunity analysis.

  6. Mathematical modeling of the heat transfer during pyrolysis process used for end-of-life tires treatment

    NASA Astrophysics Data System (ADS)

    Zheleva, I.; Georgiev, I.; Filipova, M.; Menseidov, D.

    2017-10-01

    Mathematical modeling of the heat transfer during the pyrolysis process used for the treatment of End-of-Life tires (EOLT) is presented in this paper. The pyrolysis process is 3D and non-stationary, and because of this it is very complicated to model and study. To simplify the modeling, a hierarchy of 2D temperature models describing the non-stationary heat transfer in such a pyrolysis station is created. An algorithm for solving the model equations, based on MATLAB software, is developed. The results for the temperature during some characteristic periods of operation of the pyrolysis station are presented and discussed in the paper. The results from this modeling can be used in the real pyrolysis station for more precise placement of measurement devices and for the design of automated management of the process.
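
    To illustrate the kind of 2D non-stationary heat transfer computation involved, here is a minimal explicit finite-difference sketch (in Python rather than the paper's MATLAB); the grid, diffusivity and boundary temperatures are invented for illustration.

    ```python
    import numpy as np

    # Explicit finite differences for dT/dt = alpha * (d2T/dx2 + d2T/dy2).
    nx, ny = 50, 50
    alpha, dx, dt = 1e-5, 0.01, 1.0            # m^2/s, m, s
    assert alpha * dt / dx**2 <= 0.25          # 2D explicit stability limit

    T = np.full((nx, ny), 293.0)               # initial field (K)
    T[0, :] = 800.0                            # heated wall (K)

    for _ in range(5000):
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
        T[1:-1, 1:-1] += alpha * dt * lap[1:-1, 1:-1]
        T[0, :] = 800.0                        # re-impose Dirichlet boundaries
        T[-1, :] = 293.0
        T[:, 0] = 293.0
        T[:, -1] = 293.0

    print("max interior temperature (K):", T[1:-1, 1:-1].max().round(1))
    ```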

  7. [Monitoring method for macroporous resin column chromatography process of salvianolic acids based on near infrared spectroscopy].

    PubMed

    Hou, Xiang-Mei; Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang

    2016-07-01

    To study and establish a monitoring method for the macroporous resin column chromatography process of salvianolic acids, near infrared spectroscopy (NIR) was used as a process analytical technology (PAT). A multivariate statistical process control (MSPC) model was developed based on 7 normal operation batches, and 2 test batches (one normal operation batch and one abnormal operation batch) were used to verify the monitoring performance of this model. The results showed that the MSPC model had a good monitoring ability for the column chromatography process. Meanwhile, an NIR quantitative calibration model was established for three key quality indexes (rosmarinic acid, lithospermic acid and salvianolic acid B) by using the partial least squares (PLS) algorithm. The verification results demonstrated that this model had satisfactory prediction performance. The combined application of the above two models can effectively achieve real-time monitoring of the macroporous resin column chromatography process of salvianolic acids and can be used to conduct on-line analysis of the key quality indexes. The established process monitoring method could provide a reference for the development of process analytical technology for traditional Chinese medicine manufacturing. Copyright© by the Chinese Pharmaceutical Association.
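
    A minimal sketch of PCA-based MSPC monitoring with a Hotelling T² statistic, one common way such a model is built; the spectra here are simulated and the disturbance is injected artificially, so nothing reproduces the paper's calibration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)

    # Hypothetical NIR-like spectra from normal operation batches.
    X_normal = rng.standard_normal((70, 200))
    pca = PCA(n_components=3).fit(X_normal)

    def hotelling_t2(X):
        """Hotelling T^2 of each spectrum in the PCA score space."""
        scores = pca.transform(X)
        return np.sum(scores ** 2 / pca.explained_variance_, axis=1)

    # Control limit from the empirical 99% quantile of normal-operation T^2
    # (an F-distribution limit is common in practice).
    limit = np.quantile(hotelling_t2(X_normal), 0.99)

    X_test = X_normal[:5].copy()
    X_test[-1] += 15 * pca.components_[0]   # inject a disturbance along PC1
    print(hotelling_t2(X_test) > limit)     # the disturbed spectrum should flag
    ```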

  8. The Context, Process, and Outcome Evaluation Model for Organisational Health Interventions

    PubMed Central

    Fridrich, Annemarie; Jenny, Gregor J.; Bauer, Georg F.

    2015-01-01

    To facilitate evaluation of complex, organisational health interventions (OHIs), this paper aims at developing a context, process, and outcome (CPO) evaluation model. It builds on previous model developments in the field and advances them by clearly defining and relating generic evaluation categories for OHIs. Context is defined as the underlying frame that influences and is influenced by an OHI. It is further differentiated into the omnibus and discrete contexts. Process is differentiated into the implementation process, as the time-limited enactment of the original intervention plan, and the change process of individual and collective dynamics triggered by the implementation process. These processes lead to proximate, intermediate, and distal outcomes, as all results of the change process that are meaningful for various stakeholders. Research questions that might guide the evaluation of an OHI according to the CPO categories and a list of concrete themes/indicators and methods/sources applied within the evaluation of an OHI project at a hospital in Switzerland illustrate the model's applicability in structuring evaluations of complex OHIs. In conclusion, the model supplies a common language and a shared mental model for improving communication between researchers and company members and will improve the comparability and aggregation of evaluation study results. PMID:26557665

  9. The Context, Process, and Outcome Evaluation Model for Organisational Health Interventions.

    PubMed

    Fridrich, Annemarie; Jenny, Gregor J; Bauer, Georg F

    2015-01-01

    To facilitate evaluation of complex, organisational health interventions (OHIs), this paper aims at developing a context, process, and outcome (CPO) evaluation model. It builds on previous model developments in the field and advances them by clearly defining and relating generic evaluation categories for OHIs. Context is defined as the underlying frame that influences and is influenced by an OHI. It is further differentiated into the omnibus and discrete contexts. Process is differentiated into the implementation process, as the time-limited enactment of the original intervention plan, and the change process of individual and collective dynamics triggered by the implementation process. These processes lead to proximate, intermediate, and distal outcomes, as all results of the change process that are meaningful for various stakeholders. Research questions that might guide the evaluation of an OHI according to the CPO categories and a list of concrete themes/indicators and methods/sources applied within the evaluation of an OHI project at a hospital in Switzerland illustrate the model's applicability in structuring evaluations of complex OHIs. In conclusion, the model supplies a common language and a shared mental model for improving communication between researchers and company members and will improve the comparability and aggregation of evaluation study results.

  10. Experimental research on mathematical modelling and unconventional control of clinker kiln in cement plants

    NASA Astrophysics Data System (ADS)

    Rusu-Anghel, S.

    2017-01-01

    Analytical modeling of the cement manufacturing process is difficult because of its complexity and has not yet resulted in sufficiently precise mathematical models. In this paper, based on a statistical model of the process and using the knowledge of human experts, a fuzzy system was designed for automatic control of the clinkering process.
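
    For flavor, a minimal Mamdani-style fuzzy rule sketch of the kind of controller described; the linguistic variable, membership functions and rule base are invented, not those of the paper's kiln system.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuel_rate(temp_c):
        low = tri(temp_c, 1300.0, 1380.0, 1450.0)
        ok = tri(temp_c, 1400.0, 1450.0, 1500.0)
        high = tri(temp_c, 1450.0, 1520.0, 1600.0)
        # Rules: IF temp LOW THEN fuel 0.9; IF OK THEN 0.6; IF HIGH THEN 0.3,
        # defuzzified by a weighted average of the rule consequents.
        return (0.9 * low + 0.6 * ok + 0.3 * high) / (low + ok + high + 1e-9)

    for t in (1350.0, 1450.0, 1550.0):
        print(f"{t:.0f} C -> fuel rate {fuel_rate(t):.2f}")
    ```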

  11. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    PubMed

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.

  12. Reduced order model based on principal component analysis for process simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Y.; Malacina, A.; Biegler, L.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
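
    A minimal sketch of the PCA-based ROM strategy under stand-in assumptions: snapshots from a cheap analytic "solver" replace the CFD runs, and a simple linear map from input to PCA coefficients replaces the more careful response-surface fitting one would use in practice (hence the robustness benefit of a well-sampled input domain noted above).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)

    # Hypothetical "CFD" snapshots: each row is a flattened field computed at
    # one input setting mu; the analytic map stands in for an expensive solver.
    x = np.linspace(0, 1, 300)
    mu_train = rng.uniform(0.5, 2.0, size=40)
    snapshots = np.array([np.sin(m * np.pi * x) + 0.3 * m * x for m in mu_train])

    # Reduced basis from PCA, then a cheap map from input to PCA coefficients.
    pca = PCA(n_components=4).fit(snapshots)
    coeffs = pca.transform(snapshots)
    reg = LinearRegression().fit(mu_train[:, None], coeffs)

    def rom(mu):
        """Evaluate the reduced-order model at a new input (seconds, not hours)."""
        return pca.inverse_transform(reg.predict([[mu]]))[0]

    truth = np.sin(1.3 * np.pi * x) + 0.3 * 1.3 * x
    print("max ROM error at mu=1.3:", np.max(np.abs(rom(1.3) - truth)))
    ```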

  13. A constitutive model and numerical simulation of sintering processes at macroscopic level

    NASA Astrophysics Data System (ADS)

    Wawrzyk, Krzysztof; Kowalczyk, Piotr; Nosewicz, Szymon; Rojek, Jerzy

    2018-01-01

    This paper presents modelling of both single- and double-phase powder sintering processes at the macroscopic level. In particular, its constitutive formulation, numerical implementation and numerical tests are described. The macroscopic constitutive model is based on the assumption that the sintered material is a continuous medium. The parameters of the constitutive model for the material under sintering are determined by simulation of sintering at the microscopic level using a micro-scale model. Numerical tests were carried out for a cylindrical specimen under hydrostatic and uniaxial pressure. Results of the macroscopic analysis are compared against the microscopic model results. Moreover, the numerical simulations are validated by comparison with experimental results. The simulations and preparation of the model are carried out in Abaqus FEA, a software package for finite element analysis and computer-aided engineering. The mechanical model is defined by the user procedure "VUMAT", developed by the first author in the Fortran programming language. The modelling presented in the paper can be used to optimize and to better understand the process.

  14. How to build a course in mathematical-biological modeling: content and processes for knowledge and skill.

    PubMed

    Hoskinson, Anne-Marie

    2010-01-01

    Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical-biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments.

  15. How to Build a Course in Mathematical–Biological Modeling: Content and Processes for Knowledge and Skill

    PubMed Central

    2010-01-01

    Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical–biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments. PMID:20810966

  16. A Framework for Distributed Problem Solving

    NASA Astrophysics Data System (ADS)

    Leone, Joseph; Shin, Don G.

    1989-03-01

    This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process that primarily relies on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail. An implementation of the amplification process is presented. The process is illustrated by an example.

  17. Modelling of influential parameters on a continuous evaporation process by Doehlert shells

    PubMed Central

    Porte, Catherine; Havet, Jean-Louis; Daguet, David

    2003-01-01

    The modelling of the parameters that influence the continuous evaporation of an alcoholic extract was considered using Doehlert matrices. The work was performed with a wiped falling film evaporator that allowed us to study the influence of the pressure, temperature, feed flow and dry matter content of the feed solution on the dry matter content of the resulting concentrate and on the productivity of the process. Doehlert shells were used to model the influential parameters. The pattern obtained from the experimental results was checked, allowing for some dysfunction in the unit. The evaporator was modified and a new model applied; the experimental results were then in agreement with the equations. The model was finally determined and successfully checked in order to obtain an 8% dry matter concentrate with the best productivity; the results fit in with the industrial constraints of subsequent processes.

  18. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    NASA Astrophysics Data System (ADS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are a function of the effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where material hardness values between 45 and 60 HRC are used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and with different cutting regime parameters. The predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.

  19. C. botulinum inactivation kinetics implemented in a computational model of a high-pressure sterilization process.

    PubMed

    Juliano, Pablo; Knoerzer, Kai; Fryer, Peter J; Versteeg, Cornelis

    2009-01-01

    High-pressure, high-temperature (HPHT) processing is effective for microbial spore inactivation using mild preheating, followed by rapid volumetric compression heating and cooling on pressure release, enabling much shorter processing times than conventional thermal processing for many food products. A computational thermal fluid dynamic (CTFD) model has been developed to model all processing steps, including the vertical pressure vessel, an internal polymeric carrier, and food packages in an axis-symmetric geometry. Heat transfer and fluid dynamic equations were coupled to four selected kinetic models for the inactivation of C. botulinum; the traditional first-order kinetic model, the Weibull model, an nth-order model, and a combined discrete log-linear nth-order model. The models were solved to compare the resulting microbial inactivation distributions. The initial temperature of the system was set to 90 degrees C and pressure was selected at 600 MPa, holding for 220 s, with a target temperature of 121 degrees C. A representation of the extent of microbial inactivation throughout all processing steps was obtained for each microbial model. Comparison of the models showed that the conventional thermal processing kinetics (not accounting for pressure) required shorter holding times to achieve a 12D reduction of C. botulinum spores than the other models. The temperature distribution inside the vessel resulted in a more uniform inactivation distribution when using a Weibull or an nth-order kinetics model than when using log-linear kinetics. The CTFD platform could illustrate the inactivation extent and uniformity provided by the microbial models. The platform is expected to be useful to evaluate models fitted into new C. botulinum inactivation data at varying conditions of pressure and temperature, as an aid for regulatory filing of the technology as well as in process and equipment design.
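
    Two of the kinetic forms compared in the paper are easy to state; the sketch below evaluates log-linear and Weibull reductions for a 220 s hold with hypothetical parameters, illustrating how strongly the choice of kinetic model affects the predicted log reduction (and hence the required holding time).

    ```python
    def log_linear(t, D):
        """First-order (log-linear) kinetics: log10(N/N0) = -t / D."""
        return -t / D

    def weibull(t, delta, p):
        """Weibull kinetics: log10(N/N0) = -(t / delta) ** p."""
        return -(t / delta) ** p

    # Hypothetical parameters at fixed pressure/temperature; the study's
    # fitted C. botulinum values are not reproduced here.
    t = 220.0                                                  # holding time (s)
    print("log-linear reduction:", log_linear(t, D=15.0))      # about -14.7 log10
    print("Weibull reduction:", weibull(t, delta=20.0, p=0.6)) # about -4.2 log10
    ```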

  20. Model prototype utilization in the analysis of fault tolerant control and data processing systems

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Tsarev, R. Yu; Gruzenkin, D. V.; Prokopenko, A. V.; Knyazkov, A. N.; Laptenok, V. D.

    2016-04-01

    A procedure for assessing the profit of implementing a control and data processing system is presented in the paper. The rationale for creating and analyzing a model prototype follows from implementing the approach of fault tolerance provision through the inclusion of structural and software redundancy. The developed procedure allows finding the best ratio between the cost of developing and analyzing the model prototype and the earnings from its utilization and from the information produced. The suggested approach is illustrated by a model example of profit assessment and analysis for a control and data processing system.

  1. Cirrus clouds. I - A cirrus cloud model. II - Numerical experiments on the formation and maintenance of cirrus

    NASA Technical Reports Server (NTRS)

    Starr, D. O'C.; Cox, S. K.

    1985-01-01

    A simplified cirrus cloud model is presented which may be used to investigate the role of various physical processes in the life cycle of a cirrus cloud. The model is a two-dimensional, time-dependent, Eulerian numerical model where the focus is on cloud-scale processes. Parametrizations are developed to account for phase changes of water, radiative processes, and the effects of microphysical structure on the vertical flux of ice water. The results of a simulation of a thin cirrostratus cloud are given. The results of numerical experiments performed with the model are described in order to demonstrate the important role of cloud-scale processes in determining the cloud properties maintained in response to larger scale forcing. The effects of microphysical composition and radiative processes are considered, as well as their interaction with thermodynamic and dynamic processes within the cloud. It is shown that cirrus clouds operate in an entirely different manner than liquid phase stratiform clouds.

  2. Models of recognition: a review of arguments in favor of a dual-process account.

    PubMed

    Diana, Rachel A; Reder, Lynne M; Arndt, Jason; Park, Heekyeong

    2006-02-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models.

  3. A process-based model for cattle manure compost windrows: Model performance and application

    USDA-ARS?s Scientific Manuscript database

    A model was developed and incorporated in the Integrated Farm System Model (IFSM, v.4.3) that simulates important processes occurring during windrow composting of manure. The model, documented in an accompanying paper, predicts changes in windrow properties and conditions and the resulting emissions...

  4. Modeling of an integrated fermentation/membrane extraction process for the production of 2-phenylethanol and 2-phenylethylacetate.

    PubMed

    Adler, Philipp; Hugen, Thorsten; Wiewiora, Marzena; Kunz, Benno

    2011-03-07

    An unstructured model for an integrated fermentation/membrane extraction process for the production of the aroma compounds 2-phenylethanol and 2-phenylethylacetate by Kluyveromyces marxianus CBS 600 was developed. The extent to which this model, based only on data from the conventional fermentation and separation processes, provided an estimation of the integrated process was evaluated. The effect of product inhibition on specific growth rate and on biomass yield by both aroma compounds was approximated by multivariate regression. Simulations of the respective submodels for fermentation and the separation process matched well with experimental results. With respect to the in situ product removal (ISPR) process, the effect of reduced product inhibition due to product removal on specific growth rate and biomass yield was predicted adequately by the model simulations. Overall product yields were increased considerably in this process (4.0 g/L 2-PE+2-PEA vs. 1.4 g/L in conventional fermentation) and were even higher than predicted by the model. To describe the effect of product concentration on product formation itself, the model was extended using results from the conventional and the ISPR processes, and the agreement between model and experimental data thus improved notably. Therefore, this model can be a useful tool for the development and optimization of an efficient integrated bioprocess. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Evaluation of shrinking core model in leaching process of Pomalaa nickel laterite using citric acid as leachant at atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Wanta, K. C.; Perdana, I.; Petrus, H. T. B. M.

    2016-11-01

    Most kinetics studies related to leaching processes have used the shrinking core model to describe the physical phenomena of the process. Generally, the model is developed in connection with transport and/or reaction of the reactant components. In this study, the commonly used internal-diffusion-controlled shrinking core model was evaluated for the leaching process of Pomalaa nickel laterite using citric acid as leachant. Particle size was varied at 60-70, 100-120, and -200 mesh, while the operating temperature was kept constant at 358 K, the citric acid concentration at 0.1 M, and the pulp density at 20% w/v; the leaching time was 120 minutes. Simulation results showed that the shrinking core model was inadequate to closely approach the experimental data. Instead, the experimental data indicated that the leaching process was determined by the mobility of product molecules in the pores of the ash layer. In the case of leaching that yields large product molecules, a mathematical model involving both reaction and product diffusion steps might be more appropriate to develop.
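
    The usual way to test the internal-diffusion-controlled shrinking core model is to check whether the conversion function g(X) = 1 - 3(1-X)^(2/3) + 2(1-X) is linear in time; a sketch with hypothetical conversion data follows (the paper's measurements are not reproduced).

    ```python
    import numpy as np

    def ash_diffusion_g(X):
        """Conversion function for ash-layer (internal) diffusion control:
        g(X) = 1 - 3(1-X)^(2/3) + 2(1-X), which should be linear in time."""
        return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

    # Hypothetical leaching data: time (min) and nickel conversion fraction.
    t = np.array([15, 30, 60, 90, 120.0])
    X = np.array([0.10, 0.17, 0.28, 0.36, 0.42])

    g = ash_diffusion_g(X)
    slope, intercept = np.polyfit(t, g, 1)
    r2 = np.corrcoef(t, g)[0, 1] ** 2
    print(f"k = {slope:.2e} 1/min, R^2 = {r2:.3f}")  # a low R^2 rejects the model
    ```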

  6. Managing the Drafting Process: Creating a New Model for the Workplace.

    ERIC Educational Resources Information Center

    Shwom, Barbara L.; Hirsch, Penny L.

    1994-01-01

    Discusses the development of a pragmatic model of the writing process in the workplace, focusing on the importance of "drafting" as part of that process. Discusses writers' attitudes about drafting and the structures of the workplace that drafting has to accommodate. Introduces a drafting model and discusses results of using this model…

  7. Investigating the Representational Fluency of Pre-Service Mathematics Teachers in a Modelling Process

    ERIC Educational Resources Information Center

    Delice, Ali; Kertil, Mahmut

    2015-01-01

    This article reports the results of a study that investigated pre-service mathematics teachers' modelling processes in terms of representational fluency in a modelling activity related to a cassette player. A qualitative approach was used in the data collection process. Students' individual and group written responses to the mathematical modelling…

  8. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    PubMed

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.

  9. A Mathematical Model for the Middle Ear Ventilation

    NASA Astrophysics Data System (ADS)

    Molnárka, G.; Miletics, E. M.; Fücsek, M.

    2008-09-01

    Otitis media is one of the most common illnesses among children; therefore, investigation of human middle ear ventilation is a relevant problem. Earlier investigations include both experimental and theoretical approaches ([1]-[3]). Here we give a new mathematical and computer model to simulate this ventilation process. This model is able to describe the diffusion and flow processes simultaneously, and it therefore gives more precise results than earlier models did. The article contains the mathematical model and some results of the simulation.

  10. Process Optimization of Dual-Laser Beam Welding of Advanced Al-Li Alloys Through Hot Cracking Susceptibility Modeling

    NASA Astrophysics Data System (ADS)

    Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra

    2016-07-01

    Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial and error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS hot cracking prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.

  11. Multiscale Computational Design Optimization of Copper-Strengthened Steel for High Cycle Fatigue

    DTIC Science & Technology

    2010-03-19

    The report's modeling thrusts include strain-energy analysis and the modeling of a slip band (with a PSB ladder underlying structure) and the attendant crack initiation process. Major results differentiate this dislocation morphology from others, e.g., the vein and planar structures of dislocations.

  12. Modeling of the HiPco process for carbon nanotube production. I. Chemical kinetics

    NASA Technical Reports Server (NTRS)

    Dateo, Christopher E.; Gokcen, Tahir; Meyyappan, M.

    2002-01-01

    A chemical kinetic model is developed to help understand and optimize the production of single-walled carbon nanotubes via the high-pressure carbon monoxide (HiPco) process, which employs iron pentacarbonyl as the catalyst precursor and carbon monoxide as the carbon feedstock. The model separates the HiPco process into three steps: precursor decomposition; catalyst growth and evaporation; and carbon nanotube production resulting from the catalyst-enhanced disproportionation of carbon monoxide, known as the Boudouard reaction: 2 CO(g)-->C(s) + CO2(g). The resulting detailed model contains 971 species and 1948 chemical reactions. A second model with a reduced reaction set containing 14 species and 22 chemical reactions is developed on the basis of the detailed model and reproduces the chemistry of the major species. Results showing the parametric dependence on temperature, total pressure, and initial precursor partial pressures are presented, with comparison between the two models. The reduced model is more amenable to coupled reacting flow-field simulations, presented in the following article.
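
    As a loose illustration of the reduced-kinetics idea (far smaller even than the paper's 14-species set), the toy ODE system below integrates precursor decomposition followed by catalyst-mediated CO disproportionation with invented rate constants.

    ```python
    from scipy.integrate import solve_ivp

    # Toy two-reaction surrogate: Fe(CO)5 -> catalyst, then (on the catalyst)
    # 2 CO(g) -> C(s) + CO2(g). Rate constants are illustrative only.
    k_dec, k_boud = 5.0, 0.02

    def rhs(t, y):
        feco5, cat, co, c_s, co2 = y
        r1 = k_dec * feco5             # precursor decomposition
        r2 = k_boud * cat * co ** 2    # Boudouard reaction on the catalyst
        return [-r1, r1, -2.0 * r2, r2, r2]

    sol = solve_ivp(rhs, (0.0, 50.0), [1e-4, 0.0, 10.0, 0.0, 0.0], rtol=1e-8)
    print("C(s) formed:", sol.y[3, -1], "CO2 formed:", sol.y[4, -1])
    ```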

  13. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10.

  14. An analysis of the Petri net based model of the human body iron homeostasis process.

    PubMed

    Sackmann, Andrea; Formanowicz, Dorota; Formanowicz, Piotr; Koch, Ina; Blazewicz, Jacek

    2007-02-01

    In the paper a Petri net based model of the human body iron homeostasis process is presented and analyzed. Body iron homeostasis is an important but not fully understood complex process. The modeling of the process presented in the paper is expressed in the language of Petri net theory. An application of this theory to the description of biological processes allows for very precise analysis of the resulting models. Here, such an analysis of the body iron homeostasis model from a mathematical point of view is given.
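
    A minimal token-game sketch of a place/transition Petri net, the formalism the paper applies; the net below has three places and two transitions chosen only for illustration, not the actual iron-homeostasis structure.

    ```python
    import numpy as np

    pre = np.array([[1, 0],    # tokens consumed: rows = places, cols = transitions
                    [0, 1],
                    [0, 0]])
    post = np.array([[0, 0],   # tokens produced
                     [1, 0],
                     [0, 1]])
    marking = np.array([2, 0, 0])   # initial tokens per place

    for _ in range(4):
        enabled = np.all(marking[:, None] >= pre, axis=0)
        if not enabled.any():
            break
        t = np.flatnonzero(enabled)[0]       # fire the first enabled transition
        marking = marking - pre[:, t] + post[:, t]
        print("fired t%d -> marking %s" % (t, marking))
    ```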

  15. Data from fitting Gaussian process models to various data sets using eight Gaussian process software packages.

    PubMed

    Erickson, Collin B; Ankenman, Bruce E; Sanchez, Susan M

    2018-06-01

    This data article provides the summary data from tests comparing various Gaussian process software packages. Each spreadsheet represents a single function or type of function using a particular input sample size. In each spreadsheet, a row gives the results for a particular replication using a single package. Within each spreadsheet there are the results from eight Gaussian process model-fitting packages on five replicates of the surface. There is also one spreadsheet comparing the results from two packages performing stochastic kriging. These data enable comparisons between the packages to determine which package will give users the best results.
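
    The kind of fit being compared looks roughly like the following, here with scikit-learn as one possible package choice (the article's package list and test surfaces are not reproduced):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(4)

    # Illustrative test surface sampled at 30 points in two dimensions.
    X = rng.uniform(0, 1, (30, 2))
    y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                  normalize_y=True).fit(X, y)

    # Predictive mean and standard deviation at new inputs.
    X_new = rng.uniform(0, 1, (5, 2))
    mean, std = gp.predict(X_new, return_std=True)
    print(np.c_[mean, std])
    ```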

  16. [Study on the dynamic model with supercritical CO2 fluid extracting the lipophilic components in Panax notoginseng].

    PubMed

    Duan, Xian-Chun; Wang, Yong-Zhong; Zhang, Jun-Ru; Luo, Huan; Zhang, Heng; Xia, Lun-Zhu

    2011-08-01

    To establish a dynamics model for extracting the lipophilic components of Panax notoginseng with supercritical carbon dioxide (CO2), a model was formulated based on the theory of counter-flow mass transfer and on the molecular mass transfer between the material and the supercritical CO2 fluid, governed by a differential mass-conservation equation; the model was then computed and its forecasts compared with the experimental process. The computed results of the model were basically consistent with the experimental process. The supercritical fluid extraction dynamics model established in this research can explain the mass-transfer mechanism by which the lipophilic components of Panax notoginseng dissolve, and it tallies with the actual extraction process. This provides guidance for the industrial scale-up of supercritical CO2 fluid extraction.

  17. Intercomparison and interpretation of climate feedback processes in 19 atmospheric general circulation models

    NASA Technical Reports Server (NTRS)

    Cess, R. D.; Potter, G. L.; Blanchet, J. P.; Boer, G. J.; Del Genio, A. D.

    1990-01-01

    The present study provides an intercomparison and interpretation of climate feedback processes in 19 atmospheric general circulation models. This intercomparison uses sea surface temperature change as a surrogate for climate change. The interpretation of cloud-climate interactions is given special attention. A roughly threefold variation in one measure of global climate sensitivity is found among the 19 models. The important conclusion is that most of this variation is attributable to differences in the models' depiction of cloud feedback, a result that emphasizes the need for improvements in the treatment of clouds in these models if they are ultimately to be used as reliable climate predictors. It is further emphasized that cloud feedback is the consequence of all interacting physical and dynamical processes in a general circulation model. The result of these processes is to produce changes in temperature, moisture distribution, and clouds which are integrated into the radiative response termed cloud feedback.

  18. A modified dynamical model of drying process of polymer blend solution coated on a flat substrate

    NASA Astrophysics Data System (ADS)

    Kagami, Hiroyuki

    2008-05-01

    We have proposed and modified a model of the drying process of a polymer solution coated on a flat substrate for flat polymer film fabrication. Numerical simulation of the model reproduces, for example, a typical thickness profile of the polymer film formed after drying. We have also clarified how the distribution of polymer molecules on a flat substrate depends on various parameters, based on analysis of numerical simulations. We then derived nonlinear equations of the drying process from the dynamical model and reported the results. These earlier studies were limited to solutions containing one kind of solute, though the model could essentially deal with solutions containing several kinds of solutes. Nowadays, however, the drying of solutions containing several kinds of solutes needs to be discussed, because it appears in many industrial settings: polymer blend solution is one instance, and a typical resist consists of a few kinds of polymers. We therefore introduced a dynamical model of the drying process of a polymer blend solution coated on a flat substrate, together with results of numerical simulations of that model; however, that model was the simplest one. In this study, we modify the dynamical model of the drying process of a polymer blend solution by adding effects in which some parameters change with time as functions of other variables. We then consider the essence of the drying process of polymer blend solutions through comparison between the results of numerical simulations of the modified model and those of the former model.

  19. Simulation of Triple Oxidation Ditch Wastewater Treatment Process

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Zhang, Jinsong; Liu, Lixiang; Hu, Yongfeng; Xu, Ziming

    2010-11-01

    This paper presents the modeling mechanism and method for a sewage treatment system. A triple oxidation ditch process at a WWTP was simulated based on the activated sludge model ASM2D with GPS-X software. In order to identify an adequate model structure to implement in the GPS-X environment, the oxidation ditch was divided into several completely stirred tank reactors depending on the distribution of aeration devices and the dissolved oxygen concentration. The removal efficiencies of COD, ammonia nitrogen, total nitrogen, total phosphorus and SS were simulated with GPS-X using influent quality data of this WWTP from June to August 2009, to investigate the differences between the simulated results and the actual results. The results showed that the simulated values could well reflect the actual condition of the triple oxidation ditch process. The mathematical modeling method was appropriate for predicting effluent quality and optimizing the process.

  20. Goddard Cumulus Ensemble (GCE) Model: Application for Understanding Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2002-01-01

    One of the most promising methods to test the representation of cloud processes used in climate models is to use observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and realistic representations of cloud microphysical processes, and they can reasonably well resolve the time evolution, structure, and life cycles of clouds and cloud systems (size about 2-200 km). The CRMs also allow explicit interaction between out-going longwave (cooling) and incoming solar (heating) radiation with clouds. Observations can provide the initial conditions and validation for CRM results. The Goddard Cumulus Ensemble (GCE) Model, a cloud-resolving model, has been developed and improved at NASA/Goddard Space Flight Center over the past two decades. Dr. Joanne Simpson played a central role in GCE modeling developments and applications. She was the lead author or co-author on more than forty GCE modeling papers. In this paper, a brief discussion and review of the application of the GCE model to (1) cloud interactions and mergers, (2) convective and stratiform interaction, (3) mechanisms of cloud-radiation interaction, (4) latent heating profiles and TRMM, and (5) responses of cloud systems to large-scale processes are provided. Comparisons between the GCE model's results, other cloud-resolving model results and observations are also examined.

  1. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.

  2. Conceptual and logical level of database modeling

    NASA Astrophysics Data System (ADS)

    Hunka, Frantisek; Matula, Jiri

    2016-06-01

    Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.

  3. A Modeling Approach to Fiber Fracture in Melt Impregnation

    NASA Astrophysics Data System (ADS)

    Ren, Feng; Zhang, Cong; Yu, Yang; Xin, Chunling; Tang, Ke; He, Yadong

    2017-02-01

    The effect of process variables such as roving pulling speed, melt temperature and number of pins on fiber fracture during the processing of thermoplastic-based composites was investigated in this study. Melt impregnation was used in this process for continuous glass fiber reinforced thermoplastic composites. Previous investigators have suggested a variety of models for melt impregnation, while comparatively little effort has been spent on modeling the fiber fracture caused by the viscous resin. Herein, a mathematical model was developed for the impregnation process to predict the fiber fracture rate and describe the experimental results with the Weibull intensity distribution function. The optimal parameters of this process were obtained by an orthogonal experiment. The results suggest that fiber fracture is caused by the viscous shear stress on the fiber bundle in the melt impregnation mold when pulling the fiber bundle.
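
    A minimal sketch of describing fracture data with a two-parameter Weibull distribution, the function family named above; the data here are simulated placeholders, not measurements from the process.

    ```python
    import numpy as np
    from scipy import stats

    # Simulated stand-in for fiber fracture data.
    data = stats.weibull_min.rvs(c=5.0, scale=2.0, size=200, random_state=7)

    shape, loc, scale = stats.weibull_min.fit(data, floc=0)   # two-parameter fit
    print(f"Weibull modulus m = {shape:.2f}, scale = {scale:.2f}")

    # Survival probability (no fracture) at a given stress under the fitted model.
    sigma = 1.5
    print("P(survive sigma) =", np.exp(-(sigma / scale) ** shape))
    ```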

  4. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.

    2016-03-01

    A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  5. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.

    2015-12-01

    A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  6. Business process modeling in healthcare.

    PubMed

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling using the Business Process Modelling Notation (BPMN) standard is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  7. Blasting Damage Predictions by Numerical Modeling in Siahbishe Pumped Storage Powerhouse

    NASA Astrophysics Data System (ADS)

    Eslami, Majid; Goshtasbi, Kamran

    2018-04-01

    Blasting is one of the most popular methods of underground and surface excavation. In this method, the loading resulting from blasting can be affected by different geo-mechanical and structural parameters of the rock mass. Several factors disturb underground structures, among them the explosion itself and the vibration and stress impulses caused by neighbouring blasting products. In investigating the blasting mechanism one should address the processes which develop over time and cause seismic events. To protect adjoining structures against probable destruction or damage, it is very important to model the blasting process prior to any actual operation. The present study demonstrates the potential of numerical methods to predict the relevant parameters and so prevent probable destruction. For this purpose the blasting process was modeled, according to its actual implementation, in one of the tunnels of the Siahbishe dam with the 3DEC and AUTODYN 3D codes: 3DEC was used for modeling the blasting environment and the blast holes, and AUTODYN 3D for modeling the explosion process in the blast hole. In this procedure the output of AUTODYN 3D, which results from modeling the blast hole and takes the form of stress waves, is entered into 3DEC. For analyzing the amount of damage caused by the blasting operation, the key parameter of peak particle velocity (PPV) was used. Finally, the numerical modeling results were compared with the data recorded by the seismographs installed along the tunnel. The results indicate that 3DEC and AUTODYN 3D are appropriate for such analyses; by means of these two software packages one can analyze explosion processes prior to their implementation and closely estimate the resulting damage.
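
    Damage criteria in studies of this kind are commonly expressed through an empirical scaled-distance attenuation law for PPV; the sketch below is a generic illustration (not taken from the paper), and the site constants K and beta are assumptions that would normally be regressed from seismograph records.

        import numpy as np

        def ppv_scaled_distance(R, Q, K=1140.0, beta=1.6):
            """Empirical PPV attenuation law, PPV = K * (R / sqrt(Q))**(-beta).

            R       : distance from the blast hole (m)
            Q       : charge mass per delay (kg)
            K, beta : site constants -- illustrative assumptions, normally
                      fitted in log-log space to monitored blast records.
            """
            sd = R / np.sqrt(Q)          # square-root scaled distance
            return K * sd ** (-beta)     # mm/s

        R = np.array([20.0, 35.0, 60.0, 100.0])   # m, hypothetical monitoring points
        print(ppv_scaled_distance(R, Q=50.0))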

  8. Are there two processes in reasoning? The dimensionality of inductive and deductive inferences.

    PubMed

    Stephens, Rachel G; Dunn, John C; Hayes, Brett K

    2018-03-01

    Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Modeling of Protein Subcellular Localization in Bacteria

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohua; Kulkarni, Rahul

    2006-03-01

    Specific subcellular localization of proteins is a vital component of important bacterial processes: e.g. the Min proteins which regulate cell division in E. coli and the Spo0J-Soj system which is critical for sporulation in B. subtilis. We examine how the processes of diffusion and membrane attachment contribute to protein subcellular localization for the above systems. We use previous experimental results to suggest minimal models for these processes. For the minimal models, we derive analytic expressions which provide insight into the processes that determine protein subcellular localization. Finally, we present the results of numerical simulations for the systems studied and make connections to the observed experimental phenomenology.

  10. Commentary on the Integration of Model Sharing and Reproducibility Analysis to Scholarly Publishing Workflow in Computational Biomechanics

    PubMed Central

    Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason P.; Modenese, Luca; Reinbolt, Jeffrey A.; Thelen, Darryl G.; Umberger, Brian R.

    2016-01-01

    Objective The overall goal of this document is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. Methods As part of a special issue on model sharing and reproducibility in IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: A. Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and A. Schmitz and D. Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. Results There was general agreement between the simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers' feedback; these changes might otherwise have been missed if explicit model sharing and simulation reproducibility analysis had not been conducted in the review process. The increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, was noted. Conclusion When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Significance Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models. PMID:28072567

  11. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process models are often deteriorated by noise existing in the corresponding experimental data. In order to avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, making it a promising approach for GSH fermentation process modeling.
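
    A minimal sketch of the idea, not the authors' implementation: fit the RBF output weights by minimizing an entropy-like criterion of the residuals (here Renyi's quadratic entropy estimated with Parzen windows) instead of the MSE. The kernel width, centers, and synthetic data are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def rbf_design(X, centers, width):
            """Gaussian RBF design matrix."""
            d2 = (X[:, None] - centers[None, :]) ** 2
            return np.exp(-d2 / (2.0 * width ** 2))

        def renyi_quadratic_entropy(e, sigma=0.1):
            """Parzen-window estimate of Renyi's quadratic entropy of residuals e."""
            diff = e[:, None] - e[None, :]
            ip = np.exp(-diff ** 2 / (4.0 * sigma ** 2)).mean()  # information potential
            return -np.log(ip)

        rng = np.random.default_rng(0)
        X = np.linspace(0, 1, 80)
        y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(80)  # noisy stand-in curve

        centers = np.linspace(0, 1, 10)          # illustrative centers
        Phi = rbf_design(X, centers, width=0.1)  # illustrative width

        # Entropy is shift-invariant, so the residuals are centered inside the
        # cost; a final bias correction would normally be added after training.
        def cost(w):
            e = y - Phi @ w
            return renyi_quadratic_entropy(e - e.mean())

        w0 = np.linalg.lstsq(Phi, y, rcond=None)[0]     # MSE solution as start
        w = minimize(cost, w0, method="Nelder-Mead").x
        print("entropy-trained residual std:", (y - Phi @ w).std())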

  12. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the adapted deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
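
    A rough sketch of the neuro-analytic idea under stated assumptions, not the patented method: a first-principles model captures the known behavior and a small neural network is trained on the residual, so the combined model tracks the unknown characteristics. The pump relation, network size, and data are all illustrative.

        import numpy as np

        def analytic_model(u):
            """Known first-principles part, e.g. nominal flow vs. pump speed (illustrative)."""
            return 0.8 * u

        rng = np.random.default_rng(1)
        u = np.linspace(0, 1, 200)                                            # pump speed (normalized)
        y = 0.8 * u + 0.1 * np.sin(3 * u) + 0.01 * rng.standard_normal(200)   # "plant" output

        # Train a tiny RBF network on the residual y - analytic_model(u).
        centers = np.linspace(0, 1, 8)
        Phi = np.exp(-(u[:, None] - centers[None, :]) ** 2 / (2 * 0.08 ** 2))
        w = np.linalg.lstsq(Phi, y - analytic_model(u), rcond=None)[0]

        def neuro_analytic(u_new):
            phi = np.exp(-(u_new[:, None] - centers[None, :]) ** 2 / (2 * 0.08 ** 2))
            return analytic_model(u_new) + phi @ w

        resid = y - neuro_analytic(u)
        print("residual std:", resid.std())   # would feed the likelihood / SPRT monitoring stage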

  13. Theoretical results on fractionally integrated exponential generalized autoregressive conditional heteroskedastic processes

    NASA Astrophysics Data System (ADS)

    Lopes, Sílvia R. C.; Prass, Taiane S.

    2014-05-01

    Here we present a theoretical study on the main properties of Fractionally Integrated Exponential Generalized Autoregressive Conditional Heteroskedastic (FIEGARCH) processes. We analyze the conditions for the existence, the invertibility, the stationarity and the ergodicity of these processes. We prove that, if {X_t} is a FIEGARCH(p,d,q) process then, under mild conditions, {ln(σ_t²)} is an ARFIMA(q,d,0) process with correlated innovations, that is, an autoregressive fractionally integrated moving average process. The convergence order for the polynomial coefficients that describe the volatility is presented, and results related to the spectral representation and to the covariance structure of both processes {X_t} and {ln(σ_t²)} are discussed. Expressions for the kurtosis and the asymmetry measures for any stationary FIEGARCH(p,d,q) process are also derived. The h-step-ahead forecasts for the processes {X_t}, {X_t²} and {ln(σ_t²)} are given with their respective mean square errors of forecast. The work also presents a Monte Carlo simulation study showing how to generate, estimate and forecast based on six different FIEGARCH models. The forecasting performance of six models belonging to the class of autoregressive conditional heteroskedastic models (namely, ARCH-type models) and radial basis models is compared through an empirical application to a Brazilian stock market exchange index.
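
    As a hedged illustration of this model class, the sketch below simulates the short-memory special case d = 0, i.e. an EGARCH(1,1), where the log-variance recursion can be written explicitly; the coefficients are illustrative, and a full FIEGARCH simulation would replace the AR(1) log-variance with a truncated ARFIMA filter.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1000
        omega, beta, alpha, gamma = -0.1, 0.95, 0.2, -0.1   # illustrative coefficients

        Ez = np.sqrt(2 / np.pi)          # E|Z| for standard normal innovations
        lnsig2 = np.empty(n)
        x = np.empty(n)
        lnsig2[0] = omega / (1 - beta)   # start at the unconditional mean

        for t in range(n):
            z = rng.standard_normal()
            x[t] = np.exp(0.5 * lnsig2[t]) * z
            if t + 1 < n:
                # g(z) = alpha*(|z| - E|z|) + gamma*z  (magnitude + sign/leverage terms)
                lnsig2[t + 1] = omega + beta * lnsig2[t] + alpha * (abs(z) - Ez) + gamma * z

        print("sample kurtosis:", ((x - x.mean()) ** 4).mean() / x.var() ** 2)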

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, has been enabled by the control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements of flexible CIGS PV module performance and efficiency.

  15. Comparative lifecycle assessment of alternatives for waste management in Rio de Janeiro - Investigating the influence of an attributional or consequential approach.

    PubMed

    Bernstad Saraiva, A; Souza, R G; Valle, R A B

    2017-10-01

    The environmental impacts from three management alternatives for the organic fraction of municipal solid waste were compared using lifecycle assessment methodology. The alternatives (sanitary landfill, selective collection of organic waste for anaerobic digestion, and anaerobic digestion after post-separation of organic waste) were modelled applying an attributional as well as a consequential approach in parallel, with the aim of identifying if and how these approaches affect results and conclusions. The marginal processes identified in the consequential modelling were in general associated with higher environmental impacts than the average processes modelled with the attributional approach. As all investigated waste management alternatives result in net substitution of energy and in some cases also materials, the consequential modelling resulted in lower absolute environmental impacts in five of the seven environmental impact categories assessed in the study. In three of these, the chosen modelling approach can alter the hierarchy between the compared waste management alternatives. This indicates a risk of underestimating potential benefits from efficient energy recovery from waste when applying attributional modelling in contexts in which electricity provision has historically been dominated by technologies presenting rather low environmental impacts, but where projections point at increasing impacts from electricity provision in coming years. Thus, in the present case study, the chosen approach affects both absolute and relative results from the comparison. However, results were largely related to the processes identified as affected by the investigated changes, and not merely to the chosen modelling approach. The processes actually affected by future choices between different waste management alternatives are intrinsically uncertain. The study demonstrates the benefit of applying different assumptions regarding the processes affected by the investigated choices - both for the provision of energy and for the materials substituted by waste management processes - in consequential LCA modelling, in order to present outcomes that are relevant as decision support within the waste management sector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Mathematical model of silicon smelting process basing on pelletized charge from technogenic raw materials

    NASA Astrophysics Data System (ADS)

    Nemchinova, N. V.; Tyutrin, A. A.; Salov, V. M.

    2018-03-01

    The silicon production process in electric arc reduction furnaces (EAF) is studied on the basis of a mathematical model, using pelletized charge as an additive to the standard charge. The results obtained from the model will contribute to the analysis of the behavior of the charge components during melting and the achievement of optimal final parameters of the silicon production process. The authors propose using technogenic waste as a raw material for silicon production in pelletized form, with liquid glass and aluminum production dust from the electrostatic precipitators as binders. The method of mathematical modeling with the help of the ‘Selector’ software package was used as the basis for the theoretical study. The model imitates four furnace temperature zones and a crystalline silicon phase (25 °C). The main advantage of the created model is the ability to analyze the behavior of all burden materials (including pelletized charge) in the carbothermic process. The behavior analysis is based on thermodynamic data for the probability of interactions between the burden materials in the carbothermic process. The model accounts for 17 elements entering the furnace with raw materials, electrodes and air. The silicon melt obtained by the modeling contained 91.73 wt.% of the target product. The simulation results showed that with the proposed combined charge the recovery of silicon reached 69.248 %, which is in good agreement with practical data. The modeled chemical composition of the crystalline silicon was compared with chemical analysis data for real silicon samples, showing good convergence. The efficiency of mathematical modeling methods in studying the carbothermal silicon production process, with its complex interphase transformations and the formation of numerous intermediate compounds, using a pelletized charge as an additive to the traditional one, is thus demonstrated.

  17. Shared Processing of Language and Music.

    PubMed

    Atherton, Ryan P; Chrobak, Quin M; Rauscher, Frances H; Karst, Aaron T; Hanson, Matt D; Steinert, Steven W; Bowe, Kyra L

    2018-01-01

    The present study sought to explore whether musical information is processed by the phonological loop component of the working memory model of immediate memory. Original instantiations of this model primarily focused on the processing of linguistic information. However, the model was less clear about how acoustic information lacking phonological qualities is actively processed. Although previous research has generally supported shared processing of phonological and musical information, these studies were limited as a result of a number of methodological concerns (e.g., the use of simple tones as musical stimuli). In order to further investigate this issue, an auditory interference task was employed. Specifically, participants heard an initial stimulus (musical or linguistic) followed by an intervening stimulus (musical, linguistic, or silence) and were then asked to indicate whether a final test stimulus was the same as or different from the initial stimulus. Results indicated that mismatched interference conditions (i.e., musical - linguistic; linguistic - musical) resulted in greater interference than silence conditions, with matched interference conditions producing the greatest interference. Overall, these results suggest that processing of linguistic and musical information draws on at least some of the same cognitive resources.

  18. A functional–structural model of rice linking quantitative genetic information with morphological development and physiological processes

    PubMed Central

    Xu, Lifeng; Henke, Michael; Zhu, Jun; Kurth, Winfried; Buck-Sorlin, Gerhard

    2011-01-01

    Background and Aims Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype–phenotype model, we present here a three-dimensional functional–structural plant model (FSPM) of rice, in which some model parameters are controlled by functions describing the effect of main-effect and epistatic QTLs. Methods The model simulates the growth and development of rice based on selected ecophysiological processes, such as photosynthesis (source process) and organ formation, growth and extension (sink processes). It was devised using GroIMP, an interactive modelling platform based on the Relational Growth Grammar formalism (RGG). RGG rules describe the course of organ initiation and extension resulting in final morphology. The link between the phenotype (as represented by the simulated rice plant) and the QTL genotype was implemented via a data interface between the rice FSPM and the QTLNetwork software, which computes predictions of QTLs from map data and measured trait data. Key Results Using plant height and grain yield, it is shown how QTL information for a given trait can be used in an FSPM, computing and visualizing the phenotypes of different lines of a mapping population. Furthermore, we demonstrate how modification of a particular trait feeds back on the entire plant phenotype via the physiological processes considered. Conclusions We linked a rice FSPM to a quantitative genetic model, thereby employing QTL information to refine model parameters and visualizing the dynamics of development of the entire phenotype as a result of ecophysiological processes, including the trait(s) for which genetic information is available. Possibilities for further extension of the model, for example for the purposes of ideotype breeding, are discussed. PMID:21247905

  19. Using Dispersed Modes During Model Correlation

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.; Hathcock, Megan L.

    2017-01-01

    The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and many computational resources. If the analyst only has weeks, or even days, of time in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease the overall cost of the process. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using a commercial model correlation software. In the three examples shown in this paper, this dispersion based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.
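
    The mode-shape error metric in correlation work of this kind is typically the Modal Assurance Criterion (MAC); the paper does not spell out its metrics, so the sketch below scores each dispersed model against the test modes under that assumption, with an equally weighted composite error as a further illustrative choice.

        import numpy as np

        def mac(phi_a, phi_b):
            """Modal Assurance Criterion between two mode-shape vectors (0..1)."""
            num = np.abs(phi_a @ phi_b) ** 2
            return num / ((phi_a @ phi_a) * (phi_b @ phi_b))

        def score_dispersion(freqs_model, modes_model, freqs_test, modes_test):
            """Composite error: mean frequency error plus mean (1 - MAC).

            The weighting and the one-to-one pairing of modes are
            illustrative assumptions.
            """
            f_err = np.abs(freqs_model - freqs_test) / freqs_test
            m_err = np.array([1.0 - mac(a, b)
                              for a, b in zip(modes_model.T, modes_test.T)])
            return f_err.mean() + m_err.mean()

        # Pick the best of thousands of pre-test dispersions, e.g.:
        # best = min(dispersions, key=lambda d: score_dispersion(d.f, d.phi, f_test, phi_test))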

  20. Measuring health care process quality with software quality measures.

    PubMed

    Yildiz, Ozkan; Demirörs, Onur

    2012-01-01

    Existing quality models focus on specific diseases, clinics or clinical areas. Although they contain structure, process, or output type measures, there is no model which measures the quality of health care processes comprehensively. In addition, because overall process quality is not measured, hospitals cannot compare the quality of their processes internally and externally. To address these problems, a new model is developed from software quality measures. We have adopted the ISO/IEC 9126 software quality standard for health care processes. Then, JCIAS (Joint Commission International Accreditation Standards for Hospitals) measurable elements were added to the model scope to unify functional requirements. Measurement results for the assessment (diagnosing) process are provided in this paper. After the application, it was concluded that the model determines weak and strong aspects of the processes, gives a more detailed picture of process quality, and provides quantifiable information that hospitals can use to compare their processes with those of multiple organizations.

  1. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

    Traditional models of evaporation process parameters suffer from large prediction errors, because the parameters are continuous and their errors accumulate over time. To address this, an adaptive particle swarm process neural network forecasting method is proposed, combined with an autoregressive moving average (ARMA) error-correction procedure that compensates the predictions of the neural network to improve prediction accuracy. Validation against production data from an alumina plant evaporation process, and comparison with the traditional model, shows that the new model greatly improves prediction accuracy and can be used to predict the dynamic composition of sodium aluminate solution during evaporation.
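
    A small sketch of the error-compensation idea under stated assumptions (the paper's exact architecture is not reproduced): fit any base predictor, model its residual series with an ARMA process, and add the ARMA forecast back onto the base prediction. It uses statsmodels' ARIMA with d = 0 as the ARMA fitter; the data, the linear stand-in for the neural network, and the (2,1) order are illustrative.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(7)
        t = np.arange(300)
        y = 0.01 * t + np.sin(t / 10.0) + 0.2 * rng.standard_normal(300)  # hypothetical process variable

        # Base predictor: a linear trend standing in for the neural network.
        coef = np.polyfit(t[:250], y[:250], 1)
        base = np.polyval(coef, t)

        # ARMA(2,1) fitted to the training residuals; order is an illustrative choice.
        resid = y[:250] - base[:250]
        arma = ARIMA(resid, order=(2, 0, 1)).fit()

        # Compensated forecast for the 50 held-out steps.
        y_hat = base[250:] + arma.forecast(steps=50)
        print("RMSE base:", np.sqrt(np.mean((y[250:] - base[250:]) ** 2)))
        print("RMSE compensated:", np.sqrt(np.mean((y[250:] - y_hat) ** 2)))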

  2. High-speed blanking of copper alloy sheets: Material modeling and simulation

    NASA Astrophysics Data System (ADS)

    Husson, Ch.; Ahzi, S.; Daridon, L.

    2006-08-01

    To optimize the blanking process of thin copper sheets (≈1 mm thickness), it is necessary to study the influence of process parameters such as the punch-die clearance and the wear of the punch and the die. For high stroke rates, the strain rate developed in the work-piece can be very high; therefore, the material modeling must include dynamic effects. For the modeling part, we propose an elastic-viscoplastic material model combined with a non-linear isotropic damage evolution law based on the theory of continuum damage mechanics. Our proposed modeling is valid for a wide range of strain rates and temperatures. Finite element simulations of the blanking process, using the commercial code ABAQUS/Explicit, are then conducted and the results are compared to the experimental investigations. The predicted cut edge of the blanked part and the punch force-displacement curves are discussed as functions of the process parameters. The evolution of the shape errors (roll-over depth, fracture depth, shearing depth, and burr formation) as a function of the punch-die clearance, the punch and die wear, and the punch/die/blank-holder contact is presented. A discussion of the different stages of the blanking process as a function of the processing parameters is given. The predicted dependence of the blanking process on strain rate and temperature using our modeling is presented (for the plasticity and the damage). Comparison of our model results with the experimental ones shows good agreement.

  3. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied for two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach has a better performance over the single model for the Guadalupe catchment, where multiple dominant processes are witnessed through diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture the response.

  4. Models of recognition: A review of arguments in favor of a dual-process account

    PubMed Central

    DIANA, RACHEL A.; REDER, LYNNE M.; ARNDT, JASON; PARK, HEEKYEONG

    2008-01-01

    The majority of computationally specified models of recognition memory have been based on a single-process interpretation, claiming that familiarity is the only influence on recognition. There is increasing evidence that recognition is, in fact, based on two processes: recollection and familiarity. This article reviews the current state of the evidence for dual-process models, including the usefulness of the remember/know paradigm, and interprets the relevant results in terms of the source of activation confusion (SAC) model of memory. We argue that the evidence from each of the areas we discuss, when combined, presents a strong case that inclusion of a recollection process is necessary. Given this conclusion, we also argue that the dual-process claim that the recollection process is always available is, in fact, more parsimonious than the single-process claim that the recollection process is used only in certain paradigms. The value of a well-specified process model such as the SAC model is discussed with regard to other types of dual-process models. PMID:16724763

  5. Successes and Challenges in Linking Observations and Modeling of Marine and Terrestrial Cryospheric Processes

    NASA Astrophysics Data System (ADS)

    Herzfeld, U. C.; Hunke, E. C.; Trantow, T.; Greve, R.; McDonald, B.; Wallin, B.

    2014-12-01

    Understanding of the state of the cryosphere and its relationship to other components of the Earth system requires both models of geophysical processes and observations of geophysical properties and processes; however, linking observations and models is far from trivial. This paper looks at examples from sea ice and land ice model-observation linkages to examine some approaches, challenges and solutions. In a sea-ice example, ice deformation is analyzed as a key process that indicates fundamental changes in the Arctic sea ice cover. Simulation results from the Los Alamos Sea-Ice Model CICE, which is also the sea-ice component of the Community Earth System Model (CESM), are compared to parameters indicative of deformation as derived from mathematical analysis of remote sensing data. Data include altimeter, micro-ASAR and image data from manned and unmanned aircraft campaigns (NASA OIB and the Characterization of Arctic Sea Ice Experiment, CASIE). The key problem in linking data and model results is the derivation of matching parameters on both the model and observation sides. For terrestrial glaciology, we include an example of a surge process in a glacier system and an example of a dynamic ice sheet model for Greenland. To investigate the surge of the Bering Bagley Glacier System, we use numerical forward modeling experiments and, on the data analysis side, a connectionist approach to analyze crevasse provinces. In the Greenland ice sheet example, we look at the influence of ice surface and bed topography, as derived from remote sensing data, on results from a dynamic ice sheet model.

  6. Diffusion models of the flanker task: Discrete versus gradual attentional selection

    PubMed Central

    White, Corey N.; Ratcliff, Roger; Starns, Jeffrey S.

    2011-01-01

    The present study tested diffusion models of processing in the flanker task, in which participants identify a target that is flanked by items that indicate the same (congruent) or opposite response (incongruent). Single- and dual-process flanker models were implemented in a diffusion-model framework and tested against data from experiments that manipulated response bias, speed/accuracy tradeoffs, attentional focus, and stimulus configuration. There was strong mimicry among the models, and each captured the main trends in the data for the standard conditions. However, when more complex conditions were used, a single-process spotlight model captured qualitative and quantitative patterns that the dual-process models could not. Since the single-process model provided the best balance of fit quality and parsimony, the results indicate that processing in the simple versions of the flanker task is better described by gradual rather than discrete narrowing of attention. PMID:21964663
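
    A compact sketch of the single-process spotlight idea under illustrative parameters (not the authors' fitted model): evidence accumulates in a diffusion process whose drift starts contaminated by the flankers and converges to the target-only drift as attention narrows gradually over time.

        import numpy as np

        def spotlight_trial(congruent, rng, a=0.1, z=0.05, dt=0.001, s=0.1,
                            v_target=0.3, v_flank=0.3, tau=0.15):
            """One flanker-task trial; returns (correct_response, RT in seconds).

            The attention weight on the flankers decays as exp(-t/tau):
            gradual, not discrete, narrowing. All values are illustrative.
            """
            sign = 1.0 if congruent else -1.0
            x, t = z, 0.0
            while 0.0 < x < a:
                w = np.exp(-t / tau)                       # flanker weight shrinks with time
                v = (1 - w) * v_target + w * sign * v_flank
                x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return (x >= a), t

        rng = np.random.default_rng(3)
        for cong in (True, False):
            trials = [spotlight_trial(cong, rng) for _ in range(500)]
            acc = np.mean([r for r, _ in trials])
            rt = np.mean([t for _, t in trials])
            print(f"congruent={cong}: acc={acc:.2f}, mean RT={rt:.3f}s")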

  7. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
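
    The gamma-Poisson marginalization at the heart of this construction can be checked numerically; a minimal sketch with illustrative parameters, showing that Poisson counts with gamma-distributed rates match draws from a negative binomial:

        import numpy as np

        rng = np.random.default_rng(0)
        r, theta = 3.0, 2.0          # gamma shape and scale (illustrative)
        n = 200_000

        # Hierarchical draw: lambda ~ Gamma(r, theta), k | lambda ~ Poisson(lambda).
        lam = rng.gamma(shape=r, scale=theta, size=n)
        k_hier = rng.poisson(lam)

        # Direct NB draw: numpy's negative_binomial(n, p) uses success prob p = 1/(1+theta).
        k_nb = rng.negative_binomial(r, 1.0 / (1.0 + theta), size=n)

        print("hierarchical mean/var:", k_hier.mean(), k_hier.var())
        print("negative binomial mean/var:", k_nb.mean(), k_nb.var())  # both ~ (6, 18)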

  8. Pedestrians' intention to jaywalk: Automatic or planned? A study based on a dual-process model in China.

    PubMed

    Xu, Yaoshan; Li, Yongjuan; Zhang, Feng

    2013-01-01

    The present study investigates the determining factors of Chinese pedestrians' intention to violate traffic laws using a dual-process model. This model divides the cognitive processes of intention formation into controlled analytical processes and automatic associative processes. Specifically, the process explained by the augmented theory of planned behavior (TPB) is controlled, whereas the process based on past behavior is automatic. The results of a survey conducted on 323 adult pedestrian respondents showed that the two added TPB variables had different effects on the intention to violate, i.e., personal norms were significantly related to traffic violation intention, whereas descriptive norms were non-significant predictors. Past behavior significantly but uniquely predicted the intention to violate: the results of the relative weight analysis indicated that the largest percentage of variance in pedestrians' intention to violate was explained by past behavior (42%). According to the dual-process model, therefore, pedestrians' intention formation relies more on habit than on cognitive TPB components and social norms. The implications of these findings for the development of intervention programs are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Modeling and characterization of supercapacitors for wireless sensor network applications

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Yang, Hengzhao

    A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance to characterize the self-discharge process. The parameter values of a supercapacitor can be determined by a charging-redistribution experiment and a self-discharge experiment. The modeling and characterization procedures are illustrated using a 22F supercapacitor. The accuracy of the model is compared with that of other models often used in power electronics applications. The results show that the proposed model has better accuracy in characterizing the self-discharge process while maintaining similar performance as other models during charging and redistribution processes. Additionally, the proposed model is evaluated in a simplified energy storage system for self-powered wireless sensors. The model performance is compared with that of a commonly used energy recursive equation (ERE) model. The results demonstrate that the proposed model can predict the evolution profile of voltage across the supercapacitor more accurately than the ERE model, and therefore provides a better alternative for supporting research on storage system design and power management for wireless sensor networks.
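
    A bare-bones sketch of a model in this family under assumed parameter values (the paper's fitted values for the 22 F cell are not reproduced): two parallel RC branches with different time constants plus a leakage resistance, integrated with SciPy to show charging followed by redistribution and self-discharge.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative parameters, not the paper's fitted values.
        R1, C1 = 0.05, 18.0      # fast branch: ohms, farads
        R2, C2 = 50.0, 4.0       # slow (redistribution) branch
        Rleak = 5e3              # leakage resistance for self-discharge

        def dvdt(t, v, v_source, R_source):
            v1, v2 = v
            i_src = (v_source(t) - v1) / (R_source + R1)   # charging current into fast branch
            i_12 = (v1 - v2) / R2                          # redistribution current
            i_lk = v1 / Rleak                              # leakage from fast branch
            return [(i_src - i_12 - i_lk) / C1, i_12 / C2]

        # Charge at 2.5 V for 60 s through 0.1 ohm, then open-circuit for 10 min.
        charge = solve_ivp(dvdt, (0, 60), [0.0, 0.0],
                           args=(lambda t: 2.5, 0.1), max_step=0.1)
        rest = solve_ivp(dvdt, (60, 660), charge.y[:, -1],
                         args=(lambda t: 0.0, 1e12),  # huge source resistance ~ open circuit
                         max_step=1.0)
        print("terminal voltage after rest:", rest.y[0, -1])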

  10. Identifying Hydrologic Processes in Agricultural Watersheds Using Precipitation-Runoff Models

    USGS Publications Warehouse

    Linard, Joshua I.; Wolock, David M.; Webb, Richard M.T.; Wieczorek, Michael

    2009-01-01

    Understanding the fate and transport of agricultural chemicals applied to agricultural fields will assist in designing the most effective strategies to prevent water-quality impairments. At a watershed scale, the processes controlling the fate and transport of agricultural chemicals are generally understood only conceptually. To examine the applicability of conceptual models to the processes actually occurring, two precipitation-runoff models - the Soil and Water Assessment Tool (SWAT) and the Water, Energy, and Biogeochemical Model (WEBMOD) - were applied in different agricultural settings of the contiguous United States. Each model, through different physical processes, simulated the transport of water to a stream from the surface, the unsaturated zone, and the saturated zone. Models were calibrated for watersheds in Maryland, Indiana, and Nebraska. The calibrated sets of input parameters for each model at each watershed are discussed, and the criteria used to validate the models are explained. The SWAT and WEBMOD model results at each watershed conformed to each other and to the processes identified in each watershed's conceptual hydrology. In Maryland the conceptual understanding of the hydrology indicated groundwater flow was the largest annual source of streamflow; the simulation results for the validation period confirm this. The dominant source of water to the Indiana watershed was thought to be tile drains. Although tile drains were not explicitly simulated in the SWAT model, a large component of streamflow was received from lateral flow, which could be attributed to tile drains. Being able to explicitly account for tile drains, WEBMOD indicated water from tile drains constituted most of the annual streamflow in the Indiana watershed. The Nebraska models indicated annual streamflow was composed primarily of perennial groundwater flow and infiltration-excess runoff, which conformed to the conceptual hydrology developed for that watershed. The hydrologic processes represented in the parameter sets resulting from each model were comparable at individual watersheds, but varied between watersheds. The models were unable to show, however, whether hydrologic processes other than those included in the original conceptual models were major contributors to streamflow. Supplemental simulations of agricultural chemical transport could improve the ability to assess conceptual models.

  11. Verification of ARMA identification for modelling temporal correlation of GPS observations using the toolbox ARMASA

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoguang; Mayer, Michael; Heck, Bernhard

    2010-05-01

    One essential deficiency of the stochastic model used in many GNSS (Global Navigation Satellite Systems) software products consists in neglecting the temporal correlation of GNSS observations. Analysing appropriately detrended time series of observation residuals resulting from GPS (Global Positioning System) data processing, the temporal correlation behaviour of GPS observations can be sufficiently described by means of so-called autoregressive moving average (ARMA) processes. Using the toolbox ARMASA, which is available free of charge in MATLAB® Central (the open exchange platform for the MATLAB® and SIMULINK® user community), a well-fitting time series model can be identified automatically in three steps. Firstly, AR, MA, and ARMA models are computed up to some user-specified maximum order. Subsequently, for each model type, the best-fitting model is selected using the combined information criterion (for AR processes) or the generalised information criterion (for MA and ARMA processes). The final model identification among the best-fitting AR, MA, and ARMA models is performed based on the minimum prediction error characterising the discrepancies between the given data and the fitted model. The ARMA coefficients are computed using Burg's maximum entropy algorithm (for AR processes) and Durbin's first (for MA processes) and second (for ARMA processes) methods, respectively. This paper verifies the performance of automated ARMA identification using the toolbox ARMASA. For this purpose, a representative data base is generated by means of ARMA simulation with respect to sample size, correlation level, and model complexity. The model error, defined as a transform of the prediction error, is used as the measure for the deviation between the true and the estimated model. The results of the study show that the recognition rates of the underlying true processes increase with increasing sample size and decrease with rising model complexity. For large sample sizes, the true underlying processes can be correctly recognised for nearly 80% of the analysed data sets. Additionally, the model errors of first-order AR and MA processes converge clearly more rapidly to the corresponding asymptotic values than those of high-order ARMA processes.
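
    There is no Python port of ARMASA to vouch for, but the same selection workflow can be sketched with statsmodels: fit candidate AR/MA/ARMA orders to a residual series and keep the order that minimizes an information criterion (AIC here, standing in for ARMASA's combined/generalised criteria). The simulated series and maximum orders are illustrative.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.arima_process import arma_generate_sample

        rng = np.random.default_rng(11)
        # Simulated "GPS residual" series: AR(2) ground truth, illustrative coefficients.
        x = arma_generate_sample(ar=[1, -0.6, 0.2], ma=[1], nsample=1000,
                                 distrvs=rng.standard_normal)

        best = None
        for p in range(4):                 # candidate AR orders
            for q in range(4):             # candidate MA orders
                if p == q == 0:
                    continue
                res = ARIMA(x, order=(p, 0, q)).fit()
                if best is None or res.aic < best[0]:
                    best = (res.aic, p, q)

        print("selected ARMA order (p, q):", best[1:], "AIC:", round(best[0], 1))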

  12. Assessment of effectiveness of geologic isolation systems. Geologic-simulation model for a hypothetical site in the Columbia Plateau. Volume 2: results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, M.G.; Petrie, G.M.; Baldwin, A.J.

    1982-06-01

    This report contains the input data and computer results for the Geologic Simulation Model. This model is described in detail in the following report: Petrie, G.M., et al. 1981. Geologic Simulation Model for a Hypothetical Site in the Columbia Plateau, Pacific Northwest Laboratory, Richland, Washington. The Geologic Simulation Model is a quasi-deterministic process-response model which simulates, for a million years into the future, the development of the geologic and hydrologic systems of the ground-water basin containing the Pasco Basin. Effects of natural processes on the ground-water hydrologic system are modeled principally by rate equations. The combined effects and synergistic interactions of different processes are approximated by linear superposition of their effects during discrete time intervals in a stepwise-integration approach.

  13. Representative Model of the Learning Process in Virtual Spaces Supported by ICT

    ERIC Educational Resources Information Center

    Capacho, José

    2014-01-01

    This paper shows the results of research activities for building a representative model of the learning process in virtual spaces (e-Learning). The formal basis of the model is supported by the analysis of models of learning assessment in virtual spaces, and specifically by Dembo's teaching-learning model, the systemic approach to evaluating…

  14. Composite faces are not (necessarily) processed coactively: A test using systems factorial technology and logical-rule models.

    PubMed

    Cheng, Xue Jun; McCarthy, Callum J; Wang, Tony S L; Palmeri, Thomas J; Little, Daniel R

    2018-06-01

    Upright faces are thought to be processed more holistically than inverted faces. In the widely used composite face paradigm, holistic processing is inferred from interference in recognition performance from a to-be-ignored face half for upright and aligned faces compared with inverted or misaligned faces. We sought to characterize the nature of holistic processing in composite faces in computational terms. We use logical-rule models (Fifić, Little, & Nosofsky, 2010) and Systems Factorial Technology (Townsend & Nozawa, 1995) to examine whether composite faces are processed through pooling top and bottom face halves into a single processing channel-coactive processing-which is one common mechanistic definition of holistic processing. By specifically operationalizing holistic processing as the pooling of features into a single decision process in our task, we are able to distinguish it from other processing models that may underlie composite face processing. For instance, a failure of selective attention might result even when top and bottom components of composite faces are processed in serial or in parallel without processing the entire face coactively. Our results show that performance is best explained by a mixture of serial and parallel processing architectures across all 4 upright and inverted, aligned and misaligned face conditions. The results indicate multichannel, featural processing of composite faces in a manner inconsistent with the notion of coactivity. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Process based modeling of total longshore sediment transport

    USGS Publications Warehouse

    Haas, K.A.; Hanes, D.M.

    2004-01-01

    Waves, currents, and longshore sand transport are calculated locally as a function of position in the nearshore region using process based numerical models. The resultant longshore sand transport is then integrated across the nearshore to provide predictions of the total longshore transport of sand due to waves and longshore currents. Model results are in close agreement with the I_l-P_l correlation described by Komar and Inman (1970) and the CERC (1984) formula. Model results also indicate that the proportionality constant in the I_l-P_l formula depends weakly upon the sediment size, the shape of the beach profile, and the particular local sediment flux formula that is employed. Model results indicate that the various effects and influences of sediment size tend to cancel out, resulting in little overall dependence on sediment size.
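
    For reference, the CERC-type relation the models are compared against can be written I_l = K * P_l, with P_l the longshore component of wave energy flux at breaking; the sketch below evaluates it under shallow-water assumptions, and the coefficient K and the wave inputs are illustrative.

        import numpy as np

        def longshore_transport(Hb, alpha_b_deg, K=0.39, rho=1025.0, g=9.81, gamma=0.78):
            """Immersed-weight longshore transport rate I_l = K * P_l (CERC form).

            Hb          : breaking wave height (m)
            alpha_b_deg : breaker angle relative to the shoreline (degrees)
            K           : dimensionless coefficient; 0.39 (for significant wave
                          height) is a common choice, treated here as an assumption.
            """
            alpha = np.radians(alpha_b_deg)
            hb = Hb / gamma                          # breaking depth from breaker index
            cg = np.sqrt(g * hb)                     # shallow-water group velocity
            E = rho * g * Hb ** 2 / 8.0              # wave energy density
            Pl = E * cg * np.sin(alpha) * np.cos(alpha)
            return K * Pl                            # N/s; convert by (rho_s - rho)*g*(1-n) for m^3/s

        print(longshore_transport(Hb=1.5, alpha_b_deg=10.0))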

  16. Expanding the Political Philosophy Dimension of the RISP Model: Examining the Conditional Indirect Effects of Cultural Cognition.

    PubMed

    Hmielowski, Jay D; Wang, Meredith Y; Donaway, Rebecca R

    2018-04-25

    This article attempts to connect literatures from the Risk Information Seeking and Processing (RISP) model and cultural cognition theory. We do this by assessing the relationship between the two prominent cultural cognition variables (i.e., group and grid) and risk perceptions. We then examine whether these risk perceptions are associated with three outcomes important to the RISP model: information seeking, systematic processing, and heuristic processing, through a serial mediation model. We used 2015 data collected from 10 communities across the United States to test our hypotheses. Our results show that people high on group and low on grid (egalitarian communitarians) show greater risk perceptions regarding water quality issues. Moreover, these higher levels of perceived risk translate into increased information seeking, systematic processing of information, and lower heuristic processing through intervening variables from the RISP model (e.g., negative emotions and information insufficiency). These results extend the extant literature by expanding on the treatment of political ideology within the RISP model literature and taking a more nuanced approach to political beliefs in accordance with the cultural cognitions literature. Our article also expands on the RISP literature by looking at information-processing variables. © 2018 Society for Risk Analysis.

  17. Modeling of thermal processes arising during shaping gears with internal non-involute teeth

    NASA Astrophysics Data System (ADS)

    Kanatnikov, N. V.; Kanatnikova, P. A.; Vlasov, V. V.; Pashmentova, A. S.

    2018-03-01

    The paper presents a model for predicting the thermal processes arising during the shaping of gears with internal non-involute teeth. The kinematics of cutting is described by an analytical model, while chip formation is modeled using the finite element method. The experimental validation is based on infrared imaging of the cutting process. The simulation results showed that the maximum temperatures and heat flows in the tool vary by more than 10% when the rake and clearance angles of the cutting tool are changed.

  18. Modelling coupled microbial processes in the subsurface: Model development, verification, evaluation and application

    NASA Astrophysics Data System (ADS)

    Masum, Shakil A.; Thomas, Hywel R.

    2018-06-01

    To study subsurface microbial processes, a coupled model developed within a Thermal-Hydraulic-Chemical-Mechanical (THCM) framework is presented. The work presented here focuses on microbial transport, growth and decay mechanisms under the influence of multiphase flow and bio-geochemical reactions. In this paper, theoretical formulations and numerical implementations of the microbial model are presented. The model has been verified and also evaluated against relevant experimental results. Simulated results show that the microbial processes have been accurately implemented and that their impacts on porous media properties can be predicted qualitatively, quantitatively, or both. The model has been applied to investigate biofilm growth in a sandstone core that is subjected to two-phase flow and variable pH conditions. The results indicate that biofilm growth (if not limited by substrates) in a multiphase system largely depends on the hydraulic properties of the medium. When the change in porewater pH that occurs due to dissolution of carbon dioxide gas is considered, the growth processes are affected: for the given parameter regime, net biofilm growth is favoured by higher pH, whilst the processes are considerably retarded at lower pH values. The capabilities of the model to predict microbial respiration under fully coupled multiphase flow conditions, and microbial fermentation leading to the production of a gas phase, are also demonstrated.

  19. A process proof test for model concepts: Modelling the meso-scale

    NASA Astrophysics Data System (ADS)

    Hellebrand, Hugo; Müller, Christoph; Matgen, Patrick; Fenicia, Fabrizio; Savenije, Huub

    In hydrological modelling the use of detailed soil data is sometimes troublesome, since these data are often hard to obtain and, if available at all, difficult to interpret and process in a way that makes them meaningful for the model at hand. Intuitively, the understanding and mapping of dominant runoff processes in the soil show high potential for improving hydrological models. In this study a labour-intensive methodology to assess dominant runoff processes is simplified in such a way that detailed soil maps are no longer needed. Nonetheless, there is an ongoing debate on how to integrate this type of information in hydrological models. In this study, dominant runoff processes (DRP) are mapped for meso-scale basins using the permeability of the substratum, land use information and the slope in a GIS. During a field campaign the processes are validated and for each DRP assumptions are made concerning its water storage capacity. The latter is done by combining soil data obtained during the field campaign with soil data obtained from the literature. Second, several parsimoniously parameterized conceptual hydrological models that incorporate certain aspects of the DRPs are used. The results of these models are compared with a benchmark model, in which the soil is represented by a single lumped parameter, to test the contribution of the DRPs in hydrological models. The proposed methodology is tested for 15 meso-scale river basins located in Luxembourg. The main goal of this study is to investigate whether integrating dominant runoff processes, which have high information content concerning soil characteristics, into hydrological models allows the improvement of simulation results, with a view to regionalization and predictions in ungauged basins. The regionalization procedure gave no clear results. The calibration procedure and the well-mixed discharge signal of the calibration basins are considered major causes of this, and they made the deconvolution of the discharge signals of meso-scale basins problematic. The results also suggest that DRPs could very well display some sort of uniqueness of place, which was not foreseen in the methods from which they were derived. Furthermore, a strong seasonal influence on model performance was observed, implying a seasonal dependence of the DRPs. When comparing the performance of the DRP models and the benchmark model, no real distinction was found. To improve the performance of the DRP models used in this study, and of conceptual models in general, there is a need for improved identification of the mechanisms that cause the different dominant runoff processes at the meso-scale. To achieve this, more orthogonal data could be of use for a better conceptualization of the DRPs. Model concepts should then be adapted accordingly.

  20. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used to deal with system modeling, where the key idea is to tune the model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.

  1. Comparative evaluation of urban storm water quality models

    NASA Astrophysics Data System (ADS)

    Vaze, J.; Chiew, Francis H. S.

    2003-10-01

    The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.

  2. Numerical and experimental studies on effects of moisture content on combustion characteristics of simulated municipal solid wastes in a fixed bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Rui, E-mail: Sunsr@hit.edu.cn; Ismail, Tamer M., E-mail: temoil@aucegypt.edu; Ren, Xiaohan

    Highlights: • The effects of moisture content on the burning process of MSW are investigated. • A two-dimensional mathematical model was built to simulate the combustion process. • Temperature distributions, process rates, and gas species were measured and simulated. • The conversion ratios of C/CO and N/NO in MSW are inversely related to moisture content. - Abstract: In order to reveal the features of the combustion process in the porous bed of a waste incinerator, a two-dimensional unsteady-state model and an experimental study were employed to investigate the combustion of municipal solid waste (MSW) in a fixed bed reactor. Conservation equations for the waste bed were implemented to describe the incineration process. The gas phase turbulence was modeled using the k–ε turbulence model, and the particle phase was modeled using the kinetic theory of granular flow. The rates of moisture evaporation, devolatilization, and char burnout were calculated according to the waste property characteristics. The simulation results were then compared with experimental data for different moisture contents of MSW, which shows that the incineration process of waste in the fixed bed is reasonably simulated. The simulated solid temperature, gas species and process rates in the bed agree with the experimental data. Due to the high moisture content of the fuel, moisture evaporation consumes a vast amount of heat, and the evaporation takes up most of the combustion time (about 2/3 of the whole combustion process). The bed combustion process slows greatly as MSW moisture content increases. The experimental and simulation results provide direction for the design and optimization of fixed-bed MSW incinerators.

  3. Modeling and analysis of power processing systems: Feasibility investigation and formulation of a methodology

    NASA Technical Reports Server (NTRS)

    Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.

    1974-01-01

    A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.

  4. Quantifying the sensitivity of feedstock properties and process conditions on hydrochar yield, carbon content, and energy content.

    PubMed

    Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D

    2018-08-01

    Hydrothermal carbonization (HTC) is a wet, low temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The importance of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from HTC-related literature. A Sobol analysis was subsequently conducted to identify parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability associated with the random forest models is superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
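
    The workflow described above can be sketched as follows: train a random-forest surrogate on HTC data, then compute Sobol indices on the surrogate. The sketch assumes scikit-learn and SALib (whose sampler API names may differ across versions); the variable names, bounds and synthetic data are placeholders, not the study's actual feedstock properties.

```python
# Random-forest surrogate + Sobol sensitivity analysis (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

rng = np.random.default_rng(2)
# synthetic training set: temperature (C), time (h), solids content (-)
Xtr = rng.uniform([180, 0.5, 0.05], [280, 24.0, 0.30], size=(400, 3))
ytr = (90 - 0.15 * Xtr[:, 0] - 0.4 * Xtr[:, 1]        # hydrochar yield, wt%
       + 20 * Xtr[:, 2] + rng.normal(0, 1.0, 400))
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)

problem = {"num_vars": 3,
           "names": ["temperature", "time", "solids_content"],
           "bounds": [[180, 280], [0.5, 24.0], [0.05, 0.30]]}
X = saltelli.sample(problem, 1024)          # Saltelli design for Sobol indices
Si = sobol.analyze(problem, rf.predict(X))
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:15s} S1={s1:+.2f}  ST={st:+.2f}")
```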

  5. Correcting Inadequate Model Snow Process Descriptions Dramatically Improves Mountain Hydrology Simulations

    NASA Astrophysics Data System (ADS)

    Pomeroy, J. W.; Fang, X.

    2014-12-01

    The vast effort in hydrology devoted to parameter calibration as a means to improve model performance assumes that the models concerned are not fundamentally wrong. By focussing on finding optimal parameter sets and ascribing poor model performance to parameter or data uncertainty, these efforts may fail to consider the need to improve models with more intelligent descriptions of hydrological processes. To test this hypothesis, a flexible physically based hydrological model including a full suite of snow hydrology processes as well as warm season, hillslope and groundwater hydrology was applied to Marmot Creek Research Basin, Canadian Rocky Mountains, where excellent driving meteorology and basin biophysical descriptions exist. Model parameters were set from values found in the basin or from similar environments; no parameters were calibrated. The model was tested against snow surveys and streamflow observations. The model used algorithms that describe snow redistribution, sublimation and forest canopy effects on snowmelt and evaporative processes that are rarely implemented in hydrological models. To investigate the contribution of these processes to model predictive capability, the model was "falsified" by deleting parameterisations for forest canopy snow mass and energy, blowing snow, intercepted rain evaporation, and sublimation. Model falsification by ignoring forest canopy processes contributed to a large increase in SWE errors for forested portions of the research basin, with RMSE increasing from 19 to 55 mm and mean bias (MB) increasing from 0.004 to 0.62. In the alpine tundra portion, removing blowing snow processes resulted in an increase in model SWE MB from 0.04 to 2.55 on north-facing slopes and from -0.006 to -0.48 on south-facing slopes. Eliminating these algorithms degraded streamflow prediction, with the Nash-Sutcliffe efficiency dropping from 0.58 to 0.22 and MB increasing from 0.01 to 0.09. These results show dramatic model improvements from including snow redistribution and melt processes associated with wind transport and forest canopies. As most hydrological models do not currently include these processes, it is suggested that modellers first improve the realism of model structures before trying to optimise what are otherwise inherently inadequate simulations of hydrology.
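
    For reference, a minimal sketch of the evaluation metrics quoted above (RMSE, mean bias, Nash-Sutcliffe efficiency), using their common hydrological definitions; the abstract does not spell these out, so the exact forms (in particular the relative mean bias) are an assumption.

```python
# Common hydrological skill metrics on paired simulated/observed series.
import numpy as np

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def mean_bias(sim, obs):
    return float(np.mean(sim - obs) / np.mean(obs))   # dimensionless (relative) bias

def nse(sim, obs):
    return float(1 - np.sum((sim - obs) ** 2)
                   / np.sum((obs - np.mean(obs)) ** 2))

obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0])   # e.g., SWE in mm (synthetic)
sim = np.array([10.0, 33.0, 50.0, 45.0, 20.0])
print(rmse(sim, obs), mean_bias(sim, obs), nse(sim, obs))
```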

  6. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  7. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    PubMed

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.

  8. Model calibration and validation for OFMSW and sewage sludge co-digestion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esposito, G., E-mail: giovanni.esposito@unicas.it; Frunzo, L., E-mail: luigi.frunzo@unina.it; Panico, A., E-mail: anpanico@unina.it

    2011-12-15

    Highlights: > Disintegration is the limiting step of the anaerobic co-digestion process. > The disintegration kinetic constant does not depend on the waste particle size. > The disintegration kinetic constant depends only on the waste nature and composition. > The model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of this process does not depend on the waste particle size distribution (PSD) and depends only on the nature and composition of the waste particles. The model calibration aimed at assessing the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for every investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure, which can thus be used to assess the treatment efficiency and predict the methane production of full-scale digesters.
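
    The surface-based kinetics argument can be illustrated with a shrinking-sphere sketch: if the mass-loss rate is proportional to exposed surface area, the kinetic constant is a property of the waste itself, independent of particle size. Parameter values below are illustrative assumptions, not the paper's calibrated constants.

```python
# Shrinking-sphere surface-based kinetics: dM/dt = -K_sbk * A implies
# dr/dt = -K_sbk / rho, independent of the current radius r.
import numpy as np

K_sbk = 2.0e-7      # surface-based rate constant, kg m^-2 s^-1 (assumed)
rho = 1000.0        # particle density, kg m^-3
radii0 = np.array([0.5e-3, 1e-3, 2e-3])   # initial particle radii, m
dt, t_end = 60.0, 5 * 86400.0             # 1-min steps over 5 days

r = radii0.copy()
t = 0.0
while t < t_end:
    r = np.maximum(r - (K_sbk / rho) * dt, 0.0)
    t += dt

conversion = 1.0 - (r / radii0) ** 3      # mass fraction disintegrated
print(conversion)   # smaller particles convert faster, with the same K_sbk
```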

  9. CrossTalk. The Journal of Defense Software Engineering. Volume 25, Number 3

    DTIC Science & Technology

    2012-06-01

    (OMG) standard Business Process Modeling and Notation (BPMN) [6] graphical notation. I will address each of these: identify and document steps...to a value stream map using BPMN and textual process narratives. The resulting process narratives or process metadata include key information...objectives. Once the processes are identified we can graphically document them, capturing the process using BPMN (see Figure 1). The BPMN models

  10. NARSTO critical review of photochemical models and modeling

    NASA Astrophysics Data System (ADS)

    Russell, Armistead; Dennis, Robin

    Photochemical air quality models play a central role both in scientific investigation of how pollutants evolve in the atmosphere and in developing policies to manage air quality. In the past 30 years, these models have evolved from rather crude representations of the physics and chemistry impacting trace species to their current state: comprehensive, but not complete. The evolution has included advancements not only in the level of process descriptions, but also in the computational implementation, including numerical methods. As part of the NARSTO Critical Reviews, this article discusses the current strengths and weaknesses of air quality models and the modeling process. Current Eulerian models are found to represent well the primary processes impacting the evolution of trace species in most cases, though some exceptions may exist. For example, sub-grid-scale processes, such as concentrated power plant plumes, are treated only approximately. It is not apparent how much such approximations affect their results and the policies based upon those results. A significant weakness has been in how investigators have addressed, and communicated, such uncertainties. Studies find that major uncertainties are due to model inputs, e.g., emissions and meteorology, more so than the model itself. One of the primary weaknesses identified is in the modeling process, not the models. Evaluation has been limited, in large part due to data constraints: seldom is there ample observational data to conduct a detailed model intercomparison using consistent data (e.g., the same emissions and meteorology). Further model advancement, and the development of greater confidence in the use of models, is hampered by the lack of thorough evaluation and intercomparisons. Model advances are seen in the use of new tools for extending the interpretation of model results (e.g., process and sensitivity analysis), modeling systems that facilitate their use, and extensions of model capabilities (e.g., aerosol dynamics and sub-grid-scale representations). Another possible direction is the development and widespread use of a community model acting as a platform for multiple groups and agencies to collaborate and progress more rapidly.

  11. Probabilistic modeling of discourse-aware sentence processing.

    PubMed

    Dubey, Amit; Keller, Frank; Sturt, Patrick

    2013-07-01

    Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.

  12. Improving evapotranspiration processes in distributed hydrological models using Remote Sensing derived ET products.

    NASA Astrophysics Data System (ADS)

    Abitew, T. A.; van Griensven, A.; Bauwens, W.

    2015-12-01

    Evapotranspiration is the main process in hydrology (on average around 60%), though it has not received as much attention in the evaluation and calibration of hydrological models. In this study, Remote Sensing (RS) derived evapotranspiration (ET) is used to improve the spatially distributed ET processes of a SWAT model application in the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS-derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI and CMRSET models) and from regionally applied Energy Balance Models (for several cloud-free days). The RS-ET data are used in three ways: Method 1) to evaluate spatially distributed evapotranspiration model results; Method 2) to calibrate the evapotranspiration processes in the hydrological model; Method 3) to bias-correct the evapotranspiration in the hydrological model during simulation, after changing the SWAT code. An inter-comparison of the RS-ET products shows that at present there is a significant bias, but at the same time an agreement on the spatial variability of ET. The ensemble mean of the different ET products seems the most realistic estimate and was used further in this study. The results show that: Method 1) the spatially mapped evapotranspiration of hydrological models shows clear differences when compared to RS-derived evapotranspiration (low correlations); in particular, evapotranspiration in forested areas is strongly underestimated compared to other land covers. Method 2) Calibration improves the correlations between the RS and hydrological model results to some extent. Method 3) Bias-corrections are efficient in producing (seasonal or annual) evapotranspiration maps from hydrological models that are very similar to the patterns obtained from RS data. Though the bias-correction is very efficient, it is advised to improve the model results by better representing the ET processes through improved plant/crop computations, improved agricultural management practices, or improved meteorological data.
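
    A minimal sketch of the bias-correction step (Method 3 above), assuming a simple multiplicative monthly correction that matches the model's basin-mean ET to the RS ensemble mean; the arrays are synthetic placeholders for SWAT and RS-ET grids, and the actual in-code SWAT implementation may differ.

```python
# Multiplicative monthly bias correction of modelled ET against an RS target.
import numpy as np

rng = np.random.default_rng(6)
et_model = rng.uniform(40, 120, size=(12, 100))   # monthly ET per subbasin, mm
et_rs = 1.25 * et_model + rng.normal(0, 5, et_model.shape)  # biased RS ensemble

# monthly correction factor from the basin-average ratio, applied cell by cell
factor = et_rs.mean(axis=1, keepdims=True) / et_model.mean(axis=1, keepdims=True)
et_corrected = et_model * factor
print(np.allclose(et_corrected.mean(axis=1), et_rs.mean(axis=1)))  # True
```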

  13. Modeling and Simulation of Quenching and Tempering Process in steels

    NASA Astrophysics Data System (ADS)

    Deng, Xiaohu; Ju, Dongying

    Quenching and tempering (Q&T) is a combined heat treatment process used to achieve maximum toughness and ductility at a specified hardness and strength. It is important to develop a mathematical model of the quenching and tempering process to satisfy mechanical property requirements at low cost. This paper presents a modified model to predict structural evolution and hardness distribution during the quenching and tempering of steels. The model takes into account tempering parameters, carbon content, and isothermal and non-isothermal transformations. Moreover, the precipitation of transition carbides, the decomposition of retained austenite, and the precipitation of cementite can be simulated separately. Hardness distributions of the quenched and tempered workpiece are predicted by an experimental regression equation. To validate the model, it was employed to predict the tempering of 80MnCr5 steel. The predicted precipitation dynamics of transition carbides and cementite are consistent with previous experimental and simulated results from the literature. The model was then implemented within the framework of the developed simulation code COSMAP to simulate microstructure, stress and distortion in the heat-treated component, and applied to simulate the Q&T process of J55 steel. The calculated results show good agreement with the experimental ones, indicating that the model is effective for simulating the Q&T process of steels.
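
    The abstract does not give the regression equation; a common choice for relating tempering temperature and time to hardness is the Hollomon-Jaffe tempering parameter, sketched below with assumed regression coefficients. This is illustrative, not the paper's fitted equation.

```python
# Hollomon-Jaffe tempering parameter HP = T * (C + log10(t)), with a
# hypothetical linear regression of Vickers hardness on HP.
import numpy as np

def hollomon_jaffe(T_celsius, t_hours, C=20.0):
    """Tempering parameter; temperature converted to K, t in hours, C ~ 20."""
    return (T_celsius + 273.15) * (C + np.log10(t_hours))

a, b = 900.0, -0.025          # assumed regression coefficients (illustrative)
for T, t in [(450, 1.0), (550, 1.0), (550, 4.0)]:
    hp = hollomon_jaffe(T, t)
    print(f"T={T} C, t={t} h -> HP={hp:.0f}, HV ~ {a + b * hp:.0f}")
```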

  14. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in the simulation results of process-based ecosystem models may result from uncertainties in the process parameters that describe the model's processes, from model structure inadequacy, and from uncertainties in the observations. Data for developing and testing the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC to simulate crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman convergence criterion and parallel computing techniques, we run multiple Markov chains independently in parallel, creating a random walk to estimate the joint model parameter distribution. From this distribution we limit the parameter space, obtain probabilities of parameter values, and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of the simulation results and compare it with the measurement data.
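
    A compact sketch of the calibration machinery described above: random-walk Metropolis-Hastings chains plus the Gelman-Rubin R-hat convergence statistic. A toy Gaussian likelihood stands in for LandscapeDNDC, and the chains run serially rather than in parallel; both are assumptions for illustration.

```python
# Random-walk Metropolis-Hastings with a Gelman-Rubin convergence check.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(2.0, 0.5, size=30)          # synthetic N2O observations

def log_post(theta):
    return -0.5 * np.sum((obs - theta) ** 2 / 0.25)   # flat prior assumed

def mh_chain(theta0, n=5000, step=0.2):
    chain = np.empty(n)
    theta, lp = theta0, log_post(theta0)
    for i in range(n):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chains = np.array([mh_chain(t0) for t0 in (0.0, 1.0, 3.0, 4.0)])[:, 2500:]
m, n = chains.shape                          # Gelman-Rubin: between- vs within-
W = chains.var(axis=1, ddof=1).mean()        # chain variance comparison
B = n * chains.mean(axis=1).var(ddof=1)
R_hat = np.sqrt(((n - 1) / n * W + B / n) / W)
print(f"posterior mean ~ {chains.mean():.3f}, R-hat = {R_hat:.3f}")
```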

  15. Information Processing and Risk Perception: An Adaptation of the Heuristic-Systematic Model.

    ERIC Educational Resources Information Center

    Trumbo, Craig W.

    2002-01-01

    Describes heuristic-systematic information-processing model and risk perception--the two major conceptual areas of the analysis. Discusses the proposed model, describing the context of the data collections (public health communication involving cancer epidemiology) and providing the results of a set of three replications using the proposed model.…

  16. Systematic Modeling versus the Learning Cycle: Comparative Effects of Integrated Science Process Skill Achievement.

    ERIC Educational Resources Information Center

    Norman, John T.

    1992-01-01

    Reports effectiveness of modeling as teaching strategy on learning science process skills. Teachers of urban sixth through ninth grade students were taught modeling techniques; two sets of teachers served as controls. Results indicate students taught by teachers employing modeling instruction exhibited significantly higher competence in process…

  17. Process Modeling and Dynamic Simulation for EAST Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Lu, Xiaofei; Fu, Peng; Zhuang, Ming; Qiu, Lilong; Hu, Liangbing

    2016-06-01

    In this paper, process modeling and dynamic simulation for the EAST helium refrigerator have been completed. The cryogenic process model is described and the main components are customized in detail. The process model is controlled by the PLC simulator, and real-time communication between the process model and the controllers is achieved by a customized interface. Validation of the process model has been confirmed against EAST experimental data during the cool-down process from 300 K to 80 K. Simulation results indicate that this process simulator reproduces the dynamic behavior of the EAST helium refrigerator very well for the operation of long-pulsed plasma discharge. The cryogenic process simulator based on the control architecture is available for operation optimization and control design of EAST cryogenic systems to cope with the long-pulsed heat loads in the future. Supported by National Natural Science Foundation of China (No. 51306195) and Key Laboratory of Cryogenics, Technical Institute of Physics and Chemistry, CAS (No. CRYO201408)

  18. Process Correlation Analysis Model for Process Improvement Identification

    PubMed Central

    Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses; based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in those process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort and expertise required. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data. PMID:24977170

  19. Process correlation analysis model for process improvement identification.

    PubMed

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses; based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in those process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort and expertise required. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  20. Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters

    NASA Astrophysics Data System (ADS)

    Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan

    2018-01-01

    The vibration response of a component or system can be predicted using the finite element method after ensuring the numerical models represent the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model update, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed that incorporates geometry shape variables by morphing the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Hence, simulations performed using this updated model with an accurate geometry will yield more reliable results.
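
    For reference, the Modal Assurance Criterion used as the updating objective above can be computed as MAC(i, j) = |phi_i^T phi_j|^2 / ((phi_i^T phi_i)(phi_j^T phi_j)); a minimal sketch with synthetic mode shapes follows. The GRSM optimization loop itself is not reproduced.

```python
# MAC matrix between two sets of mode shapes (columns of phi_a and phi_b).
import numpy as np

def mac(phi_a, phi_b):
    """MAC(i, j) for mode shapes stored as columns; values near 1 = good match."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0),
                   np.sum(phi_b * phi_b, axis=0))
    return num / den

rng = np.random.default_rng(4)
phi_test = rng.standard_normal((50, 4))                   # measured shapes (synthetic)
phi_fem = phi_test + 0.1 * rng.standard_normal((50, 4))   # slightly perturbed FE shapes
print(np.round(np.diag(mac(phi_fem, phi_test)), 3))       # diagonal terms near 1.0
```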

  1. Multi-compartmental modeling of SORLA’s influence on amyloidogenic processing in Alzheimer’s disease

    PubMed Central

    2012-01-01

    Background Proteolytic breakdown of the amyloid precursor protein (APP) by secretases is a complex cellular process that results in formation of neurotoxic Aβ peptides, causative of neurodegeneration in Alzheimer’s disease (AD). Processing involves monomeric and dimeric forms of APP that traffic through distinct cellular compartments where the various secretases reside. Amyloidogenic processing is also influenced by modifiers such as sorting receptor-related protein (SORLA), an inhibitor of APP breakdown and major AD risk factor. Results In this study, we developed a multi-compartment model to simulate the complexity of APP processing in neurons and to accurately describe the effects of SORLA on these processes. Based on dose–response data, our study concludes that SORLA specifically impairs processing of APP dimers, the preferred secretase substrate. In addition, SORLA alters the dynamic behavior of β-secretase, the enzyme responsible for the initial step in the amyloidogenic processing cascade. Conclusions Our multi-compartment model represents a major conceptual advance over single-compartment models previously used to simulate APP processing; and it identified APP dimers and β-secretase as the two distinct targets of the inhibitory action of SORLA in Alzheimer’s disease. PMID:22727043

  2. Fracture analysis of a central crack in a long cylindrical superconductor with exponential model

    NASA Astrophysics Data System (ADS)

    Zhao, Yu Feng; Xu, Chi

    2018-05-01

    The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack that is induced by electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) with the dimensionless parameter p and the length of the crack a/R for the zero-field cooling (ZFC) and field-cooling (FC) processes are numerically simulated using the finite element method (FEM) and assuming a persistent current flow. As the applied field Ba decreases, the dependence of p and a/R on the SIFs in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits different characteristics for the trend of the SIFs from the results obtained using the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problem.

  3. Numerical modelling of fluid-rock interactions: Lessons learnt from carbonate rocks diagenesis studies

    NASA Astrophysics Data System (ADS)

    Nader, Fadi; Bachaud, Pierre; Michel, Anthony

    2015-04-01

    Quantitative assessment of fluid-rock interactions and their impact on carbonate host rocks has recently become a very attractive research topic in both academic and industrial realms. Today, a common operational workflow that aims at predicting the relevant diagenetic processes in the host rocks (i.e. fluid-rock interactions) consists of three main stages: i) constructing a conceptual diagenesis model including inferred preferential fluid pathways; ii) quantifying the resulting diagenetic phases (e.g. depositing cements, dissolved and recrystallized minerals); and iii) numerically modelling the diagenetic processes. Most concepts of diagenetic processes operate at the larger, basin scale; however, the description of the diagenetic phases (the products of such processes) and their association with the overall petrophysical evolution of sedimentary rocks remains at the reservoir (and even outcrop/well-core) scale. Conceptual models of diagenetic processes are therefore constructed from studies of surface-exposed rocks and well cores (e.g. petrography, geochemistry, fluid inclusions). The diagenetic products can be quantified with various evolving techniques and at varying scales (e.g. point counting, 2D and 3D image analysis, XRD, micro-CT and pore network models). Geochemical modelling makes use of thermodynamic and kinetic rules as well as databases to simulate chemical reactions and fluid-rock interactions. This can be done through a 0D model, whereby a certain process is tested (e.g. the likelihood that a certain chemical reaction operates under specific conditions). The results relate to the fluids and mineral phases involved in the chemical reactions and can be used as arguments to support or refute proposed outcomes of fluid-rock interactions. Coupling geochemical modelling with transport (reactive transport models; 1D, 2D and 3D) is another possibility, attractive because it provides forward simulations of diagenetic processes and the resulting phases. This contribution is based on several studies of carbonate rock diagenesis in some of the major reservoir rocks of the Middle East and in outcrop analogues in Europe. Here, the main processes at hand are fracture-related dolomitization and carbonate dissolution. We present the workflows we followed and the questions that arose from a series of case studies. The way forward seems evident: the integration of workflows and numerical modelling tools at different scales, bringing better constraints on the boundary data and less uncertainty.

  4. A business process modeling experience in a complex information system re-engineering.

    PubMed

    Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis

    2013-01-01

    This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience.

  5. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and the failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach derives a reasonable result with enhanced inference precision.
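
    A hedged sketch of the basic building block: a linear-drift Wiener degradation process, whose first-hitting-time CDF has the closed (inverse Gaussian) form used below, checked against Monte Carlo paths. The paper's generalized model adds a time transformation, unit-to-unit random effects and measurement error, which this sketch omits.

```python
# Linear Wiener degradation model and its first-hitting-time distribution.
import numpy as np
from scipy.stats import norm

mu, sigma, D = 0.5, 0.3, 5.0        # drift, diffusion, failure threshold (assumed)

def ftd_cdf(t):
    """Closed-form FHT CDF of a linear Wiener process (inverse Gaussian)."""
    t = np.asarray(t, dtype=float)
    return (norm.cdf((mu * t - D) / (sigma * np.sqrt(t)))
            + np.exp(2 * mu * D / sigma**2)
              * norm.cdf(-(mu * t + D) / (sigma * np.sqrt(t))))

# Monte Carlo check by simulating degradation paths
rng = np.random.default_rng(5)
dt, n_steps, n_paths = 0.01, 4000, 1000
inc = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(inc, axis=1)
hit = (paths >= D).argmax(axis=1) * dt     # first crossing time (0 if never hit)
print(f"P(T <= 10): closed form {ftd_cdf(10.0):.3f}, "
      f"simulated {np.mean((hit > 0) & (hit <= 10.0)):.3f}")
```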

  6. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    NASA Astrophysics Data System (ADS)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to construct a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product is confirmed. The results of numerically integrating the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal quenching mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.

  7. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110,000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). The identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
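
    A minimal sketch of a Fourier amplitude sensitivity test in the spirit of the study, using SALib on a toy stand-in for a PRMS modeling unit; the parameter names, bounds and response function are placeholders, not actual PRMS calibration parameters.

```python
# FAST sensitivity analysis of a toy hydrologic response (illustrative only).
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {"num_vars": 3,
           "names": ["snow_melt_coef", "soil_moist_max", "gw_coef"],
           "bounds": [[1.0, 8.0], [25.0, 500.0], [0.01, 0.5]]}

X = fast_sampler.sample(problem, 1000)          # FAST sampling design
# toy output: "mean runoff" responding mostly to the first two parameters
Y = 0.8 * X[:, 0] + 0.01 * X[:, 1] + 0.1 * np.sin(2 * np.pi * X[:, 2])
Si = fast.analyze(problem, Y)
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name:15s} first-order sensitivity = {s1:.2f}")
```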

  8. Superstructure-based Design and Optimization of Batch Biodiesel Production Using Heterogeneous Catalysts

    NASA Astrophysics Data System (ADS)

    Nuh, M. Z.; Nasir, N. F.

    2017-08-01

    Biodiesel is a fuel comprised of mono-alkyl esters of long-chain fatty acids derived from renewable lipid feedstocks, such as vegetable oils and animal fats. Biodiesel production is a complex process which needs systematic design and optimization. However, no case study has used the process system engineering (PSE) elements of superstructure optimization of the batch process, which involves complex problems and uses mixed-integer nonlinear programming (MINLP). PSE offers a solution to complex engineering systems by enabling the use of viable tools and techniques to better manage and comprehend the complexity of the system. This study aims, first, to apply PSE tools to the simulation and optimization of the biodiesel process and to develop mathematical models for components of the plant for cases A, B and C using published kinetic data; second, to determine an economic analysis for biodiesel production, focusing on heterogeneous catalysts; and finally, to develop the superstructure for biodiesel production using a heterogeneous catalyst. The mathematical models are developed from the superstructure, and the resulting mixed-integer nonlinear model is solved and the economic analysis estimated using MATLAB software. The optimization, with the objective function of minimizing the annual production cost of the batch process, gives 23.2587 million USD for case C. Overall, this process system engineering study has optimized the modelling, design and cost estimation; by optimizing the process, it addresses the complex batch production and processing of biodiesel.

  9. Evaluating Adult's Competency: Application of the Competency Assessment Process

    PubMed Central

    Tétreault, Sylvie; Landry, Marie-Pier

    2015-01-01

    Competency assessment of adults with cognitive impairment or mental illness is a complex process that can have significant consequences for their rights. Several models put forth in the scientific literature aim to guide health and social service professionals through this assessment process, but none appears to be complete. A new model, the Competency Assessment Process (CAP), was presented and validated in other studies. This paper adds to this corpus by presenting both the CAP model and the results of a survey given to health and social service professionals on its practical application in their clinical practice. The survey was administered to 35 participants trained in assessing competency following the CAP model. The results show that 40% of participants use the CAP to guide their assessment and that the majority of those who do not yet use it plan to do so in the future. A large majority of participants consider this to be a relevant model and believe that all interdisciplinary teams should use it. These results support the relevance of the CAP model. Further research is planned to continue studying the application of the CAP in healthcare facilities. PMID:26257978

  10. Lebedev acceleration and comparison of different photometric models in the inversion of lightcurves for asteroids

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao

    2018-04-01

    In the lightcurve inversion process, where an asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched for, the numerical calculation of the synthetic photometric brightness based on different shape models is frequently implemented. Lebedev quadrature is an efficient method to numerically calculate surface integrals on the unit sphere. By transforming the surface integral on the Cellinoid shape model to one on the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab codes of the lightcurve inversion process based on the Cellinoid shape model are available on Github for free download. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from radiative transfer theory, the Hapke model can describe light reflectance behavior from the viewpoint of physics, while many empirical models exist for numerical applications. Numerical simulations are implemented to compare the Hapke model with three other numerical models: the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions fit well with synthetic lightcurves generated from the Hapke model; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve numerical efficiency and derive results similar to those of the Hapke model.
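
    SciPy ships no Lebedev node tables, so the sketch below integrates over the unit sphere with a Gauss-Legendre (latitude) x uniform (longitude) product grid; the Lebedev scheme reaches comparable accuracy with far fewer nodes, which is the acceleration the abstract exploits. The integrand here is a placeholder, not the Cellinoid brightness kernel.

```python
# Numerical integration over the unit sphere via a product quadrature grid.
import numpy as np

def sphere_integral(f, n_theta=32, n_phi=64):
    """Integrate f(theta, phi) over the sphere (dA = sin(theta) dtheta dphi)."""
    x, w = np.polynomial.legendre.leggauss(n_theta)   # nodes in x = cos(theta)
    theta = np.arccos(x)
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    W = np.tile(w[:, None], (1, n_phi)) * (2 * np.pi / n_phi)
    # sin(theta) is absorbed by the cos(theta) substitution in the weights
    return np.sum(W * f(T, P))

# check: the integral of 1 over the sphere is 4*pi
print(sphere_integral(lambda t, p: np.ones_like(t)), 4 * np.pi)
```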

  11. Textile composite processing science

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Hammond, Vincent H.; Kranbuehl, David E.; Hasko, Gregory H.

    1993-01-01

    A multi-dimensional model of the Resin Transfer Molding (RTM) process was developed for the prediction of the infiltration behavior of a resin into an anisotropic fiber preform. Frequency dependent electromagnetic sensing (FDEMS) was developed for in-situ monitoring of the RTM process. Flow visualization and mold filling experiments were conducted to verify sensor measurements and model predictions. Test results indicated good agreement between model predictions, sensor readings, and experimental data.

  12. Process-based modeling of silicate mineral weathering responses to increasing atmospheric CO2 and climate change

    NASA Astrophysics Data System (ADS)

    Banwart, Steven A.; Berg, Astrid; Beerling, David J.

    2009-12-01

    A mathematical model describes silicate mineral weathering processes in modern soils located in the boreal coniferous region of northern Europe. The process model results demonstrate a stabilizing biological feedback mechanism between atmospheric CO2 levels and silicate weathering rates as is generally postulated for atmospheric evolution. The process model feedback response agrees within a factor of 2 of that calculated by a weathering feedback function of the type generally employed in global geochemical carbon cycle models of the Earth's Phanerozoic CO2 history. Sensitivity analysis of parameter values in the process model provides insight into the key mechanisms that influence the strength of the biological feedback to weathering. First, the process model accounts for the alkalinity released by weathering, whereby its acceleration stabilizes pH at values that are higher than expected. Although the process model yields faster weathering with increasing temperature, because of activation energy effects on mineral dissolution kinetics at warmer temperature, the mineral dissolution rate laws utilized in the process model also result in lower dissolution rates at higher pH values. Hence, as dissolution rates increase under warmer conditions, more alkalinity is released by the weathering reaction, helping maintain higher pH values thus stabilizing the weathering rate. Second, the process model yields a relatively low sensitivity of soil pH to increasing plant productivity. This is due to more rapid decomposition of dissolved organic carbon (DOC) under warmer conditions. Because DOC fluxes strongly influence the soil water proton balance and pH, this increased decomposition rate dampens the feedback between productivity and weathering. The process model is most sensitive to parameters reflecting soil structure; depth, porosity, and water content. This suggests that the role of biota to influence these characteristics of the weathering profile is as important, if not more important, than the role of biota to influence mineral dissolution rates through changes in soil water chemistry. This process-modeling approach to quantify the biological weathering feedback to atmospheric CO2 demonstrates the potential for a far more mechanistic description of weathering feedback in simulations of the global geochemical carbon cycle.
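
    The competing effects described above can be sketched with a generic silicate dissolution rate law combining an Arrhenius temperature term with a pH (proton activity) term: warming raises the rate, but the alkalinity-driven pH rise damps it. The rate-law form and constants are generic assumptions, not the paper's calibrated process model.

```python
# Generic silicate dissolution rate: Arrhenius temperature dependence times
# a proton-activity term, r = A * exp(-Ea / (R T)) * a_H+^n.
import numpy as np

R = 8.314      # gas constant, J mol^-1 K^-1

def dissolution_rate(T_kelvin, pH, A=1e5, Ea=60e3, n=0.5):
    """Rate in mol m^-2 s^-1 (illustrative magnitudes only)."""
    return A * np.exp(-Ea / (R * T_kelvin)) * (10.0 ** -pH) ** n

r_cool = dissolution_rate(278.0, 5.0)
r_warm = dissolution_rate(288.0, 5.5)   # warmer, but alkalinity has raised pH
print(f"cool/acidic: {r_cool:.3e}, warm/less acidic: {r_warm:.3e}")
```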

  13. Nonlinear ultrasonic pulsed measurements and applications to metal processing and fatigue

    NASA Astrophysics Data System (ADS)

    Yost, William T.; Cantrell, John H.; Na, Jeong K.

    2001-04-01

    Nonlinear ultrasonics research at NASA-Langley Research Center emphasizes development of experimental techniques and modeling, with applications to metal fatigue and metals processing. This review work includes a summary of results from our recent efforts in technique refinement, modeling of fatigue related microstructure contributions, and measurements on fatigued turbine blades. Also presented are data on 17-4PH and 410-Cb stainless steels. The results are in good agreement with the models.

  14. Detailed Modeling of Distillation Technologies for Closed-Loop Water Recovery Systems

    NASA Technical Reports Server (NTRS)

    Allada, Rama Kumar; Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Detailed chemical process simulations are a useful tool in designing and optimizing complex systems and architectures for human life support. Dynamic and steady-state models of these systems help contrast the interactions of various operating parameters and hardware designs, which become extremely useful in trade-study analyses. NASA's Exploration Life Support technology development project recently made use of such models to complement a series of tests on different waste water distillation systems. This paper presents efforts to develop chemical process simulations for three technologies: the Cascade Distillation System (CDS), the Vapor Compression Distillation (VCD) system and the Wiped-Film Rotating Disk (WFRD), using the Aspen Custom Modeler and Aspen Plus process simulation tools. The paper discusses system design, modeling details, and modeling results for each technology and presents some comparisons between the model results and recent test data. Following these initial comparisons, some general conclusions and forward work are discussed.

  15. Development of a Water Recovery System Resource Tracking Model

    NASA Technical Reports Server (NTRS)

    Chambliss, Joe; Stambaugh, Imelda; Sarguishm, Miriam; Shull, Sarah; Moore, Michael

    2014-01-01

    A simulation model has been developed to track water resources in an exploration vehicle using regenerative life support (RLS) systems. The model integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the model results in the Resource Tracking Model (RTM) being part of a complete vehicle simulation that can be used in real-time mission studies. Performance data for the components in the RTM are focused on water processing and have been defined based on the most recent information available for each component's technology. This paper describes the process of defining the RLS system to be modeled, the way the modeling environment was selected, and how the model has been implemented. Results showing how the various RLS components exchange water are provided in a set of test cases.

  16. Identification of the dominant hydrological process and appropriate model structure of a karst catchment through stepwise simplification of a complex conceptual model

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Wu, Jichun; Jiang, Guanghui; Kang, Zhiqiang

    2017-05-01

    Conceptual models often suffer from over-parameterization due to the limited data available for calibration. This leads to parameter non-uniqueness and equifinality, which may introduce considerable uncertainty into the simulation results. How to find an appropriate model structure supported by the available data is still a major challenge in hydrological research. In this paper, we adopt a multi-model framework to identify the dominant hydrological process and appropriate model structure of a karst spring located in Guilin city, China. For this catchment, the spring discharge is the only data available for model calibration. The framework starts with a relatively complex conceptual model built from the perception of the catchment, which is then simplified into several different models by gradually removing model components. A multi-objective approach is used to compare the performance of these models, and regional sensitivity analysis (RSA) is used to investigate parameter identifiability. The results show that this karst spring is mainly controlled by two different hydrological processes, one of which is threshold-driven, consistent with the fieldwork investigation. However, the appropriate model structure for simulating the spring discharge is much simpler than the actual aquifer structure and hydrological process understanding gained from the fieldwork investigation: a simple linear reservoir with two different outlets is enough to simulate the spring discharge, and the detailed runoff processes in the catchment are not needed in the conceptual model. A more complex model would need additional data to avoid serious deterioration of the model predictions.
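
    The identified structure, a single linear reservoir with two outlets where the upper outlet activates above a storage threshold, can be sketched in a few lines; the parameters and rainfall series below are illustrative assumptions.

```python
# Two-outlet linear reservoir with a threshold-driven upper outlet.
import numpy as np

k1, k2, s_thresh = 0.05, 0.30, 40.0   # outlet coefficients (1/day), threshold (mm)
rng = np.random.default_rng(7)
rain = rng.exponential(3.0, 200) * (rng.uniform(size=200) < 0.3)  # mm/day

storage, q = 20.0, np.empty(200)
for t in range(200):
    storage += rain[t]
    q_low = k1 * storage                          # always-active lower outlet
    q_high = k2 * max(storage - s_thresh, 0.0)    # threshold-driven upper outlet
    q[t] = q_low + q_high
    storage -= q[t]
print(f"mean simulated discharge: {q.mean():.2f} mm/day")
```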

  17. Lattice Boltzmann simulations of immiscible displacement process with large viscosity ratios

    NASA Astrophysics Data System (ADS)

    Rao, Parthib; Schaefer, Laura

    2017-11-01

    Immiscible displacement is a key physical mechanism involved in enhanced oil recovery and carbon sequestration processes. This multiphase flow phenomenon involves a complex interplay of viscous, capillary, inertial and wettability effects. The lattice Boltzmann (LB) method is an accurate and efficient technique for modeling and simulating multiphase/multicomponent flows, especially in complex flow configurations and media. Here we present numerical simulation results of the displacement process in long thin channels. The results are based on a new pseudo-potential multicomponent LB model with a multiple-relaxation-time (MRT) collision model and an explicit forcing scheme. We demonstrate that the proposed model is capable of accurately simulating displacement processes involving fluids with a wide range of viscosity ratios (>100), which also leads to viscosity-independent interfacial tension and a reduction of some important numerical artifacts.

  18. Modeling sediment transport after ditch network maintenance of a forested peatland

    NASA Astrophysics Data System (ADS)

    Haahti, K.; Marttila, H.; Warsta, L.; Kokkonen, T.; Finér, L.; Koivusalo, H.

    2016-11-01

    Elevated suspended sediment (SS) loads released from peatlands after drainage operations and the resulting negative effect on the ecological status of the receiving water bodies have been widely recognized. Understanding the processes controlling erosion and sediment transport within the ditch network forms a prerequisite for adequate sediment control. While numerous experimental studies have been reported in this field, model based assessments are rare. This study presents a modeling approach to investigate sediment transport in a peatland ditch network. The transport model describes bed erosion, rain-induced bank erosion, floc deposition, and consolidation of the bed. Coupled to a distributed hydrological model, sediment transport was simulated in a 5.2 ha forestry-drained peatland catchment for 2 years after ditch cleaning. Comparing simulation results to measured SS concentrations suggested that the loose peat material, produced during excavation, contributed markedly to elevated SS concentrations immediately after ditch cleaning. Both snowmelt and summer rainstorms contributed critically to annual loads. Springtime peat erosion during snowmelt was driven by ditch flow whereas during summer rainfalls, bank erosion by raindrop impact was identified as an important process. Relating modeling results to observed spatial topographic changes in the ditch network was challenging and the results were difficult to verify. Nevertheless, the model has potential to identify risk areas for erosion. The results demonstrate that modeling is effective in separating the importance of different processes and complements pure experimental approaches. Modeling results can aid planning and designing efficient sediment control measures and guide the focus of experimental studies.
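
    A typical building block of such transport models is an excess-shear-stress erosion law, E = M(tau/tau_c - 1) for tau > tau_c; a minimal sketch follows, with generic peat-channel parameter values that are assumptions rather than the study's calibrated ones.

```python
# Excess-shear-stress bed erosion law: no erosion below the critical stress.
import numpy as np

def erosion_rate(tau, tau_c=0.1, M=1e-4):
    """Erosion flux (kg m^-2 s^-1) from bed shear stress tau (Pa)."""
    return np.where(tau > tau_c, M * (tau / tau_c - 1.0), 0.0)

tau = np.array([0.05, 0.1, 0.2, 0.5])   # e.g., snowmelt vs storm flows
print(erosion_rate(tau))
```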

  19. [Investigation of team processes that enhance team performance in business organization].

    PubMed

    Nawata, Kengo; Yamaguchi, Hiroyuki; Hatano, Toru; Aoshima, Mika

    2015-02-01

    Many researchers have suggested team processes that enhance team performance. However, past team process models were based on crew teams, in which all members perform an indivisible temporary task. These models may be inapplicable to business teams, whose individual members perform middle- and long-term tasks assigned to them individually. This study modified the teamwork model of Dickinson and McIntyre (1997) and aimed to demonstrate a whole team process that enhances the performance of business teams. We surveyed five companies (member N = 1,400, team N = 161) and investigated team-level processes. Results showed that there were two sides to team processes: "communication" and "collaboration to achieve a goal." Team processes in which communication enhanced collaboration improved team performance with regard to all aspects of the quantitative objective index (e.g., current income and number of sales), supervisor ratings, and self-rating measurements. On the basis of these results, we discuss the entire process by which teamwork enhances team performance in business organizations.

  20. Nucleosynthesis in Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Stevenson, Taylor Shannon; Viktoria Ohstrom, Eva; Harris, James Austin; Hix, William R.

    2018-01-01

    The nucleosynthesis which occurs in core-collapse supernovae (CCSN) is one of the most important sources of elements in the universe. Elements from Oxygen through Iron come predominantly from supernovae, and contributions of heavier elements are also possible through processes like the weak r-process, the gamma process and the light element primary process. The composition of the ejecta depends on the mechanism of the explosion, thus simulations of high physical fidelity are needed to explore what elements and isotopes CCSN can contribute to Galactic Chemical Evolution. We will analyze the nucleosynthesis results from self-consistent CCSN simulations performed with CHIMERA, a multi-dimensional neutrino radiation-hydrodynamics code. Much of our understanding of CCSN nucleosynthesis comes from parameterized models, but unlike CHIMERA these fail to address essential physics, including turbulent flow/instability and neutrino-matter interactions. We will present nucleosynthesis predictions for the explosion of a 9.6 solar mass first-generation star, relying both on results of the 160-species nuclear reaction network used in CHIMERA within this model and on post-processing with a more extensive network. The lowest mass iron core-collapse supernovae, like this model, are distinct from their more massive brethren, with their explosion mechanism and nucleosynthesis being more like those of electron capture supernovae resulting from Oxygen-Neon white dwarfs. We will highlight the differences between the nucleosynthesis in this model and more massive supernovae. The inline 160-species network is a feature unique to CHIMERA, making this the most sophisticated model to date for a star of this type. We will discuss the need to extrapolate the post-processing to times beyond the end of the simulation, the mechanism for doing so, and the uncertainties this introduces for supernova nucleosynthesis. We will also compare the results from the inline 160-species network to the post-processing results to study further uncertainties introduced by post-processing. This work is supported by the U.S. Department of Energy, Office of Nuclear Physics, and the National Science Foundation Nuclear Theory Program (PHY-1516197).

  1. Accessible integration of agriculture, groundwater, and economic models using the Open Modeling Interface (OpenMI): methodology and initial results

    USDA-ARS?s Scientific Manuscript database

    Policy for water resources impacts not only hydrological processes, but the closely intertwined economic and social processes dependent on them. Understanding these process interactions across domains is an important step in establishing effective and sustainable policy. Multidisciplinary integrated...

  2. Application of a Model for Simulating the Vacuum Arc Remelting Process in Titanium Alloys

    NASA Astrophysics Data System (ADS)

    Patel, Ashish; Tripp, David W.; Fiore, Daniel

    Mathematical modeling is routinely used in the process development and production of advanced aerospace alloys to gain greater insight into system dynamics and to predict the effect of process modifications or upsets on final properties. This article describes the application of a 2-D mathematical VAR model presented at previous LMPC meetings. The impact of process parameters on melt pool geometry, solidification behavior, fluid flow and chemistry in Ti-6Al-4V ingots will be discussed. Model predictions were first validated against the measured characteristics of industrially produced ingots, and process inputs and model formulation were adjusted to match macro-etched pool shapes. The results are compared to published data in the literature. Finally, the model is used to examine ingot chemistry during successive VAR melts.

  3. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    NASA Astrophysics Data System (ADS)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty, which can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and interpret the results mechanistically: because such models combine sub-models of many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways, it is not simple to interpret exactly why a model produces the results it does or to identify which model assumptions are key. We present a novel modelling framework—the multi-assumption architecture and testbed (MAAT)—that automates the combination, generation, and execution of a model ensemble built with different representations of processes. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser, PEcAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration and to enhance our predictive understanding of biological systems.
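
    A minimal sketch of the multi-hypothesis idea behind a framework like MAAT: alternative representations of each process are combined factorially, and each combination is run as one ensemble member. The process names and toy rate functions below are illustrative assumptions, not MAAT's actual API.

    ```python
    from itertools import product

    # two competing hypotheses for each of two processes (toy rate functions)
    photosynthesis = {
        "farquhar": lambda par: 0.05 * par,
        "collatz":  lambda par: 0.04 * par + 1.0,
    }
    stomatal_conductance = {
        "medlyn":     lambda a: 1.6 * a,
        "ball_berry": lambda a: 1.4 * a + 0.2,
    }

    par = 1000.0  # toy light forcing
    # run every combination of process representations as one ensemble member
    for (p_name, p_fn), (s_name, s_fn) in product(photosynthesis.items(),
                                                  stomatal_conductance.items()):
        a = p_fn(par)
        gs = s_fn(a)
        print(f"{p_name:8s} x {s_name:10s} -> A = {a:6.1f}, gs = {gs:6.1f}")
    ```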

  4. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed, based on computing five measures or metrics and following a decision tree to determine whether a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating that they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
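
    A minimal sketch, under strong simplifying assumptions, of the Markov chain Monte Carlo step described for the second case study: validation data are used to update a single model input parameter from its prior to a posterior distribution. The one-parameter observation model, the prior and the noise level are illustrative, not those of the actual groundwater model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = 2.0 + 0.3 * rng.standard_normal(20)        # synthetic validation data

    def log_posterior(k):
        log_prior = -0.5 * ((k - 1.0) / 1.0) ** 2     # prior: k ~ N(1, 1)
        log_like = -0.5 * np.sum(((data - k) / 0.3) ** 2)
        return log_prior + log_like

    k, chain = 1.0, []
    for _ in range(5000):
        k_prop = k + 0.1 * rng.standard_normal()      # random-walk proposal
        if np.log(rng.random()) < log_posterior(k_prop) - log_posterior(k):
            k = k_prop                                # accept
        chain.append(k)

    posterior = np.array(chain[1000:])                # discard burn-in
    print(posterior.mean(), posterior.std())          # posterior of the input k
    ```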

  5. Understanding a Basic Biological Process: Expert and Novice Models of Science.

    ERIC Educational Resources Information Center

    Kindfield, A. C. H.

    1994-01-01

    Reports on the meiosis models utilized by five individuals at each of three levels of expertise in genetics as each reasoned about this process in an individual interview setting. Results revealed a set of biologically correct features common to all individuals' models as well as a variety of model flaws (i.e., meiosis misunderstandings) which are…

  6. Experimental comparison of residual stresses for a thermomechanical model for the simulation of selective laser melting

    DOE PAGES

    Hodge, N. E.; Ferencz, R. M.; Vignes, R. M.

    2016-05-30

    Selective laser melting (SLM) is an additive manufacturing process in which multiple, successive layers of metal powders are heated via laser in order to build a part. Modeling of SLM requires consideration of the complex interaction between heat transfer and solid mechanics. The present work describes the authors' initial efforts to validate their first-generation model. In particular, a comparison of model-generated solid mechanics results, including both deformation and stresses, is presented. Additionally, results of various perturbations of the process parameters and modeling strategies are discussed.

  7. Intelligent Diagnosis of Degradation State under Corrosion

    NASA Astrophysics Data System (ADS)

    Isoc, Dorin; Ignat-Coman, Aurelian; Joldiş, Adrian

    2008-06-01

    The work presents an inter- and multi-disciplinary research effort in which diagnosis is treated using artificial intelligence techniques, applied to the degradation state of buildings and urban power networks. A possible model of the degradation process caused by corrosion is given, together with the manner of its technical implementation. The notions of micro- and macro-modeling and of model granularity are introduced and applied. For the resulting model, specifications for the intelligent processing of information, and subsequently for the knowledge base of the suggested model, are prepared. In the concluding remarks, the results are analysed and interpreted, and a generalized approach is suggested and argued for.

  8. Evaluating Organic Aerosol Model Performance: Impact of two Embedded Assumptions

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Giroux, E.; Roth, H.; Yin, D.

    2004-05-01

    Organic aerosols are important due to their abundance in the polluted lower atmosphere and their impact on human health and vegetation. However, modeling organic aerosols is a very challenging task because of the complexity of aerosol composition, structure, and formation processes. Assumptions and their associated uncertainties in both models and measurement data make model performance evaluation a truly demanding job. Although some assumptions are obvious, others are hidden and embedded, and can significantly impact modeling results, possibly even changing conclusions about model performance. This paper focuses on analyzing the impact of two embedded assumptions on the evaluation of organic aerosol model performance. One assumption is about the enthalpy of vaporization widely used in various secondary organic aerosol (SOA) algorithms. The other is about the conversion factor used to obtain ambient organic aerosol concentrations from measured organic carbon. These two assumptions reflect uncertainties in the model and in the ambient measurement data, respectively. For illustration purposes, various choices of the assumed values are implemented in the evaluation process for an air quality model based on CMAQ (the Community Multiscale Air Quality Model). Model simulations are conducted for the Lower Fraser Valley covering Southwest British Columbia, Canada, and Northwest Washington, United States, for a historical pollution episode in 1993. To understand the impact of the assumed enthalpy of vaporization on modeling results, its impact on instantaneous organic aerosol yields (IAY) through partitioning coefficients is analysed first. The analysis shows that utilizing different enthalpy of vaporization values causes changes in the shapes of IAY curves and in the response of the SOA formation capability of reactive organic gases to temperature variations. These changes are then carried into the air quality model and cause substantial changes in the organic aerosol modeling results. In another aspect, using different assumed factors to convert measured organic carbon to organic aerosol concentrations causes substantial variations in the processed ambient data themselves, which are normally used as performance targets for model evaluations. The combination of uncertainties in the modeling results and in the moving performance targets causes major uncertainties in the final conclusion about the model performance. Without further information, the best thing that a modeler can do is to choose a combination of the assumed values from the sensible parameter ranges available in the literature, based on the best match of the modeling results with the processed measurement data. However, the best match of the modeling results with the processed measurement data may not necessarily guarantee that the model itself is rigorous and the model performance is robust. Conclusions on the model performance can only be reached with sufficient understanding of the uncertainties and their impact.
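
    A minimal sketch of why the assumed enthalpy of vaporization matters: in two-product SOA schemes, the partitioning coefficients are shifted in temperature by a Clausius-Clapeyron factor, which changes the instantaneous aerosol yield. The alpha and K values below are illustrative assumptions, not the coefficients of any particular CMAQ mechanism.

    ```python
    import numpy as np

    R = 8.314  # J mol-1 K-1

    def K_at_T(K_ref, T, T_ref=298.0, dH_vap=42e3):
        """Clausius-Clapeyron shift of a partitioning coefficient (m3 ug-1)."""
        return K_ref * (T / T_ref) * np.exp(dH_vap / R * (1.0 / T - 1.0 / T_ref))

    def instantaneous_yield(M_o, T, dH_vap,
                            alphas=(0.13, 0.41), K_refs=(0.05, 0.002)):
        """Odum two-product yield: Y = M_o * sum_i alpha_i K_i / (1 + K_i M_o)."""
        Y = 0.0
        for a, K_ref in zip(alphas, K_refs):
            K = K_at_T(K_ref, T, dH_vap=dH_vap)
            Y += M_o * a * K / (1.0 + K * M_o)
        return Y

    # published enthalpy-of-vaporization values span a wide range
    for dH in (42e3, 73e3, 156e3):
        print(f"dH = {dH/1e3:5.0f} kJ/mol -> "
              f"yield at 285 K = {instantaneous_yield(10.0, 285.0, dH):.3f}")
    ```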

  9. A dual-process perspective on fluency-based aesthetics: the pleasure-interest model of aesthetic liking.

    PubMed

    Graf, Laura K M; Landwehr, Jan R

    2015-11-01

    In this article, we develop an account of how aesthetic preferences can be formed as a result of two hierarchical, fluency-based processes. Our model suggests that processing performed immediately upon encountering an aesthetic object is stimulus driven, and aesthetic preferences that accrue from this processing reflect aesthetic evaluations of pleasure or displeasure. When sufficient processing motivation is provided by a perceiver's need for cognitive enrichment and/or the stimulus' processing affordance, elaborate perceiver-driven processing can emerge, which gives rise to fluency-based aesthetic evaluations of interest, boredom, or confusion. Because the positive outcomes in our model are pleasure and interest, we call it the Pleasure-Interest Model of Aesthetic Liking (PIA Model). Theoretically, this model integrates a dual-process perspective and ideas from lay epistemology into processing fluency theory, and it provides a parsimonious framework to embed and unite a wealth of aesthetic phenomena, including contradictory preference patterns for easy versus difficult-to-process aesthetic stimuli. © 2015 by the Society for Personality and Social Psychology, Inc.

  10. Size-Class Effect Contributes to Tree Species Assembly through Influencing Dispersal in Tropical Forests

    PubMed Central

    Hu, Yue-Hua; Kitching, Roger L.; Lan, Guo-Yu; Zhang, Jiao-Lin; Sha, Li-Qing; Cao, Min

    2014-01-01

    We have investigated the processes of community assembly using size classes of trees. Specifically, our work examined (1) whether point process models incorporating an effect of size class produce more realistic summary outcomes than models without this effect; and (2) which of three selected models, incorporating, respectively, environmental effects, dispersal, and the joint effect of both, is most useful in explaining species-area relationships (SARs) and point dispersion patterns. For this evaluation we used tree species data from the 50-ha forest dynamics plot in Barro Colorado Island, Panama, and the comparable 20 ha plot at Bubeng, Southwest China. Our results demonstrated that incorporating a size-class effect dramatically improved the SAR estimation at both plots when the dispersal-only model was used. The joint-effect model produced a similar improvement, but only for the 50-ha plot in Panama. The point pattern results were not improved by incorporation of size-class effects using any of the three models. Our results indicate that dispersal is likely to be a key process determining both SARs and point patterns. The environment-only model and the joint-effects model were effective at the species level and the community level, respectively. We conclude that it is critical to use multiple summary characteristics when modelling spatial patterns at the species and community levels if a comprehensive understanding of the ecological processes that shape species' distributions is sought; without this, results may have inherent biases. By influencing dispersal, the effect of size class contributes to species assembly and enhances our understanding of species coexistence. PMID:25251538

  11. Understanding a Basic Biological Process: Expert and Novice Models of Meiosis.

    ERIC Educational Resources Information Center

    Kindfield, Ann C. H.

    The results of a study of the meiosis models utilized by individuals at varying levels of expertise while reasoning about the process of meiosis are presented. Based on these results, the issues of sources of misconceptions/difficulties and the construction of a sound understanding of meiosis are discussed. Five individuals from each of three…

  12. Terrorism as a process: a critical review of Moghaddam's "Staircase to Terrorism".

    PubMed

    Lygre, Ragnhild B; Eid, Jarle; Larsson, Gerry; Ranstorp, Magnus

    2011-12-01

    This study reviews empirical evidence for Moghaddam's model "Staircase to Terrorism," which portrays terrorism as a process of six consecutive steps culminating in terrorism. An extensive literature search, where 2,564 publications on terrorism were screened, resulted in 38 articles which were subject to further analysis. The results showed that while most of the theories and processes linked to Moghaddam's model are supported by empirical evidence, the proposed transitions between the different steps are not. These results may question the validity of a linear stepwise model and may suggest that a combination of mechanisms/factors could combine in different ways to produce terrorism. © 2011 The Authors. Scandinavian Journal of Psychology © 2011 The Scandinavian Psychological Associations.

  13. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to apply depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth looking at the advantages and the shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large / global scale approaches and models are increasingly in operation, and the question therefore arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies the process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales:
    Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research based on controlled experiments (e.g. infiltration, root water uptake, chemical matter transport); (+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on first principles, partly of pde type, are available for several processes (but not for all), because measurement and modelling scales are compatible; (-) the spatial model domain is hardly representative of larger spatial entities, including regions for which water resources management decisions are to be taken; straightforward upsizing is also limited by data availability and computational requirements.
    Meso scale (e.g. extent of a small to large catchment or region): (+) the spatial extent of the model domain corresponds approximately to the regions for which water resources management decisions are to be taken, i.e. such models enable water resources quantification at the scale of most water management decisions; (+) data on some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available; (+) data on some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty; (+) equations, partly based on simple water budgeting, partly variants of pde-type equations, are available for most hydrological processes, which enables the construction of meso-scale distributed models reflecting the spatial heterogeneity of regions/landscapes; (-) process scale, measurement scale, and modelling scale differ from each other for a number of processes, e.g. runoff generation; (-) the process formulations (usually derived from micro-scale studies) cannot directly be transferred to the modelling domain, and upscaling procedures for this purpose are not readily and generally available.
    Macro scale (e.g. extent of a continent up to global): (+) the spatial extent of the model may cover the whole Earth, which enables an attractive global display of model results; (+) model results might be technically interchangeable or at least comparable with results from other global models, such as global climate models; (-) process scale, measurement scale, and modelling scale differ heavily from each other for all hydrological and associated processes; (-) the model domain and its results are not representative of the regions for which water resources management decisions are to be taken; (-) both state condition and boundary flux data are hardly available for the whole model domain, and water management data and discharge data from remote regions are particularly incomplete or unavailable at this scale, which undermines the model's verifiability; (-) since process formulation, and hence modelling reliability, at this scale is very limited, such models can hardly show any explanatory skill or prognostic power; (-) since both the entire model domain and its spatial sub-units cover large areas, model results represent values averaged over at least the spatial sub-unit's extent, and in many cases the applied time scale implies a long-term averaging in time, too.
    We emphasize the importance of being aware of the above-mentioned strengths and weaknesses of these scale-specific models. (Many of the) results of current global model studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate. Many hydrological processes are of a non-linear nature, including threshold-type behaviour. Such features cannot be reflected by such large-scale entities. The model results therefore can be of little or no use for water resources decisions and/or even misleading for public debates or decision making. Some rather newly developed sustainability concepts, e.g. "Planetary Boundaries" within which humanity may "continue to develop and thrive for generations to come", are based on such global-scale approaches and models. However, many of the major problems regarding sustainability on Earth, e.g. water scarcity, do not manifest at a global but at a regional scale. While on a global scale water might look like being available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. Therefore, the challenge is to derive models and observation programmes for regional scales. In case a global display is desired, future efforts should be directed towards the development of a global picture based on a mosaic of regionally sound assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed, i.e. for which purposes models at this (global) scale can be used.

  14. Implementation of the Business Process Modelling Notation (BPMN) in the modelling of anatomic pathology processes

    PubMed Central

    Rojo, Marcial García; Rolón, Elvira; Calahorra, Luis; García, Felix Óscar; Sánchez, Rosario Paloma; Ruiz, Francisco; Ballester, Nieves; Armenteros, María; Rodríguez, Teresa; Espartero, Rafael Martín

    2008-01-01

    Background: Process orientation is one of the essential elements of quality management systems, including those in use in healthcare. Business processes in hospitals are very complex and variable. BPMN (Business Process Modelling Notation) is a user-oriented language specifically designed for the modelling of business (organizational) processes. We are not aware of previous experiences of using this notation to model Pathology processes in Spain or elsewhere. We present our experience in the elaboration of conceptual models of Pathology processes, as part of the overall programmed surgical patient process, using BPMN. Methods: With the objective of analyzing the use of BPMN notation in real cases, a multidisciplinary work group was created, including software engineers from the Dep. of Technologies and Information Systems of the University of Castilla-La Mancha and health professionals and administrative staff from the Hospital General de Ciudad Real. The collaborative work was carried out in six phases: informative meetings, intensive training, process selection, definition of the work method, process description by hospital experts, and process modelling. Results: The modelling of the processes of Anatomic Pathology using BPMN is presented. The subprocesses presented are those corresponding to the surgical pathology examination of samples coming from the operating theatre, including the planning and realization of frozen studies. Conclusion: The modelling of Anatomic Pathology subprocesses has allowed the creation of an understandable graphical model, from which management and improvements are more easily implemented by health professionals. PMID:18673511

  15. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    NASA Technical Reports Server (NTRS)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time from satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along-track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time, which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately, since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions is detailed, and conclusions are drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
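
    A minimal scalar sketch of the contrast studied here: a sequential filter with process noise variance q > 0 can track a drifting parameter, while setting q = 0 recovers the standard filter's constant-parameter estimate. The random-walk model and noise levels are illustrative assumptions, not the actual LAGEOS dynamics.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    truth = 1.0 + np.cumsum(0.02 * rng.standard_normal(n))  # drifting parameter
    obs = truth + 0.1 * rng.standard_normal(n)              # noisy observations

    def kalman(q, r=0.1**2):
        """Scalar Kalman filter; q is the process noise variance."""
        x, p, est = 0.0, 1.0, []
        for z in obs:
            p = p + q                      # time update (random-walk model)
            g = p / (p + r)                # Kalman gain
            x = x + g * (z - x)            # measurement update
            p = (1.0 - g) * p
            est.append(x)
        return np.array(est)

    for q in (0.0, 0.02**2):               # q = 0 is the standard filter
        rms = np.sqrt(np.mean((kalman(q) - truth) ** 2))
        print(f"q = {q:.1e}  RMS error = {rms:.3f}")
    ```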

  16. On Boiling of Crude Oil under Elevated Pressure

    NASA Astrophysics Data System (ADS)

    Pimenova, Anastasiya V.; Goldobin, Denis S.

    2016-02-01

    We construct a thermodynamic model for theoretical calculation of the boiling process of multicomponent mixtures of hydrocarbons (e.g., crude oil). The model governs the kinetics of the mixture composition over the course of the distillation process, along with the increase in boiling temperature. The model relies heavily on the theory of dilute solutions of gases in liquids. Importantly, our results are applicable to modelling the process under elevated pressure (while empirical models for oil cracking do not scale to the case of extreme pressure), such as in an oil field heated by lava intrusions.
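
    A minimal sketch of a bubble-point calculation for a multicomponent mixture, here using Raoult's law with Antoine vapour-pressure correlations rather than the paper's dilute-solution theory: the boiling temperature at pressure P solves sum_i x_i Psat_i(T) = P, so elevated pressure raises the boiling point. The two pseudo-components and their Antoine coefficients are illustrative assumptions, not fitted to a real crude oil.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    antoine = {                         # (A, B, C): log10(Psat/bar) = A - B/(T+C)
        "light": (4.0, 1200.0, -50.0),  # hypothetical light cut
        "heavy": (4.2, 1800.0, -70.0),  # hypothetical heavy cut
    }
    x = {"light": 0.4, "heavy": 0.6}    # liquid mole fractions

    def psat(comp, T):
        A, B, C = antoine[comp]
        return 10.0 ** (A - B / (T + C))

    def bubble_residual(T, P):
        return sum(x[c] * psat(c, T) for c in x) - P

    for P in (1.0, 5.0, 20.0):      # elevated pressure raises the boiling point
        T_boil = brentq(bubble_residual, 250.0, 900.0, args=(P,))
        print(f"P = {P:5.1f} bar -> T_boil = {T_boil:6.1f} K")
    ```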

  17. Implementation of a GPS-RO data processing system for the KIAPS-LETKF data assimilation system

    NASA Astrophysics Data System (ADS)

    Kwon, H.; Kang, J.-S.; Jo, Y.; Kang, J. H.

    2014-11-01

    The Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing a new global numerical weather prediction model and an advanced data assimilation system. As part of the KIAPS Package for Observation Processing (KPOP) system for data assimilation, preprocessing and quality control modules for bending angle measurements of global positioning system radio occultation (GPS-RO) data have been implemented and examined. The GPS-RO data processing system is composed of several steps for checking observation locations, missing values, and physical values for the Earth's radius of curvature and geoid undulation. An observation-minus-background check is implemented using a one-dimensional bending-angle observation operator, and tangent point drift is also considered in the quality control process. We have tested GPS-RO observations utilized by the Korea Meteorological Administration (KMA) within KPOP, based on both the KMA global model and the National Center for Atmospheric Research (NCAR) Community Atmosphere Model-Spectral Element (CAM-SE) as a model background. Background fields from the CAM-SE model are incorporated for the preparation of assimilation experiments with the KIAPS-LETKF data assimilation system, which has been successfully implemented for a cubed-sphere model with fully unstructured quadrilateral meshes. As a result of data processing, the bending-angle departure statistics between observation and background show significant improvement. Also, the first experiment in assimilating KPOP-processed GPS-RO bending angles within KIAPS-LETKF shows encouraging results.
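
    A minimal sketch of the kind of sequential checks such a preprocessor applies to bending-angle observations. The thresholds, field names and the simple relative O-B rejection rule are illustrative assumptions, not KPOP's actual criteria.

    ```python
    def quality_control(observations, background_op):
        """Return the bending-angle observations that pass all checks."""
        accepted = []
        for ob in observations:
            if ob["bending_angle"] is None:                    # missing value
                continue
            if not 0.0 < ob["bending_angle"] < 0.1:            # physical range, rad
                continue
            if not 6.2e6 < ob["earth_radius_curv"] < 6.6e6:    # geometry check, m
                continue
            omb = ob["bending_angle"] - background_op(ob)      # O-B departure
            if abs(omb) > 0.1 * ob["bending_angle"]:
                continue
            accepted.append(ob)
        return accepted

    obs = [{"bending_angle": 0.020, "earth_radius_curv": 6.37e6},
           {"bending_angle": None,  "earth_radius_curv": 6.37e6}]
    print(len(quality_control(obs, lambda ob: 0.0195)))        # -> 1
    ```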

  18. Testing the Causal Mediation Component of Dodge's Social Information Processing Model of Social Competence and Depression

    ERIC Educational Resources Information Center

    Possel, Patrick; Seemann, Simone; Ahrens, Stefanie; Hautzinger, Martin

    2006-01-01

    In Dodge's model of "social information processing" depression is the result of a linear sequence of five stages of information processing ("Annu Rev Psychol" 44: 559-584, 1993). These stages follow a person's reaction to situational stimuli, such that each stage of information processing mediates the relationship between earlier and later stages.…

  19. In vitro protease cleavage and computer simulations reveal the HIV-1 capsid maturation pathway

    NASA Astrophysics Data System (ADS)

    Ning, Jiying; Erdemci-Tandogan, Gonca; Yufenyuy, Ernest L.; Wagner, Jef; Himes, Benjamin A.; Zhao, Gongpu; Aiken, Christopher; Zandi, Roya; Zhang, Peijun

    2016-12-01

    HIV-1 virions assemble as immature particles containing Gag polyproteins that are processed by the viral protease into individual components, resulting in the formation of mature infectious particles. There are two competing models for the process of forming the mature HIV-1 core: the disassembly and de novo reassembly model and the non-diffusional displacive model. To study the maturation pathway, we simulate HIV-1 maturation in vitro by digesting immature particles and assembled virus-like particles with recombinant HIV-1 protease and monitor the process with biochemical assays and cryoEM structural analysis in parallel. Processing of Gag in vitro is accurate and efficient and results in both soluble capsid protein and conical or tubular capsid assemblies, seemingly converted from immature Gag particles. Computer simulations further reveal probable assembly pathways of HIV-1 capsid formation. Combining the experimental data and computer simulations, our results suggest a sequential combination of both displacive and disassembly/reassembly processes for HIV-1 maturation.

  20. Information Network Model Query Processing

    NASA Astrophysics Data System (ADS)

    Song, Xiaopu

    The Information Networking Model (INM) [31] is a novel database model for the management of real-world objects and relationships. It naturally and directly supports various kinds of static and dynamic relationships between objects. In INM, objects are networked through various natural and complex relationships. The INM Query Language (INM-QL) [30] is designed to explore such an information network, retrieve information about schemas, instances, their attributes, relationships, and context-dependent information, and process query results in the user-specified form. An INM database management system has been implemented using Berkeley DB, and it supports INM-QL. This thesis is mainly focused on the implementation of the subsystem that processes INM-QL effectively and efficiently. The subsystem provides a lexical and syntactic analyzer for INM-QL, and it is able to choose appropriate evaluation strategies and index mechanisms to process queries in INM-QL without the user's intervention. It also uses intermediate result structures to hold intermediate query results, and other helper structures to reduce the complexity of query processing.

  1. Bidirectional optimization of the melting spinning process.

    PubMed

    Liang, Xiao; Ding, Yongsheng; Wang, Zidong; Hao, Kuangrong; Hone, Kate; Wang, Huaping

    2014-02-01

    A bidirectional optimizing approach for the melting spinning process based on an immune-enhanced neural network is proposed. The proposed bidirectional model can not only reveal the internal nonlinear relationship between the process configuration and the quality indices of the fibers as the final product, but also provide a tool for engineers to develop new fiber products with expected quality specifications. A neural network is taken as the basis for the bidirectional model, and an immune component is introduced to enlarge the search scope of the solution field so that the neural network has a larger possibility of finding the appropriate and reasonable solution, and the error of prediction can therefore be eliminated. The proposed intelligent model can also help determine what kind of process configuration should be made in order to produce satisfactory fiber products. To make the proposed model practical for manufacturing, a software platform was developed. Simulation results show that the proposed model can eliminate the approximation error raised by the neural network-based optimizing model, owing to the extension of the search scope by the artificial immune mechanism. Meanwhile, the proposed model with the corresponding software can conduct optimization in two directions, namely process optimization and category development, and the corresponding results outperform those of an ordinary neural network-based intelligent model. It is also shown that the proposed model has the potential to act as a valuable tool from which the engineers and decision makers of the spinning process could benefit.

  2. Multi-Disciplinary, Multi-Fidelity Discrete Data Transfer Using Degenerate Geometry Forms

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.

    2016-01-01

    In a typical multi-fidelity design process, different levels of geometric abstraction are used for different analysis methods, and transitioning from one phase of design to the next often requires a complete re-creation of the geometry. To maintain consistency between lower-order and higher-order analysis results, Vehicle Sketch Pad (OpenVSP) recently introduced the ability to generate and export several degenerate forms of the geometry, representing the type of abstraction required to perform low- to medium-order analysis for a range of aeronautical disciplines. In this research, the functionality of these degenerate models was extended, so that in addition to serving as repositories for the geometric information that is required as input to an analysis, the degenerate models can also store the results of that analysis mapped back onto the geometric nodes. At the same time, the results are also mapped indirectly onto the nodes of lower-order degenerate models using a process called aggregation, and onto higher-order models using a process called disaggregation. The mapped analysis results are available for use by any subsequent analysis in an integrated design and analysis process. A simple multi-fidelity analysis process for a single-aisle subsonic transport aircraft is used as an example case to demonstrate the value of the approach.

  3. Modelling of fluoride removal via batch monopolar electrocoagulation process using aluminium electrodes

    NASA Astrophysics Data System (ADS)

    Amri, N.; Hashim, M. I.; Ismail, N.; Rohman, F. S.; Bashah, N. A. A.

    2017-09-01

    Electrocoagulation (EC) is a promising technology that has been extensively used to remove fluoride ions efficiently from industrial wastewater. However, the mechanisms and factors affecting the fluoride removal process have received very little consideration and remain poorly understood. In order to determine the efficiency of fluoride removal in the EC process, the effects of operating parameters such as voltage and electrolysis time were investigated in this study. A batch experiment with monopolar aluminium electrodes was conducted to identify a model of fluoride removal using an empirical model equation. The EC process was investigated over several parameter ranges, including voltage (3 - 12 V) and electrolysis time (0 - 60 minutes), at a constant initial fluoride concentration of 25 mg/L. The results show that the fluoride removal efficiency increased steadily with increasing voltage and electrolysis time. The best fluoride removal efficiency obtained was 94.8 % removal at 25 mg/L initial fluoride concentration, a voltage of 12 V and 60 minutes of electrolysis time. The results indicated that the rate constant k and the reaction order n decreased as the voltage increased. A fluoride removal rate model was developed from the empirical model equation using the correlation of k and n. Overall, the results show that the EC process can be considered a potential alternative technology for fluoride removal from wastewater.
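
    A minimal sketch, under the assumption that the empirical model takes the common n-th order form dC/dt = -k * C**n, of how a rate constant k and order n could be fitted to batch concentration data. The synthetic data and the exact form of the paper's empirical equation are assumptions.

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import curve_fit

    t = np.array([0., 10., 20., 30., 40., 50., 60.])           # minutes
    C_meas = np.array([25.0, 14.2, 8.9, 5.6, 3.4, 2.1, 1.3])   # mg/L, synthetic

    def model(t, k, n):
        """Integrate dC/dt = -k * C**n from C(0) = 25 mg/L."""
        C = odeint(lambda c, _t: -k * np.abs(c) ** n, 25.0, t)
        return C.ravel()

    (k, n), _ = curve_fit(model, t, C_meas, p0=[0.05, 1.0])
    print(f"fitted k = {k:.4f}, n = {n:.2f}")
    ```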

  4. Two-step infiltration of aluminum melts into Al-Ti-B4C-CuO powder mixture pellets

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Lee, Jung-Moo; Cho, Young-Hee; Kim, Su-Hyeon; Yu, Huashun

    2016-03-01

    Aluminum matrix composites with a high volume fraction of B4C and TiB2 were fabricated by a novel processing technique - a quick spontaneous infiltration process. The process combines pressureless infiltration with the combustion reaction of Al-Ti-B4C-CuO in molten aluminum. It is realized in a simple and economical way: the whole procedure is performed in air in a few minutes. To verify the rapidity of the process, the infiltration kinetics were calculated based on the Washburn equation, in which melt flows into a porous skeleton. However, the calculated results deviated noticeably from the experimental results. Considering the cross-sections of the samples at different processing times, a new infiltration model (two-step infiltration) consisting of macro-infiltration and micro-infiltration is suggested. The kinetics calculated in light of the proposed model agree well with the experimental results.
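
    A minimal sketch of the Washburn-type kinetics used as the baseline comparison: the infiltration depth into a porous skeleton grows as h(t) = sqrt(gamma * cos(theta) * r * t / (2 * mu)). The property values for molten aluminium and the effective capillary radius are rough illustrative assumptions.

    ```python
    import numpy as np

    gamma = 0.9               # surface tension of molten Al, N/m (approximate)
    mu = 1.3e-3               # dynamic viscosity of molten Al, Pa s (approximate)
    theta = np.radians(40.0)  # assumed wetting angle
    r_eff = 1e-6              # assumed effective capillary radius, m

    def washburn_depth(t):
        """Infiltration depth (m) after time t (s)."""
        return np.sqrt(gamma * np.cos(theta) * r_eff * t / (2.0 * mu))

    for t in (1.0, 10.0, 60.0):
        print(f"t = {t:5.1f} s -> h = {1e3 * washburn_depth(t):6.1f} mm")
    ```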

  5. Use NU-WRF and GCE Model to Simulate the Precipitation Processes During MC3E Campaign

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Matsui, Toshi; Li, Xiaowen; Zeng, Xiping; Peter-Lidard, Christa; Hou, Arthur

    2012-01-01

    One of the major CRM approaches to studying precipitation processes is sometimes referred to as "cloud ensemble modeling". This approach allows many clouds of various sizes and stages of their life cycles to be present at any given simulation time. Large-scale effects derived from observations are imposed on CRMs as forcing, and cyclic lateral boundaries are used. The advantage of this approach is that model results in terms of rainfall, Q1 and Q2 are usually in good agreement with observations. In addition, the model results provide cloud statistics that represent different types of clouds/cloud systems during their lifetime (life cycle). The large-scale forcing derived from MC3E will be used to drive GCE model simulations. The model-simulated results will be compared with observations from MC3E. These GCE model-simulated datasets are especially valuable for LH algorithm developers. In addition, a very high-resolution regional-scale model, the NASA-Unified WRF, was also used for real-time forecasting during the MC3E campaign to ensure that precipitation and other meteorological forecasts were available to the flight planning team and to interpret the forecast results in terms of proposed flight scenarios. Post-mission simulations are conducted to examine the sensitivity of cloud and precipitation processes and rainfall to initial and lateral boundary conditions. We will compare model results in terms of precipitation and surface rainfall using the GCE model and NU-WRF.

  6. Framework for developing hybrid process-driven, artificial neural network and regression models for salinity prediction in river systems

    NASA Astrophysics Data System (ADS)

    Hunter, Jason M.; Maier, Holger R.; Gibbs, Matthew S.; Foale, Eloise R.; Grosvenor, Naomi A.; Harders, Nathan P.; Kikuchi-Miller, Tahali C.

    2018-05-01

    Salinity modelling in river systems is complicated by a number of processes, including in-stream salt transport and various mechanisms of saline accession that vary dynamically as a function of water level and flow, often at different temporal scales. Traditionally, salinity models in rivers have either been process- or data-driven. The primary problem with process-based models is that in many instances, not all of the underlying processes are fully understood or able to be represented mathematically. There are also often insufficient historical data to support model development. The major limitation of data-driven models, such as artificial neural networks (ANNs) in comparison, is that they provide limited system understanding and are generally not able to be used to inform management decisions targeting specific processes, as different processes are generally modelled implicitly. In order to overcome these limitations, a generic framework for developing hybrid process and data-driven models of salinity in river systems is introduced and applied in this paper. As part of the approach, the most suitable sub-models are developed for each sub-process affecting salinity at the location of interest based on consideration of model purpose, the degree of process understanding and data availability, which are then combined to form the hybrid model. The approach is applied to a 46 km reach of the Murray River in South Australia, which is affected by high levels of salinity. In this reach, the major processes affecting salinity include in-stream salt transport, accession of saline groundwater along the length of the reach and the flushing of three waterbodies in the floodplain during overbank flows of various magnitudes. Based on trade-offs between the degree of process understanding and data availability, a process-driven model is developed for in-stream salt transport, an ANN model is used to model saline groundwater accession and three linear regression models are used to account for the flushing of the different floodplain storages. The resulting hybrid model performs very well on approximately 3 years of daily validation data, with a Nash-Sutcliffe efficiency (NSE) of 0.89 and a root mean squared error (RMSE) of 12.62 mg L-1 (over a range from approximately 50 to 250 mg L-1). Each component of the hybrid model results in noticeable improvements in model performance corresponding to the range of flows for which they are developed. The predictive performance of the hybrid model is significantly better than that of a benchmark process-driven model (NSE = -0.14, RMSE = 41.10 mg L-1, Gbench index = 0.90) and slightly better than that of a benchmark data-driven (ANN) model (NSE = 0.83, RMSE = 15.93 mg L-1, Gbench index = 0.36). Apart from improved predictive performance, the hybrid model also has advantages over the ANN benchmark model in terms of increased capacity for improving system understanding and greater ability to support management decisions.
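
    A minimal sketch of the hybrid structure described above: a process-driven in-stream term, a data-driven term standing in for the trained ANN for groundwater accession, and one linear regression per floodplain storage that activates above a flow threshold. All sub-models, thresholds and coefficients are illustrative stand-ins for the paper's calibrated components.

    ```python
    def instream_transport(c_upstream, q):
        """Process-driven term: placeholder conservative routing of salt."""
        return c_upstream

    def groundwater_accession(q, w0=25.0, w1=-0.04):
        """Data-driven term (stands in for the trained ANN): saline
        accession contributes more at low flows."""
        return max(0.0, w0 + w1 * q)

    def floodplain_flushing(q, thresholds=(60.0, 80.0, 100.0),
                            slopes=(0.10, 0.06, 0.04)):
        """One linear regression per floodplain storage, active above threshold."""
        return sum(s * (q - th) for th, s in zip(thresholds, slopes) if q > th)

    def hybrid_salinity(c_upstream, q):
        return (instream_transport(c_upstream, q)
                + groundwater_accession(q)
                + floodplain_flushing(q))

    for q in (20.0, 70.0, 120.0):        # toy flows
        print(f"q = {q:6.1f} -> salinity = {hybrid_salinity(50.0, q):6.1f} mg/L")
    ```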

  7. Development strategy and process models for phased automation of design and digital manufacturing electronics

    NASA Astrophysics Data System (ADS)

    Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.

    2018-03-01

    A strategy for assuring the quality of electronics is presented as being of primary importance. To provide quality, the sequence of processes is considered and modeled by a Markov chain. The improvement is distinguished by simple database means of design for manufacturing, allowing future step-by-step development. Phased automation of the design and digital manufacturing of electronics is proposed. MATLAB modelling results showed an increase in effectiveness. New tools and software should be more effective. A primary digital model is proposed to represent the product across the sequence of processes, from individual processes up to the whole life cycle.
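
    A minimal sketch of modelling a process sequence as a Markov chain, as the abstract suggests: stages are states, rework loops are backward transitions, and the fundamental matrix gives the expected number of visits to each stage before the work is absorbed into production. The stages and probabilities are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    states = ["design", "prototype", "test", "production"]
    # P[i, j]: probability of moving from stage i to stage j in one step
    P = np.array([
        [0.1, 0.9, 0.0, 0.0],   # design mostly advances to prototyping
        [0.2, 0.1, 0.7, 0.0],   # prototyping may send work back to design
        [0.0, 0.3, 0.1, 0.6],   # failed tests loop back to prototyping
        [0.0, 0.0, 0.0, 1.0],   # production is absorbing
    ])

    # expected visits to each transient stage before absorption: N = (I - Q)^-1
    Q = P[:3, :3]                        # transient-to-transient block
    N = np.linalg.inv(np.eye(3) - Q)
    start = np.array([1.0, 0.0, 0.0])    # all work begins in design
    print(dict(zip(states[:3], np.round(start @ N, 2))))
    ```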

  8. On the Modeling of Vacuum Arc Remelting Process in Titanium Alloys

    NASA Astrophysics Data System (ADS)

    Patel, Ashish; Fiore, Daniel

    2016-07-01

    Mathematical modeling is routinely used in the process development and production of advanced aerospace alloys to gain greater insight into the effect of process parameters on final properties. This article describes the application of a 2-D mathematical VAR model presented at previous LMPC meetings. The impact of process parameters on melt pool geometry, solidification behavior, fluid flow and chemistry in a Ti-6Al-4V ingot is discussed. Model predictions are validated against published data from an industrial-size ingot, and results of a parametric study on particle dissolution are also discussed.

  9. Detection of LiveLock in BPMN Using Process Expression

    NASA Astrophysics Data System (ADS)

    Tantitharanukul, Nasi; Jumpamule, Watcharee

    Although the Business Process Modeling Notation (BPMN) is a popular tool for modeling business processes at the conceptual level, the resulting diagram may contain structural problems. One such structural problem is livelock, in which one token proceeds to the end event while another token remains in the process without making progress. In this paper, we introduce an expression-like method to detect livelock in BPMN diagrams. Our approach utilizes the declarative power of expressions to determine all possible process chains and to indicate whether a livelock exists. We show that our method can detect livelocks when they are present.
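
    A minimal sketch of livelock detection on a toy process graph, using plain graph reachability rather than the paper's process-expression method: a livelock candidate is a reachable cycle from which the end event cannot be reached. The example diagram is an illustrative assumption.

    ```python
    import networkx as nx

    # toy diagram: the taskB <-> taskC loop never reaches the end event
    g = nx.DiGraph([("start", "split"), ("split", "taskA"), ("split", "taskB"),
                    ("taskA", "end"), ("taskB", "taskC"), ("taskC", "taskB")])

    def has_livelock(g, start="start", end="end"):
        reachable = nx.descendants(g, start) | {start}
        for scc in nx.strongly_connected_components(g):
            cyclic = len(scc) > 1 or any(g.has_edge(n, n) for n in scc)
            if (cyclic and scc & reachable
                    and end not in nx.descendants(g, next(iter(scc)))):
                return True
        return False

    print(has_livelock(g))   # True
    ```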

  10. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  11. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    NASA Astrophysics Data System (ADS)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time-dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time-dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time-dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
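
    A minimal sketch of the standard two-stage gene expression model that the paper generalises, simulated with the Gillespie algorithm: mRNA is transcribed and degraded, and protein is translated from mRNA and degraded. The constant rates are illustrative; the paper's time-dependent rates and protein-partitioning events are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    km, gm, kp, gp = 1.0, 0.2, 5.0, 0.05   # transcription, mRNA decay,
                                           # translation, protein decay rates

    def gillespie(t_end=200.0):
        t, m, p = 0.0, 0, 0
        while t < t_end:
            rates = np.array([km, gm * m, kp * m, gp * p])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            u = rng.random() * total
            if u < rates[0]:
                m += 1                     # transcription
            elif u < rates[:2].sum():
                m -= 1                     # mRNA decay
            elif u < rates[:3].sum():
                p += 1                     # translation
            else:
                p -= 1                     # protein decay
        return m, p

    samples = np.array([gillespie() for _ in range(100)])
    print("mean mRNA    =", samples[:, 0].mean())   # theory: km/gm = 5
    print("mean protein =", samples[:, 1].mean())   # theory: km*kp/(gm*gp) = 500
    ```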

  12. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    NASA Astrophysics Data System (ADS)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for the development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies, and the results of the estimation pave the way for a regional or global-scale Bayesian calibration of wetland emission models.
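
    A minimal sketch of the adaptive Metropolis idea (Haario et al., 2001) used in such calibrations: the random-walk proposal covariance is periodically re-estimated from the chain's own history. The two-parameter Gaussian target is an illustrative assumption, not the sqHIMMELI likelihood.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    target_cov = np.array([[1.0, 0.8], [0.8, 1.0]])   # toy correlated posterior

    def log_post(theta):
        return -0.5 * theta @ np.linalg.solve(target_cov, theta)

    theta = np.zeros(2)
    chain = [theta]
    cov_prop = 0.1 * np.eye(2)
    for i in range(1, 20000):
        if i > 1000 and i % 100 == 0:     # adapt proposal from the chain history
            cov_prop = (2.4 ** 2 / 2.0) * (np.cov(np.array(chain).T)
                                           + 1e-8 * np.eye(2))
        prop = rng.multivariate_normal(theta, cov_prop)
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)

    print(np.cov(np.array(chain[5000:]).T))   # approaches the target covariance
    ```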

  13. Numerical Simulation and Experimental Casting of Nickel-Based Single-Crystal Superalloys by HRS and LMC Directional Solidification Processes

    NASA Astrophysics Data System (ADS)

    Yan, Xuewei; Wang, Run'nan; Xu, Qingyan; Liu, Baicheng

    2017-04-01

    Mathematical models for dynamic heat radiation and convection boundaries in directional solidification processes are established to simulate the temperature fields. The cellular automaton (CA) method and the Kurz-Giovanola-Trivedi (KGT) growth model are used to describe nucleation and growth. Primary dendritic arm spacing (PDAS) and secondary dendritic arm spacing (SDAS) are calculated by the Ma-Sham (MS) and Feurer-Wunderlin (FW) models, respectively. The mushy zone shape is investigated based on the temperature fields for both the high-rate solidification (HRS) and liquid metal cooling (LMC) processes. The evolution of the microstructure and the crystallographic orientation are analyzed by simulation and by the electron backscatter diffraction (EBSD) technique, respectively. Comparison of the simulated PDAS and SDAS with experimental results reveals good agreement. The results show that the LMC process can provide both dendritic refinement and superior casting performance due to the increased cooling rate and thermal gradient.

  14. Business Performer-Centered Design of User Interfaces

    NASA Astrophysics Data System (ADS)

    Sousa, Kênia; Vanderdonckt, Jean

    Business Performer-Centered Design of User Interfaces is a new design methodology that adopts business process (BP) definition and a business performer perspective for managing the life cycle of user interfaces of enterprise systems. In this methodology, when the organization has a business process culture, the business processes of the organization are first defined according to a traditional methodology for this kind of artifact. These business processes are then transformed into a series of task models that represent the interactive parts of the business processes and that will ultimately lead to interactive systems. When the organization has its enterprise systems but has not yet modeled its business processes, the user interfaces of the systems help derive task models, which are then used to derive the business processes. The double linking between a business process and a task model, and between a task model and a user interface model, makes it possible to ensure traceability of the artifacts along multiple paths and enables more active participation of business performers in analyzing the resulting user interfaces. In this paper, we outline how a human perspective is tied to a model-driven perspective.

  15. The impact of stakeholder involvement in hospital policy decision-making: a study of the hospital's business processes.

    PubMed

    Malfait, Simon; Van Hecke, Ann; Hellings, Johan; De Bodt, Griet; Eeckloo, Kristof

    2017-02-01

    In many health care systems, strategies are currently deployed to engage patients and other stakeholders in decisions affecting hospital services. In this paper, a model for stakeholder involvement is presented and evaluated in three Flemish hospitals. In the model, a stakeholder committee advises the hospital's board of directors on themes of strategic importance. The aim was to study the hospitals' internal decision processes in order to identify the impact of a stakeholder committee on strategic themes in hospital decision-making. A retrospective analysis of the decision processes was conducted in three hospitals that implemented a stakeholder committee; the analysis consisted of process and outcome evaluation. Fifteen themes were discussed in the stakeholder committees, of which 11 resulted in a considerable change. None of these were on a strategic level. The theoretical model was not applied as initially developed but was altered by each hospital; consequently, the decision processes differed between the hospitals. Despite these alterations, the stakeholder committee showed a meaningful impact in all hospitals at the operational level. From the differences in decision processes, three factors could be identified as facilitators for success: (1) close interaction with the board of executives, (2) the inclusion of themes of a more practical and patient-oriented nature, and (3) the elaboration of decisions at lower echelons of the organization. To effectively influence the organization's public accountability, hospitals should involve stakeholders in the decision-making process of the organization. The model of a stakeholder committee was not applied as initially developed and did not affect the strategic decision-making processes in the involved hospitals; the results show impact only at the operational level in the participating hospitals. More research is needed connecting stakeholder involvement with hospital governance.

  16. Software Framework for Advanced Power Plant Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Widmann; Sorin Munteanu; Aseem Jain

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
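
    The 'trained' ROMs mentioned above can be pictured as cheap surrogates fitted to a handful of expensive CFD runs. The sketch below fits a quadratic response surface to hypothetical CFD samples and then evaluates it the way a flowsheet unit model would; the variable names and data points are invented for illustration and are not the APECS ROM format.

      import numpy as np

      # Hypothetical CFD results: inlet air/fuel ratio vs. outlet temperature (K)
      air_fuel_ratio = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
      outlet_temp_K = np.array([1510.0, 1585.0, 1640.0, 1620.0, 1575.0, 1520.0])

      # 'Train' the ROM: least-squares fit of a quadratic response surface
      rom = np.poly1d(np.polyfit(air_fuel_ratio, outlet_temp_K, deg=2))

      # Use the ROM inside a flowsheet loop at negligible cost per evaluation
      for x in (0.85, 1.05, 1.25):
          print(f"ratio={x:.2f} -> predicted outlet T = {rom(x):7.1f} K")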

  17. Two-dimensional time-dependent modelling of fume formation in a pulsed gas metal arc welding process

    NASA Astrophysics Data System (ADS)

    Boselli, M.; Colombo, V.; Ghedini, E.; Gherardi, M.; Sanibondi, P.

    2013-06-01

    Fume formation in a pulsed gas metal arc welding (GMAW) process is investigated by coupling a time-dependent, axi-symmetric, two-dimensional model, which takes into account both droplet detachment and production of metal vapour, with a model for fume formation and transport based on the method of moments for the solution of the aerosol general dynamic equation. We report simulation results for a pulsed process (peak current = 350 A, background current = 30 A, period = 9 ms) for a 1 mm diameter iron wire with Ar shielding gas. The results show that metal vapour production occurs mainly at the wire tip, whereas fume formation is concentrated in the fringes of the arc in the spatial region close to the workpiece, where metal vapours are transported by convection. The proposed modelling approach allows time-dependent tracking of fumes even in plasma processes where temperature variations occur faster than nanoparticle transport from the nucleation region to the surrounding atmosphere, as is the case for most pulsed GMAW processes.

  18. How far can we go in hydrological modelling without any knowledge of runoff formation processes?

    NASA Astrophysics Data System (ADS)

    Ayzel, Georgy

    2016-04-01

    Hydrological modelling has been a challenging scientific issue for the last 50 years and will likely remain one, owing to the great complexity of runoff formation processes at different spatio-temporal scales. An enormous number of modelling-related papers are submitted to top-ranked journals every year, but in this publication race authors pay increasing attention to the models and data themselves rather than to the underlying watershed processes. The community effort of sharing free and open-source models, combined with the high availability of hydrometeorological data sources, has shifted the paradigm of hydrological science in a technically oriented direction. In developing countries this shift is even clearer because of the absence of field studies and the obligatory requirement of practical significance for government-funded research. As a result, the state of the hydrological modelling discipline is closer to the aim of achieving a high Nash-Sutcliffe efficiency (NSE) than to understanding watershed processes. The lumped physically based land-surface model SWAP (Soil Water - Atmosphere - Plants) and the SCE-UA (Shuffled Complex Evolution method developed at The University of Arizona) technique for robust model parameter search were used for runoff modelling of 323 MOPEX watersheds. No special data analysis or expert-knowledge-based decisions were performed. The median NSE is 0.652, and 90% of the watersheds have an efficiency greater than 0.5. Thus, satisfactory modelling results were obtained without any information about the particular features of each watershed. To support these conclusions, we also built a conceptual rainfall-runoff model based on decision trees and adaptive boosting machine-learning algorithms for one small watershed in the USA; again, no special data analysis or feature engineering was performed. The obtained results demonstrate great model prediction power for both the learning and testing periods (NSE > 0.95). The way we obtained these results is clear and direct: we used both open-source physically based and conceptual models coupled with open-access data. However, these results do not make a significant contribution to the understanding of hydrological cycle processes. Not hydrological modelling itself, but the reason why and for what we do it, is the most challenging issue for future research.
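
    For reference, the Nash-Sutcliffe efficiency used as the benchmark above compares the model error against the variance of the observations. A minimal implementation with synthetic runoff data:

      import numpy as np

      def nse(observed, simulated):
          # Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than
          # the observed mean, and negative values are worse than the mean.
          observed, simulated = np.asarray(observed), np.asarray(simulated)
          return 1.0 - np.sum((observed - simulated) ** 2) / \
                       np.sum((observed - observed.mean()) ** 2)

      # Synthetic daily runoff (mm/day), purely for illustration
      obs = np.array([1.2, 1.5, 3.8, 2.9, 2.1, 1.7, 1.4])
      sim = np.array([1.1, 1.6, 3.2, 3.1, 2.4, 1.6, 1.3])
      print(f"NSE = {nse(obs, sim):.3f}")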

  19. Process-oriented Observational Metrics for CMIP6 Climate Model Assessments

    NASA Astrophysics Data System (ADS)

    Jiang, J. H.; Su, H.

    2016-12-01

    Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations in support of CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and whose better representation will reduce uncertainties in climate models.

  20. Computational modeling of soot nucleation

    NASA Astrophysics Data System (ADS)

    Chung, Seung-Hyun

    Recent studies indicate that soot is the second most significant driver of climate change---behind CO2, but ahead of methane---and increased levels of soot particles in the air are linked to health hazards such as heart disease and lung cancer. Within the soot formation process, soot nucleation is the least understood step, and current experimental findings are still limited. This thesis presents computational modeling studies of the major pathways of the soot nucleation process. In this study, two regimes of soot nucleation---chemical growth and physical agglomeration---were evaluated, and the results demonstrated that combustion conditions determine the relative importance of these two routes. Also, the dimerization process of polycyclic aromatic hydrocarbons, which has been regarded as one of the most important physical agglomeration processes in soot formation, was carefully examined with a new method for obtaining the nucleation rate using molecular dynamics simulation. The results indicate that the role of pyrene dimerization, which is the commonly accepted model, is expected to be highly dependent on flame temperature conditions and may not be a key step in the soot nucleation process. An additional pathway, coronene dimerization in this case, needed to be included to improve the match with experimental data. The results of this thesis provide insight into the soot nucleation process and can be utilized to improve current soot formation models.

  1. Modeling the Effects of Coolant Application in Friction Stir Processing on Material Microstructure Using 3D CFD Analysis

    NASA Astrophysics Data System (ADS)

    Aljoaba, Sharif; Dillon, Oscar; Khraisheh, Marwan; Jawahir, I. S.

    2012-07-01

    The ability to generate nano-sized grains is one of the advantages of friction stir processing (FSP). However, the high temperatures generated within the processing zone during stirring stimulate the grains to grow after recrystallization, so maintaining the small grains becomes a critical issue when using FSP. In reported work, coolants are applied to the fixture and/or the processed material in order to reduce the temperature and, hence, grain growth. Most of the data in the literature concerning cooling techniques are experimental; we have seen no reports that attempt to predict these quantities when coolants are used while the material undergoes FSP. There is therefore a need for a model that predicts the resulting grain size when coolants are used, which is an important step toward designing the material microstructure. In this study, two three-dimensional computational fluid dynamics (CFD) models that simulate FSP with and without coolant application are reported, built with the commercial CFD software STAR-CCM+. In the model with coolant application, the fixture (backing plate) is modeled, whereas it is not in the other model. User-defined subroutines were incorporated in the software to investigate the effects of changing process parameters on the temperature, strain rate, and material velocity fields in, and around, the processed nugget. In addition, a correlation between these parameters and the Zener-Hollomon parameter used in materials science was developed to predict the grain size distribution. Different stirring conditions were incorporated in this study to investigate their effects on material flow and microstructural modification. A comparison of the results obtained by using each of the models on the processed microstructure is also presented for the case of Mg AZ31B-O alloy. The predicted results are also compared with the available experimental data and generally show good agreement.
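
    The correlation referred to above is conventionally built on the Zener-Hollomon parameter, Z = (strain rate) * exp(Q/RT), with the recrystallized grain size decreasing as Z grows. The sketch below evaluates a generic power law d = a * Z**-b; the activation energy Q and the constants a and b are illustrative placeholders, not the fitted values for Mg AZ31B-O.

      import numpy as np

      R = 8.314  # J/(mol K), universal gas constant

      def zener_hollomon(strain_rate, T_kelvin, Q=135e3):
          # Z = eps_dot * exp(Q / (R * T)); Q is a placeholder activation energy
          return strain_rate * np.exp(Q / (R * T_kelvin))

      def grain_size_um(Z, a=1.0e4, b=0.27):
          # Generic power law d = a * Z**-b linking grain size to Z
          return a * Z ** -b

      # Coolant lowers the nugget temperature, raising Z and refining grains
      for T in (600.0, 650.0, 700.0):  # K
          Z = zener_hollomon(strain_rate=50.0, T_kelvin=T)
          print(f"T={T:5.0f} K  Z={Z:10.3e}  d={grain_size_um(Z):6.2f} um")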

  2. The Effects of Science Models on Students' Understanding of Scientific Processes

    NASA Astrophysics Data System (ADS)

    Berglin, Riki Susan

    This action research study investigated how the use of science models affected fifth-grade students' ability to transfer their science curriculum to a deeper understanding of scientific processes. The study implemented a variety of science models into a chemistry unit over a 6-week period. The research question addressed was: In what ways does using models to learn and teach science help students transfer classroom knowledge to a deeper understanding of scientific processes? Qualitative and quantitative data were collected through pre- and post-science interest inventories, observation field notes, student work samples, focus group interviews, and chemistry unit tests. These data collection tools assessed students' attitudes, engagement, and content knowledge throughout the chemistry unit. The results indicate that the model-based instruction program supported students' engagement in the lessons and understanding of chemistry content. The results also showed that students displayed positive attitudes toward using science models.

  3. How much expert knowledge is it worth to put in conceptual hydrological models?

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Zappa, Massimiliano

    2017-04-01

    Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations in ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and invest most of their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausible narrow value ranges for each model parameter. Since, most of the time, the modelling goal is exclusively to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results because of the equifinality of hydrological models, overfitting problems, and the numerous uncertainty sources affecting runoff simulations. Therefore, to test the extent to which expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' framework, relying on parameter and process constraints defined from expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.

  4. Simulation of the Onset of the Southeast Asian Monsoon During 1997 and 1998: The Impact of Surface Processes

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Lau, W.; Baker, R.

    2004-01-01

    The onset of the southeast Asian monsoon during 1997 and 1998 was simulated with a coupled mesoscale atmospheric model (MM5) and a detailed land surface model. The rainfall results from the simulations were compared with observed satellite data from the TRMM (Tropical Rainfall Measuring Mission) TMI (TRMM Microwave Imager) and GPCP (Global Precipitation Climatology Project). The simulation with the land surface model captured basic signatures of the monsoon onset processes and the associated rainfall statistics. The sensitivity tests indicated that land surface processes had a greater impact on the simulated rainfall than a small sea surface temperature change did during the onset period. In both the 1997 and 1998 cases, the simulations were significantly improved by including the land surface processes. The results indicated that land surface processes played an important role in modifying the low-level wind field over two major branches of the circulation: the southwest low-level flow over the Indo-China peninsula and the northern cold front intrusion from southern China. The surface sensible and latent heat exchange between the land and atmosphere modified the low-level temperature distribution and gradient, and therefore the low-level winds. The more realistic forcing of the sensible and latent heat from the detailed land surface model improved the monsoon rainfall and associated wind simulation. The model results will be compared to a simulation of the 6-7 May 2000 Missouri flash flood event. In addition, the impact of model initialization and land surface treatment on the timing, intensity, and location of extreme precipitation will be examined.

  6. Petri net based model of the body iron homeostasis.

    PubMed

    Formanowicz, Dorota; Sackmann, Andrea; Formanowicz, Piotr; Błazewicz, Jacek

    2007-10-01

    Body iron homeostasis is a complex and not fully understood process. Although some components of this process have been described in the literature, a complete model of the whole process has not been proposed. In this paper, a Petri net based model of body iron homeostasis is presented. Petri nets have recently been used for describing and analyzing various biological processes, since they allow the system under consideration to be modeled very precisely. The main result presented in the paper is twofold: an informal description of the main part of the iron homeostasis process is given, and it is then formulated in the formal language of Petri net theory. This model opens the process to simulation, since Petri net theory provides many established analysis techniques.
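
    To make the Petri net formalism concrete, the toy sketch below implements places, transitions, and the firing rule in a few lines of Python. The two-step 'absorb then store' fragment is a made-up stand-in, not the paper's actual iron homeostasis net.

      # Minimal Petri net: places hold tokens; a transition is enabled when
      # every input place holds at least as many tokens as its arc weight.
      marking = {"dietary_Fe": 2, "plasma_Fe": 0, "ferritin_Fe": 0}

      transitions = {
          "absorb": ({"dietary_Fe": 1}, {"plasma_Fe": 1}),   # (inputs, outputs)
          "store": ({"plasma_Fe": 1}, {"ferritin_Fe": 1}),
      }

      def enabled(name):
          inputs, _ = transitions[name]
          return all(marking[p] >= w for p, w in inputs.items())

      def fire(name):
          inputs, outputs = transitions[name]
          for p, w in inputs.items():
              marking[p] -= w
          for p, w in outputs.items():
              marking[p] += w

      for step in ("absorb", "store", "absorb", "store"):
          if enabled(step):
              fire(step)
      print(marking)  # {'dietary_Fe': 0, 'plasma_Fe': 0, 'ferritin_Fe': 2}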

  7. Fixation of strategies with the Moran and Fermi processes in evolutionary games

    NASA Astrophysics Data System (ADS)

    Liu, Xuesong; He, Mingfeng; Kang, Yibin; Pan, Qiuhui

    2017-10-01

    A model of stochastic evolutionary game dynamics in a finite population was built. It combines the standard Moran and Fermi rules with two strategies, cooperation and defection. We obtain expressions for the fixation probabilities and fixation times. The one-third rule, which has been found in the frequency-dependent Moran process, also holds for our model. We obtain the conditions under which a strategy is evolutionarily stable in our model and compare them with those of the standard Moran process. Besides, the analytical results show that, compared with the standard Moran process, fixation occurs with higher probability under a prisoner's dilemma game and a coordination game, but with lower probability under a coexistence game. The simulation results show that the fixation time in our mixed process is lower than that in the standard Fermi process. In comparison with the standard Moran process, fixation always takes more time on average in spatial populations, regardless of the game. In addition, the fixation time decreases as the number of neighbors grows.
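
    The fixation probabilities discussed above can be checked by direct simulation. The sketch below estimates the fixation probability of a single neutral mutant under the standard Moran process, for which theory gives 1/N; the paper's payoff matrices, Fermi updating, and spatial structure are omitted for brevity.

      import random

      def moran_fixation(N=20, trials=20000):
          # Estimate the fixation probability of one neutral mutant in a
          # well-mixed population of size N under the standard Moran process.
          fixed = 0
          for _ in range(trials):
              mutants = 1
              while 0 < mutants < N:
                  # Neutral drift: one birth and one death per step, each
                  # drawn according to the current mutant frequency
                  birth_is_mutant = random.random() < mutants / N
                  death_is_mutant = random.random() < mutants / N
                  mutants += birth_is_mutant - death_is_mutant
              fixed += (mutants == N)
          return fixed / trials

      print(moran_fixation())  # theory: 1/N = 0.05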

  8. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct, by means of symbolic regression, two alternative models of pan evaporation estimation: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted to determine whether any substantial differences exist between the two options. This analysis addresses recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. Such differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.

  9. First-principles modeling of laser-matter interaction and plasma dynamics in nanosecond pulsed laser shock processing

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongyang; Nian, Qiong; Doumanidis, Charalabos C.; Liao, Yiliang

    2018-02-01

    Nanosecond pulsed laser shock processing (LSP) techniques, including laser shock peening, laser peen forming, and laser shock imprinting, have been employed for widespread industrial applications. In these processes, the main beneficial characteristic is the laser-induced shockwave with a high pressure (of the order of GPa), which leads to plastic deformation at an ultrahigh strain rate (10^5-10^6 s^-1) on the surface of target materials. Although LSP processes have been extensively studied experimentally, few efforts have been devoted to elucidating the underlying process mechanisms through a physics-based process model. In particular, the development of a first-principles model is critical for process optimization and novel process design. This work introduces such a theoretical model for a fundamental understanding of process mechanisms in LSP. Emphasis is placed on the laser-matter interaction and plasma dynamics. The model is found to offer capabilities in predicting key parameters including electron and ion temperatures, plasma state variables (temperature, density, and pressure), and the propagation of the laser shockwave. The modeling results were validated by experimental data.

  10. Adolescents' Use of Self-Regulatory Processes and Their Relation to Qualitative Mental Model Shifts while Using Hypermedia

    ERIC Educational Resources Information Center

    Greene, Jeffrey Alan; Azevedo, Roger

    2007-01-01

    This study examined 148 adolescents' use of self-regulated learning (SRL) processes when learning about the circulatory system using hypermedia. We examined participants' verbal protocols to determine the relationship between SRL processes and qualitative shifts in students' mental models from pretest to posttest. Results indicated that…

  11. New figuring model based on surface slope profile for grazing-incidence reflective optics

    DOE PAGES

    Zhou, Lin; Huang, Lei; Bouet, Nathalie; ...

    2016-08-09

    Surface slope profile is widely used in the metrology of grazing-incidence reflective optics instead of surface height profile. Nevertheless, the theoretical and experimental model currently used in deterministic optical figuring processes is based on surface height, not on surface slope. This means that the raw slope profile data from metrology need to be converted to a height profile to perform the current height-based figuring processes. The inevitable measurement noise in the raw slope data will introduce significant cumulative error in the resultant height profiles. As a consequence, this conversion will degrade the determinism of the figuring processes and will have an impact on the ultimate surface figuring results. To overcome this problem, an innovative figuring model is proposed, which directly uses the raw slope profile data instead of the usual height data as input for the deterministic process. In this article, first the influence of the measurement noise on the resultant height profile is analyzed, and then a new model is presented; finally, a demonstration experiment is carried out using a one-dimensional ion beam figuring process to demonstrate the validity of our approach.
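
    The cumulative error motivating the slope-based model is easy to demonstrate: converting a slope profile to a height profile is a running sum, so independent slope noise accumulates as a random walk in height. A small numpy sketch with synthetic data:

      import numpy as np

      rng = np.random.default_rng(0)
      n, dx = 1000, 1e-3  # number of samples and spacing (m)
      x = np.arange(n) * dx

      true_slope = 1e-6 * np.sin(2 * np.pi * x / x[-1])  # rad, synthetic figure
      noise = rng.normal(0.0, 5e-8, n)                   # slope measurement noise
      measured_slope = true_slope + noise

      # Slope-to-height conversion integrates (cumulatively sums) the noise
      height_true = np.cumsum(true_slope) * dx
      height_meas = np.cumsum(measured_slope) * dx
      err = height_meas - height_true

      print("slope noise rms  :", noise.std())
      print("height error rms :", err.std())  # grows like sqrt(n) * dx * sigma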

  12. On the Coupling Between the Incus and the Stapes in the Cat

    PubMed Central

    Heng Siah, T.; McKee, Marc D.; Daniel, Sam J.; Decraemer, Willem F.

    2005-01-01

    The connection between the long process and the lenticular process of the incus is extremely fine, so much so that some authors have treated the lenticular process as a separate bone. We review descriptions of the lenticular process that have appeared in the literature, and present some new histological observations. We discuss the dimensions and composition of the lenticular process and of the incudostapedial joint, and present estimates of the material properties for the bone, cartilage, and ligament of which they are composed. We present a preliminary finite-element model which includes the lenticular plate, the bony pedicle connecting the lenticular plate to the long process, the head of the stapes, and the incudostapedial joint. The model has a much simplified geometry. We present simulation results for ranges of values for the material properties. We then present simulation results for this model when it is incorporated into an overall model of the middle ear of the cat. For the geometries and material properties used here, the bony pedicle is found to contribute significant flexibility to the coupling between the incus and the stapes. PMID:15735938

  13. Numerical model of thermo-mechanical coupling for the tensile failure process of brittle materials

    NASA Astrophysics Data System (ADS)

    Fu, Yu; Wang, Zhe; Ren, Fengyu; Wang, Daguo

    2017-10-01

    A numerical model of thermal cracking with a thermo-mechanical coupling effect was established. The theory of tensile failure and heat conduction is used to study the tensile failure process of brittle materials, such as rock and concrete, in high-temperature environments. The validity of the model is verified against thick-walled cylinders with analytical solutions. The failure modes of brittle materials under thermal stresses caused by a temperature gradient and by differing thermal expansion coefficients were studied using a thick-walled cylinder model and an embedded particle model, respectively. In the thick-walled cylinder model, different forms of cracks induced by the temperature gradient were obtained under different temperature boundary conditions. In the embedded particle model, radial cracks were produced in the part of the medium with lower tensile strength as the temperature increased, because of the differing thermal expansion coefficients. The model results are in good agreement with the experimental results, thereby providing a new finite element method for analyzing the thermal damage process and mechanism of brittle materials.

  14. Modelling, simulation and verification of the screening process of a swing-bar sieve based on the DEM

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Yu, Jianqun; Yu, Yajun

    2018-05-01

    To solve the problems in DEM simulations of the screening process of a swing-bar sieve, in this paper we propose the real-virtual boundary method to build the geometrical model of the screen deck of a swing-bar sieve. The motion of the swing-bar sieve is modelled by planar multi-body kinematics. A coupled model of the discrete element method (DEM) with multi-body kinematics (MBK) is presented to simulate the flowing and passing of soybean particles on the screen deck. Comparison of the simulated results with the experimental results of the screening process of the LA-LK laboratory-scale swing-bar sieve verifies the feasibility and validity of the real-virtual boundary method and the coupled DEM-MBK model proposed in this paper. This work provides a basis for the optimized design of swing-bar sieves with circular apertures and complex motion.

  15. Modeling nuclear processes by Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my

    2015-04-29

    Modelling and simulation are essential parts in the study of dynamic system behaviours. In nuclear engineering, modelling and simulation are important for assessing the expected results of an experiment before the actual experiment is conducted, and in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the delayed neutron effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.
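
    As a flavor of the describing equations mentioned above, the sketch below integrates a simple two-member radionuclide decay chain, dN1/dt = -lambda1*N1 and dN2/dt = lambda1*N1 - lambda2*N2, using scipy as an open-source analogue of the Simulink block model; the half-lives are arbitrary examples.

      import numpy as np
      from scipy.integrate import solve_ivp

      lam1, lam2 = np.log(2) / 8.0, np.log(2) / 2.0  # decay constants (1/day)

      def decay_chain(t, N):
          N1, N2 = N
          return [-lam1 * N1,              # parent:   dN1/dt = -lam1*N1
                  lam1 * N1 - lam2 * N2]   # daughter: dN2/dt = lam1*N1 - lam2*N2

      sol = solve_ivp(decay_chain, (0.0, 40.0), [1.0e6, 0.0],
                      t_eval=np.linspace(0.0, 40.0, 9))
      for t, N1, N2 in zip(sol.t, sol.y[0], sol.y[1]):
          print(f"t={t:5.1f} d  parent={N1:10.1f}  daughter={N2:10.1f}")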

  16. Identification of Biokinetic Models Using the Concept of Extents.

    PubMed

    Mašić, Alma; Srinivasan, Sriniketh; Billeter, Julien; Bonvin, Dominique; Villez, Kris

    2017-07-05

    The development of a wide array of process technologies to enable the shift from conventional biological wastewater treatment processes to resource recovery systems is matched by an increasing demand for predictive capabilities. Mathematical models are excellent tools to meet this demand. However, obtaining reliable and fit-for-purpose models remains a cumbersome task due to the inherent complexity of biological wastewater treatment processes. In this work, we present a first study in the context of environmental biotechnology that adopts and explores the use of extents as a way to simplify and streamline the dynamic process modeling task. In addition, the extent-based modeling strategy is enhanced by optimal accounting for nonlinear algebraic equilibria and nonlinear measurement equations. Finally, a thorough discussion of our results explains the benefits of extent-based modeling and its potential to turn environmental process modeling into a highly automated task.

  17. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models built from real error data collected on a multiprocessor system are described, along with model development from the raw error data to the estimation of cumulative reward. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
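
    The distinction drawn above is that a Markov model forces exponentially distributed holding times, while a semi-Markov model lets each state carry an arbitrary holding-time distribution. The sketch below simulates a two-state operational/error system both ways; the distributions and time constants are illustrative, not the measured IBM 3081 data.

      import numpy as np

      rng = np.random.default_rng(1)
      P = np.array([[0.0, 1.0],   # transition probabilities between state 0
                    [1.0, 0.0]])  # ('operational') and state 1 ('error')

      def simulate(holding_time, steps=10000):
          state, total = 0, np.zeros(2)
          for _ in range(steps):
              total[state] += holding_time(state)
              state = rng.choice(2, p=P[state])
          return total / total.sum()  # fraction of time spent in each state

      scale = [100.0, 5.0]  # mean-like time scales for the two states
      markov = simulate(lambda s: rng.exponential(scale[s]))
      semi = simulate(lambda s: rng.weibull(0.7) * scale[s])  # non-exponential

      print("time fraction per state, Markov     :", markov)
      print("time fraction per state, semi-Markov:", semi)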

  18. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    NASA Astrophysics Data System (ADS)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  19. Stochastic Modelling, Analysis, and Simulations of the Solar Cycle Dynamic Process

    NASA Astrophysics Data System (ADS)

    Turner, Douglas C.; Ladde, Gangaram S.

    2018-03-01

    Analytical solutions, discretization schemes, and simulation results are presented for the time-delay deterministic differential equation model of the solar dynamo presented by Wilmot-Smith et al. In addition, this model is extended with stochastic Gaussian white noise parametric fluctuations. The introduction of stochastic fluctuations incorporates variables affecting the dynamo process in the solar interior, estimation error of parameters, and uncertainty in the α-effect mechanism. Simulation results are presented and analyzed to exhibit the effects of stochastic parametric volatility-dependent perturbations. The results generalize and extend the work of Hazra et al.; in fact, some of them exhibit the oscillatory dynamic behavior generated by the stochastic parametric additive perturbations in the absence of time delay. In addition, the simulation results of the modified stochastic models inform the change in behavior of the recently developed stochastic model of Hazra et al.

  20. Operation, Modeling and Analysis of the Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Whitlow, Jonathan E.

    2001-01-01

    The Reverse Water Gas Shift process is a candidate technology for water and oxygen production on Mars under the In-Situ Propellant Production project. This report focuses on the operation and analysis of the Reverse Water Gas Shift (RWGS) process, which has been constructed at Kennedy Space Center. A summary of results from the initial operation of the RWGS process, along with an analysis of these results, is included in this report. In addition, an evaluation of a material balance model developed from work performed previously under the summer program is included, along with recommendations for further experimental work.
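
    The reaction at the heart of the process is CO2 + H2 <-> CO + H2O. A minimal single-pass material-balance sketch at an assumed CO2 conversion, with made-up feed rates rather than the actual test data:

      # RWGS: CO2 + H2 -> CO + H2O (1:1:1:1 stoichiometry per mole converted)
      feed = {"CO2": 1.00, "H2": 2.00, "CO": 0.0, "H2O": 0.0}  # mol/s, assumed
      X = 0.45  # assumed single-pass CO2 conversion

      converted = X * feed["CO2"]
      out = {
          "CO2": feed["CO2"] - converted,
          "H2": feed["H2"] - converted,
          "CO": feed["CO"] + converted,
          "H2O": feed["H2O"] + converted,
      }

      print("outlet flows (mol/s):", {k: round(v, 3) for k, v in out.items()})
      # Total moles are conserved for this equimolar reaction:
      print("mole balance check  :",
            round(sum(feed.values()), 9) == round(sum(out.values()), 9))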

  1. A computational approach to climate science education with CLIMLAB

    NASA Astrophysics Data System (ADS)

    Rose, B. E. J.

    2017-12-01

    CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including: a full-featured, well-documented, interactive implementation of a widely used radiation model (RRTM); packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows, and Linux; interfacing with xarray for i/o and graphics with gridded model data; and a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format.
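
    A minimal taste of the workflow described above, assuming CLIMLAB is installed (e.g. from conda-forge); the EBM class and integrate_years method are part of climlab's documented interface, though defaults may differ between versions.

      import climlab

      # Create a default 1D energy balance model (EBM) on a latitude grid
      model = climlab.EBM()
      print(model)  # lists the subprocesses the EBM combines

      # Step the model forward toward equilibrium
      model.integrate_years(5)

      # Inspect the simulated zonal-mean surface temperature
      print(model.lat[:5])  # latitude points (degrees)
      print(model.Ts[:5])   # surface temperature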

  2. Architectural Improvements and New Processing Tools for the Open XAL Online Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M

    The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have recently been made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data, while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.

  3. Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven; Lowe, Jonathan; Woodburn, James

    2015-01-01

    Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
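
    The two stochastic coefficient models compared above differ in where they revert: a first-order Gauss-Markov process decays toward zero, while a Vasicek process reverts toward a non-zero long-run mean, which suits a solar-pressure coefficient whose true value sits near a constant. A discretized simulation sketch with illustrative parameters:

      import numpy as np

      rng = np.random.default_rng(42)
      dt, n = 60.0, 5000         # step size (s) and number of steps
      tau, sigma = 3600.0, 1e-3  # time constant and noise strength

      def gauss_markov(x0=1.3):
          x = np.empty(n)
          x[0] = x0
          for k in range(1, n):  # exponentially reverts toward zero
              x[k] = x[k-1] * np.exp(-dt / tau) + sigma * np.sqrt(dt) * rng.normal()
          return x

      def vasicek(x0=1.3, mean=1.3, a=1.0 / 3600.0):
          x = np.empty(n)
          x[0] = x0
          for k in range(1, n):  # reverts toward a non-zero long-run mean
              x[k] = x[k-1] + a * (mean - x[k-1]) * dt \
                     + sigma * np.sqrt(dt) * rng.normal()
          return x

      print("Gauss-Markov long-run mean:", gauss_markov()[-1000:].mean())
      print("Vasicek long-run mean     :", vasicek()[-1000:].mean())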

  4. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business, and quality management is nowadays mandatory in the production chain. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modeling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established from data acquisition to processing, analysis, and visualization, quality management is not yet a standard part of this workflow. Processing data sets with unclear specifications leads to erroneous results and application defects; we show that this problem persists even when data are standard compliant. Validation results for real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  5. Model based adaptive control of a continuous capture process for monoclonal antibodies production.

    PubMed

    Steinebach, Fabian; Angarita, Monica; Karst, Daniel J; Müller-Späth, Thomas; Morbidelli, Massimo

    2016-04-29

    A two-column capture process for continuous processing of cell-culture supernatant is presented. Similar to other multicolumn processes, this process uses sequential countercurrent loading of the target compound in order to maximize resin utilization and productivity for a given product yield. The process was designed using a novel mechanistic model for affinity capture, which takes both specific adsorption and transport through the resin beads into account. Simulations as well as experimental results for the capture of an IgG antibody are discussed. The model was able to predict the process performance in terms of yield, productivity, and capacity utilization. Compared with continuous capture using two columns operated batch-wise in parallel, a 2.5-fold higher capacity utilization was obtained for the same productivity and yield; this results in an equal improvement in product concentration and reduction of buffer consumption. The developed model was used not only for the process design and optimization but also for its online control. In particular, the unit operating conditions are changed in order to maintain high product yield while optimizing the process performance in terms of capacity utilization and buffer consumption, even in the presence of changing upstream conditions and resin aging.

  6. Computational modeling of residual stress formation during the electron beam melting process for Inconel 718

    DOE PAGES

    Prabhakar, P.; Sames, William J.; Dehoff, Ryan R.; ...

    2015-03-28

    Here, a computational modeling approach to simulate residual stress formation in Inconel 718 during the electron beam melting (EBM) process, one of the additive manufacturing (AM) technologies, is presented. The EBM process has demonstrated a high potential to fabricate components with complex geometries, but the resulting components are influenced by the thermal cycles observed during the manufacturing process. When processing nickel-based superalloys, very high temperatures (approx. 1000 °C) are observed in the powder bed, base plate, and build. These high temperatures, when combined with substrate adherence, can result in warping of the base plate and affect the final component by causing defects. It is important to understand the thermo-mechanical response of the entire system, that is, its mechanical behavior under the thermal loading occurring during the EBM process, prior to manufacturing a component. Therefore, computational models that predict the response of the system during the EBM process will aid in eliminating undesired process conditions, a priori, in order to fabricate the optimum component. Such a comprehensive computational modeling approach is demonstrated here to analyze warping of the base plate, stress and plastic strain accumulation within the material, and thermal cycles in the system during different stages of the EBM process.

  7. Leveraging Open Standard Interfaces in Accessing and Processing NASA Data Model Outputs

    NASA Astrophysics Data System (ADS)

    Falke, S. R.; Alameh, N. S.; Hoijarvi, K.; de La Beaujardiere, J.; Bambacus, M. J.

    2006-12-01

    An objective of NASA's Earth Science Division is to develop advanced information technologies for processing, archiving, accessing, visualizing, and communicating Earth Science data. To this end, NASA and other federal agencies have collaborated with the Open Geospatial Consortium (OGC) to research, develop, and test interoperability specifications within projects and testbeds benefiting the government, industry, and the public. This paper summarizes the results of a recent effort, under the auspices of the OGC Web Services testbed phase 4 (OWS-4), to explore standardization approaches for accessing and processing the outputs of NASA models of physical phenomena. Within the OWS-4 context, experiments were designed to leverage the emerging OGC Web Processing Service (WPS) and Web Coverage Service (WCS) specifications to access, filter, and manipulate the outputs of the NASA Goddard Earth Observing System (GEOS) and Goddard Chemistry Aerosol Radiation and Transport (GOCART) forecast models. In OWS-4, the intent is to give users more control over the subsets of data that they can extract from the model results, as well as over the final portrayal of those data. To meet that goal, experiments were designed to test the suitability of the WPS and WCS for filtering, processing, and portraying the model results (including slices by height or by time), and to identify any enhancements to the specifications needed to meet the desired objectives. This paper summarizes the findings of the experiments, highlighting the value of the Web Processing Service in providing standard interfaces for accessing and manipulating model data within spatial and temporal frameworks. The paper also points out the key shortcomings of the WPS, especially in comparison with a SOAP/WSDL approach to solving the same problem.

  8. Neural Network Modeling for Gallium Arsenide IC Fabrication Process and Device Characteristics.

    NASA Astrophysics Data System (ADS)

    Creech, Gregory Lee, I.

    This dissertation presents research focused on the utilization of neurocomputing technology to achieve enhanced yield and effective yield prediction in integrated circuit (IC) manufacturing. Artificial neural networks are employed to model complex relationships between material and device characteristics at critical stages of the semiconductor fabrication process. Whole wafer testing was performed on the starting substrate material and during wafer processing at four critical steps: Ohmic or Post-Contact, Post-Recess, Post-Gate and Final, i.e., at completion of fabrication. Measurements taken and subsequently used in modeling include, among others, doping concentrations, layer thicknesses, planar geometries, layer-to-layer alignments, resistivities, device voltages, and currents. The neural network architecture used in this research is the multilayer perceptron neural network (MLPNN). The MLPNN is trained in the supervised mode using the generalized delta learning rule. It has one hidden layer and uses continuous perceptrons. The research focuses on a number of different aspects. First is the development of inter-process stage models. Intermediate process stage models are created in a progressive fashion. Measurements of material and process/device characteristics taken at a specific processing stage and any previous stages are used as input to the model of the next processing stage characteristics. As the wafer moves through the fabrication process, measurements taken at all previous processing stages are used as input to each subsequent process stage model. Secondly, the development of neural network models for the estimation of IC parametric yield is demonstrated. Measurements of material and/or device characteristics taken at earlier fabrication stages are used to develop models of the final DC parameters. These characteristics are computed with the developed models and compared to acceptance windows to estimate the parametric yield. A sensitivity analysis is performed on the models developed during this yield estimation effort. This is accomplished by analyzing the total disturbance of network outputs due to perturbed inputs. When an input characteristic bears no, or little, statistical or deterministic relationship to the output characteristics, it can be removed as an input. Finally, neural network models are developed in the inverse direction. Characteristics measured after the final processing step are used as the input to model critical in-process characteristics. The modeled characteristics are used for whole wafer mapping and its statistical characterization. It is shown that this characterization can be accomplished with minimal in-process testing. The concepts and methodologies used in the development of the neural network models are presented. The modeling results are provided and compared to the actual measured values of each characteristic. An in-depth discussion of these results and ideas for future research are presented.
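
    The architecture described above, a one-hidden-layer perceptron with continuous (sigmoidal) units trained by the generalized delta rule, is compact enough to sketch directly in numpy; the toy data below stand in for the wafer measurements, which are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      # Toy stand-ins for normalized wafer measurements (X) and a final DC
      # parameter (y); real inputs would be doping, thicknesses, voltages, etc.
      X = rng.uniform(-1, 1, size=(200, 5))
      y = sigmoid(X @ rng.normal(size=(5, 1)) + 0.3)  # synthetic target

      W1, b1 = rng.normal(0, 0.5, (5, 8)), np.zeros(8)  # hidden layer
      W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)  # output layer
      lr = 0.5

      for epoch in range(2000):
          h = sigmoid(X @ W1 + b1)   # forward pass
          out = sigmoid(h @ W2 + b2)
          # Generalized delta rule: delta = error * derivative of activation
          d_out = (out - y) * out * (1 - out)
          d_hid = (d_out @ W2.T) * h * (1 - h)
          W2 -= lr * h.T @ d_out / len(X)
          b2 -= lr * d_out.mean(axis=0)
          W1 -= lr * X.T @ d_hid / len(X)
          b1 -= lr * d_hid.mean(axis=0)

      print("final mse:", float(np.mean((out - y) ** 2)))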

  9. Implementation of the nursing process in a health area: models and assessment structures used

    PubMed Central

    Huitzi-Egilegor, Joseba Xabier; Elorza-Puyadena, Maria Isabel; Urkia-Etxabe, Jose Maria; Asurabarrena-Iraola, Carmen

    2014-01-01

    OBJECTIVE: to analyze what nursing models and nursing assessment structures have been used in the implementation of the nursing process at the public and private centers in the health area Gipuzkoa (Basque Country). METHOD: a retrospective study was undertaken, based on the analysis of the nursing records used at the 158 centers studied. RESULTS: the Henderson model, Carpenito's bifocal structure, Gordon's assessment structure and the Resident Assessment Instrument Nursing Home 2.0 have been used as nursing models and assessment structures to implement the nursing process. At some centers, the selected model or assessment structure has varied over time. CONCLUSION: Henderson's model has been the most used to implement the nursing process. Furthermore, the trend is observed to complement or replace Henderson's model by nursing assessment structures. PMID:25493672

  10. Figure-ground organization and object recognition processes: an interactive account.

    PubMed

    Vecera, S P; O'Reilly, R C

    1998-04-01

    Traditional bottom-up models of visual processing assume that figure-ground organization precedes object recognition. This assumption seems logically necessary: How can object recognition occur before a region is labeled as figure? However, some behavioral studies find that familiar regions are more likely to be labeled figure than less familiar regions, a problematic finding for bottom-up models. An interactive account is proposed in which figure-ground processes receive top-down input from object representations in a hierarchical system. A graded, interactive computational model is presented that accounts for behavioral results in which familiarity effects are found. The interactive model offers an alternative conception of visual processing to bottom-up models.

  11. Modeling process of embolization arteriovenous malformation on the basis of two-phase filtration model

    NASA Astrophysics Data System (ADS)

    Cherevko, A. A.; Gologush, T. S.; Ostapenko, V. V.; Petrenko, I. A.; Chupakhin, A. P.

    2016-06-01

    An arteriovenous malformation (AVM) is a chaotic, disordered interlacement of vessels of very small diameter that shunts blood from the artery into the vein; in this regard it can be adequately modeled as a porous medium. In this model, the embolization process is described as the penetration of the non-adhesive embolic agent ONYX into the blood-filled porous medium, the two fluids being immiscible. In a one-dimensional approximation, such processes are well described by the Buckley-Leverett equation. In this paper, the Buckley-Leverett equation is solved numerically using a new modification of the CABARET scheme. Results of the numerical modeling of the AVM embolization process are shown.
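
    For orientation, the Buckley-Leverett equation is s_t + f(s)_x = 0 with the fractional-flow function f(s) = s^2 / (s^2 + M(1-s)^2). The sketch below advances it with a plain first-order upwind scheme rather than the paper's CABARET modification; M, the grid, and the boundary data are illustrative.

      import numpy as np

      M = 2.0  # viscosity ratio (illustrative)

      def f(s):
          # Buckley-Leverett fractional flow of the injected phase
          return s**2 / (s**2 + M * (1.0 - s)**2)

      nx = 200
      dx, dt = 1.0 / nx, 0.25 / nx  # grid spacing and CFL-limited time step
      s = np.zeros(nx)
      s[0] = 1.0  # injected (ONYX-like) phase enters at x = 0

      for _ in range(200):
          flux = f(s)
          # First-order upwind update (f is increasing, flow is left to right)
          s[1:] -= dt / dx * (flux[1:] - flux[:-1])
          s[0] = 1.0  # inflow boundary condition

      print("front position ~", np.argmax(s < 0.05) * dx)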

  12. Revisiting low-fidelity two-fluid models for gas-solids transport

    NASA Astrophysics Data System (ADS)

    Adeleke, Najeem; Adewumi, Michael; Ityokumbul, Thaddeus

    2016-08-01

    Two-phase gas-solids transport models are widely utilized for process design and automation in a broad range of industrial applications, including proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors, and food processing units. Systems automation and real-time process optimization stand to benefit a great deal from the availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas-solids flow behavior. In this study, we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that keeps the physical flux term conservative despite the inclusion of the solids normal stress through the empirical formula for the modulus of elasticity. A new set of Roe-Pike averages is presented for the resulting strictly hyperbolic flux term in the system of equations, and was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux limiter. The model is evaluated by predicting experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of the numerical formulation and choice of numerical scheme on model predictions, and illustrate the capability of a low-fidelity one-dimensional two-fluid model to predict relevant flow parameters in two-phase particulate systems accurately, even under flow regimes involving counter-current flow.

  13. The effects of global awareness on the spreading of epidemics in multiplex networks

    NASA Astrophysics Data System (ADS)

    Zang, Haijuan

    2018-02-01

    It is increasingly recognized that understanding the complex interplay between epidemic spreading and human behavior is a key component of successful infection control efforts. In particular, individuals can obtain information about an epidemic and respond by altering their behavior, which in turn affects the spreading dynamics. Moreover, owing to herd-like behavior, individuals are easily influenced by global awareness information. Here we propose a global awareness controlled spreading model (GACS) to explore the interplay between the coupled dynamical processes. Using the global microscopic Markov chain approach, we obtain analytical results for the epidemic thresholds, which show high accuracy by comparison with extensive Monte Carlo simulations. Furthermore, we make detailed comparisons between the GACS model and other classical models used to describe the coupled dynamics, including the local awareness controlled contagion spreading (LACS) model, the Susceptible-Infected-Susceptible-Unaware-Aware-Unaware (SIS-UAU) model, and the single-layer case. Although the comparisons depend on the parameters of each model, the GACS model always shows a strong restraining effect on the epidemic spreading process. Our results give a better understanding of the coupled dynamical processes and highlight the importance of considering the spreading of global awareness in the control of epidemics.
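
    A minimal sketch of a microscopic Markov chain (MMCA) iteration of the kind used above follows; the update rules (an SIS process whose infection probability is reduced when global prevalence raises awareness) are a simplified stand-in for the GACS model, and all parameter names and values are assumptions.

      # Sketch: MMCA iteration for SIS with global-awareness-reduced infectivity.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 200
      A = (rng.random((N, N)) < 0.05).astype(float)   # random contact layer
      A = np.triu(A, 1); A = A + A.T                  # symmetric adjacency

      beta, mu = 0.08, 0.2      # infection / recovery probabilities (assumed)
      lam, gamma = 0.9, 0.3     # awareness strength / infectivity reduction (assumed)

      p = np.full(N, 0.1)       # p[i]: probability node i is infected
      for _ in range(300):
          rho = p.mean()                         # global prevalence drives awareness
          aware = lam * rho                      # prob. a node is aware this step
          beta_eff = aware * gamma * beta + (1.0 - aware) * beta
          # q[i]: prob. of NOT being infected by any neighbour
          q = np.prod(1.0 - beta_eff * (A * p), axis=1)
          p = (1.0 - p) * (1.0 - q) + p * (1.0 - mu) + p * mu * (1.0 - q)
      print("stationary prevalence ~", p.mean())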

  14. A two-phase flow model for submarine granular flows: With an application to collapse of deeply-submerged granular columns

    NASA Astrophysics Data System (ADS)

    Lee, Cheng-Hsien; Huang, Zhenhua

    2018-05-01

    The collapse process of a submerged granular column is strongly affected by its initial packing. Previous models for the particle response time, which is used to quantify the drag force between the solid and liquid phases in rheology-based two-phase flow models, have difficulty in simulating the collapse of granular columns with different initial concentrations (initial packing conditions). This study introduces a new model for the particle response time, which enables us to model the drag force between the two phases satisfactorily over a wide range of volume concentrations. The present model gives satisfactory results for both loose and dense packing conditions. The numerical results show that (i) the initial packing affects the occurrence of contractancy/dilatancy behavior during the collapse process, (ii) the general buoyancy and drag force are strongly affected by the initial packing through contractancy and dilatancy, and (iii) the general buoyancy and drag force can destabilize the granular material in a loose packing condition but stabilize it in a dense packing condition. The results also show that the collapse of a densely-packed granular column is more sensitive to the particle response time than that of a loosely-packed column.

  15. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, the numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were further processed to estimate time-frequency and frequency domain results, such as the spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to the analysis of the numerically modelled results was developed with the aim of presenting an analytical validation of the modelled results. While the time and frequency domain analyses show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement between the numerical and analytical results validates the results from the explicit FE modelling of the bearing.
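
    The post-processing stage described above (power spectrum and spectrogram estimation) can be sketched as follows; the synthetic signal, with impulses at an assumed outer-race defect frequency of 90 Hz exciting a 3 kHz resonance, is illustrative only.

      # Sketch: standard power spectrum / spectrogram estimation with SciPy.
      import numpy as np
      from scipy import signal

      fs = 20000
      t = np.arange(0, 1.0, 1 / fs)
      bpfo = 90.0                                   # assumed defect frequency [Hz]
      impulses = (np.sin(2 * np.pi * bpfo * t) > 0.999).astype(float)
      # each impulse excites a decaying 3 kHz structural resonance
      x = np.convolve(impulses, np.exp(-t[:200] * 800) *
                      np.sin(2 * np.pi * 3000 * t[:200]), mode="same")
      x += 0.05 * np.random.default_rng(1).standard_normal(len(t))

      f_psd, Pxx = signal.welch(x, fs=fs, nperseg=4096)         # power spectrum
      f_spec, t_spec, Sxx = signal.spectrogram(x, fs=fs, nperseg=512)
      print("PSD peak near", f_psd[np.argmax(Pxx)], "Hz")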

  16. Multifactorial modelling of high-temperature treatment of timber in the saturated water steam medium

    NASA Astrophysics Data System (ADS)

    Prosvirnikov, D. B.; Safin, R. G.; Ziatdinova, D. F.; Timerbaev, N. F.; Lashkov, V. A.

    2016-04-01

    The paper analyses experimental data obtained in studies of the high-temperature treatment of softwood and hardwood in a saturated water steam environment. The data were processed in the Curve Expert software for the purpose of statistical modelling of the processes and phenomena occurring during the treatment. The multifactorial modelling resulted in empirical dependences that allow the main parameters of this type of hydrothermal treatment to be determined with high accuracy.
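
    The curve-fitting workflow described above can be sketched as follows; the saturation-type law and the synthetic data are assumptions, chosen only to illustrate fitting an empirical dependence.

      # Sketch: fitting an empirical dependence to processed data.
      import numpy as np
      from scipy.optimize import curve_fit

      def model(t, a, b, c):
          return a * (1.0 - np.exp(-b * t)) + c     # assumed saturation-type law

      t = np.array([0, 5, 10, 20, 40, 60, 90])      # treatment time, min (synthetic)
      y = np.array([0.0, 2.1, 3.6, 5.4, 6.9, 7.4, 7.7])  # response (synthetic)
      popt, pcov = curve_fit(model, t, y, p0=(8.0, 0.05, 0.0))
      print("fitted parameters:", popt)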

  17. The Art and Science of Climate Model Tuning

    DOE PAGES

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew; ...

    2017-03-31

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning, and how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  18. The Art and Science of Climate Model Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning, and how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  19. Numerical and experimental studies on effects of moisture content on combustion characteristics of simulated municipal solid wastes in a fixed bed.

    PubMed

    Sun, Rui; Ismail, Tamer M; Ren, Xiaohan; Abd El-Salam, M

    2015-05-01

    In order to reveal the features of the combustion process in the porous bed of a waste incinerator, a two-dimensional unsteady-state model and an experimental study were employed to investigate the combustion of municipal solid waste (MSW) in a fixed-bed reactor. Conservation equations for the waste bed were implemented to describe the incineration process. The gas-phase turbulence was modeled using the k-ε turbulence model, and the particle phase was modeled using the kinetic theory of granular flow. The rates of moisture evaporation, devolatilization and char burnout were calculated according to the waste property characteristics. The simulation results were then compared with experimental data for different moisture contents of MSW, which shows that the incineration process in the fixed bed is reasonably simulated. The simulated solid temperature, gas species and process rates in the bed accord with the experimental data. Owing to the high moisture content of the fuel, moisture evaporation consumes a vast amount of heat and takes up most of the combustion time (about 2/3 of the whole combustion process). The whole bed combustion process slows greatly as the MSW moisture content increases. The experimental and simulation results provide direction for the design and optimization of fixed beds for MSW.

  20. Modeling the filament winding process

    NASA Technical Reports Server (NTRS)

    Calius, E. P.; Springer, G. S.

    1985-01-01

    A model is presented which can be used to determine the appropriate values of the process variables for filament winding a cylinder. The model provides the cylinder temperature, viscosity, degree of cure, fiber position and fiber tension as functions of position and time during the filament winding and subsequent cure, and the residual stresses and strains within the cylinder during and after the cure. A computer code was developed to obtain quantitative results. Sample results are given which illustrate the information that can be generated with this code.

  1. The Cognitive-Miser Response Model: Testing for Intuitive and Deliberate Reasoning

    ERIC Educational Resources Information Center

    Bockenholt, Ulf

    2012-01-01

    In a number of psychological studies, answers to reasoning vignettes have been shown to result from both intuitive and deliberate response processes. This paper utilizes a psychometric model to separate these two response tendencies. An experimental application shows that the proposed model facilitates the analysis of dual-process item responses…

  2. Development of dynamic Bayesian models for web application test management

    NASA Astrophysics Data System (ADS)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool for modeling complex stochastic dynamic processes. According to the results of the research, the mathematical models and methods of dynamic Bayesian networks provide high coverage of the stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. A formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connections between individual test assets across multiple time slices. This approach makes it possible to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities that directly influence each other in the process of comprehensive testing for various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
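
    A minimal sketch of the underlying machinery follows: a two-slice dynamic Bayesian network in which a hidden defect state evolves between time slices and emits test outcomes, filtered with the standard forward algorithm. All probabilities are illustrative assumptions; the paper's networks are richer.

      # Sketch: forward filtering in a two-slice dynamic Bayesian network.
      import numpy as np

      T_mat = np.array([[0.9, 0.1],     # P(X_t | X_{t-1}): rows = previous state
                        [0.3, 0.7]])    # states: 0 = no defect, 1 = defect
      E = np.array([[0.95, 0.05],       # P(obs | X): rows = state,
                    [0.20, 0.80]])      # cols: 0 = test passes, 1 = test fails

      obs = [0, 1, 1, 0, 1]             # observed test outcomes over five runs
      belief = np.array([0.8, 0.2])     # prior over the hidden state
      for o in obs:
          belief = belief @ T_mat       # predict: propagate through the transition
          belief = belief * E[:, o]     # update: weight by the evidence likelihood
          belief /= belief.sum()
      print("P(defect | evidence) =", belief[1])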

  3. Modeling and analyses for an extended car-following model accounting for drivers' situation awareness from cyber physical perspective

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin

    2018-07-01

    The driving process is a typical cyber-physical process that tightly couples the cyber factor of traffic information with the physical components of the vehicles. Meanwhile, drivers exhibit situation awareness in the driving process, which depends not only on the current traffic state but also extrapolates its changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation reveals theoretically the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.
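
    For orientation, the sketch below simulates a classic optimal-velocity car-following model on a ring road; the paper's extension adds a situation-awareness term that is not reproduced here, and all parameter values are conventional assumptions.

      # Sketch: optimal-velocity car-following model on a ring road.
      import numpy as np

      N, L = 20, 500.0                 # vehicles, ring length [m] (headway 25 m)
      a = 1.0                          # driver sensitivity [1/s]

      def V(dx):
          """Optimal velocity as a function of headway dx (Bando-type form)."""
          return 16.8 * (np.tanh(0.086 * (dx - 25.0)) + 0.913)

      rng = np.random.default_rng(2)
      x = np.linspace(0.0, L, N, endpoint=False) + rng.normal(0, 0.5, N)
      v = np.full(N, V(L / N))
      dt = 0.1
      for _ in range(5000):
          headway = (np.roll(x, -1) - x) % L     # distance to the car ahead
          v += a * (V(headway) - v) * dt         # relax toward the optimal velocity
          x = (x + v * dt) % L
      print("velocity std (jam indicator):", v.std())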

  4. Numerical Simulation and Optimization of Directional Solidification Process of Single Crystal Superalloy Casting

    PubMed Central

    Zhang, Hang; Xu, Qingyan; Liu, Baicheng

    2014-01-01

    The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were chosen as the input variables. The input variables were processed through multivariable fuzzy rules to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rules were built based on structural features of the casting, such as the relationship between section area and the delay time of the temperature change response to changes in v, as well as the professional experience of the operator. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process was proven to be more flexible and adaptive for a steady, stray-grain-free DS process. PMID:28788535
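
    A minimal sketch of a fuzzy adjustment of the withdrawal rate follows: two inputs (mushy-zone width and cast-mold interface temperature difference) pass through triangular membership functions and two rules to a crisp correction. The membership ranges and rules are illustrative assumptions, not the paper's.

      # Sketch: Mamdani-style fuzzy correction of the withdrawal rate v.
      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def adjust_v(mushy_width, dT):
          # fuzzify the inputs (assumed ranges)
          wide   = tri(mushy_width, 4.0, 8.0, 12.0)
          narrow = tri(mushy_width, 0.0, 2.0, 6.0)
          hot    = tri(dT, 20.0, 40.0, 60.0)
          cold   = tri(dT, 0.0, 10.0, 30.0)
          # rule firing strengths (min as AND), each tied to a crisp correction
          slow_down = min(wide, hot)     # -> decrease v by 0.5 mm/min
          speed_up  = min(narrow, cold)  # -> increase v by 0.5 mm/min
          hold      = 1.0 - max(slow_down, speed_up)
          # weighted-average (centroid-like) defuzzification
          num = slow_down * (-0.5) + hold * 0.0 + speed_up * 0.5
          return num / (slow_down + hold + speed_up)

      print("delta v =", adjust_v(mushy_width=9.0, dT=45.0), "mm/min")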

  5. Mathematical Modeling of Thermofrictional Milling Process Using ANSYS WB Software

    NASA Astrophysics Data System (ADS)

    Sherov, K. T.; Sikhimbayev, M. R.; Sherov, A. K.; Donenbayev, B. S.; Rakishev, A. K.; Mazdubai, A. B.; Musayev, M. M.; Abeuova, A. M.

    2017-06-01

    This article presents ANSYS WB-based mathematical modelling of the thermofrictional milling process, which allowed studying the dynamics of the thermal and physical processes occurring during machining. The technique used also allows determination of the optimal cutting conditions of thermofrictional milling for processing various materials, in particular steels 40CN2MA, 30CGSA, 45 and 3sp. From among a number of existing models of cutting fracture, we chose the criterion first proposed by Prof. V. L. Kolmogorov. In order to increase calculation performance, a mathematical model was proposed that used only two objects: a parallelepiped-shaped workpiece and a cutting insert in the form of a pentagonal prism. In addition, the work takes into account the friction coefficient between the cutting insert and the workpiece, taken equal to 0.4. To determine the temperature in the subcontact layer of the workpiece, we introduced the coordinates of nine characteristic points with the same interval in the local coordinate system. As a result, temperature values were obtained for different materials at the studied points as the cutter speed changed. The results showed the possibility of controlling thermal processes during machining by choosing optimal cutting modes.

  6. Hydrological and water quality processes simulation by the integrated MOHID model

    NASA Astrophysics Data System (ADS)

    Epelde, Ane; Antiguedad, Iñaki; Brito, David; Eduardo, Jauch; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-04-01

    Different modelling approaches have been used in recent decades to study water quality degradation caused by non-point source pollution. In this study, the fully distributed, physics-based MOHID model was employed to simulate hydrological processes and nitrogen dynamics in a nitrate vulnerable zone: the Alegria River watershed (Basque Country, Northern Spain). The results indicate that the MOHID code is suitable for simulating hydrological processes at the watershed scale, as the model shows satisfactory performance at simulating discharge (NSE: 0.74 and 0.76 during the calibration and validation periods, respectively). The agronomical component of the code allowed the simulation of agricultural practices, which led to adequate crop yield simulation in the model. Furthermore, the simulated nitrogen export also shows satisfactory performance (NSE: 0.64 and 0.69 during the calibration and validation periods, respectively). While the lack of field measurements does not allow an in-depth evaluation of the nutrient cycling processes, it was observed that the MOHID model simulates annual denitrification within the general ranges established for agricultural watersheds (in this study, 9 kg N ha-1 year-1). In addition, the model simulated the spatial distribution of the denitrification process coherently, directly linked to the simulated hydrological conditions: the highest rates were located near the discharge zone of the aquifer and where the aquifer thickness is low. These results demonstrate the strength of this model for simulating watershed-scale hydrological processes as well as crop production and the water quality degradation derived from agricultural activity (considering both nutrient export and nutrient cycling processes).
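
    The Nash-Sutcliffe efficiency (NSE) used to score these simulations is simple to compute; a sketch with synthetic data follows.

      # Sketch: Nash-Sutcliffe efficiency of a simulated vs. observed series.
      import numpy as np

      def nse(sim, obs):
          sim = np.asarray(sim, dtype=float)
          obs = np.asarray(obs, dtype=float)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      obs = np.array([1.2, 1.5, 2.3, 3.1, 2.2, 1.6])   # synthetic observations
      sim = np.array([1.1, 1.6, 2.0, 3.3, 2.4, 1.5])   # synthetic simulation
      print("NSE =", round(nse(sim, obs), 2))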

  7. Thermo-hydroforming of a fiber-reinforced thermoplastic composites considering fiber orientations

    NASA Astrophysics Data System (ADS)

    Ahn, Hyunchul; Kuuttila, Nicholas Eric; Pourboghrat, Farhang

    2018-05-01

    Thermoplastic woven composites were formed using a thermo-hydroforming process that utilizes heated and pressurized fluid, similar to sheet metal forming. This study focuses on forming with a modified 300-ton press and on predicting the forming behavior. Spectra Shield SR-3136 is used in this study, and its material properties were measured experimentally. The behavior of the fiber-reinforced thermoplastic polymer (FRTP) composites was modeled using the Preferred Fiber Orientation (PFO) model and validated by comparing numerical analysis with experimental results. The thermo-hydroforming process has shown good results in the ability to form deep-drawn parts with reduced wrinkling. Numerical analysis was performed using the PFO model implemented in the commercial finite element software ABAQUS/Explicit, with a user subroutine (VUMAT) providing the material properties of the thermoplastic composite layers. The model is suitable for working with multiple layers of composite laminates, and its parameters were updated to work with a cohesive zone model that calculates the interfacial properties between the composite layers. The numerical modeling showed good correlation with the forming experiments in terms of the formed shape, and the numerical results were also compared with experimental punch force-displacement curves for the deformed geometry and forming process of the composite layers. Overall, the shape of the deformed FRTP, including the distribution of wrinkles, was accurately predicted.

  8. Design, Control and in Situ Visualization of Gas Nitriding Processes

    PubMed Central

    Ratajski, Jerzy; Olik, Roman; Suszko, Tomasz; Dobrodziej, Jerzy; Michalski, Jerzy

    2010-01-01

    The article presents a complex system for the design, in situ visualization and control of a commonly used surface treatment: the gas nitriding process. In the computer-aided design concept, analytical mathematical models and artificial intelligence methods were used. As a result, it became possible to perform poly-optimization and poly-parametric simulations of the course of the process, combined with visualization of the changes in process parameter values as functions of time, as well as to predict the properties of the nitrided layers. For in situ visualization of the growth of the nitrided layer, computer procedures were developed which correlate the direct and differential voltage-time traces of the process result sensor (a magnetic sensor) with the corresponding layer growth stage. These procedures make it possible to link, during the process, the registered voltage-time traces with the models of the process. PMID:22315536

  9. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project.

    PubMed

    Cutting, Elizabeth M; Overby, Casey L; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R; Beitelshees, Amber L

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease.

  10. Using Workflow Modeling to Identify Areas to Improve Genetic Test Processes in the University of Maryland Translational Pharmacogenomics Project

    PubMed Central

    Cutting, Elizabeth M.; Overby, Casey L.; Banchero, Meghan; Pollin, Toni; Kelemen, Mark; Shuldiner, Alan R.; Beitelshees, Amber L.

    2015-01-01

    Delivering genetic test results to clinicians is a complex process. It involves many actors and multiple steps, requiring all of these to work together in order to create an optimal course of treatment for the patient. We used information gained from focus groups in order to illustrate the current process of delivering genetic test results to clinicians. We propose a business process model and notation (BPMN) representation of this process for a Translational Pharmacogenomics Project being implemented at the University of Maryland Medical Center, so that personalized medicine program implementers can identify areas to improve genetic testing processes. We found that the current process could be improved to reduce input errors, better inform and notify clinicians about the implications of certain genetic tests, and make results more easily understood. We demonstrate our use of BPMN to improve this important clinical process for CYP2C19 genetic testing in patients undergoing invasive treatment of coronary heart disease. PMID:26958179

  11. Developing a framework for transferring knowledge into action: a thematic analysis of the literature

    PubMed Central

    Ward, Vicky; House, Allan; Hamer, Susan

    2010-01-01

    Objectives Although there is widespread agreement about the importance of transferring knowledge into action, we still lack high-quality information about what works, in which settings and with whom. While there are a large number of models and theories for knowledge transfer interventions, they are largely untested, meaning that their applicability and relevance are largely unknown. This paper describes the development of a conceptual framework for translating knowledge into action and discusses how it can be used to develop a useful model of the knowledge transfer process. Methods A narrative review of the knowledge transfer literature identified 28 different models which explained all or part of the knowledge transfer process. The models were subjected to a thematic analysis to identify individual components and the types of processes used when transferring knowledge into action. The results were used to build a conceptual framework of the process. Results Five common components of the knowledge transfer process were identified: problem identification and communication; knowledge/research development and selection; analysis of context; knowledge transfer activities or interventions; and knowledge/research utilization. We also identified three types of knowledge transfer processes: a linear process; a cyclical process; and a dynamic multidirectional process. From these results a conceptual framework of knowledge transfer was developed. The framework illustrates the five common components of the knowledge transfer process and shows that they are connected via a complex, multidirectional set of interactions. As such, the framework allows the individual components to occur simultaneously, or in any given order, and to occur more than once during the knowledge transfer process. Conclusion Our framework provides a foundation for gathering evidence from case studies of knowledge transfer interventions. We propose that future empirical work be designed to test and refine the relative importance and applicability of each of the components in order to build more useful models of knowledge transfer which can serve as a practical checklist for planning or evaluating knowledge transfer activities. PMID:19541874

  12. Basin infilling of a schematic 1D estuary using two different approaches: an aggregate diffusive type model and a processed based model.

    NASA Astrophysics Data System (ADS)

    Laginha Silva, Patricia; Martins, Flávio A.; Boski, Tomász; Sampath, Dissanayake M. R.

    2010-05-01

    Fluvial sediment transport creates great challenges for river scientists and engineers. The interaction between the fluid (water) and solid (dispersed sediment particle) phases is crucial in morphodynamics. The process of sediment transport and the resulting morphological evolution of rivers become more complex with the exposure of fluvial systems to a natural and variable environment (climatic, geological, ecological, social, etc.). Early efforts in mathematical river modelling were built almost exclusively on traditional fluvial hydraulics, and the last half century has seen ever more developments and applications of mathematical models for fluvial flow, sediment transport and morphological evolution. The first attempts at a quantitative description and simulation of basin filling on geological time scales started in the late 1960s (e.g., Schwarzacher, 1966; Briggs & Pollack, 1967). However, the quality of this modelling practice has emerged as a crucial issue, widely viewed as the key that could unlock the full potential of computational fluvial hydraulics. Most of the models presently used to study fluvial basin filling are of the "diffusion type" (Flemmings and Jordan, 1989). It must be noted that models of this type do not assume that sediment transport is performed by a physically diffusive process; rather, they are synthetic models based on mass conservation. In the "synthesist" viewpoint (Tipper, 1992; Goldenfeld & Kadanoff, 1999; Werner, 1999, in Paola, 2000), the dynamics of complex systems may occur on many levels (time or space scales), and the dynamics of higher levels may be more or less independent of that at lower levels. In models of this type the low-frequency dynamics is controlled by only a few important processes, and high-frequency processes are not included. Opposed to this is the "reductionist" viewpoint, which states that there is no objective reason to discard high-frequency processes: the system is broken down into its fundamental components and processes, and the model is built up by selecting the important processes regardless of their time and space scales. This viewpoint has only become practical in recent years, owing to improvements in system knowledge and computer power (Paola, 2000). The primary aim of this paper is to demonstrate that it is possible to simulate the evolution of the sediment river bed, traditionally studied with synthetic models, with a process-based hydrodynamic, sediment transport and morphodynamic model that solves the mass and momentum conservation equations explicitly. With this objective, two mathematical models for alluvial rivers are compared by simulating the evolution of the sediment bed of a conceptual 1D embayment over periods on the order of a thousand years: the traditional synthetic basin-infilling aggregate diffusive-type model based on the diffusion equation (Paola, 2000), used in the "synthesist" viewpoint, and the process-based model MOHID (Miranda et al., 2000). The sediment bed evolution simulated by the process-based model MOHID is very similar to that obtained with the diffusive-type model, but more complete: the MOHID results are more comprehensive and realistic because this type of model includes processes that a synthetic model cannot describe. Finally, the combined effects of tide, sea level rise and river discharge were investigated with the process-based model; these effects cannot be simulated with the diffusive-type model. The results demonstrate the feasibility of using process-based models to perform studies on scales of 10000 years, an advance relative to the use of synthetic models that enables the use of variable forcing. REFERENCES • Briggs, L.I. and Pollack, H.N., 1967. Digital model of evaporite sedimentation. Science, 155, 453-456. • Flemmings, P.B. and Jordan, T.E., 1989. A synthetic stratigraphic model of foreland basin development. J. Geophys. Res., 94, 3851-3866. • Miranda, R., Braunschweig, F., Leitão, P., Neves, R., Martins, F. & Santos, A., 2000. MOHID 2000 - A coastal integrated object oriented model. Proc. Hydraulic Engineering Software VIII, Lisbon, 2000, 393-401, Ed. W.R. Blain & C.A. Brebbia, WITpress. • Paola, C., 2000. Quantitative models of sedimentary basin filling. Sedimentology, 47, 121-178. • Schwarzacher, W., 1966. Sedimentation in a subsiding basin. Nature, 5043, 1349-1350. ACKNOWLEDGMENTS This work was supported by the EVEDUS PTDC/CLI/68488/2006 research project.
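
    A minimal sketch of the diffusion-type approach discussed in this record follows: bed elevation evolving by dz/dt = K d2z/dx2 (a mass-conserving sediment flux proportional to slope), integrated with an explicit scheme. The diffusivity, geometry and boundary treatment are assumptions.

      # Sketch: diffusion-type basin infilling over ~1000 years.
      import numpy as np

      nx, L = 100, 10000.0             # cells, basin length [m]
      dx = L / nx
      K = 50.0                         # sediment diffusivity [m^2/yr] (assumed)
      z = np.where(np.arange(nx) < nx // 4, 0.0, -20.0)   # step-like initial bed

      dt = 0.4 * dx * dx / (2 * K)     # explicit stability limit
      t, t_end = 0.0, 1000.0
      while t < t_end:
          z[1:-1] += K * dt / dx**2 * (z[2:] - 2 * z[1:-1] + z[:-2])
          z[0] = 0.0                   # fixed sediment-supply boundary (assumed)
          t += dt
      print("mean bed elevation after", int(t_end), "years:", z.mean())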

  13. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials, and to develop experimental methods that provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, usable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as in AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate; examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, it is necessary to assess the impact of process parameters and to predict optimized conditions using numerical modeling as an effective prediction tool. The processing targets are multiple and at different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research modeled AAM processes with a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes: a sequentially coupled thermomechanical finite element model, validated experimentally, which yielded the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with a mushy zone, incompressible free-surface flow, solute redistribution, and surface tension; because of the excessive computing time required, a parallel computing approach was also tested. In addition, after investigating various methods, a Smoothed Particle Hydrodynamics (SPH) model was developed to model the wire feeding process; its computational efficiency and simple architecture make it more robust and flexible than other models, although more research on material properties may be needed to model the AAM processes realistically. A microscale model was developed to investigate heterogeneous nucleation, dendritic grain growth, epitaxial growth of columnar grains, columnar-to-equiaxed transition, grain transport in the melt, and other properties. The orientations of the columnar grains were almost perpendicular to the direction of laser motion, and compared with similar studies in the literature, the multiple-grain morphology modeling results are of the same order of magnitude as the optical morphologies observed in the experiments. Experimental work was conducted to validate the different models. An infrared camera was incorporated as a process monitoring and validation tool to identify the solidus and mushy zones during deposition, and the images were successfully processed to identify these regions. This research project has investigated the multiscale and multiphysics nature of complex AAM processes, leading to an advanced understanding of these processes, and has developed several modeling and experimental validation tools that will be critical for future AAM process qualification and certification.
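
    As a taste of the macroscale thermal side of such models, the sketch below integrates 1D transient conduction with a moving Gaussian heat source by explicit finite differences; the material values and source parameters are generic assumptions, not the project's.

      # Sketch: 1D heat equation with a moving Gaussian volumetric source.
      import numpy as np

      nx, L = 400, 0.1                 # grid points, domain length [m]
      dx = L / nx
      alpha = 5e-6                     # thermal diffusivity [m^2/s] (assumed)
      k = 20.0                         # conductivity [W/m/K] (assumed)
      q0, w, v = 1e9, 2e-3, 0.01       # source peak [W/m^3], width [m], speed [m/s]

      x = np.linspace(0, L, nx)
      T = np.full(nx, 300.0)
      dt = 0.4 * dx * dx / (2 * alpha) # explicit stability limit
      t = 0.0
      while t < 5.0:
          src = q0 * np.exp(-((x - v * t) / w) ** 2)       # moving Gaussian source
          # q/(rho*c) = q*alpha/k, since alpha = k/(rho*c)
          T[1:-1] += dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
                           + src[1:-1] * alpha / k)
          t += dt
      print("peak temperature [K]:", T.max())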

  14. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
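
    The distribution-function idea can be made concrete with a small example: represent within-day rainfall intensity by an assumed exponential distribution and integrate infiltration-excess runoff over it, comparing against a model that only sees the mean intensity. All parameter values are assumptions.

      # Sketch: daily runoff from a cdf of within-day rainfall intensity.
      import numpy as np

      P_day = 40.0          # daily rainfall total [mm]
      t_rain = 6.0          # assumed hours of rain in the day
      fc = 8.0              # infiltration capacity [mm/h] (assumed)
      mean_i = P_day / t_rain                 # mean intensity [mm/h]

      # analytic E[max(i - fc, 0)] for an exponential intensity distribution
      runoff_rate = mean_i * np.exp(-fc / mean_i)
      print("runoff [mm] with intensity cdf:", runoff_rate * t_rain)

      # a daily model using the average intensity predicts zero runoff here
      print("runoff [mm] with mean intensity:", max(mean_i - fc, 0.0) * t_rain)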

  15. Multi-model ensemble hydrological simulation using a BP Neural Network for the upper Yalongjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia

    2018-06-01

    Hydrological models are important and effective tools for representing complex hydrological processes. Different models have different strengths when capturing the various aspects of these processes, and relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected as a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and run independently with the same inputs and initial values. Then, a BP neural network was employed to combine the results from the three models. The results show that the accuracy of the BP ensemble simulation is better than that of the single models.
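
    A minimal sketch of such an ensemble follows: a small back-propagation network learns to map three imperfect model outputs to observations. The data are synthetic stand-ins for the SWAT/VIC/BTOPMC simulations, and the network size is an assumption.

      # Sketch: neural-network ensemble of three hydrological model outputs.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)
      obs = rng.gamma(2.0, 50.0, size=400)                   # "observed" discharge
      X = np.column_stack([obs * s + rng.normal(0, 20, 400)  # three imperfect models
                           for s in (0.8, 1.1, 0.95)])

      train, test = slice(0, 300), slice(300, 400)
      net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      net.fit(X[train], obs[train])
      print("ensemble R^2 on held-out data:", round(net.score(X[test], obs[test]), 3))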

  16. Time delay and noise explaining the behaviour of the cell growth in fermentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayuobi, Tawfiqullah; Rosli, Norhayati; Bahar, Arifah

    2015-02-03

    This paper investigates the interplay between time delay and external noise in explaining the behaviour of microbial growth in a batch fermentation process. Time delay and noise are modelled jointly via stochastic delay differential equations (SDDEs). The typical behaviour of the cell concentration in a batch fermentation process under this model is investigated, and the Milstein scheme is applied to solve the model numerically. Simulation results illustrate the effects of time delay and external noise in explaining the lag and stationary phases, respectively, of the cell growth in the fermentation process.

  17. Time delay and noise explaining the behaviour of the cell growth in fermentation process

    NASA Astrophysics Data System (ADS)

    Ayuobi, Tawfiqullah; Rosli, Norhayati; Bahar, Arifah; Salleh, Madihah Md

    2015-02-01

    This paper investigates the interplay between time delay and external noise in explaining the behaviour of microbial growth in a batch fermentation process. Time delay and noise are modelled jointly via stochastic delay differential equations (SDDEs). The typical behaviour of the cell concentration in a batch fermentation process under this model is investigated, and the Milstein scheme is applied to solve the model numerically. Simulation results illustrate the effects of time delay and external noise in explaining the lag and stationary phases, respectively, of the cell growth in the fermentation process.
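
    A minimal sketch of a Milstein discretization for an SDDE follows, using a stochastic delay logistic equation as a stand-in for the paper's cell-growth model; the equation form and all parameter values are assumptions.

      # Sketch: Milstein scheme for dX = r X(t)(1 - X(t-tau)/K) dt + sigma X(t) dW.
      import numpy as np

      r, K, tau, sigma = 0.8, 5.0, 2.0, 0.1
      dt, T = 0.01, 30.0
      n = int(T / dt)
      lag = int(tau / dt)
      rng = np.random.default_rng(4)

      X = np.empty(n + 1)
      X[0] = 0.1
      hist = 0.1                                   # constant pre-history X(t<=0)
      for i in range(n):
          Xdel = X[i - lag] if i >= lag else hist  # delayed state X(t - tau)
          dW = rng.normal(0.0, np.sqrt(dt))
          drift = r * X[i] * (1.0 - Xdel / K)
          X[i + 1] = (X[i] + drift * dt + sigma * X[i] * dW
                      + 0.5 * sigma**2 * X[i] * (dW**2 - dt))   # Milstein term
      print("final biomass ~", X[-1])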

  18. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts combine long- and short-term models, but the results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by failure processes that allow bathtub-shaped hazard functions. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.
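
    The bathtub-shaped hazard can be illustrated directly from the exponentiated Weibull family mentioned above, computing h(t) = pdf/survival; the parameter choice below is an illustrative assumption.

      # Sketch: bathtub-shaped hazard from an exponentiated Weibull distribution.
      import numpy as np
      from scipy.stats import exponweib

      t = np.linspace(0.05, 3.0, 60)
      a, c = 0.3, 2.0                       # bathtub regime: c > 1 and a*c < 1
      h = exponweib.pdf(t, a, c) / exponweib.sf(t, a, c)   # hazard = pdf / survival
      print("hazard falls then rises:", h[0] > h.min() < h[-1])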

  19. Language and vertical space: on the automaticity of language action interconnections.

    PubMed

    Dudschig, Carolin; de la Vega, Irmgard; De Filippis, Monica; Kaup, Barbara

    2014-09-01

    Grounded models of language processing propose a strong connection between language and sensorimotor processes (Barsalou, 1999, 2008; Glenberg & Kaschak, 2002). However, it remains unclear how functional and automatic these connections are for understanding diverse sets of words (Ansorge, Kiefer, Khalid, Grassl, & König, 2010). Here, we investigate whether words referring to entities with a typical location in the upper or lower visual field (e.g., sun, ground) automatically influence subsequent motor responses even when language-processing demands are kept minimal. The results show that even subliminally presented words influence subsequent actions, as seen in a reversed compatibility effect. These findings have several implications for grounded models of language processing. Specifically, they suggest that language-action interconnections are not merely the result of strategic language processes but already play an important role during pre-attentional stages of language processing.

  20. Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)

    NASA Astrophysics Data System (ADS)

    Winstral, A. H.; Marks, D. G.; Gurney, R. J.

    2009-12-01

    The spatial organization and scaling relationships of snow distributions in mountain environments are ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally dependent radiation inputs. In large-scale snow modeling it is vital to know these dependencies in order to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations on their scaling. A base model simulation characterized these processes at 10 m resolution over a 14.0 km2 basin with an elevation range of 1474-2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - was independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. The results indicated that scale effects on melt vary by process and weather conditions, and that the dependence of melt simulations on the scaling of solar radiation fluxes also has a seasonal component. These process-based scaling characteristics should remain static through time, as they are based on physical considerations. As such, the results not only provide guidance for current modeling efforts but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.

  1. The relationship between context, structure, and processes with outcomes of 6 regional diabetes networks in Europe

    PubMed Central

    Elkhuizen, Sylvia; van Dijk, Mattees; Vanhala, Antero; Karampli, Eleftheria; Faubel, Raquel; Forte, Paul; Coroian, Elena

    2018-01-01

    Background While health service provisioning for the chronic condition Type 2 Diabetes (T2D) often involves a network of organisations and professionals, most evidence on the relationships between the structures and processes of service provisioning and the resulting outcomes considers single organisations or solo practitioners. Extending Donabedian's Structure-Process-Outcome (SPO) model, we investigate how differences in quality of life, effective coverage of diabetes, and service satisfaction are associated with differences in the structures, processes, and context of T2D services in six regions in Finland, Germany, Greece, the Netherlands, Spain, and the UK. Methods Data collection consisted of: a) systematic modelling of each provider network's structures and processes, and b) a cross-sectional survey of patient-reported outcomes and other information. The survey yielded data from 1459 T2D patients during 2011-2012. Stepwise linear regression models were used to identify how the cumulative proportion of variance in quality of life and service satisfaction is related to differences in context, structure and process. The selected context, structure and process variables are based on Donabedian's SPO model, a service quality research instrument (SERVQUAL), and previous organisation- and professional-level evidence. Additional analysis explores the possible bidirectional relation between outcomes and processes. Results The regression models explain 44% of the variance in service satisfaction, mostly through structure and process variables (such as human resource use and the SERVQUAL dimensions). The models explain 23% of the variance in quality of life between the networks, much of which is related to contextual variables. Our results suggest that the effectiveness of A1c control is negatively correlated with process variables such as total hours of care provided per year and cost of services per year. Conclusions While the selected structure and process variables explain much of the variance in service satisfaction, this is less the case for quality of life. Moreover, it appears that the effect of the clinical outcome A1c control on processes is stronger than the other way around, as poorer control seems to relate to more service use and higher cost. The standardized operational models used in this research provide a basis for expanding the network-level evidence base for effective T2D service provisioning. PMID:29447220

  2. Crystal Growth of ZnSe by Physical Vapor Transport: A Modeling Study

    NASA Technical Reports Server (NTRS)

    Ramachandran, Narayanan; Su, Ching-Hua

    1998-01-01

    Crystal growth from the vapor phase has various advantages over melt growth. The main advantage is the lower processing temperature, which makes the process more amenable in instances where the melting temperature of the crystal is high. Other benefits stem from the inherent purification mechanism in the process, due to differences between the vapor pressures of the native elements and impurities, and from the enhanced interfacial morphological stability during growth. Further, the implementation of Physical Vapor Transport (PVT) growth in closed ampoules affords experimental simplicity with minimal need for complex process control, which makes it an ideal candidate for space investigations in systems where gravity tends to have undesirable effects on the growth process. Bulk growth of wide band gap II-VI semiconductors by physical vapor transport has been developed and refined over the past several years at NASA MSFC. Results from a modeling study of PVT crystal growth of ZnSe are reported in this paper. The PVT process is numerically investigated using both two-dimensional and fully three-dimensional formulations of the governing equations and associated boundary conditions. Both the incompressible Boussinesq approximation and a compressible model are tested to determine the influence of gravity on the process and to discern the differences between the two approaches. The influence of a residual gas is included in the models. The preliminary results show that the incompressible and compressible approximations provide comparable results and that the presence of a residual gas tends to measurably reduce the mass flux in the system. Detailed flow, thermal and concentration profiles will be provided in the final manuscript, along with computed heat and mass transfer rates and comparisons with the 1-D model.

  3. Simple model of inhibition of chain-branching combustion processes

    NASA Astrophysics Data System (ADS)

    Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.

    2017-11-01

    A simple kinetic model is suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on a generalised model of the combustion process with a chain-branching reaction, combined with a one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and changes the reaction mode from chain-branching to a thermal mode of flame propagation. With increasing inhibitor concentration, a transition from the chain-branching mode to a straight-chain (non-branching) reaction is observed. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor, and heat losses are incorporated via Newton cooling. Flame extinction results from the decreased heat release of the inhibited reaction process and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison of results from modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase model, detailed kinetic model) with results obtained using the suggested simple model is presented. Calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to the thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces the modes of flame propagation observed with inhibitor addition using detailed kinetic models.
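
    The competition the model describes can be sketched with an isothermal radical balance in which branching, termination and an inhibitor scavenging step set the net radical growth rate; the rate constants and the isothermal simplification are assumptions.

      # Sketch: branching vs. termination vs. inhibitor scavenging of radicals.
      import numpy as np
      from scipy.integrate import solve_ivp

      kb, kt, ki = 5.0, 1.0, 40.0      # branching, termination, inhibition rates

      def rhs(t, y, inh):
          radicals = y[0]
          return [(kb - kt - ki * inh) * radicals]   # net radical growth rate

      for inh in (0.0, 0.05, 0.2):     # inhibitor loading (assumed units)
          sol = solve_ivp(rhs, (0.0, 2.0), [1e-6], args=(inh,))
          print(f"inhibitor={inh}: radical overshoot x{sol.y[0, -1] / 1e-6:.1f}")

    Enough inhibitor turns the net growth rate negative, killing the radical overshoot, which is the qualitative switch from a chain-branching to a thermal propagation mode described above.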

  4. The effect of inclusion of inlets in dual drainage modelling

    NASA Astrophysics Data System (ADS)

    Chang, Tsang-Jung; Wang, Chia-Ho; Chen, Albert S.; Djordjević, Slobodan

    2018-04-01

    In coupled sewer and surface flood modelling approaches, the flow process in gullies is often ignored, although overland flow drains to the sewer network via inlets and gullies. The flow entering inlets is therefore transferred to the sewer network immediately, which may lead to flood estimates that differ from reality. In this paper, we compare two modelling approaches, with and without the flow processes in gullies, in coupled sewer and surface modelling. Three historical flood events were adopted for model calibration and validation. The results show that including the flow processes in gullies further improves the accuracy of urban flood modelling.

  5. Modeling biogechemical reactive transport in a fracture zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molinero, Jorge; Samper, Javier; Yang, Chan Bing, and Zhang, Guoxiang

    2005-01-14

    A coupled model of groundwater flow, reactive solute transport and microbial processes for a fracture zone of the Aspo site in Sweden is presented. This is the model of the so-called Redox Zone Experiment, aimed at evaluating the effects of tunnel construction on the geochemical conditions prevailing in a fractured granite. It is found that a model accounting for microbially-mediated geochemical processes is able to reproduce the unexpected measured increasing trends of dissolved sulfate and bicarbonate. The model is also useful for testing hypotheses regarding the role of microbial processes and for evaluating the sensitivity of model results to changes in biochemical parameters.

  6. A hybrid modeling system designed to support decision making in the optimization of extrusion of inhomogeneous materials

    NASA Astrophysics Data System (ADS)

    Kryuchkov, D. I.; Zalazinsky, A. G.

    2017-12-01

    Mathematical models and a hybrid modeling system are developed to implement an experimental-calculation method for the engineering analysis and optimization of the plastic deformation of inhomogeneous materials, with the purpose of improving metal-forming processes and machines. The software solution integrates Abaqus/CAE with a subroutine for mathematical data processing, built on Python libraries, and a knowledge base. Practical application of the software solution is exemplified by modeling the extrusion of a bimetallic billet. The results of the engineering analysis and optimization of the extrusion process are shown, with material damage being monitored throughout.
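
    Since the abstract highlights damage monitoring during extrusion, the following sketch shows one common way such monitoring is done in metal-forming analysis: accumulating a Cockcroft-Latham ductile-damage integral from a stress-strain history exported from the FE solver. The arrays and values are hypothetical; the paper's actual damage criterion is not specified here.

    ```python
    import numpy as np

    def cockcroft_latham(max_principal_stress, eq_plastic_strain):
        """Cockcroft-Latham damage D = integral of max(sigma_1, 0) d(eps_eq).

        A common ductile-damage indicator for monitoring damage in
        metal-forming simulations; the inputs would come from FE output
        (e.g. per-increment results exported from Abaqus/CAE).
        """
        s = np.maximum(np.asarray(max_principal_stress), 0.0)
        e = np.asarray(eq_plastic_strain)
        return np.trapz(s, e)

    # Hypothetical history for one material point of a bimetallic billet
    sigma1 = [120.0, 180.0, 240.0, 200.0]   # MPa
    eps_eq = [0.00, 0.15, 0.40, 0.70]
    print(cockcroft_latham(sigma1, eps_eq), "MPa (damage measure)")
    ```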

  7. A Design Methodology for Medical Processes

    PubMed Central

    Bonacina, Stefano; Pozzi, Giuseppe; Pinciroli, Francesco; Marceglia, Sara

    2016-01-01

    Summary Background Healthcare processes, especially those belonging to the clinical domain, are acknowledged to be complex and are characterized by the dynamic nature of diagnosis, the variability of decisions made by experts driven by their experience, local constraints, the patient’s needs, the uncertainty of the patient’s response, and the indeterminacy of the patient’s compliance with treatment. In addition, the multiple actors involved in a patient’s care need clear and transparent communication to ensure care coordination. Objectives In this paper, we propose a methodology to model healthcare processes in order to break down complexity and provide transparency. Methods The model is grounded on a set of requirements that make the healthcare domain unique with respect to other knowledge domains. The modeling methodology is based on three main phases: the study of the environmental context, conceptual modeling, and logical modeling. Results The proposed methodology was validated by applying it to the case study of the rehabilitation process of stroke patients in the specific setting of a specialized rehabilitation center. The resulting model was used to define the specifications of a software artifact for the digital administration and collection of assessment tests, which was also implemented. Conclusions Despite being only an example, our case study showed the ability of process modeling to answer actual needs in healthcare practices. Independently of the medical domain in which the modeling effort is undertaken, the proposed methodology is useful for creating high-quality models and for detecting and taking into account relevant and tricky situations that can occur during process execution. PMID:27081415

  8. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics; thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data, based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the ranking of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
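
    For concreteness, the two evaluation metrics named above can be sketched as follows; the function names and data are placeholders, and real SPI reproducibility in NPAIRS is computed over many split-half resamples rather than the single pair shown here.

    ```python
    import numpy as np

    def prediction_accuracy(predicted_labels, true_labels):
        """Fraction of scans whose experimental condition is predicted correctly."""
        p, t = np.asarray(predicted_labels), np.asarray(true_labels)
        return float(np.mean(p == t))

    def spi_reproducibility(spi_half1, spi_half2):
        """Pearson correlation between statistical parametric images (SPIs)
        computed from two independent splits of the data."""
        a, b = np.ravel(spi_half1), np.ravel(spi_half2)
        return float(np.corrcoef(a, b)[0, 1])

    # Hypothetical label predictions and two voxel-wise statistical maps
    print(prediction_accuracy([0, 1, 1, 0], [0, 1, 0, 0]))          # 0.75
    print(spi_reproducibility(np.random.rand(100), np.random.rand(100)))
    ```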

  9. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for approaching the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for Generalized Renewal Processes in general and for Weibull-based Generalized Renewal Processes in particular. In a departure from the literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
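
    A minimal sketch of the Weibull-based sampling behind such models is given below, assuming the standard Kijima virtual-age updates (Type I: v_n = v_{n-1} + q x_n; Type II: v_n = q (v_{n-1} + x_n)) and inversion of the conditional Weibull survival function; parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_grp(alpha, beta, q, n_events, kijima_type=1):
        """Sample failure inter-arrival times from a Weibull-based GRP.

        Virtual-age updates (illustrative implementation):
          Kijima I : v_n = v_{n-1} + q * x_n
          Kijima II: v_n = q * (v_{n-1} + x_n)
        """
        v, times = 0.0, []
        for _ in range(n_events):
            u = rng.uniform()
            # invert the conditional Weibull survival given virtual age v
            x = alpha * ((v / alpha) ** beta - np.log(u)) ** (1.0 / beta) - v
            times.append(x)
            v = v + q * x if kijima_type == 1 else q * (v + x)
        return times

    print(sample_grp(alpha=100.0, beta=1.8, q=0.4, n_events=5))
    ```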

  10. Modeling Business Processes of the Social Insurance Fund in Information System Runa WFE

    NASA Astrophysics Data System (ADS)

    Kataev, M. Yu; Bulysheva, L. A.; Xu, Li D.; Loseva, N. V.

    2016-08-01

    Introduction - Business processes are gradually becoming a tool that allows organisations to deploy employees more effectively and to make document management systems more efficient; most published work addresses these directions. However, business processes are still poorly implemented in public institutions, where it is very difficult to formalize the main existing processes. We attempt to build a system of business processes for a state agency, the Russian Social Insurance Fund (SIF), in which virtually all processes yield the same output from different inputs: a public service. The parameters of state services (as a rule, time limits) are set by state laws and regulations. The article provides a brief overview of the SIF, the formulation of requirements for its business processes, the justification of the choice of software for modeling business processes, the creation of working models in the Runa WFE system, and the optimization of a model of one of the main business processes of the SIF. The result of the work in Runa WFE is an optimized model of the SIF business process.

  11. Study of CFB Simulation Model with Coincidence at Multi-Working Condition

    NASA Astrophysics Data System (ADS)

    Wang, Z.; He, F.; Yang, Z. W.; Li, Z.; Ni, W. D.

    A circulating fluidized bed (CFB) two-stage simulation model was developed. To make the model results coincide with the design or real operating values at specified multi-working conditions while retaining real-time calculation capability, only the main key processes were taken into account, and the dominant factors were further abstracted from these key processes. The simulation results showed sound agreement at multiple working conditions and confirmed the advantage of the two-stage model over the original single-stage simulation model. The combustion-supporting effect of secondary air was investigated using the two-stage model. This model provides a solid platform for investigating the pant-leg structured CFB furnace, which is now being designed for a supercritical power plant.

  12. Estimating species - area relationships by modeling abundance and frequency subject to incomplete sampling.

    PubMed

    Yamaura, Yuichi; Connor, Edward F; Royle, J Andrew; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-07-01

    Models and data used to describe species-area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species-area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species-area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density-area relationships and occurrence probability-area relationships can alter the form of species-area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied to a variety of study designs and allows the inclusion of additional environmental covariates.

  13. Estimating species – area relationships by modeling abundance and frequency subject to incomplete sampling

    USGS Publications Warehouse

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied to a variety of study designs and allows the inclusion of additional environmental covariates.

  14. The cloud-phase feedback in the Super-parameterized Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Burt, M. A.; Randall, D. A.

    2016-12-01

    Recent comparisons of observations and climate model simulations by I. Tan and colleagues have suggested that the Wegener-Bergeron-Findeisen (WBF) process tends to be too active in climate models, making too much cloud ice, and resulting in an exaggerated negative cloud-phase feedback on climate change. We explore the WBF process and its effect on shortwave cloud forcing in present-day and future climate simulations with the Community Earth System Model, and its super-parameterized counterpart. Results show that SP-CESM has much less cloud ice and a weaker cloud-phase feedback than CESM.

  15. [DESCRIPTION AND PRESENTATION OF THE RESULTS OF ELECTROENCEPHALOGRAM PROCESSING USING AN INFORMATION MODEL].

    PubMed

    Myznikov, I L; Nabokov, N L; Rogovanov, D Yu; Khankevich, Yu R

    2016-01-01

    The paper proposes to apply the informational modeling of correlation matrices, developed by I.L. Myznikov in the early 1990s, to neurophysiological investigations such as electroencephalogram recording and analysis and the coherence description of signals from electrodes on the head surface. The authors demonstrate information models built using data from studies of inert gas inhalation by healthy human subjects. In the authors' opinion, information models provide an opportunity to describe physiological processes with a high level of generalization. The procedure for presenting EEG results holds great promise for broad application.

  16. Preform Characterization in VARTM Process Model Development

    NASA Technical Reports Server (NTRS)

    Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal; Loos, Alfred C.; Kellen, Charles B.; Jensen, Brian J.

    2004-01-01

    Vacuum-Assisted Resin Transfer Molding (VARTM) is a Liquid Composite Molding (LCM) process where both resin injection and fiber compaction are achieved under pressures of 101.3 kPa or less. Originally developed over a decade ago for marine composite fabrication, VARTM is now considered a viable process for the fabrication of aerospace composites (1,2). In order to optimize and further improve the process, a finite element analysis (FEA) process model is being developed to include the coupled phenomenon of resin flow, preform compaction and resin cure. The model input parameters are obtained from resin and fiber-preform characterization tests. In this study, the compaction behavior and the Darcy permeability of a commercially available carbon fabric are characterized. The resulting empirical model equations are input to the 3-Dimensional Infiltration, version 5 (3DINFILv.5) process model to simulate infiltration of a composite panel.

  17. Directly data processing algorithm for multi-wavelength pyrometer (MWP).

    PubMed

    Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong

    2017-11-27

    Data processing for multi-wavelength pyrometers (MWP) is a difficult problem because of unknown emissivity. The solutions developed so far generally assume a particular mathematical relation for emissivity versus wavelength or emissivity versus temperature; deviation between such hypotheses and the actual situation can seriously affect the inversion results. A direct data processing algorithm for MWP that does not require assuming a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms, a Gradient Projection (GP) algorithm and an Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is that the MWP data processing problem is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithm. Comparison of simulation results for some typical spectral emissivity models shows that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that true-temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to provide a direct data processing algorithm that clears the unknown-emissivity obstacle for MWP.
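
    The core move, recasting MWP data processing as a constrained optimization over temperature and per-channel emissivities with 0 <= eps <= 1, can be sketched with an off-the-shelf bounded least-squares solver standing in for the authors' GP and IPF implementations; the wavelengths, smoothness penalty and synthetic data below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C1, C2 = 1.191e-16, 1.4388e-2          # Planck radiation constants (SI)
    wl = np.linspace(0.6e-6, 1.3e-6, 8)    # 8 hypothetical pyrometer channels (m)

    def planck(lam, T):
        return C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

    # Synthetic "measurement": true T = 1800 K, emissivity 0.8 in every channel
    L_meas = 0.8 * planck(wl, 1800.0)

    def residuals(p):
        T, eps = p[0], p[1:]
        fit = eps * planck(wl, T) - L_meas        # radiance mismatch
        rough = 10.0 * np.diff(eps)               # mild smoothness penalty on eps
        return np.concatenate([fit / L_meas, rough])

    p0 = np.concatenate([[1500.0], np.full(wl.size, 0.5)])
    lb = np.concatenate([[300.0], np.zeros(wl.size)])   # 0 <= eps <= 1
    ub = np.concatenate([[3500.0], np.ones(wl.size)])
    sol = least_squares(residuals, p0, bounds=(lb, ub))
    print("recovered T =", round(sol.x[0], 1), "K")
    ```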

  18. BP fusion model for the detection of oil spills on the sea by remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin

    2003-06-01

    Oil spills are a very serious form of marine pollution in many countries. To detect and identify oil spilled on the sea with remote sensors, scientists must work with remote sensing imagery. For the detection of oil spills on the sea, edge detection is an important technology in image processing, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements of edge detection for oil-spill images, computation time and detection accuracy, we developed a fusion model. The model employs a BP (backpropagation) neural net to fuse the detection results of simple operators. We selected a BP neural net as the fusion technology because the relation between the edge gray levels produced by simple operators and the image's true edge gray level is nonlinear, and BP neural nets are good at solving nonlinear identification problems. We therefore trained a BP neural net on a set of oil-spill images, applied the BP fusion model to the edge detection of other oil-spill images, and obtained good results. The detection results of several gradient operators and a Laplacian operator are also compared with the result of the BP fusion model to analyse the fusion effect. The paper concludes that the fusion model offers higher accuracy and higher speed in edge detection of oil-spill images.
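
    The fusion idea can be sketched compactly: per-pixel responses of a few simple operators become features for a small backpropagation network trained against a reference edge map. The sketch below uses SciPy filters and scikit-learn's MLP on synthetic data; the operators, network size and reference map are stand-in assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                   # stand-in for an oil-spill image
    truth = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)  # reference edges

    # Features: responses of simple edge operators at every pixel
    feats = np.stack([
        ndimage.sobel(img, axis=0),
        ndimage.sobel(img, axis=1),
        ndimage.laplace(img),
    ], axis=-1).reshape(-1, 3)

    # Small backpropagation network fuses the operator outputs
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(feats, truth.ravel())
    fused = net.predict(feats).reshape(img.shape)
    print("fusion MSE:", float(np.mean((fused - truth) ** 2)))
    ```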

  19. AQUATOX Features and Tools

    EPA Pesticide Factsheets

    Numerous features have been included to facilitate the modeling process, from model setup and data input, through presentation and analysis of results, to easy export of results to spreadsheet programs for additional analysis.

  20. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), the process parameterizations and the parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of a model which can vary, while the rest of the modeling elements are fixed a priori and therefore not subject to change. Once chosen, the process parameterizations and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is the extent to which the process parameterization and the system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, possibly contradictory, parameterization forms than would have been decided otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.

  1. Modeling biological gradient formation: combining partial differential equations and Petri nets.

    PubMed

    Bertens, Laura M F; Kleijn, Jetty; Hille, Sander C; Heiner, Monika; Koutny, Maciej; Verbeek, Fons J

    2016-01-01

    Both Petri nets and differential equations are important modeling tools for biological processes. In this paper we demonstrate how these two modeling techniques can be combined to describe biological gradient formation. Parameters derived from a partial differential equation describing the process of gradient formation are incorporated in an abstract Petri net model. The quantitative aspects of the resulting model are validated through a case study of gradient formation in the fruit fly.
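
    One way to picture the PDE-to-Petri-net coupling is to let each grid cell of a discretized 1-D diffusion equation be a place holding tokens (molecules), with diffusion transitions moving tokens between neighbouring places at the per-token hop rate D/dx^2 taken from the PDE discretization. The sketch below is a toy version under that assumption; it is not the authors' fruit-fly model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    D, dx, dt = 1.0, 0.1, 0.001
    p = D * dt / dx**2                   # per-token hop probability per step

    tokens = np.zeros(20, dtype=int)     # each grid cell = one Petri-net place
    tokens[0] = 5000                     # morphogen source at one end
    n = tokens.size

    for _ in range(3000):
        new = np.zeros_like(tokens)
        for i in range(n):
            # each token hops left, hops right, or stays this step
            left, right, stay = rng.multinomial(tokens[i], [p, p, 1.0 - 2.0 * p])
            new[max(i - 1, 0)] += left       # reflecting boundaries
            new[min(i + 1, n - 1)] += right
            new[i] += stay
        tokens = new
        tokens[0] = 5000                 # clamp the source place

    print("token gradient:", tokens[:8])  # roughly decaying profile
    ```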

  2. Differential coactivation in a redundant signals task with weak and strong go/no-go stimuli.

    PubMed

    Minakata, Katsumi; Gondan, Matthias

    2018-05-01

    When participants respond to stimuli of two sources, response times (RTs) are often faster when both stimuli are presented together relative to the RTs obtained when presented separately (redundant signals effect [RSE]). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster processing time determines the overall RT. In audiovisual experiments, the RSE is often higher than predicted by race models, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does coactivation occur? We implemented a go/no-go task with randomly intermixed weak and strong auditory, visual, and audiovisual stimuli. In one experimental session, participants had to respond to strong stimuli and withhold their response to weak stimuli. In the other session, these roles were reversed. Interestingly, coactivation was only observed in the experimental session in which participants had to respond to strong stimuli. If weak stimuli served as targets, results were widely consistent with the race model prediction. The pattern of results contradicts the inverse effectiveness law. We present two models that explain the result in terms of absolute and relative thresholds.
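
    The two competing accounts can be contrasted in a toy simulation: the race model takes the faster of two independent channel finishing times, while a simple coactivation account pools the channels' evidence rates into one accumulator. All distributions and rates below are illustrative assumptions, not fits to the reported data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000
    aud = rng.exponential(80.0, n) + 200.0   # auditory channel RTs (ms), toy values
    vis = rng.exponential(80.0, n) + 220.0   # visual channel RTs (ms)

    race_rt = np.minimum(aud, vis)           # race: faster channel wins

    # Coactivation (toy version): channels drive one accumulator together, so
    # the time to reach criterion c reflects the summed drift rates r_a + r_v.
    c, r_a, r_v = 100.0, 0.5, 0.45
    coact_rt = 200.0 + c / (r_a + r_v) + rng.normal(0.0, 15.0, n)

    print("mean RT  auditory:", aud.mean().round(1), " visual:", vis.mean().round(1))
    print("mean RT  race:", race_rt.mean().round(1),
          " coactivation:", coact_rt.mean().round(1))
    ```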

  3. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing neural-network models to describe mechanisms associated with mental processes in terms of a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and we unify different mental processes within a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations in which we varied dopaminergic modulation and observed the self-organizing emergent patterns on the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through normal and neurotic behavior to creativity.

  4. Wind Assessment for Aerial Payload Delivery Systems Using GPS and IMU Sensors

    DTIC Science & Technology

    2016-09-01

    Post-processing of the resultant test data was among the research methods used in the development of this thesis. Ultimately, this thesis presents two models for winds...

  5. The relationship between quality management practices and organisational performance: A structural equation modelling approach

    NASA Astrophysics Data System (ADS)

    Jamaluddin, Z.; Razali, A. M.; Mustafa, Z.

    2015-02-01

    The purpose of this paper is to examine the relationship between quality management practices (QMPs) and organisational performance for the manufacturing industry in Malaysia. In this study, a QMPs and organisational performance framework is developed from a comprehensive literature review covering hard and soft quality factors in the manufacturing process environment. A total of 11 hypotheses are put forward to test the relationships amongst six constructs: management commitment, training, process management, quality tools, continuous improvement and organisational performance. The model is analysed using Structural Equation Modeling (SEM) with AMOS software version 18.0 and Maximum Likelihood (ML) estimation. A total of 480 questionnaires were distributed, of which 210 were valid for analysis. The results of the modeling analysis using ML estimation indicate that the fit statistics of the QMPs and organisational performance model for the manufacturing industry are admissible. Management commitment was found to have a significant impact on training and process management. Similarly, training had a significant effect on quality tools, process management and continuous improvement. Furthermore, quality tools had a significant influence on process management and continuous improvement, and process management had a significant impact on continuous improvement. In turn, continuous improvement had a significant influence on organisational performance. However, the results also show no significant relationship between management commitment and quality tools, or between management commitment and continuous improvement. The results of the study can be used by managers to prioritize the implementation of QMPs: practices found to have a positive impact on organisational performance can be recommended to managers so that they can allocate resources to improving them to achieve better performance.

  6. Numerical Simulation of Austempering Heat Treatment of a Ductile Cast Iron

    NASA Astrophysics Data System (ADS)

    Boccardo, Adrián D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.; Górny, Marcin; Tyrała, Edward

    2016-02-01

    This paper presents a coupled thermo-mechanical-metallurgical formulation to predict the dimensional changes and microstructure of a ductile cast iron part resulting from an austempering heat-treatment process. To take into account the complex phenomena present in the process, the stress-strain law and plastic evolution equations are defined within the context of the associate rate-independent thermo-plasticity theory. The metallurgical model considers the reverse eutectoid, ausferritic, and martensitic transformations using macro- and micro-models. The resulting model is solved using the finite element method. The performance of the model is evaluated by comparison with experimental results from a dilatometric test. The results indicate that the experimental evolutions of both deformation and temperature are well represented by the numerical model.

  7. Redesigning the United States Marine Corps Contingency Contracting Process of Knowledge Sharing and Tool Usage

    DTIC Science & Technology

    2001-12-01

    Group 1999, Davenport and Prusak 1998). Although differences do exist, the four models are similar. In the amalgamated model, the phases of the KMLC... Phase 1, create, is the discovery and development of new knowledge (Despres and Chavel 1999, Gartner Group 1999). Phase 2, organize, involves... This generally entails modeling and analysis that results in one or more (re)designs for the process in question. The process, along with

  8. Dynamic modeling the composting process of the mixture of poultry manure and wheat straw.

    PubMed

    Petric, Ivan; Mustafić, Nesib

    2015-09-15

    Due to a lack of understanding of the complex nature of the composting process, there is a need for a tool that can help improve the prediction of process performance as well as its optimization. The main objective of this study is therefore to develop a comprehensive mathematical model of the composting process based on microbial kinetics. The model incorporates two different microbial populations that metabolize the organic matter in two different substrates. The model was validated by comparing model predictions with experimental data obtained from composting a mixture of poultry manure and wheat straw. Comparison of simulation results and experimental data for five dynamic state variables (organic matter conversion, oxygen concentration, carbon dioxide concentration, substrate temperature and moisture content) showed that the model predicts process performance very well. According to the simulation results, the optimum values for air flow rate and ambient air temperature are 0.43 L min^-1 kg^-1 OM and 28 °C, respectively. On the basis of a sensitivity analysis, the maximum organic matter conversion is the most sensitive of the three objective functions. Among the twelve examined parameters, μmax,1 is the most influential and X1 the least influential parameter. Copyright © 2015 Elsevier Ltd. All rights reserved.
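
    The general shape of such a model, two microbial populations each degrading its own substrate with Monod kinetics, can be sketched as below; the parameter values are hypothetical, and the temperature, oxygen and moisture balances of the full model are omitted for brevity.

    ```python
    from scipy.integrate import solve_ivp

    # Two populations X1, X2 degrading substrates S1 (manure), S2 (straw);
    # Monod kinetics with hypothetical parameters, isothermal for brevity.
    mu_max = [0.3, 0.15]     # maximum growth rates, 1/h
    Ks     = [5.0, 20.0]     # half-saturation constants, g/L
    Y      = [0.4, 0.35]     # biomass yields, g X per g S
    b      = [0.01, 0.005]   # decay rates, 1/h

    def rhs(t, y):
        S1, S2, X1, X2 = y
        mu1 = mu_max[0] * S1 / (Ks[0] + S1)
        mu2 = mu_max[1] * S2 / (Ks[1] + S2)
        return [-mu1 * X1 / Y[0],
                -mu2 * X2 / Y[1],
                (mu1 - b[0]) * X1,
                (mu2 - b[1]) * X2]

    sol = solve_ivp(rhs, (0.0, 200.0), [80.0, 120.0, 1.0, 1.0])
    S1, S2 = sol.y[0][-1], sol.y[1][-1]
    print(f"organic matter conversion: {1 - (S1 + S2) / 200.0:.2%}")
    ```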

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.; Britt, J.; Birkmire, R.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by the control and reliability improvements due to this PV Manufacturing R&D program.

  10. Evolutionary inference via the Poisson Indel Process.

    PubMed

    Bouchard-Côté, Alexandre; Jordan, Michael I

    2013-01-22

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.

  11. Evolutionary inference via the Poisson Indel Process

    PubMed Central

    Bouchard-Côté, Alexandre; Jordan, Michael I.

    2013-01-01

    We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114–124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments. PMID:23275296

  12. Comparing single- and dual-process models of memory development.

    PubMed

    Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert

    2017-11-01

    This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini () in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high threshold signal detection model and several single-process models (equal variance signal detection, unequal variance signal detection, mixture signal detection) were fit to the developmental data. The unequal variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.

  13. Validation of a multi-phase plant-wide model for the description of the aeration process in a WWTP.

    PubMed

    Lizarralde, I; Fernández-Arévalo, T; Beltrán, S; Ayesa, E; Grau, P

    2018-02-01

    This paper introduces a new mathematical model built under the PC-PWM methodology to describe the aeration process in a full-scale WWTP. This methodology enables a systematic and rigorous incorporation of chemical and physico-chemical transformations into biochemical process models, particularly for the description of liquid-gas transfer to describe the aeration process. The mathematical model constructed is able to reproduce biological COD and nitrogen removal, liquid-gas transfer and chemical reactions. The capability of the model to describe the liquid-gas mass transfer has been tested by comparing simulated and experimental results in a full-scale WWTP. Finally, an exploration by simulation has been undertaken to show the potential of the mathematical model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Reversibility in Quantum Models of Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Gier, David; Crutchfield, James; Mahoney, John; James, Ryan

    Natural phenomena such as time series of neural firing, the orientation of layers in crystal stacking, and successive measurements in spin systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states, and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However, ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ ≥ Cq ≥ E. q-machines can be constructed to consider longer strings, resulting in greater compression. With code words of sufficient length, the statistical complexity becomes time-symmetric, a feature apparently novel to this quantum representation. This result has ramifications for the compression of classical information in quantum computing and quantum communication technology.

  15. The contribution of temporary storage and executive processes to category learning.

    PubMed

    Wang, Tengfei; Ren, Xuezhu; Schweizer, Karl

    2015-09-01

    Three distinctly different working memory processes, temporary storage, mental shifting and inhibition, were proposed to account for individual differences in category learning. A sample of 213 participants completed a classic category learning task and two working memory tasks that were experimentally manipulated for tapping specific working memory processes. Fixed-links models were used to decompose data of the category learning task into two independent components representing basic performance and improvement in performance in category learning. Processes of working memory were also represented by fixed-links models. In a next step the three working memory processes were linked to components of category learning. Results from modeling analyses indicated that temporary storage had a significant effect on basic performance and shifting had a moderate effect on improvement in performance. In contrast, inhibition showed no effect on any component of the category learning task. These results suggest that temporary storage and the shifting process play different roles in the course of acquiring new categories. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. The source of dual-task limitations: Serial or parallel processing of multiple response selections?

    PubMed Central

    Marois, René

    2014-01-01

    Although it is generally recognized that the concurrent performance of two tasks incurs costs, the sources of these dual-task costs remain controversial. The serial bottleneck model suggests that serial postponement of task performance in dual-task conditions results from a central stage of response selection that can only process one task at a time. Cognitive-control models, by contrast, propose that multiple response selections can proceed in parallel, but that serial processing of task performance is predominantly adopted because its processing efficiency is higher than that of parallel processing. In the present study, we empirically tested this proposition by examining whether parallel processing would occur when it was more efficient and financially rewarded. The results indicated that even when parallel processing was more efficient and was incentivized by financial reward, participants still failed to process tasks in parallel. We conclude that central information processing is limited by a serial bottleneck. PMID:23864266

  17. Intake flow modeling in a four stroke diesel using KIVA3

    NASA Technical Reports Server (NTRS)

    Hessel, R. P.; Rutland, C. J.

    1993-01-01

    Intake flow for a dual-intake-valve diesel engine is modeled using moving valves and realistic geometries. The objectives are to obtain accurate initial conditions for combustion calculations and to provide a tool for studying intake processes. Global simulation parameters are compared with experimental results and show good agreement. The intake process shows a 30 percent difference in mass flows, with average swirl in opposite directions, across the two intake valves. The effect of the intake process on the flow field at the end of compression is examined. Modeling the intake flow results in swirl and turbulence characteristics that are quite different from those obtained by conventional methods, in which compression-stroke initial conditions are assumed.

  18. Stacked dielectric elastomer actuator (SDEA): casting process, modeling and active vibration isolation

    NASA Astrophysics Data System (ADS)

    Li, Zhuoyuan; Sheng, Meiping; Wang, Minqing; Dong, Pengfei; Li, Bo; Chen, Hualing

    2018-07-01

    In this paper, a novel fabrication process for stacked dielectric elastomer actuators (SDEAs) is developed based on a casting process and elastomeric electrodes. The SDEAs so fabricated benefit from homogeneous and reproducible properties as well as little performance degradation after one year of use. A coupling model of the SDEA that takes the elastomeric electrode into consideration is established, and the calculated results agree with experiments. Based on the model, we obtain a method to optimize the SDEA’s parameters. Finally, the SDEA is used as an isolator in an active vibration isolation system to verify its feasibility in dynamic applications, and the experimental results show great prospects for SDEAs in such applications.

  19. Mathematical model with autoregressive process for electrocardiogram signals

    NASA Astrophysics Data System (ADS)

    Evaristo, Ronaldo M.; Batista, Antonio M.; Viana, Ricardo L.; Iarosz, Kelly C.; Szezech, José D., Jr.; Godoy, Moacir F. de

    2018-04-01

    The cardiovascular system is composed of the heart, blood and blood vessels. Regarding the heart, cardiac conditions are determined by the electrocardiogram, which is a noninvasive medical procedure. In this work, we introduce an autoregressive process into a mathematical model based on coupled differential equations in order to obtain the tachograms and electrocardiogram signals of young adults with normal heartbeats. Our results are compared with experimental tachograms by means of Poincaré plots and detrended fluctuation analysis. We verify that the results from the model with the autoregressive process show good agreement with experimental measures from tachograms generated by the electrical activity of the heartbeat. From the tachogram we build the electrocardiogram by means of coupled differential equations.
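
    The autoregressive ingredient can be sketched directly: an AR(2) fluctuation around a mean RR interval yields a synthetic tachogram, from which beat times (and hence an ECG waveform driven by coupled ODEs) can be generated. The coefficients and noise scale below are hypothetical, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    a1, a2 = 0.8, -0.2            # hypothetical AR(2) coefficients (stationary)
    mean_rr, sigma = 0.85, 0.02   # mean RR interval (s) and innovation scale

    rr = np.full(n, mean_rr)
    for k in range(2, n):
        # AR(2) fluctuation around the mean RR interval
        rr[k] = mean_rr + a1 * (rr[k-1] - mean_rr) \
                + a2 * (rr[k-2] - mean_rr) + rng.normal(0.0, sigma)

    beat_times = np.cumsum(rr)    # tachogram -> beat occurrence times
    print(f"mean HR: {60.0 / rr.mean():.1f} bpm over {beat_times[-1]:.0f} s")
    ```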

  20. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. Sonnenthal; N. Spycher

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are required to fully document and address the requirements of the TWPs.

  1. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. Sonnenthal

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M&O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are required to fully document and address the requirements of the TWPs.

  2. A methodological framework to support the initiation, design and institutionalization of participatory modeling processes in water resources management

    NASA Astrophysics Data System (ADS)

    Halbe, Johannes; Pahl-Wostl, Claudia; Adamowski, Jan

    2018-01-01

    Multiple barriers constrain the widespread application of participatory methods in water management, including the more technical focus of most water agencies, additional cost and time requirements for stakeholder involvement, as well as institutional structures that impede collaborative management. This paper presents a stepwise methodological framework that addresses the challenges of context-sensitive initiation, design and institutionalization of participatory modeling processes. The methodological framework consists of five successive stages: (1) problem framing and stakeholder analysis, (2) process design, (3) individual modeling, (4) group model building, and (5) institutionalized participatory modeling. The Management and Transition Framework is used for problem diagnosis (Stage One), context-sensitive process design (Stage Two) and analysis of requirements for the institutionalization of participatory water management (Stage Five). Conceptual modeling is used to initiate participatory modeling processes (Stage Three) and ensure a high compatibility with quantitative modeling approaches (Stage Four). This paper describes the proposed participatory model building (PMB) framework and provides a case study of its application in Québec, Canada. The results of the Québec study demonstrate the applicability of the PMB framework for initiating and designing participatory model building processes and analyzing barriers towards institutionalization.

  3. Launch Site Computer Simulation and its Application to Processes

    NASA Technical Reports Server (NTRS)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed-developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon-driven model that uses commercial off-the-shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer-simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted, and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.

  4. Oxygen production System Models for Lunar ISRU

    NASA Technical Reports Server (NTRS)

    Santiago-Maldonado, Edgardo

    2007-01-01

    In-Situ Resource Utilization (ISRU) seeks to make human space exploration feasible by using resources available on a planet or the moon to produce consumables, parts, and structures that would otherwise be brought from Earth. Producing these in situ reduces the mass that must be launched, allowing more payload mass for each mission. The production of oxygen from lunar regolith, for life support and propellant, is one of the tasks being studied under ISRU. NASA is currently funding three processes that have shown technical merit for the production of oxygen from regolith: Molten Salt Electrolysis, Hydrogen Reduction of Ilmenite, and Carbothermal Reduction. The ISRU program is currently developing system models of the abovementioned processes to: (1) help NASA in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the oxygen production process, (4) provide estimates of the energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters, and (5) integrate into the overall end-to-end ISRU system model, which could in turn be integrated with mission architecture models. The oxygen production system model is divided into modules that represent unit operations (e.g., reactor, water electrolyzer, heat exchanger). Each module is modeled theoretically using Excel and Visual Basic for Applications (VBA) and will be validated using experimental data from ongoing laboratory work. This modularity (plug-and-play) feature of each unit operation allows the same model to be used in simulations of different oxygen production systems, yielding comparable results. In this presentation, preliminary results for mass, power and volume will be presented along with a brief description of the oxygen production system model.
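
    The plug-and-play modularity can be pictured as unit-operation objects sharing a single interface, so the same reactor module drops into different system simulations. The sketch below is a toy Python analogue of that architecture (the actual models are in Excel/VBA); the class names, conversion and stream values are illustrative assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Stream:
        mass_flow: float     # kg/hr
        composition: dict    # species -> mass fraction

    class UnitOp:
        """Common interface so modules can be swapped between system models."""
        def run(self, inlet: Stream) -> Stream:
            raise NotImplementedError

    class HydrogenReductionReactor(UnitOp):
        """Toy ilmenite-reduction step: FeTiO3 + H2 -> Fe + TiO2 + H2O."""
        def __init__(self, conversion=0.7):
            self.conversion = conversion
        def run(self, inlet):
            water = inlet.mass_flow * inlet.composition.get("FeTiO3", 0.0) \
                    * self.conversion * (18.0 / 151.7)  # stoichiometric mass ratio
            return Stream(water, {"H2O": 1.0})

    class Electrolyzer(UnitOp):
        """Toy water split: 8/9 of the water mass leaves as O2."""
        def run(self, inlet):
            return Stream(inlet.mass_flow * (8.0 / 9.0), {"O2": 1.0})

    regolith = Stream(100.0, {"FeTiO3": 0.1})
    o2 = Electrolyzer().run(HydrogenReductionReactor().run(regolith))
    print(f"O2 production: {o2.mass_flow:.2f} kg/hr")
    ```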

  5. Simulation of anaerobic digestion processes using stochastic algorithm.

    PubMed

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    Anaerobic digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously, and appropriate, efficient models need to be developed for the simulation of anaerobic digestion systems. Although several models have been developed, most suffer from a lack of knowledge of constants, from complexity, and from weak generalization. The basis of the deterministic approach for modelling the physico-chemical and biochemical reactions occurring in an AD system is the law of mass action, which gives a simple relationship between reaction rates and species concentrations. The assumptions made in deterministic models do not hold true for reactions involving chemical species at low concentrations. The stochastic behaviour of the physicochemical processes can instead be modeled at the mesoscopic level by applying stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) implemented in MATLAB was applied to predict the concentrations of glucose, acids and methane at different time intervals; by this means the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of τ (the time step), the computational time required to reach steady state is greater, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of an ODE solver. It is concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes, with the accuracy of the results depending on the optimum selection of the τ value.
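
    A minimal tau-leap sketch for a toy two-step chain (glucose → acids → methane), far simpler than the ADM1 reaction network, is shown below; in each leap, every reaction j fires Poisson(a_j τ) times, where a_j is its propensity. The rate constants and molecule counts are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy reaction chain (much simpler than ADM1):
    #   R1: Glucose -> Acids,   propensity a1 = k1 * n_glucose
    #   R2: Acids   -> Methane, propensity a2 = k2 * n_acids
    k1, k2, tau = 0.02, 0.01, 0.5
    state = np.array([10_000, 0, 0])                  # [glucose, acids, methane]
    stoich = np.array([[-1, 1, 0],                    # effect of one R1 firing
                       [0, -1, 1]])                   # effect of one R2 firing

    t = 0.0
    while t < 400.0:
        a = np.array([k1 * state[0], k2 * state[1]])  # propensities
        fires = rng.poisson(a * tau)                  # tau-leap: Poisson firings
        state = np.maximum(state + fires @ stoich, 0) # update, clip negatives
        t += tau

    print("final [glucose, acids, methane]:", state)
    ```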

  6. Prediction of Tensile Strength of Friction Stir Weld Joints with Adaptive Neuro-Fuzzy Inference System (ANFIS) and Neural Network

    NASA Technical Reports Server (NTRS)

    Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.

    2015-01-01

    Friction stir welding (FSW) is a solid-state joining process whose joint properties depend on the welding process parameters. In the current study, three critical process parameters, spindle speed (ω), plunge force (Fz), and welding speed (v), are considered key factors in determining the ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. All three process parameters are observed to have a direct influence on the UTS of the welded joints. Utilizing the experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict the UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), the type of MFs, and the combination of the four input variables (ω, v, Fz, EFI) on a MATLAB platform, where EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from the FSW process parameters. Comparing the ANFIS and ANN predictions showed that the optimized ANFIS models provide better results than the ANN. This newly developed best ANFIS model could be utilized for the prediction of the UTS of FSW joints.
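
    As a rough illustration of the ANN side of this comparison (the ANFIS side needs a dedicated library), the sketch below fits a small multilayer perceptron to a synthetic stand-in for the 73 weld schedules. The response surface and parameter ranges are invented for the example, not taken from the paper.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(42)
      n = 73  # same count as the paper's weld schedules, but synthetic values
      X = np.column_stack([rng.uniform(200, 500, n),   # spindle speed (rpm)
                           rng.uniform(15, 30, n),     # plunge force (kN)
                           rng.uniform(50, 250, n)])   # welding speed (mm/min)
      # Hypothetical smooth UTS response (MPa) plus measurement noise.
      y = 180 + 0.10 * X[:, 0] + 3.0 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 5, n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000,
                                       random_state=0))
      ann.fit(X_tr, y_tr)
      print(f"held-out R^2: {ann.score(X_te, y_te):.2f}")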

  7. Integrated modelling of anthropogenic land-use and land-cover change on the global scale

    NASA Astrophysics Data System (ADS)

    Schaldach, R.; Koch, J.; Alcamo, J.

    2009-04-01

    In many cases land-use activities go hand in hand with substantial modifications of the physical and biological cover of the Earth's surface, resulting in direct effects on energy and matter fluxes between terrestrial ecosystems and the atmosphere. For instance, the conversion of forest to cropland changes climate-relevant surface parameters (e.g. albedo) as well as evapotranspiration processes and carbon flows. In turn, human land-use decisions are also influenced by environmental processes: changing temperature and precipitation patterns, for example, are important determinants of the location and intensity of agriculture. Due to these close linkages, processes of land-use and related land-cover change should be considered as important components in the construction of Earth System models. A major challenge in modelling land-use change on the global scale is the integration of socio-economic aspects and human decision making with environmental processes. One of the few global approaches that integrates functional components to represent both anthropogenic and environmental aspects of land-use change is the LandSHIFT model. It simulates the spatial and temporal dynamics of the human land-use activities settlement, cultivation of food crops and grazing management, which compete for the available land resources. The rationale of the model is to regionalize the demands for area-intensive commodities (e.g. crop production) and services (e.g. space for housing) from the country level to a global grid with a spatial resolution of 5 arc-minutes. The modelled land-use decisions within the agricultural sector are influenced by changing climate and the resulting effects on biomass productivity. Currently, this causal chain is modelled by integrating results from the process-based vegetation model LPJmL for changing crop yields and net primary productivity of grazing land. The output of LandSHIFT is a time series of grid maps with land-use/land-cover information that can serve as a basis for further impact analysis. An exemplary simulation study with LandSHIFT is presented, based on scenario assumptions from the UNEP Global Environmental Outlook 4, with a time horizon of 2050. Changes of future food production on the country level are computed by the agro-economy model IMPACT as a function of demography, economic development and global trade patterns. Together with scenario assumptions on climatic change and population growth, these data serve as model input to compute the changing land use and land cover. The continental- and global-scale model results are then analysed with respect to changes in the spatial pattern of natural vegetation as well as the resulting effects on evapotranspiration processes and land surface parameters. Furthermore, possible linkages of LandSHIFT to the different components of Earth System models (e.g. climate and natural vegetation) are discussed.

  8. A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling

    USGS Publications Warehouse

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-01-01

    In estuaries, suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and the ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions, which vary over space and time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget, as well as an assessment of model results in terms of turbidity levels, for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool for assessing sediment dynamics and related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without large amounts of process data.

  9. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    NASA Astrophysics Data System (ADS)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries, suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and the ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions, which vary over space and time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget, as well as an assessment of model results in terms of turbidity levels, for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool for assessing sediment dynamics and related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without large amounts of process data.

  10. Hot Isostatic Press Manufacturing Process Development for Fabrication of RERTR Monolithic Fuel Plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crapps, Justin M.; Clarke, Kester D.; Katz, Joel D.

    2012-06-06

    We use experimentation and finite element modeling to study a Hot Isostatic Press (HIP) manufacturing process for U-10Mo Monolithic Fuel Plates. Finite element simulations are used to identify the material properties affecting the process and to improve the process geometry. Accounting for the high-temperature material properties and plasticity is important to obtain qualitative agreement between model and experimental results. The model allows us to improve the process geometry and provides guidance on the selection of material and finish conditions for the process strongbacks. We conclude that the HIP can must be fully filled to provide uniform normal stress across the bonding interface.

  11. CFD Analysis of nanofluid forced convection heat transport in laminar flow through a compact pipe

    NASA Astrophysics Data System (ADS)

    Yu, Kitae; Park, Cheol; Kim, Sedon; Song, Heegun; Jeong, Hyomin

    2017-08-01

    In the present paper, developing laminar forced convection flows were numerically investigated using water-Al2O3 nanofluid through a circular compact pipe with a 4.5 mm diameter. Each model has a steady state and uniform heat flux (UHF) at the wall. All numerical experiments were performed at Re = 1050, with the nanofluid models defined by the alumina volume fraction. Single-phase fluid models were defined through calculations of the nanofluid's physical and thermal properties, and a two-phase (mixture granular) model was run with a 100 nm particle diameter. The results show that the Nusselt number and heat transfer rate improve as the Al2O3 volume fraction increases. All numerical flow simulations were performed with FLUENT.
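
    The single-phase treatment mentioned above hinges on computing effective fluid properties from the particle volume fraction. A minimal sketch of one standard set of closures (mixture rules for density and heat capacity, Brinkman viscosity, Maxwell conductivity) is given below; the abstract does not state which correlations were used, so these are representative assumptions.

      def nanofluid_properties(phi, bf, p):
          # Mixture rules for density and volumetric heat capacity,
          # Brinkman for viscosity, Maxwell for thermal conductivity.
          rho = (1 - phi) * bf["rho"] + phi * p["rho"]
          rho_cp = (1 - phi) * bf["rho"] * bf["cp"] + phi * p["rho"] * p["cp"]
          mu = bf["mu"] / (1 - phi) ** 2.5
          k = bf["k"] * (p["k"] + 2 * bf["k"] + 2 * phi * (p["k"] - bf["k"])) \
                      / (p["k"] + 2 * bf["k"] - phi * (p["k"] - bf["k"]))
          return {"rho": rho, "cp": rho_cp / rho, "mu": mu, "k": k}

      water = {"rho": 997.0, "cp": 4179.0, "mu": 8.9e-4, "k": 0.613}  # ~25 C
      al2o3 = {"rho": 3970.0, "cp": 765.0, "k": 40.0}
      for phi in (0.01, 0.02, 0.04):
          print(phi, nanofluid_properties(phi, water, al2o3))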

  12. Incorporating signal-dependent noise for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Morman, Christopher J.; Meola, Joseph

    2015-05-01

    The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
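
    The linear noise model described here is simple to reproduce numerically. The sketch below simulates a photon-counting band as Poisson shot noise plus Gaussian read noise (both assumptions for illustration) and recovers the variance-versus-signal line by least squares:

      import numpy as np

      rng = np.random.default_rng(1)
      levels = np.linspace(100, 5000, 40)          # mean photon counts per level
      read_sigma = 10.0                            # assumed read-noise sigma (counts)
      counts = rng.poisson(levels[:, None], size=(40, 500)) \
               + rng.normal(0.0, read_sigma, size=(40, 500))

      # Fit the linear noise model  var = a * signal + b  to sample statistics.
      a, b = np.polyfit(levels, counts.var(axis=1, ddof=1), 1)
      print(f"var ~ {a:.2f} * signal + {b:.0f} (expect a ~ 1, b ~ {read_sigma**2:.0f})")

    A fitted model of this form can then replace the constant noise floor assumed by a stationary-background detector.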

  13. CFD Modeling of Boron Removal from Liquid Silicon with Cold Gases and Plasma

    NASA Astrophysics Data System (ADS)

    Vadon, Mathieu; Sortland, Øyvind; Nuta, Ioana; Chatillon, Christian; Tansgtad, Merete; Chichignoud, Guy; Delannoy, Yves

    2018-03-01

    The present study focuses on a specific step of the metallurgical path of purification to provide solar-grade silicon: the removal of boron through the injection of H2O(g)-H2(g)-Ar(g) (cold gas process) or of Ar-H2-O2 plasma (plasma process) on stirred liquid silicon. We propose a way to predict silicon and boron flows from the liquid silicon surface by using a CFD model (©Ansys Fluent) combined with some results on one-dimensional diffusive-reactive models to consider the formation of silica aerosols in a layer above the liquid silicon. The comparison of the model with experimental results on cold gas processes provided satisfying results for cases with low and high concentrations of oxidants. This confirms that the choices of thermodynamic data of HBO(g) and the activity coefficient of boron in liquid silicon are suitable and that the hypotheses regarding similar diffusion mechanisms at the surface for HBO(g) and SiO(g) are appropriate. The reasons for similar diffusion mechanisms need further enquiry. We also studied the effect of pressure and geometric variations in the cold gas process. For some cases with high injection flows, the model slightly overestimates the boron extraction rate, and the overestimation increases with increasing injection flow. A single plasma experiment from SIMaP (France) was modeled, and the model results fit the experimental data on purification if we suppose that aerosols form, but it is not enough to draw conclusions about the formation of aerosols for plasma experiments.

  14. CFD Modeling of Boron Removal from Liquid Silicon with Cold Gases and Plasma

    NASA Astrophysics Data System (ADS)

    Vadon, Mathieu; Sortland, Øyvind; Nuta, Ioana; Chatillon, Christian; Tansgtad, Merete; Chichignoud, Guy; Delannoy, Yves

    2018-06-01

    The present study focuses on a specific step of the metallurgical path of purification to provide solar-grade silicon: the removal of boron through the injection of H2O(g)-H2(g)-Ar(g) (cold gas process) or of Ar-H2-O2 plasma (plasma process) on stirred liquid silicon. We propose a way to predict silicon and boron flows from the liquid silicon surface by using a CFD model (©Ansys Fluent) combined with some results on one-dimensional diffusive-reactive models to consider the formation of silica aerosols in a layer above the liquid silicon. The comparison of the model with experimental results on cold gas processes provided satisfying results for cases with low and high concentrations of oxidants. This confirms that the choices of thermodynamic data of HBO(g) and the activity coefficient of boron in liquid silicon are suitable and that the hypotheses regarding similar diffusion mechanisms at the surface for HBO(g) and SiO(g) are appropriate. The reasons for similar diffusion mechanisms need further enquiry. We also studied the effect of pressure and geometric variations in the cold gas process. For some cases with high injection flows, the model slightly overestimates the boron extraction rate, and the overestimation increases with increasing injection flow. A single plasma experiment from SIMaP (France) was modeled, and the model results fit the experimental data on purification if we suppose that aerosols form, but it is not enough to draw conclusions about the formation of aerosols for plasma experiments.

  15. UAH mathematical model of the variable polarity plasma ARC welding system calculation

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1994-01-01

    Significant advantages of Variable Polarity Plasma Arc (VPPA) welding process include faster welding, fewer repairs, less joint preparation, reduced weldment distortion, and absence of porosity. A mathematical model is presented to analyze the VPPA welding process. Results of the mathematical model were compared with the experimental observation accomplished by the GDI team.

  16. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
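
    The abstract is truncated, but the stated model is easy to explore by simulation. The sketch below draws all three components as normal random variables (with illustrative parameter values, not the authors') and estimates the probability that a replication crosses the same effect-size threshold as a significant initial result:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 1_000_000
      true_effect = rng.normal(0.3, 0.2, n)   # true effect size per study line
      jitter      = rng.normal(0.0, 0.1, n)   # procedural change before replication
      meas_sd     = 0.15                      # statistical measurement error

      d_initial   = true_effect + rng.normal(0, meas_sd, n)
      d_replicate = true_effect + jitter + rng.normal(0, meas_sd, n)

      threshold = 0.2                         # observed-effect 'significance' cutoff
      sig = d_initial > threshold
      p_rep = (d_replicate[sig] > threshold).mean()
      print(f"P(replication | significant initial result) ~ {p_rep:.2f}")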

  17. Use of Words and Visuals in Modelling Context of Annual Plant

    ERIC Educational Resources Information Center

    Park, Jungeun; DiNapoli, Joseph; Mixell, Robert A.; Flores, Alfinio

    2017-01-01

    This study looks at the various verbal and non-verbal representations used in a process of modelling the number of annual plants over time. Analysis focuses on how various representations such as words, diagrams, letters and mathematical equations evolve in the mathematization process of the modelling context. Our results show that (1) visual…

  18. A functional-structural model of rice linking quantitative genetic information with morphological development and physiological processes.

    PubMed

    Xu, Lifeng; Henke, Michael; Zhu, Jun; Kurth, Winfried; Buck-Sorlin, Gerhard

    2011-04-01

    Although quantitative trait loci (QTL) analysis of yield-related traits for rice has developed rapidly, crop models using genotype information have been proposed only relatively recently. As a first step towards a generic genotype-phenotype model, we present here a three-dimensional functional-structural plant model (FSPM) of rice, in which some model parameters are controlled by functions describing the effect of main-effect and epistatic QTLs. The model simulates the growth and development of rice based on selected ecophysiological processes, such as photosynthesis (source process) and organ formation, growth and extension (sink processes). It was devised using GroIMP, an interactive modelling platform based on the Relational Growth Grammar formalism (RGG). RGG rules describe the course of organ initiation and extension resulting in final morphology. The link between the phenotype (as represented by the simulated rice plant) and the QTL genotype was implemented via a data interface between the rice FSPM and the QTLNetwork software, which computes predictions of QTLs from map data and measured trait data. Using plant height and grain yield, it is shown how QTL information for a given trait can be used in an FSPM, computing and visualizing the phenotypes of different lines of a mapping population. Furthermore, we demonstrate how modification of a particular trait feeds back on the entire plant phenotype via the physiological processes considered. We linked a rice FSPM to a quantitative genetic model, thereby employing QTL information to refine model parameters and visualizing the dynamics of development of the entire phenotype as a result of ecophysiological processes, including the trait(s) for which genetic information is available. Possibilities for further extension of the model, for example for the purposes of ideotype breeding, are discussed.

  19. Why Is Improvement of Earth System Models so Elusive? Challenges and Strategies from Dust Aerosol Modeling

    NASA Technical Reports Server (NTRS)

    Miller, Ronald L.; Garcia-Pando, Carlos Perez; Perlwitz, Jan; Ginoux, Paul

    2015-01-01

    Past decades have seen an accelerating increase in computing efficiency, while climate models represent a rapidly widening set of physical processes. Yet simulations of some fundamental aspects of climate, like precipitation or aerosol forcing, remain highly uncertain and resistant to progress. Dust aerosol modeling of soil particles lofted by wind erosion has seen a similar conflict between increasing model sophistication and remaining uncertainty. Dust aerosols perturb the energy and water cycles by scattering radiation and acting as ice nuclei, while mediating atmospheric chemistry and marine photosynthesis (and thus the carbon cycle). These effects take place across scales from the dimensions of an ice crystal to the planetary-scale circulation that disperses dust far downwind of its parent soil. Representing this range leads to several modeling challenges. Should we limit complexity in our model, which consumes computer resources and inhibits interpretation? How do we decide if a process involving dust is worthy of inclusion within our model? Can we identify a minimal representation of a complex process that is efficient yet retains the physics relevant to climate? Answering these questions about the appropriate degree of representation is guided by model evaluation, which presents several more challenges. How do we proceed if the available observations do not directly constrain our process of interest? (This could result from competing processes that influence the observed variable and obscure the signature of our process of interest.) Examples will be presented from dust modeling, with lessons that might be more broadly applicable. The end result will be either clinical depression or the reassuring promise of continued gainful employment as the community confronts these challenges.

  20. From local hydrological process analysis to regional hydrological model application in Benin: Concept, results and perspectives

    NASA Astrophysics Data System (ADS)

    Bormann, H.; Faß, T.; Giertz, S.; Junge, B.; Diekkrüger, B.; Reichert, B.; Skowronek, A.

    This paper presents the concept, first results and perspectives of the hydrological sub-project of the IMPETUS-Benin project, which is part of the GLOWA program funded by the German ministry of education and research. In addition to the research concept, first results on field hydrology, pedology, hydrogeology and hydrological modelling are presented, focusing on the understanding of the actual hydrological processes. For analysing the processes, a 30 km² catchment acting as a super test site was chosen, which is assumed to be representative of the entire catchment of about 15,000 km². First results of the field investigations show that infiltration, runoff generation and soil erosion strongly depend on land cover and land use, which in turn significantly influence the soil properties. A conceptual hydrogeological model has been developed summarising the process knowledge on runoff generation and subsurface hydrological processes. This conceptual model shows a dominance of fast runoff components (surface runoff and interflow), groundwater recharge along preferential flow paths, temporary interaction between surface water and groundwater, and separate groundwater systems at different scales (shallow, temporary groundwater at the local scale and permanent, deep groundwater at the regional scale). The findings of intensive measurement campaigns on soil hydrology, groundwater dynamics and soil erosion have been integrated into different, scale-dependent hydrological modelling concepts applied at different scales in the target region (upper Ouémé catchment in Benin, about 15,000 km²). The models have been applied and successfully validated. They will be used for integrated scenario analyses in the forthcoming project phase to assess the impacts of global change on the regional water cycle and on typical problem complexes such as food security in West African countries.

  1. Modelling tidewater glacier calving: from detailed process models to simple calving laws

    NASA Astrophysics Data System (ADS)

    Benn, Doug; Åström, Jan; Zwinger, Thomas; Todd, Joe; Nick, Faezeh

    2017-04-01

    The simple calving laws currently used in ice sheet models do not adequately reflect the complexity and diversity of calving processes. To be effective, calving laws must be grounded in a sound understanding of how calving actually works. We have developed a new approach to formulating calving laws, using a) the Helsinki Discrete Element Model (HiDEM) to explicitly model fracture and calving processes, and b) the full-Stokes continuum model Elmer/Ice to identify critical stress states associated with HiDEM calving events. A range of observed calving processes emerges spontaneously from HiDEM in response to variations in ice-front buoyancy and the size of subaqueous undercuts, and we show that HiDEM calving events are associated with characteristic stress patterns simulated in Elmer/Ice. Our results open the way to developing calving laws that properly reflect the diversity of calving processes, and provide a framework for a unified theory of the calving process continuum.

  2. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    PubMed

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
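
    The core manipulation, randomizing the activities of a fraction of the modeling set and watching cross-validated performance degrade, can be sketched in a few lines. The data set and learner below are stand-ins (the study used eight curated experimental sets and a range of QSAR models):

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X, y = make_regression(n_samples=400, n_features=50, noise=5.0, random_state=0)

      for ratio in (0.0, 0.1, 0.2, 0.4):
          y_err = y.copy()
          bad = rng.choice(len(y), size=int(ratio * len(y)), replace=False)
          y_err[bad] = rng.permutation(y_err[bad])   # simulated experimental errors
          r2 = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                               X, y_err, cv=5, scoring="r2").mean()
          print(f"simulated error ratio {ratio:.0%}: 5-fold CV R^2 = {r2:.2f}")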

  3. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do

    PubMed Central

    2017-01-01

    Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting. PMID:28691113

  4. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed, continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration: many solutions perform similarly, yet their parameter sets vary grossly. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize the available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting method (SAC-SMA) within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. High degrees of fitness were achieved, in addition to more realistic and consistent parameter sets such as those typically obtained during manual calibration procedures.
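
    One way to read "expert knowledge in the objective functions" is as an extra objective that scores parameter plausibility alongside goodness of fit. The sketch below does this for a deliberately toy runoff model (not SNOW-17/SAC-SMA), with a quadratic penalty once parameters leave expert-judged ranges:

      import numpy as np

      rng = np.random.default_rng(3)
      rain = rng.gamma(2.0, 2.0, 365)          # toy daily forcing

      def simulate(params, forcing=rain):
          # Toy single-store runoff model: 'frac' of rain enters the store,
          # which drains at rate 'k' per day (illustrative only).
          k, frac = params
          store, q = 0.0, np.empty_like(forcing)
          for i, p in enumerate(forcing):
              store += frac * p
              q[i] = k * store
              store -= q[i]
          return q

      obs = simulate((0.3, 0.6)) + rng.normal(0, 0.1, rain.size)  # synthetic 'truth'

      def nse(o, s):
          # Nash-Sutcliffe efficiency of simulated vs observed flows.
          return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

      def expert_penalty(params, ranges=((0.1, 0.5), (0.4, 0.8))):
          # Quadratic penalty once a parameter leaves its plausible range.
          pen = 0.0
          for v, (lo, hi) in zip(params, ranges):
              mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
              pen += max(0.0, (abs(v - mid) - half) / half) ** 2
          return pen

      def objectives(params):
          # Objective vector for a Pareto search: (misfit, implausibility).
          return 1.0 - nse(obs, simulate(params)), expert_penalty(params)

      print(objectives((0.3, 0.6)), objectives((0.95, 0.05)))

    A multi-objective optimizer (e.g., NSGA-II) would then trade fit against plausibility instead of exploiting compensatory parameter combinations.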

  5. Polychlorinated Biphenyls in a Temperate Alpine Glacier: 2. Model Results of Chemical Fate Processes.

    PubMed

    Steinlin, Christine; Bogdal, Christian; Pavlova, Pavlina A; Schwikowski, Margit; Lüthi, Martin P; Scheringer, Martin; Schmid, Peter; Hungerbühler, Konrad

    2015-12-15

    We present results from a chemical fate model quantifying incorporation of polychlorinated biphenyls (PCBs) into the Silvretta glacier, a temperate Alpine glacier located in Switzerland. Temperate glaciers, in contrast to cold glaciers, are glaciers where melt processes are prevalent. Incorporation of PCBs into cold glaciers has been quantified in previous studies. However, the fate of PCBs in temperate glaciers has never been investigated. In the model, we include melt processes, inducing elution of water-soluble substances and, conversely, enrichment of particles and particle-bound chemicals. The model is validated by comparing modeled and measured PCB concentrations in an ice core collected in the Silvretta accumulation area. We quantify PCB incorporation between 1900 and 2010, and discuss the fate of six PCB congeners. PCB concentrations in the ice core peak in the period of high PCB emissions, as well as in years with strong melt. While for lower-chlorinated PCB congeners revolatilization is important, for higher-chlorinated congeners, the main processes are storage in glacier ice and removal by particle runoff. This study gives insight into PCB fate and dynamics and reveals the effect of snow accumulation and melt processes on the fate of semivolatile organic chemicals in a temperate Alpine glacier.

  6. Approximate Model of Zone Sedimentation

    NASA Astrophysics Data System (ADS)

    Dzianik, František

    2011-12-01

    The process of zone sedimentation is affected by many factors that cannot be expressed analytically. For this reason, zone settling is evaluated in practice experimentally or through an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e. a general function that should properly approximate the behaviour of the settling process over its entire range and under various conditions. Furthermore, the specification of the model parameters by regression analysis of settling test results is shown. The suitability of the model is reviewed through graphical dependencies and statistical correlation coefficients. The approximate model could also be useful in simplifying the process design of continuous settling tanks and thickeners.
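
    As an example of fitting such a general function by regression, the sketch below fits a Vesilind-type exponential settling law, a common empirical choice (not necessarily the paper's function), to hypothetical batch settling test data:

      import numpy as np
      from scipy.optimize import curve_fit

      # Interface settling velocities from batch tests at several initial
      # solids concentrations (illustrative numbers).
      conc  = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0])     # g/L
      v_obs = np.array([5.1, 3.4, 2.3, 1.5, 0.72, 0.33])   # m/h

      def vesilind(c, v0, k):
          # Vesilind zone-settling law: v(c) = v0 * exp(-k * c)
          return v0 * np.exp(-k * c)

      (v0, k), _ = curve_fit(vesilind, conc, v_obs, p0=(6.0, 0.4))
      resid = v_obs - vesilind(conc, v0, k)
      r2 = 1 - np.sum(resid ** 2) / np.sum((v_obs - v_obs.mean()) ** 2)
      print(f"v0 = {v0:.2f} m/h, k = {k:.2f} L/g, R^2 = {r2:.3f}")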

  7. Measurement and analysis of applied power, forces and material response in friction stir welding of aluminum alloy 6061

    NASA Astrophysics Data System (ADS)

    Avila, Ricardo E.

    The process of Friction Stir Welding (FSW) 6061 aluminum alloy is investigated, with a focus on the forces and power applied in the process and the material response. The main objective is to relate measurements of the applied forces and power to the mechanical properties of the material during the dynamic process, based on mathematical modeling and aided by computer simulations using the LS-DYNA software for finite element modeling. Results of measurements of applied forces and power are presented. The result obtained for applied power is used in the construction of a mechanical variational model of FSW, in which minimization of a functional for the applied torque is sought, leading to an expression for shear stress in the material. The computer simulations are performed with the Smoothed Particle Hydrodynamics (SPH) method, in which no structured finite element mesh is used to construct a spatial discretization of the model. The current implementation of SPH in LS-DYNA allows a structural solution using a plastic kinematic material model. This work produces information useful for improving understanding of material flow in the process, and thus adds to current knowledge about the behavior of materials under severe plastic deformation, particularly processes in which deformation occurs mainly through applied shear stress, aided by thermoplastic strain localization and dynamic recrystallization.

  8. Conversion of municipal solid waste to hydrogen

    NASA Astrophysics Data System (ADS)

    Richardson, J. H.; Rogers, R. S.; Thorsness, C. B.

    1995-04-01

    LLNL and Texaco are cooperatively developing a physical and chemical treatment method for the conversion of municipal solid waste (MSW) to hydrogen via the steps of hydrothermal pretreatment, gasification and purification. LLNL's focus has been on hydrothermal pretreatment of MSW in order to prepare a slurry of suitable viscosity and heating value to allow efficient and economical gasification and hydrogen production. The project has evolved along 3 parallel paths: laboratory scale experiments, pilot scale processing, and process modeling. Initial laboratory-scale MSW treatment results (e.g., viscosity, slurry solids content) over a range of temperatures and times with newspaper and plastics will be presented. Viscosity measurements have been correlated with results obtained at MRL. A hydrothermal treatment pilot facility has been rented from Texaco and is being reconfigured at LLNL; the status of that facility and plans for initial runs will be described. Several different operational scenarios have been modeled. Steady state processes have been modeled with ASPEN PLUS; consideration of steam injection in a batch mode was handled using continuous process modules. A transient model derived from a general purpose packed bed model is being developed which can examine the aspects of steam heating inside the hydrothermal reactor vessel. These models have been applied to pilot and commercial scale scenarios as a function of MSW input parameters and have been used to outline initial overall economic trends. Part of the modeling, an overview of the MSW gasification process and the modeling of the MSW as a process material, was completed by a DOE SERS (Science and Engineering Research Semester) student. The ultimate programmatic goal is the technical demonstration of the gasification of MSW to hydrogen at the laboratory and pilot scale and the economic analysis of the commercial feasibility of such a process.

  9. Joint modelling of longitudinal CEA tumour marker progression and survival data on breast cancer

    NASA Astrophysics Data System (ADS)

    Borges, Ana; Sousa, Inês; Castro, Luis

    2017-06-01

    This work proposes the use of biostatistical methods to study breast cancer in patients of Braga Hospital's Senology Unit, located in Portugal. The primary motivation is to contribute to the understanding of the progression of breast cancer within the Portuguese population, using more complex statistical model assumptions than traditional analyses, namely by taking into account the possible existence of a serial correlation structure within the observations of the same subject. We aim to infer which risk factors affect the survival of Braga Hospital's patients diagnosed with breast tumours, while also analysing risk factors that affect a tumour marker used in the surveillance of disease progression, the carcinoembryonic antigen (CEA). As the survival and longitudinal processes may be associated, it is important to model these two processes together; hence, a joint modelling of the two processes was conducted to infer on their association. A data set of 540 patients, with 50 variables, was collected from the medical records of the hospital. A joint model approach was used to analyse these data. Two different joint models, with different parameterizations that give different interpretations to the model parameters, were applied to the same data set; these were chosen for convenience, as the ones implemented in the R software. Results from the two models were compared. Results from the joint models showed that the longitudinal CEA values were significantly associated with the survival probability of these patients. A comparison between the parameter estimates obtained in this analysis and previous independent survival [4] and longitudinal analyses [5][6] leads us to conclude that independent analyses yield biased parameter estimates. Hence, an assumption of association between the two processes in a joint model of breast cancer data is necessary.

  10. Dynamic occupancy models for explicit colonization processes

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Johnson, Devin S.; Altwegg, Res; Conquest, Loveday

    2016-01-01

    The dynamic, multi-season occupancy model framework has become a popular tool for modeling open populations with occupancies that change over time through local colonizations and extinctions. However, few versions of the model relate these probabilities to the occupancies of neighboring sites or patches. We present a modeling framework that incorporates this information and is capable of describing a wide variety of spatiotemporal colonization and extinction processes. A key feature of the model is that it is based on a simple set of small-scale rules describing how the process evolves. The result is a dynamic process that can account for complicated large-scale features. In our model, a site is more likely to be colonized if more of its neighbors were previously occupied and if it provides more appealing environmental characteristics than its neighboring sites. Additionally, a site without occupied neighbors may also become colonized through the inclusion of a long-distance dispersal process. Although similar model specifications have been developed for epidemiological applications, ours formally accounts for detectability using the well-known occupancy modeling framework. After demonstrating the viability and potential of this new form of dynamic occupancy model in a simulation study, we use it to obtain inference for the ongoing Common Myna (Acridotheres tristis) invasion in South Africa. Our results suggest that the Common Myna continues to enlarge its distribution and its spread via short distance movement, rather than long-distance dispersal. Overall, this new modeling framework provides a powerful tool for managers examining the drivers of colonization including short- vs. long-distance dispersal, habitat quality, and distance from source populations.
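
    The small-scale rules described here translate directly into a lattice simulation. The sketch below evolves the latent occupancy process only (the paper additionally models imperfect detection), with a colonization probability that grows with the number of occupied neighbours plus a small long-distance term; all coefficients are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(11)
      size, years = 40, 25
      occ = np.zeros((size, size), dtype=bool)
      occ[size // 2, size // 2] = True        # single source population

      beta0, beta_nb = -4.0, 1.2              # assumed colonization coefficients
      eps, gamma_ld = 0.05, 0.002             # extinction and long-distance rates

      for _ in range(years):
          # Occupied rook neighbours per cell (np.roll wraps the edges,
          # i.e. a torus -- acceptable for a sketch).
          o = occ.astype(np.int8)
          nb = (np.roll(o, 1, 0) + np.roll(o, -1, 0) +
                np.roll(o, 1, 1) + np.roll(o, -1, 1))
          p_col = 1.0 / (1.0 + np.exp(-(beta0 + beta_nb * nb))) + gamma_ld
          colonize = rng.random(occ.shape) < p_col
          survive = rng.random(occ.shape) > eps
          occ = (occ & survive) | (~occ & colonize)   # extinction vs colonization

      print(f"occupied fraction after {years} years: {occ.mean():.2f}")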

  11. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    PubMed

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed as compared with visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  12. A simplified computational memory model from information processing.

    PubMed

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, a meta-memory is defined to represent a neuron or brain cortex on the basis of biology and graph theory; an intra-modular network is then developed with a modeling algorithm that maps nodes and edges, and the bi-modular network is delineated in terms of intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena as seen from an information processing view.

  13. Domain Modeling and Application Development of an Archetype- and XML-based EHRS. Practical Experiences and Lessons Learnt.

    PubMed

    Kropf, Stefan; Chalopin, Claire; Lindner, Dirk; Denecke, Kerstin

    2017-06-28

    Access to patient data within a hospital, or between hospitals, is still problematic, since a variety of information systems is in use, applying different vendor-specific terminologies and underlying knowledge models. Moreover, the development of electronic health record systems (EHRSs) is time- and resource-consuming. Thus, there is a substantial need for a development strategy for standardized EHRSs. We apply a reuse-oriented process model and demonstrate its feasibility and realization on a practical medical use case: an EHRS holding all relevant data arising in the context of treatment of tumors of the sella region. In this paper, we describe the development process and our practical experiences. Requirements for the development of the EHRS were collected through interviews with a neurosurgeon and analysis of patient data. For the modelling of patient data, we selected openEHR as the standard and exploited the software tools provided by the openEHR foundation. The patient information model forms the core of the development process, which comprises the EHR generation and the implementation of an EHRS architecture. Moreover, a reuse-oriented process model from the business domain was adapted to the development of the EHRS; it provides a suitable abstraction of both the modeling and the development of an EHR-centered EHRS. The information modeling process resulted in 18 archetypes that were aggregated in a template and formed the basis of the model-driven development. The EHRs and the EHRS were developed with openEHR and W3C standards, tightly supported by well-established XML techniques. The GUI of the final EHRS integrates and visualizes information from various examinations, medical reports, findings and laboratory test results. We conclude that the development of a standardized overarching EHR and an EHRS is feasible using openEHR and W3C standards, enabling a high degree of semantic interoperability. The standardized representation visualizes data and can in this way support clinicians' decision processes.

  14. A systems-based approach for integrated design of materials, products and design process chains

    NASA Astrophysics Data System (ADS)

    Panchal, Jitesh H.; Choi, Hae-Jin; Allen, Janet K.; McDowell, David L.; Mistree, Farrokh

    2007-12-01

    The concurrent design of materials and products provides designers with flexibility to achieve design objectives that were not previously accessible. However, the improved flexibility comes at a cost of increased complexity of the design process chains and the materials simulation models used for executing the design chains. Efforts to reduce the complexity generally result in increased uncertainty. We contend that a systems based approach is essential for managing both the complexity and the uncertainty in design process chains and simulation models in concurrent material and product design. Our approach is based on simplifying the design process chains systematically such that the resulting uncertainty does not significantly affect the overall system performance. Similarly, instead of striving for accurate models for multiscale systems (that are inherently complex), we rely on making design decisions that are robust to uncertainties in the models. Accordingly, we pursue hierarchical modeling in the context of design of multiscale systems. In this paper our focus is on design process chains. We present a systems based approach, premised on the assumption that complex systems can be designed efficiently by managing the complexity of design process chains. The approach relies on (a) the use of reusable interaction patterns to model design process chains, and (b) consideration of design process decisions using value-of-information based metrics. The approach is illustrated using a Multifunctional Energetic Structural Material (MESM) design example. Energetic materials store considerable energy which can be released through shock-induced detonation; conventionally, they are not engineered for strength properties. The design objectives for the MESM in this paper include both sufficient strength and energy release characteristics. The design is carried out by using models at different length and time scales that simulate different aspects of the system. Finally, by applying the method to the MESM design problem, we show that the integrated design of materials and products can be carried out more efficiently by explicitly accounting for design process decisions with the hierarchy of models.

  15. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

    The traditional methods of mechanical gear driving simulation include the gear pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower result accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Currently, most research focuses on the description of geometric models and the definition of boundary conditions, but neither solves the underlying problems. To improve simulation efficiency while ensuring highly accurate results, a mixed model method is presented here, in which gear tooth profiles take the place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; the point coordinates of the gear outline curves are then collected using the SolidWorks API, and fitted curves are created in Adams from these coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact, and supplies mass and inertia data via the solid gear models. The simulation process thus combines the two models to complete the gear driving analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement were conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.

  16. Data Needs and Modeling of the Upper Atmosphere

    NASA Astrophysics Data System (ADS)

    Brunger, M. J.; Campbell, L.

    2007-04-01

    We present results from our enhanced statistical equilibrium and time-step codes for atmospheric modeling. In particular we use these results to illustrate the role of electron-driven processes in atmospheric phenomena and the sensitivity of the model results to data inputs such as integral cross sections, dissociative recombination rates and chemical reaction rates.

  17. Analysis of vehicle's safety envelope under car-following model

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Zhang, Jian; Chen, Liang; Shang, Hua-Yan

    2017-05-01

    In this paper, we propose an improved car-following model to explore the impacts of a vehicle's two safety distances (i.e., the front safety distance and the back safety distance) on traffic safety during the starting process. The numerical results show that our model is markedly safer than the FVD (full velocity difference) model, i.e., it outperforms the FVD model from the perspective of traffic safety, which indicates that each driver should account for both safety distances while driving.
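
    For reference, the baseline the authors compare against, the full velocity difference (FVD) model, updates each follower's acceleration from its gap and relative speed. The sketch below integrates a small platoon starting from rest at a signal, using the Helbing-Tilch optimal-velocity calibration; the paper's own extension with front and back safety distances is not reproduced here:

      import numpy as np

      V1, V2, C1, C2, lc = 6.75, 7.91, 0.13, 1.57, 5.0   # optimal-velocity calibration

      def V_opt(gap):
          # Optimal velocity as a function of the headway (m -> m/s).
          return V1 + V2 * np.tanh(C1 * (gap - lc) - C2)

      kappa, lam = 0.41, 0.5      # FVD sensitivities: velocity gap, relative speed
      dt, steps, n = 0.1, 3000, 10
      x = -np.arange(n) * 7.4     # queued vehicles, 7.4 m initial headways
      v = np.zeros(n)             # all start from rest

      for _ in range(steps):
          gap, dv = np.empty(n), np.empty(n)
          gap[1:] = x[:-1] - x[1:]           # distance to the leader
          dv[1:]  = v[:-1] - v[1:]           # leader speed minus own speed
          gap[0], dv[0] = 1e3, 0.0           # front vehicle drives freely
          a = kappa * (V_opt(gap) - v) + lam * dv
          v = np.maximum(v + a * dt, 0.0)    # no reversing
          x = x + v * dt

      print("speeds after 300 s (m/s):", np.round(v, 2))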

  18. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    PubMed

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.

  19. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study

    PubMed Central

    Klingner, Carsten M.; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W.

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI. PMID:28066197

  20. Intercomparison of the community multiscale air quality model and CALGRID using process analysis.

    PubMed

    O'Neill, Susan M; Lamb, Brian K

    2005-08-01

    This study was designed to examine the similarities and differences between two advanced photochemical air quality modeling systems: EPA Models-3/CMAQ and CALGRID/CALMET. Both modeling systems were applied to an ozone episode that occurred along the I-5 urban corridor in western Washington and Oregon during July 11-14, 1996. Both models employed the same modeling domain and used the same detailed gridded emission inventory. The CMAQ model was run using both the CB-IV and RADM2 chemical mechanisms, while CALGRID was used with the SAPRC-97 chemical mechanism. Output from the Mesoscale Meteorological Model (MM5), employed with observational nudging, was used in both models. The two modeling systems, representing three chemical mechanisms and two sets of meteorological inputs, were evaluated in terms of statistical performance measures for both 1- and 8-h average observed ozone concentrations. The results showed that the different versions of the systems were more similar than different; all versions performed well in the Portland region and downwind of Seattle but performed poorly in the more rural region north of Seattle. Improving the meteorological input to the CALGRID/CALMET system with planetary boundary layer (PBL) parameters from the Models-3/CMAQ meteorology preprocessor (MCIP) improved the performance of the CALGRID/CALMET system. The 8-h ensemble case was often the best performer of all the cases, indicating that the models perform better over longer analysis periods. The 1-h ensemble case, derived from all runs, was not necessarily an improvement over the five individual cases, but the standard deviation about the mean provided a measure of overall modeling uncertainty. Process analysis was applied to examine the contribution of the individual processes to the species conservation equation. The process analysis results indicated that the two modeling systems arrive at similar solutions by very different means. Transport rates are faster and exhibit greater fluctuations in the CMAQ cases than in the CALGRID cases, which leads to different placement of the urban ozone plumes. The CALGRID cases, which rely on the SAPRC-97 chemical mechanism, exhibited a greater diurnal production/loss cycle of ozone concentrations per hour compared with either the RADM2 or CB-IV chemical mechanisms in the CMAQ cases. These results demonstrate the need for specialized process field measurements to confirm whether we are modeling ozone with valid processes.

  1. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are common in South Africa, and upgrades and urban design processes are necessary to improve the in-situ circumstances of the communities living in them. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps do, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all the processes involved: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling and can produce high-quality models. When investigating the visualization design, we qualitatively assessed the visual characteristics of 3D models and the relevance of a subset of visual variables for the urban design activities of informal settlement upgrades. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and the map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  2. Orientation distribution and process modeling of thermotropic liquid crystalline copolyester (TLCP) injection-moldings

    NASA Astrophysics Data System (ADS)

    Bubeck, Robert; Fang, Jun; Burghardt, Wesley; Burgard, Susan; Fischer, Daniel

    2009-03-01

    The influence of melt processing conditions upon mechanical properties and degrees of compound molecular orientation have been thoroughly studied for a series of well-defined injection molded samples fabricated from VECTRA (TM) A950 and 4,4'-dihydroxy-α-methylstilbene TLCPs. Fracture and tensile data were correlated with processing conditions, orientation, and molecular weight. Mechanical properties for both TLCPs were found to follow a "universal" Anisotropy Factor (AF) associated with the bimodal orientation states in the plaques determined from 2-D WAXS. Surface orientations were globally surveyed using Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) spectroscopy and C K edge Near-Edge X-ray Absorption Fine Structure (NEXAFS). The results derived from the two spectroscopy techniques confirmed each other well. These results, along with those from 2-D WAXS in transmission, were compared with the results of process modeling using a commercial program, MOLDFLOW(TM). The agreement between model predictions and the measured orientation states was gratifyingly good.

  3. Case Studies in Modelling, Control in Food Processes.

    PubMed

    Glassey, J; Barone, A; Montague, G A; Sabou, V

    This chapter discusses the importance of modelling and control in increasing food process efficiency and ensuring product quality. Various approaches to both modelling and control in food processing are set in the context of the specific challenges in this industrial sector, and the latest developments in each area are discussed. Three industrial case studies are used to demonstrate the benefits of advanced measurement, modelling and control in food processes. The first case study illustrates the use of knowledge elicitation from expert operators in the process for the manufacture of potato chips (French fries) and the consequent improvements in process control to increase the consistency of the resulting product. The second case study highlights the economic benefits of tighter control of an important process parameter, moisture content, in potato crisp (chips) manufacture. The final case study describes the use of NIR spectroscopy in ensuring effective mixing of dry multicomponent mixtures and pastes. Practical implementation tips and infrastructure requirements are also discussed.

  4. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors ground their treatment decisions in mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as the input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing operation. The post-processing module included extraction and display of batch processing results, image generation for batch processing, optimization program operation, and display of the optimal result. The system implemented the whole workflow from input of fracture parameters to output of the optimal fixation plan according to the specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system has a friendly interface, is simple to operate, and can be extended quickly by modifying individual modules.

  5. Principal process analysis of biological models.

    PubMed

    Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc

    2018-06-14

    Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.
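
    To make the thresholding idea concrete, here is a minimal sketch (a toy one-variable system standing in for the paper's circadian model; the processes, rates and threshold are invented) that integrates a model, computes each process's relative weight along the trajectory, and reports where each process is active:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy one-variable model with three processes (an illustrative stand-in
    # for the mammalian circadian model analysed in the paper):
    processes = {
        "production":  lambda x: 2.0 / (1.0 + x**4),  # repressible synthesis
        "degradation": lambda x: -0.5 * x,
        "export":      lambda x: -0.1 * x,
    }

    def rhs(t, y):
        return [sum(f(y[0]) for f in processes.values())]

    sol = solve_ivp(rhs, (0.0, 50.0), [0.1], max_step=0.1)

    # Relative weight of each process along the trajectory; a process is
    # called inactive wherever its weight stays below the threshold delta.
    delta = 0.1
    mags = {name: np.abs(f(sol.y[0])) for name, f in processes.items()}
    total = sum(mags.values())
    for name, mag in mags.items():
        weight = mag / total
        print(f"{name}: active {100 * np.mean(weight >= delta):.0f}% of the time")
    ```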

  6. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2016-04-01

    To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. The goal of this study is therefore to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received little attention in previous investigations. First, it estimates the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, it explores the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, it employs several numerical solvers for the integration of the governing ordinary differential equations to study the effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than the more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
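
    The solver-comparison idea can be illustrated with a toy storage equation (the model, parameters and tolerances below are invented; in SciPy's solve_ivp, RK45 is explicit while BDF and Radau are implicit):

    ```python
    import time
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy nonlinear storage model dS/dt = P - k*S**alpha, a stand-in for a
    # process formulation; compare explicit and implicit integrators.
    def rhs(t, S, P=2.0, k=0.3, alpha=1.5):
        return P - k * S**alpha

    for method in ("RK45", "BDF", "Radau"):
        t0 = time.perf_counter()
        sol = solve_ivp(rhs, (0.0, 100.0), [1.0], method=method, rtol=1e-6)
        elapsed = time.perf_counter() - t0
        print(f"{method}: S(100) = {sol.y[0, -1]:.6f}, "
              f"{sol.t.size} steps, {elapsed * 1e3:.1f} ms")
    ```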

  7. Toward Process-resolving Synthesis and Prediction of Arctic Climate Change Using the Regional Arctic System Model

    NASA Astrophysics Data System (ADS)

    Maslowski, W.

    2017-12-01

    The Regional Arctic System Model (RASM) has been developed to better understand the operation of the Arctic System at process scale and to improve prediction of its change across a spectrum of time scales. RASM is a pan-Arctic, fully coupled ice-ocean-atmosphere-land model with a marine biogeochemistry extension to the ocean and sea ice models. The main goal of our research is to advance a system-level understanding of critical processes and feedbacks in the Arctic and their links with the Earth System. A secondary, equally important objective is to identify model needs for new or additional observations to better understand such processes and to help constrain models. Finally, RASM has been used to produce sea ice forecasts for September 2016 and 2017, in contribution to the Sea Ice Outlook of the Sea Ice Prediction Network. Future RASM forecasts are likely to include increased resolution for model components and ecosystem predictions. Such research is in direct support of US environmental assessment and prediction needs, including those of the U.S. Navy, Department of Defense, and the recent IARPC Arctic Research Plan 2017-2021. In addition to an overview of RASM technical details, selected model results are presented from a hierarchy of climate models together with available observations in the region to better understand potential oceanic contributions to polar amplification. RASM simulations are analyzed to evaluate model skill in representing seasonal climatology as well as interannual and multi-decadal climate variability and predictions. Selected physical processes and resulting feedbacks are discussed to emphasize the need for fully coupled climate model simulations, high model resolution, and sensitivity of simulated sea ice states to scale-dependent model parameterizations controlling ice dynamics, thermodynamics and coupling with the atmosphere and ocean.

  8. A stochastic automata network for earthquake simulation and hazard estimation

    NASA Astrophysics Data System (ADS)

    Belubekian, Maya Ernest

    1998-11-01

    This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for the small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after the largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it during a long period of seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called the bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto. The sensitivity analysis of the model results to variation in basic parameters shows that the maximum magnitude has the most significant impact on the hazard, especially for long forecast periods.
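
    The Poisson component and the bootstrap confidence bounds can be sketched in a few lines (rates, horizons and sample sizes below are invented, not calibrated to any catalog):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Small/moderate events as a memoryless Poisson process (rate per year);
    # draw 50-yr event counts for a 1000-yr synthetic record.
    rate, years, horizon = 0.8, 1000, 50
    counts = rng.poisson(rate * horizon, size=years // horizon)

    # Bootstrap confidence bounds on the mean 50-yr count, mirroring the
    # model's use of the bootstrap for its hazard estimates.
    boot = [np.mean(rng.choice(counts, size=counts.size, replace=True))
            for _ in range(10_000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"mean {counts.mean():.1f}, 95% bootstrap CI ({lo:.1f}, {hi:.1f})")
    ```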

  9. A modeling understanding on the phosphorous removal performances of A2O and reversed A2O processes in a full-scale wastewater treatment plant.

    PubMed

    Xie, Wen-Ming; Zeng, Raymond J; Li, Wen-Wei; Wang, Guo-Xiang; Zhang, Li-Min

    2018-05-31

    The reversed A2O process (anoxic-anaerobic-aerobic) and the conventional A2O process (anaerobic-anoxic-aerobic) are widely used in many wastewater treatment plants (WWTPs) in Asia. However, there is still no consensus on which process has better total phosphorus (TP) removal performance, and the mechanism underlying the difference remains unclear. In this study, the treatment performances of both processes were compared in the same full-scale WWTP, and the TP removal dynamics were analyzed by a modeling method. The treatment performance of the full-scale WWTP showed that the TP removal efficiency of the reversed A2O process was higher than that of the conventional A2O process. The modeling results further reveal that TP removal depends highly on the concentration and composition of influent COD. The reversed A2O process removed TP more efficiently than the conventional A2O process only under conditions of sufficient influent COD and high fermentation product content. This study may lay a foundation for the appropriate selection and optimization of treatment processes to suit practical wastewater properties.

  10. Sliding mode control: an approach to regulate nonlinear chemical processes

    PubMed

    Camacho; Smith

    2000-01-01

    A new approach for the design of sliding mode controllers based on a first-order-plus-deadtime model of the process is developed. This approach results in a fixed-structure controller with a set of tuning equations expressed as a function of the characteristic parameters of the model. The controller performance is judged by simulations on two nonlinear chemical processes.
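
    As a rough illustration of the ingredients (not the paper's fixed-structure controller or its tuning equations), the sketch below applies a smoothed sliding mode law with an integral sliding surface to a simulated first-order-plus-deadtime process; all parameter values are invented:

    ```python
    import numpy as np

    # FOPDT process: gain K, lag tau, deadtime t0; Euler step dt.
    K, tau, t0, dt = 2.0, 5.0, 1.0, 0.05
    lam, kd, phi = 0.5, 1.5, 0.1          # surface slope, switch gain, boundary layer

    n_dead = int(t0 / dt)
    u_hist = [0.0] * n_dead               # transport-delay buffer
    y, integ, sp = 0.0, 0.0, 1.0          # output, error integral, setpoint

    for k in range(int(60 / dt)):
        e = sp - y
        integ += e * dt
        s = e + lam * integ               # integral sliding surface
        u = kd * np.tanh(s / phi)         # smoothed switching law (anti-chatter)
        u_hist.append(u)
        u_delayed = u_hist.pop(0)         # apply the deadtime
        y += dt * (-y + K * u_delayed) / tau
        if k % 200 == 0:
            print(f"t={k * dt:5.1f}s  y={y:.3f}")
    ```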

  11. Rolling Process Modeling Report. Finite-Element Model Validation and Parametric Study on various Rolling Process parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soulami, Ayoub; Lavender, Curt A.; Paxton, Dean M.

    2015-06-15

    Pacific Northwest National Laboratory (PNNL) has been investigating manufacturing processes for the uranium-10% molybdenum alloy plate-type fuel for high-performance research reactors in the United States. This work supports the U.S. Department of Energy National Nuclear Security Administration’s Office of Material Management and Minimization Reactor Conversion Program. This report documents modeling results of PNNL’s efforts to perform finite-element simulations to predict roll-separating forces for various rolling mill geometries for PNNL, Babcock & Wilcox Co., Y-12 National Security Complex, Los Alamos National Laboratory, and Idaho National Laboratory. The model developed and presented in a previous report has been subjected to a further validation study using new sets of experimental data generated from a rolling mill at PNNL. Simulation results of both hot rolling and cold rolling of uranium-10% molybdenum coupons have been compared with experimental results. The model was used to predict roll-separating forces at different temperatures and reductions for five rolling mills within the National Nuclear Security Administration Fuel Fabrication Capability project. This report also presents initial results of a finite-element, microstructure-based approach to study the surface roughness at the interface between zirconium and uranium-10% molybdenum.

  12. Nonequivalence of updating rules in evolutionary games under high mutation rates.

    PubMed

    Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J

    2014-10-01

    Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
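
    A minimal well-mixed sketch of the two updating rules with three strategies and a high mutation rate (population size, fitness values and rates are invented; this is not the paper's public goods model):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def step(pop, fitness, mu, rule):
        """One Moran update on a well-mixed population of strategy labels."""
        f = np.array([fitness[s] for s in pop])
        if rule == "birth-death":        # fitness-biased birth, random death
            parent = rng.choice(len(pop), p=f / f.sum())
            dead = rng.integers(len(pop))
        else:                            # death-birth: unfit-biased death, random birth
            w = 1.0 / f
            dead = rng.choice(len(pop), p=w / w.sum())
            parent = rng.integers(len(pop))
        child = pop[parent]
        if rng.random() < mu:            # mutation to a uniformly random strategy
            child = int(rng.integers(len(fitness)))
        pop[dead] = child

    fitness = {0: 1.0, 1: 1.1, 2: 1.3}   # three strategies, unequal fitness
    for rule in ("birth-death", "death-birth"):
        pop = [int(s) for s in rng.integers(3, size=200)]
        for _ in range(50_000):
            step(pop, fitness, mu=0.05, rule=rule)
        freq = np.bincount(pop, minlength=3) / len(pop)
        print(rule, freq.round(2))
    ```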

  13. Nonequivalence of updating rules in evolutionary games under high mutation rates

    NASA Astrophysics Data System (ADS)

    Kaiping, G. A.; Jacobs, G. S.; Cox, S. J.; Sluckin, T. J.

    2014-10-01

    Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.

  14. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results, and it provides information that allows exploration of uncertainty at the process level and of how that uncertainty might affect model output. We present an example using the vegetation model BIOME-BGC.
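
    The contrast with parameter-level analysis can be sketched in a few lines: instead of varying parameters, each process's aggregate output is perturbed and the effect on the model output recorded (the toy three-process "model" below is invented, not BIOME-BGC):

    ```python
    # Process-level one-at-a-time sensitivity: scale each process output by a
    # factor and record the normalized change in the model output.
    def model(scale):
        photosynthesis = scale["photosynthesis"] * 10.0   # gross C uptake
        respiration = scale["respiration"] * 4.0          # C loss
        export = scale["export"] * 1.0                    # lateral C export
        return photosynthesis - respiration - export      # net C balance

    base = {"photosynthesis": 1.0, "respiration": 1.0, "export": 1.0}
    y0 = model(base)

    for proc in base:
        pert = dict(base, **{proc: 1.1})                  # +10% on one process
        sens = (model(pert) - y0) / (0.1 * y0)            # normalized sensitivity
        print(f"{proc}: {sens:+.2f}")
    ```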

  15. The Structure of Working Memory in Young Children and Its Relation to Intelligence

    PubMed Central

    Gray, Shelley; Green, Samuel; Alt, Mary; Hogan, Tiffany P.; Kuo, Trudy; Brinkley, Shara; Cowan, Nelson

    2016-01-01

    This study investigated the structure of working memory in young school-age children by testing the fit of three competing theoretical models using a wide variety of tasks. The best fitting models were then used to assess the relationship between working memory and nonverbal measures of fluid reasoning (Gf) and visual processing (Gv) intelligence. One hundred sixty-eight English-speaking 7–9 year olds with typical development, from three states, participated. Results showed that Cowan’s three-factor embedded processes model fit the data slightly better than Baddeley and Hitch’s (1974) three-factor model (specified according to Baddeley, 1986) and decisively better than Baddeley’s (2000) four-factor model that included an episodic buffer. The focus of attention factor in Cowan’s model was a significant predictor of Gf and Gv. The results suggest that the focus of attention, rather than storage, drives the relationship between working memory, Gf, and Gv in young school-age children. Our results do not rule out the Baddeley and Hitch model, but they place constraints on both it and Cowan’s model. A common attentional component is needed for feature binding, running digit span, and visual short-term memory tasks; phonological storage is separate, as is a component of central executive processing involved in task manipulation. The results contribute to a zeitgeist in which working memory models are coming together on common ground (cf. Cowan, Saults, & Blume, 2014; Hu, Allen, Baddeley, & Hitch, 2016). PMID:27990060

  16. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    Many studies have identified gas turbine engine dynamics with dynamic neural network models. The identification process should minimize errors between the model and the real object. Questions about how the training data set is composed, however, are usually left unaddressed. This article presents a study of the influence of data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal were used for creating the training and testing data sets of the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created from these training data sets, and each was tested against all four types of test data set. As a result, 16 transition processes from the four neural networks and four test data sets were compared with the corresponding solutions of the thermodynamic model, and the errors of all neural networks were compared within each test data set. The comparison showed that the ranges of the error values are small; the influence of data set type on identification accuracy is therefore low.
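
    The four-by-four train/test design can be sketched with a linear ARX model standing in for the dynamic neural network and a first-order response standing in for the thermodynamic engine model (all signals and constants are invented):

    ```python
    import numpy as np

    t = np.arange(0.0, 120.0, 0.1)                      # 1200 samples at 10 Hz

    # Candidate fuel-consumption excitation signals (normalized, illustrative):
    signals = {
        "step": np.where(t < 60, 0.2, 0.8),
        "fast": 0.5 + 0.3 * np.sign(np.sin(2 * np.pi * t / 5)),
        "slow": 0.5 + 0.3 * np.sin(2 * np.pi * t / 60),
    }
    signals["mixed"] = np.concatenate([signals["fast"][:600], signals["slow"][600:]])

    def engine(u, tau=2.0, dt=0.1):
        """Stand-in 'thermodynamic model': first-order rotor-speed response."""
        y = np.zeros_like(u)
        for k in range(1, u.size):
            y[k] = y[k - 1] + dt * (u[k - 1] - y[k - 1]) / tau
        return y

    def fit_arx(u, y):
        """One-step model y[k] = a*y[k-1] + b*u[k-1], a linear stand-in
        for the dynamic neural network."""
        X = np.column_stack([y[:-1], u[:-1]])
        theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
        return theta

    def rmse(theta, u, y):
        pred = np.column_stack([y[:-1], u[:-1]]) @ theta
        return np.sqrt(np.mean((pred - y[1:]) ** 2))

    responses = {name: engine(u) for name, u in signals.items()}
    for tr in signals:                                  # 4 x 4 train/test matrix
        theta = fit_arx(signals[tr], responses[tr])
        row = ", ".join(f"{te}: {rmse(theta, signals[te], responses[te]):.1e}"
                        for te in signals)
        print(f"trained on {tr:5s} -> {row}")
    ```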

  17. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 2 - model validation

    NASA Astrophysics Data System (ADS)

    Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies and experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be used reliably to predict the water contents at different soil depths and temperatures.

  18. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    PubMed

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

    In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein by means of Fourier transform near infrared (FT-NIR) spectroscopy. More specifically, FT-NIR spectroscopy combined with an Adaboost-SRDA-NN integrated learning algorithm was used to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. Firstly, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II), and the raw spectra were preprocessed with the standard normal variate transformation (SNV) spectral preprocessing algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the base classifier, and a state recognition model was built to identify the different fermentation samples in the validation set. The experimental results showed that the SRDA-NN model outperformed two other NN models developed with feature information from principal component analysis (PCA) and linear discriminant analysis (LDA); the correct recognition rate of the SRDA-NN model reached 94.28% in the validation set. To further improve the recognition accuracy of the final model, an Adaboost-SRDA-NN ensemble learning algorithm integrating the Adaboost and SRDA-NN methods was proposed and used to construct an online monitoring model of the process state of SSF of feed protein. The prediction performance of the SRDA-NN model was further enhanced by the Adaboost boosting algorithm, and the correct recognition rate of the Adaboost-SRDA-NN model reached 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information and reduce spectral dimensionality in the model calibration process of qualitative NIR spectroscopic analysis. In addition, the Adaboost boosting algorithm can improve the classification accuracy of the final model. The results obtained in this work can provide a research foundation for developing online monitoring instruments for the SSF process.
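
    The overall "preprocess, reduce, boost, classify" chain can be sketched with scikit-learn, with the caveat that scikit-learn provides neither SRDA nor SNV and cannot boost a nearest-neighbor classifier directly: LDA stands in for SRDA, a per-feature StandardScaler roughly substitutes for per-spectrum SNV, and AdaBoost's default tree base replaces the NN classifier. The spectra below are synthetic:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n, p = 140, 128                                   # 140 spectra, 128 wavelengths
    y = rng.integers(0, 3, n)                         # three fermentation states
    X = rng.normal(size=(n, p)) + y[:, None] * 0.3    # synthetic class-shifted spectra

    model = make_pipeline(StandardScaler(),                       # SNV stand-in
                          LinearDiscriminantAnalysis(n_components=2),  # SRDA stand-in
                          AdaBoostClassifier(n_estimators=50, random_state=0))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model.fit(X_tr, y_tr)
    print(f"validation accuracy: {model.score(X_te, y_te):.2%}")
    ```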

  19. Generalised additive modelling approach to the fermentation process of glutamate.

    PubMed

    Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping

    2011-03-01

    In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters, fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR), could capture 97% of the variance in the production of Glu during the fermentation process through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. The conditions to optimize the fermentation process were proposed based on a simulation study with this model. The results suggested that the production of Glu can reach a high level by controlling the concentration levels of DO and OUR at the proposed optimal conditions during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
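
    A GAM of this shape, Glu ~ s(T) + s(DO) + s(OUR), can be sketched with the third-party pygam library (an assumption of this sketch, not the paper's software); the data below are synthetic stand-ins for the online fermentation records:

    ```python
    import numpy as np
    from pygam import LinearGAM, s

    rng = np.random.default_rng(0)
    n = 500
    T = rng.uniform(0, 40, n)                # fermentation time (h)
    DO = rng.uniform(5, 50, n)               # dissolved oxygen (%)
    OUR = rng.uniform(10, 80, n)             # oxygen uptake rate (mmol/L/h)
    glu = (2.0 * np.log1p(T) + 0.05 * DO
           - 0.0005 * (OUR - 45) ** 2
           + rng.normal(0, 0.2, n))          # synthetic glutamate response

    X = np.column_stack([T, DO, OUR])
    gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, glu)
    gam.summary()                            # per-term smooths and pseudo R^2

    # Partial dependence of Glu on OUR (term index 2) and its optimum:
    XX = gam.generate_X_grid(term=2)
    pd = gam.partial_dependence(term=2, X=XX)
    print("OUR value maximizing predicted Glu:", XX[pd.argmax(), 2])
    ```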

  20. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework for finding the identifiable combinations of further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  1. Influence of forest cover changes on regional weather conditions: estimations using the mesoscale model COSMO

    NASA Astrophysics Data System (ADS)

    Olchev, A. V.; Rozinkina, I. A.; Kuzmina, E. V.; Nikitin, M. A.; Rivin, G. S.

    2018-01-01

    This modeling study estimates the possible influence of forest cover change on regional weather conditions using the non-hydrostatic model COSMO. The central part of the East European Plain was selected as the ‘model region’ for the study. The results of numerical experiments conducted for the warm period of 2010, on a modeling domain covering almost the whole East European Plain, showed that deforestation and afforestation processes within the selected model region, with an area of about 10⁵ km², can lead to significant changes in regional weather conditions. The deforestation processes resulted in an increase in air temperature and a reduction in precipitation. The afforestation processes can produce the opposite effects, manifested in decreased air temperature and increased precipitation. Whereas the change in air temperature is observed mainly inside the model region, the changes in precipitation are evident across the entire East European Plain, even in regions situated far from the external boundaries of the model region.

  2. Recent Results From MINERvA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patrick, Cheryl

    The MINERvA detector is situated in Fermilab's NuMI beam, which provides neutrinos and antineutrinos in the 1-20 GeV range. It is designed to make precision cross-section measurements for scattering processes on various nuclei. These proceedings summarize the differential cross-section distributions measured for several different processes. Comparison of these with various models hints at additional nuclear effects not included in common simulations. These results will help constrain generators' nuclear models and reduce systematic uncertainties on their predictions. An accurate cross-section model, with minimal uncertainties, is vital to oscillation experiments.

  3. Analysis of the lightning-attractive radius for wind turbines considering the developing process of positive attachment leader

    NASA Astrophysics Data System (ADS)

    Yang, Ning; Zhang, Qilin; Hou, Wenhao; Wen, Ying

    2017-03-01

    In this paper, we present an upward leader propagation model that accounts for the streamer-to-leader transition process using the finite element method, and we analyze the inception and subsequent physical processes of the upward leader and the attractive radius for large wind turbines. To validate the model, we compared simulated results with optical high-speed video observations of an upward leader from a 163 m tall tower presented by Warner (2010): the simulated upward leader velocity and length before the final jump are 2.3 × 10⁵ m/s and 187.67 m, very similar to the observed values of 2.8 × 10⁵ m/s and 184 m, respectively. At the same time, we find that the commonly assumed constant speed ratio of the downward to the upward leader is improper and cannot accurately predict the attractive radius of a lightning strike. The simulated results were also compared with the widely used electrogeometric model (EGM), and it was found that the EGM underestimates the attractive radius by more than 50%.
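
    For context, EGM-type analyses relate striking distance to the prospective return-stroke peak current through a power law; a commonly quoted form (Love's equation; not necessarily the exact variant used in this paper) is:

    ```latex
    % Love-type EGM striking-distance law: r_s in metres, I in kiloamperes.
    r_s = 10\, I^{0.65}
    ```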

  4. Development and testing of a fast conceptual river water quality model.

    PubMed

    Keupers, Ingrid; Willems, Patrick

    2017-04-15

    Modern, model-based river quality management relies strongly on river water quality models to simulate the temporal and spatial evolution of pollutant concentrations in the water body. Such models are typically constructed by extending detailed hydrodynamic models with a component describing the advection-diffusion and water quality transformation processes in a detailed, physically based way. This approach is too demanding of computational time, especially when simulating the long periods needed for statistical analysis of the results or when model sensitivity analysis, calibration and validation require a large number of model runs. To overcome this problem, a structure identification method to set up a conceptual river water quality model has been developed. Instead of calculating the water quality concentrations at each water level and discharge node, the river branch is divided into conceptual reservoirs based on user information such as locations of interest and boundary inputs. These reservoirs are modelled as a Plug Flow Reactor (PFR) and Continuously Stirred Tank Reactors (CSTRs) to describe advection and diffusion processes. The same water quality transformation processes as in the detailed models are considered, but with adjusted residence times based on the hydrodynamic simulation results and calibrated to the detailed water quality simulation results. The developed approach allows for a much faster calculation time (a factor of 10⁵) without significant loss of accuracy, making it feasible to perform time-demanding scenario runs. Copyright © 2017 Elsevier Ltd. All rights reserved.
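
    A minimal sketch of the PFR-plus-CSTR idea follows (reservoir sizes, flows and the decay rate are invented; the paper's calibration step is not reproduced): the PFR becomes a pure transport delay and each CSTR a well-mixed tank with first-order decay.

    ```python
    import numpy as np

    def simulate(c_in, q, v_cstr, n_cstr, delay_steps, k, dt):
        """Route an upstream concentration series through a transport delay
        (the PFR) and a cascade of CSTRs with first-order decay."""
        c = np.zeros(n_cstr)                  # concentration in each tank
        buf = np.zeros(delay_steps)           # PFR modelled as a delay line
        out = np.zeros_like(c_in)
        for t, cin in enumerate(c_in):
            buf = np.roll(buf, 1)
            buf[0] = cin
            upstream = buf[-1]
            for i in range(n_cstr):
                inflow = upstream if i == 0 else c[i - 1]
                c[i] += dt * (q * (inflow - c[i]) / v_cstr - k * c[i])
            out[t] = c[-1]
        return out

    hours = np.arange(0.0, 96.0, 0.25)
    c_in = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24)   # diurnal pollutant signal
    c_out = simulate(c_in, q=3600.0, v_cstr=50_000.0, n_cstr=3,
                     delay_steps=8, k=0.05, dt=0.25)
    print(f"inflow peak {c_in.max():.2f} -> outflow peak {c_out.max():.2f}")
    ```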

  5. Personalized Offline and Pseudo-Online BCI Models to Detect Pedaling Intent

    PubMed Central

    Rodríguez-Ugarte, Marisol; Iáñez, Eduardo; Ortíz, Mario; Azorín, Jose M.

    2017-01-01

    The aim of this work was to design a personalized BCI model to detect pedaling intention through EEG signals. The approach sought to select the best among many possible BCI models for each subject. The choice was between different processing windows, feature extraction algorithms and electrode configurations. Moreover, data was analyzed offline and pseudo-online (in a way suitable for real-time applications), with a preference for the latter case. A process for selecting the best BCI model was described in detail. Results for the pseudo-online processing with the best BCI model of each subject were on average 76.7% of true positive rate, 4.94 false positives per minute and 55.1% of accuracy. The personalized BCI model approach was also found to be significantly advantageous when compared to the typical approach of using a fixed feature extraction algorithm and electrode configuration. The resulting approach could be used to more robustly interface with lower limb exoskeletons in the context of the rehabilitation of stroke patients. PMID:28744212

  6. An Observation Analysis Tool for time-series analysis and sensor management in the FREEWAT GIS environment for water resources management

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo

    2017-04-01

    In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increased model complexity. In order to make use of the increasing volume of data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool (OAT) for time-series analysis. The tool is used to import time-series data into QGIS from local CSV files, online sensors using the istSOS service, or MODFLOW model result files, and it enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, integrating the creation of calibration observations directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated in the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.

  7. Direct Air Capture of CO2 with an Amine Resin: A Molecular Modeling Study of the CO2 Capturing Process

    PubMed Central

    2017-01-01

    Several reactions, known from other amine systems for CO2 capture, have been proposed for Lewatit R VP OC 1065. The aim of this molecular modeling study is to elucidate the CO2 capture process: the physisorption process prior to the CO2 capture, and the reactions. Molecular modeling shows that the resin has a structure with benzylamine groups on alternating positions in close vicinity of each other. Based on this structure, the preferred adsorption mode of CO2 and H2O was established. Next, using standard Density Functional Theory, two catalytic reactions responsible for the actual CO2 capture were identified: direct amine and amine-H2O catalyzed formation of carbamic acid. The latter is a new type of catalysis. Other reactions are unlikely. Quantitative verification of the molecular modeling results against known experimental CO2 adsorption isotherms, applying a dual-site Langmuir adsorption isotherm model, further supports all results of this molecular modeling study. PMID:29142339
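
    The dual-site Langmuir isotherm used in that verification step has the standard form below (symbols as annotated; the fitted parameter values are not reproduced here):

    ```latex
    % Dual-site Langmuir isotherm: q = loading, P = CO2 partial pressure,
    % q_1, q_2 = site capacities, b_1, b_2 = site affinities.
    q(P) = \frac{q_1 b_1 P}{1 + b_1 P} + \frac{q_2 b_2 P}{1 + b_2 P}
    ```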

  8. Personalized Offline and Pseudo-Online BCI Models to Detect Pedaling Intent.

    PubMed

    Rodríguez-Ugarte, Marisol; Iáñez, Eduardo; Ortíz, Mario; Azorín, Jose M

    2017-01-01

    The aim of this work was to design a personalized BCI model to detect pedaling intention through EEG signals. The approach sought to select the best among many possible BCI models for each subject. The choice was between different processing windows, feature extraction algorithms and electrode configurations. Moreover, data was analyzed offline and pseudo-online (in a way suitable for real-time applications), with a preference for the latter case. A process for selecting the best BCI model was described in detail. Results for the pseudo-online processing with the best BCI model of each subject were on average 76.7% of true positive rate, 4.94 false positives per minute and 55.1% of accuracy. The personalized BCI model approach was also found to be significantly advantageous when compared to the typical approach of using a fixed feature extraction algorithm and electrode configuration. The resulting approach could be used to more robustly interface with lower limb exoskeletons in the context of the rehabilitation of stroke patients.

  9. Drift diffusion model of reward and punishment learning in schizophrenia: Modeling and experimental data.

    PubMed

    Moustafa, Ahmed A; Kéri, Szabolcs; Somlai, Zsuzsanna; Balsdon, Tarryn; Frydecka, Dorota; Misiak, Blazej; White, Corey

    2015-09-15

    In this study, we tested reward- and punishment-learning performance using a probabilistic classification learning task in patients with schizophrenia (n=37) and healthy controls (n=48). We also fit subjects' data using a Drift Diffusion Model (DDM) of simple decisions to investigate which components of the decision process differ between patients and controls. Modeling results show between-group differences in multiple components of the decision process. Specifically, patients had slower motor/encoding time, higher response caution (favoring accuracy over speed), and a deficit in classification learning for punishment, but not reward, trials. The results suggest that patients with schizophrenia adopt a compensatory strategy of favoring accuracy over speed to improve performance, yet still show signs of a deficit in learning based on negative feedback. Our data highlight the importance of fitting models (particularly drift diffusion models) to behavioral data. The implications of these findings are discussed relative to theories of schizophrenia and cognitive processing. Copyright © 2015 Elsevier B.V. All rights reserved.
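
    For readers unfamiliar with the DDM, the sketch below simulates trials and reproduces the qualitative pattern reported (slower non-decision time and wider boundaries for patients); all parameter values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ddm_trial(drift, boundary, ndt, dt=0.001, noise=1.0):
        """One drift diffusion trial: evidence starts at 0 and drifts to
        +/- boundary; ndt is the non-decision (motor/encoding) time.
        Returns (reaction time, correct?)."""
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return t + ndt, x > 0

    # Controls vs. patients: patients get a slower ndt and a wider boundary
    # (higher response caution), mirroring the paper's qualitative finding.
    for label, boundary, ndt in [("controls", 1.0, 0.30), ("patients", 1.4, 0.45)]:
        trials = [ddm_trial(drift=1.5, boundary=boundary, ndt=ndt)
                  for _ in range(500)]
        rts = np.array([rt for rt, _ in trials])
        acc = np.mean([ok for _, ok in trials])
        print(f"{label}: mean RT {rts.mean():.2f}s, accuracy {acc:.2%}")
    ```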

  10. Development and evaluation of spatial point process models for epidermal nerve fibers.

    PubMed

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
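
    The contrast between the two models can be sketched directly (intensities, cluster sizes and fiber ranges below are invented): in the null model, end points are an independent Poisson pattern; in the cluster model, each base point sprouts its own offspring end points.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    area = 1.0                                   # unit square of skin

    # Base points: Poisson in both models.
    n_base = rng.poisson(50 * area)
    base = rng.uniform(0, 1, size=(n_base, 2))

    # (a) Null model: end points are an independent uniform Poisson pattern.
    end_null = rng.uniform(0, 1, size=(rng.poisson(150 * area), 2))

    # (b) Cluster model: each base sprouts Poisson-many nearby end points.
    ends_per_base = rng.poisson(3.0, size=n_base)
    end_clustered = np.vstack([
        b + rng.normal(0, 0.02, size=(k, 2))     # short fiber range
        for b, k in zip(base, ends_per_base) if k > 0
    ])

    # Summary observable: mean distance from each end point to nearest base.
    for name, pts in [("null", end_null), ("clustered", end_clustered)]:
        d = np.linalg.norm(pts[:, None, :] - base[None, :, :], axis=2).min(axis=1)
        print(f"{name}: {len(pts)} end points, mean nearest-base dist {d.mean():.3f}")
    ```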

  11. Changing ecophysiological processes and carbon budget in East Asian ecosystems under near-future changes in climate: implications for long-term monitoring from a process-based model.

    PubMed

    Ito, Akihiko

    2010-07-01

    Using a process-based model, I assessed how ecophysiological processes would respond to near-future global changes predicted by coupled atmosphere-ocean climate models. An ecosystem model, Vegetation Integrative SImulator for Trace gases (VISIT), was applied to four sites in East Asia (different types of forest in Takayama, Tomakomai, and Fujiyoshida, Japan, and an Alpine grassland in Qinghai, China) where observational flux data are available for model calibration. The climate models predicted +1-3 degrees C warming and slight change in annual precipitation by 2050 as a result of an increase in atmospheric CO2. Gross primary production (GPP) was estimated to increase substantially at each site because of improved efficiency in the use of water and radiation. Although increased respiration partly offset the GPP increase, the simulation showed that these ecosystems would act as net carbon sinks independent of disturbance-induced uptake for recovery. However, the carbon budget response relied strongly on nitrogen availability, such that photosynthetic down-regulation resulting from leaf nitrogen dilution largely decreased GPP. In relation to long-term monitoring, these results indicate that the impacts of global warming may be more evident in gross fluxes (e.g., photosynthesis and respiration) than in the net CO2 budget, because changes in these fluxes offset each other.

  12. Experimental rill erosion research vs. model concepts - quantification of the hydraulic and erosional efficiency of rills

    NASA Astrophysics Data System (ADS)

    Wirtz, Stefan

    2014-05-01

    In soil erosion research, rills are believed to be one of the most efficient forms. They act as preferential flow paths for overland flow and hence become the most efficient sediment sources in a catchment. However, their fraction of the overall detachment in a certain area compared to other soil erosion processes is contentious. A prerequisite for addressing this subject is the standardization of the measurement methods used for rill erosion quantification. Only with a standardized method do the results of different studies become comparable and can they be synthesized into one overall statement. In rill erosion research, such a standardized field method was missing until now. Hence, the first aim of this study is to present an experimental setup that enables us to obtain comparable data about process dynamics in eroding rills under standardized conditions in the field. Using this rill experiment, the runoff efficiency of rills (second aim) and the fraction of rill erosion in total soil loss (third aim) in a catchment are quantified. The erosion rate [g m-2] in the rills is between twenty and sixty times higher than on the interrill areas, and the specific discharge [L s-1 m-2] in the rills is about 2000 times higher. The identification and quantification of different rill erosion processes are the fourth aim of this project. Gravitative processes like side wall failure and headcut and knickpoint retreat provide up to 94% of the detached sediment quantity. In soil erosion models, only the incision into the rill's bottom is considered; hence the modelled results are unsatisfactory. Given the low quality of soil erosion model results, the fifth aim of the study is to review two basic physical assumptions using the rill experiments. In contrast to the model assumptions, there is no clear linear correlation between any hydraulic parameter and the detachment rate, and the transport rate is capable of exceeding the transport capacity. In conclusion, the results clearly show the need for experimental field data obtained under conditions as close as possible to reality. This is the only way to improve the fundamental knowledge about the function and the impact of the different processes in rill erosion. A better understanding of the process combinations is a fundamental prerequisite for developing a truly functional soil erosion model. In such a model, spatial and temporal variability as well as the combination of different sub-processes must be considered. Regarding the experimental results of this study, the simulation of natural processes using simple, static mathematical equations does not seem possible.

  13. Oligomer formation in the troposphere: from experimental knowledge to 3-D modeling

    NASA Astrophysics Data System (ADS)

    Lemaire, V.; Coll, I.; Couvidat, F.; Mouchel-Vallon, C.; Seigneur, C.; Siour, G.

    2015-10-01

    The organic fraction of atmospheric aerosols has proven to be a critical element of air quality and climate issues. However, its composition and the aging processes it undergoes remain insufficiently understood. This work builds on laboratory knowledge to simulate the formation of oligomers from biogenic secondary organic aerosol (BSOA) in the troposphere at the continental scale. We compare the results of two different modeling approaches, a 1st-order kinetic process and a pH-dependent parameterization, both implemented in the CHIMERE air quality model (AQM), to simulate the spatial and temporal distribution of oligomerized SOA over western Europe. Our results show that there is a strong dependence of the results on the selected modeling approach: while the irreversible kinetic process leads to the oligomerization of about 50 % of the total BSOA mass, the pH-dependent approach shows a broader range of impacts, with a strong dependency on environmental parameters (pH and nature of aerosol) and the possibility for the process to be reversible. In parallel, we investigated the sensitivity of each modeling approach to the representation of SOA precursor solubility (Henry's law constant values). Finally, the pros and cons of each approach for the representation of SOA aging are discussed and recommendations are provided to improve current representations of oligomer formation in AQMs.
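
    The first of the two approaches can be written compactly; in the notation below (ours, not the paper's), oligomer mass grows irreversibly from the BSOA pool at a first-order rate:

    ```latex
    % Irreversible first-order oligomerization of BSOA mass; P_BSOA is the
    % BSOA production rate and k is a rate constant (not taken from the paper).
    \frac{d[\mathrm{Olig}]}{dt} = k\,[\mathrm{BSOA}], \qquad
    \frac{d[\mathrm{BSOA}]}{dt} = P_{\mathrm{BSOA}} - k\,[\mathrm{BSOA}]
    ```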

  14. A Study of Cavitation-Ignition Bubble Combustion

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Jacqmin, David A.

    2005-01-01

    We present the results of an experimental and computational study of the physics and chemistry of cavitation-ignition bubble combustion (CIBC), a process that occurs when combustible gaseous mixtures are ignited by the high temperatures found inside a rapidly collapsing bubble. The CIBC process was modeled using a time-dependent compressible fluid-dynamics code that includes finite-rate chemistry. The model predicts that gas-phase reactions within the bubble produce CO and other gaseous by-products of combustion. In addition, heat and mechanical energy release through a bubble volume-expansion phase are also predicted by the model. We experimentally demonstrate the CIBC process using an ultrasonically excited cavitation flow reactor with various hydrocarbon-air mixtures in liquid water. Low concentrations (< 160 ppm) of carbon monoxide (CO) emissions from the ultrasonic reactor were measured, and found to be proportional to the acoustic excitation power. The results of the model were consistent with the measured experimental results. Based on the experimental findings, the computational model, and previous reports of the "micro-diesel effect" in industrial hydraulic systems, we conclude that CIBC is indeed possible and exists in ultrasonically- and hydrodynamically-induced cavitation. Finally, estimates of the utility of CIBC process as a means of powering an idealized heat engine are also presented.

  15. Magmatism in Lithosphere Delamination process inferred from numerical models

    NASA Astrophysics Data System (ADS)

    Göǧüş, Oǧuz H.; Ueda, Kosuke; Gerya, Taras

    2017-04-01

    The peeling away of an oceanic/continental slab from the overlying orogenic crust has been suggested as a ubiquitous process in the Alpine-Mediterranean orogenic region (e.g. Carpathians, Apennines, Betics and Anatolia). The process is defined as lithospheric delamination, where slab removal/peel-back may allow the gradual uprising of sub-lithospheric mantle, resulting in high heat flow, transient surface uplift/subsidence and varying types of magma production. Geodynamical modeling studies have addressed the surface response to delamination in the context of regional tectonic processes and explored a wide range of controlling parameters in pre-, syn- and post-collisional stages. However, the amount and styles of melt production in the mantle (e.g. decompression melting, wet melting in the wedge) and the resulting magmatism due to lithosphere delamination remain uncertain. In this work, using thermomechanical numerical experiments designed in the configuration of subduction to collision, we investigated how melting in the mantle develops in the course of delamination. Furthermore, the model results are used to decipher the distribution of volumetric melt production, melt extraction, the source of melt and the style of magmatism (e.g. igneous vs. volcanic). The model results suggest that a broad region of decompression melting occurs under the crust, mixing with the melting of the hydrated mantle derived from the delaminating/subducting slab. Depending on the age of the oceanic slab, the plate convergence velocity and the mantle temperature, the melt production and crustal magmatism may concentrate under the mantle wedge or on the far side of the delamination front (where the subduction begins). Slab break-off usually occurs in the terminal stages of the delamination process and may effectively control the location of the magmatism in the crust. The model results are reconciled with the temporal and spatial distribution of orogenic vs. anorogenic magmatism in the Mediterranean region, in which the latter may have developed due to the delamination process.

  16. A neural model of figure-ground organization.

    PubMed

    Craft, Edward; Schütze, Hartmut; Niebur, Ernst; von der Heydt, Rüdiger

    2007-06-01

    Psychophysical studies suggest that figure-ground organization is a largely autonomous process that guides--and thus precedes--allocation of attention and object recognition. The discovery of border-ownership representation in single neurons of early visual cortex has confirmed this view. Recent theoretical studies have demonstrated that border-ownership assignment can be modeled as a process of self-organization by lateral interactions within V2 cortex. However, the mechanism proposed relies on propagation of signals through horizontal fibers, which would result in increasing delays of the border-ownership signal with increasing size of the visual stimulus, in contradiction with experimental findings. It also remains unclear how the resulting border-ownership representation would interact with attention mechanisms to guide further processing. Here we present a model of border-ownership coding based on dedicated neural circuits for contour grouping that produce border-ownership assignment and also provide handles for mechanisms of selective attention. The results are consistent with neurophysiological and psychophysical findings. The model makes predictions about the hypothetical grouping circuits and the role of feedback between cortical areas.

  17. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    NASA Astrophysics Data System (ADS)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

    A finite element model of the powder layer in selective laser melting (SLM) is established that accounts for volume shrinkage during the powder-to-dense transition. Comparisons are made between models that do and do not consider volume shrinkage or the powder-to-dense process. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is effective and more accurately captures the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and likewise with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM.
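
    Because the reliability question above centers on the simple heuristic E_l = P/v, a minimal sketch of the point (with invented parameter values, not the paper's) is:

```python
# Linear energy density E_l = P / v (J/mm), a common SLM design heuristic.
# The (power, speed) pairs below are illustrative values only.
pairs = [(100, 200), (200, 400), (300, 600)]  # (laser power W, scan speed mm/s)

for power, speed in pairs:
    e_l = power / speed  # J/mm
    print(f"P = {power} W, v = {speed} mm/s -> E_l = {e_l:.2f} J/mm")

# All three settings share E_l = 0.50 J/mm, yet the study's simulations and
# experiments indicate they need not yield the same molten-pool temperature,
# length, and depth -- the sense in which E_l alone is not always reliable.
```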

  18. An improvement in the calculation of the efficiency of oxidative phosphorylation and rate of energy dissipation in mitochondria

    NASA Astrophysics Data System (ADS)

    Ghafuri, Mohazabeh; Golfar, Bahareh; Nosrati, Mohsen; Hoseinkhani, Saman

    2014-12-01

    The production of ATP is one of the most vital processes in living cells and occurs with high efficiency. Thermodynamic evaluation of this process and of the factors involved in oxidative phosphorylation can provide a valuable guide for increasing energy production efficiency in research and industry. Although energy transduction has been studied qualitatively in several studies, there are only a few brief, mathematically based treatments of the subject. In our previous work, we suggested a mathematical model for ATP production based on non-equilibrium thermodynamic principles. In the present study, drawing on new findings on the respiratory chain of animal mitochondria, Golfar's model is used to generate improved estimates of the efficiency of oxidative phosphorylation and the rate of energy loss. The results calculated from the modified coefficients for the proton pumps of the respiratory chain enzymes are closer to the experimental results and validate the model.

  19. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  20. Multiple data sets and modelling choices in a comparative LCA of disposable beverage cups.

    PubMed

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2014-10-01

    This study used multiple data sets and modelling choices in an environmental life cycle assessment (LCA) to compare typical disposable beverage cups made from polystyrene (PS), polylactic acid (PLA; bioplastic) and paper lined with bioplastic (biopaper). Incineration and recycling were considered as waste processing options, and for the PLA and biopaper cup also composting and anaerobic digestion. Multiple data sets and modelling choices were systematically used to calculate average results and the spread in results for each disposable cup in eleven impact categories. The LCA results of all combinations of data sets and modelling choices consistently identify three processes that dominate the environmental impact: (1) production of the cup's basic material (PS, PLA, biopaper), (2) cup manufacturing, and (3) waste processing. The large spread in results for impact categories strongly overlaps among the cups, however, and therefore does not allow a preference for one type of cup material. Comparison of the individual waste treatment options suggests some cautious preferences. The average waste treatment results indicate that recycling is the preferred option for PLA cups, followed by anaerobic digestion and incineration. Recycling is slightly preferred over incineration for the biopaper cups. There is no preferred waste treatment option for the PS cups. Taking into account the spread in waste treatment results for all cups, however, none of these preferences for waste processing options can be justified. The only exception is composting, which is least preferred for both PLA and biopaper cups. Our study illustrates that using multiple data sets and modelling choices can lead to considerable spread in LCA results. This makes comparing products more complex, but the outcomes more robust. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Prediction of hemoglobin in blood donors using a latent class mixed-effects transition model.

    PubMed

    Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Rizopoulos, Dimitris; Lesaffre, Emmanuel

    2016-02-20

    Blood donors experience a temporary reduction in their hemoglobin (Hb) value after donation. At each visit, the Hb value is measured, and a too-low Hb value leads to deferral from donation. Because of the recovery process after each donation, as well as state dependence and unobserved heterogeneity, longitudinal Hb data from blood donors pose unique statistical challenges. To estimate the shape and duration of the recovery process and to predict future Hb values, we employed three models for the Hb value: (i) a mixed-effects model; (ii) a latent-class mixed-effects model; and (iii) a latent-class mixed-effects transition model. In each model, a flexible function was used to model the recovery process after donation. The latent classes identify groups of donors with fast or slow recovery times and donors whose recovery time increases with the number of donations. The transition effect accounts for possible state dependence in the observed data. All models were estimated in a Bayesian way, using data on new entrant donors from the Donor InSight study. Informative priors, based on results from the clinical literature, were used for parameters of the recovery process that were not identified by the observed data. The results show that the latent-class mixed-effects transition model fits the data best, which illustrates the importance of modeling state dependence, unobserved heterogeneity, and the recovery process after donation. The estimated recovery time is much longer than the current minimum interval between donations, suggesting that an increase of this interval may be warranted. Copyright © 2015 John Wiley & Sons, Ltd.
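
    To make the model structure concrete, the sketch below simulates Hb series with a donor-specific random intercept (mixed effect), a flexible post-donation recovery curve, and a transition term carrying over the previous deviation. The functional form and all numbers are illustrative assumptions, not the fitted Donor InSight model.

```python
import numpy as np

rng = np.random.default_rng(0)

def recovery(days, drop=8.0, tau=30.0):
    # Flexible recovery curve: an immediate post-donation drop in Hb (g/L)
    # that decays away with timescale tau (days). Illustrative form only.
    return -drop * np.exp(-days / tau)

n_donors, n_visits = 3, 6
for donor in range(n_donors):
    baseline = rng.normal(150.0, 5.0)    # donor-specific random intercept
    gamma = 0.3                          # transition (state-dependence) weight
    prev_dev = 0.0                       # previous deviation from baseline
    days_since = 90.0                    # days since the last donation
    for visit in range(n_visits):
        hb = baseline + recovery(days_since) + gamma * prev_dev + rng.normal(0, 3)
        prev_dev = hb - baseline
        days_since = rng.uniform(56, 120)  # interval to the next donation
        print(f"donor {donor}, visit {visit}: Hb = {hb:.1f} g/L")
```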

  2. Process-based, morphodynamic hindcast of decadal deposition patterns in San Pablo Bay, California, 1856-1887

    USGS Publications Warehouse

    van der Wegen, M.; Jaffe, B.E.; Roelvink, J.A.

    2011-01-01

    This study investigates the possibility of hindcasting observed decadal-scale morphologic change in San Pablo Bay, a subembayment of the San Francisco Estuary, California, USA, by means of a 3-D numerical model (Delft3D). The hindcast period, 1856-1887, is characterized by upstream hydraulic mining that resulted in a high sediment input to the estuary. The model includes wind waves, salt water and fresh water interactions, and graded sediment transport, among other processes. Simplified initial conditions and hydrodynamic forcing were necessary because detailed historic descriptions were lacking. Model results show significant skill. The river discharge and sediment concentration have a strong positive influence on deposition volumes. Waves decrease deposition rates and have, together with tidal movement, the greatest effect on sediment distribution within San Pablo Bay. The applied process-based (or reductionist) modeling approach is valuable once reasonable values for model parameters and hydrodynamic forcing are obtained. Sensitivity analysis reveals the dominant forcing of the system and suggests that the model planform plays a dominant role in the morphodynamic development. A detailed physical explanation of the model outcomes is difficult because of the high nonlinearity of the processes. Process formulation refinement, a more detailed description of the forcing, or further model parameter variations may lead to enhanced model performance, albeit to a limited extent. The approach potentially provides a sound basis for prediction of future developments. Parallel use of highly schematized box models and a process-based approach as described in the present work is probably the most valuable method to assess decadal morphodynamic development. Copyright © 2011 by the American Geophysical Union.

  3. Application of bayesian networks to real-time flood risk estimation

    NASA Astrophysics Data System (ADS)

    Garrote, L.; Molina, M.; Blasco, G.

    2003-04-01

    This paper presents the application of a computational paradigm taken from the field of artificial intelligence - the Bayesian network - to model the behaviour of hydrologic basins during floods. The final goal of this research is to develop representation techniques for hydrologic simulation models in order to define, develop and validate a mechanism, supported by a software environment, oriented to building decision models for the prediction and management of river floods in real time. The emphasis is placed on providing decision makers with tools to incorporate their knowledge of basin behaviour, usually formulated in terms of rainfall-runoff models, into the process of real-time decision making during floods. A rainfall-runoff model is only one step in the process of decision making. If a reliable rainfall forecast is available and the rainfall-runoff model is well calibrated, decisions can be based mainly on model results. However, in most practical situations, uncertainties in rainfall forecasts or model performance have to be incorporated into the decision process. The computational paradigm adopted for the simulation of hydrologic processes is the Bayesian network: a directed acyclic graph that represents causal influences between linked variables. Under this representation, uncertain qualitative variables are related through causal relations quantified with conditional probabilities. The solution algorithm allows the computation of the expected probability distribution of unknown variables conditioned on the observations. An approach to representing hydrologic processes by Bayesian networks with temporal and spatial extensions is presented in this paper, together with a methodology for the development of Bayesian models using results produced by deterministic hydrologic simulation models.
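
    As a toy illustration of the inference step such a network supports, the sketch below propagates a rainfall forecast through a two-link causal chain (rain -> runoff -> flood) by enumeration. All probabilities are invented for illustration and have no connection to the paper's models.

```python
# A minimal Bayesian-network chain: marginalize over the unobserved runoff
# state to get the flood probability implied by an uncertain rain forecast.
p_heavy_rain = 0.3

# Conditional probability tables (invented numbers)
p_high_runoff = {"heavy": 0.8, "light": 0.1}   # P(high runoff | rain)
p_flood = {"high": 0.7, "low": 0.05}           # P(flood | runoff)

# Enumeration inference: sum over all hidden-state combinations
p_flood_total = 0.0
for rain, p_r in [("heavy", p_heavy_rain), ("light", 1 - p_heavy_rain)]:
    for runoff, p_q in [("high", p_high_runoff[rain]),
                        ("low", 1 - p_high_runoff[rain])]:
        p_flood_total += p_r * p_q * p_flood[runoff]

print(f"P(flood) = {p_flood_total:.3f}")
```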

  4. Exploring the Carbon Simmering Phase: Reaction Rates, Mixing, and the Convective Urca Process

    NASA Astrophysics Data System (ADS)

    Schwab, Josiah; Martínez-Rodríguez, Héctor; Piro, Anthony L.; Badenes, Carles

    2017-12-01

    The neutron excess at the time of explosion provides a powerful discriminant among models of Type Ia supernovae. Recent calculations of the carbon simmering phase in single degenerate progenitors have disagreed about the final neutron excess. We find that the treatment of mixing in convection zones likely contributes to the difference. We demonstrate that in Modules for Experiments in Stellar Astrophysics models, heating from exothermic weak reactions plays a significant role in raising the temperature of the white dwarf. This emphasizes the important role that the convective Urca process plays during simmering. We briefly summarize the shortcomings of current models during this phase. Ultimately, we do not pinpoint the difference between the results reported in the literature, but show that the results are consistent with different net energetics of the convective Urca process. This problem serves as an important motivation for the development of models of the convective Urca process suitable for incorporation into stellar evolution codes.

  5. Exposing earth surface process model simulations to a large audience

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and has therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform for reaching audiences outside the science community is museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe linked with computers and multiple projectors that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, covering hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with a focus on hurricane Katrina and superstorm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least two model datasets a year, and will soon provide displays of global river sediment fluxes and changes in the sea-ice-free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Dataset storyboards and teacher follow-up materials associated with the simulations are developed to address Common Core K-12 science standards. CSDMS dataset documentation aims to make people aware that they are looking at numerical model results, that the underlying models have inherent assumptions and simplifications, and that their limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical modeling as a tool for understanding environmental processes.

  6. Modeling of AA5083 Material-Microstructure Evolution During Butt Friction-Stir Welding

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Yalavarthy, H. V.; He, T.; Yen, C.-F.; Cheeseman, B. A.

    2010-07-01

    A concise yet fairly comprehensive overview of the friction stir welding (FSW) process is provided. This is followed by a computational investigation in which the FSW behavior of a prototypical solution-strengthened and strain-hardened aluminum alloy, AA5083-H131, is modeled using a fully coupled thermo-mechanical finite-element procedure developed in our prior study. Particular attention is given to proper modeling of the work-piece material behavior during the FSW process. Specifically, competition and interactions between plastic-deformation and dynamic-recrystallization processes are considered to properly account for the material-microstructure evolution in the weld nugget zone. The results show that with proper modeling of the material behavior under high-temperature/severe-plastic-deformation conditions, significantly improved agreement can be attained between the computed and measured post-FSW residual-stress and material-strength distributions.

  7. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for vertical mixing in the ocean occurs at scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In the oceanic general circulation models typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly resolve the small-scale mixing processes and must therefore parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and practical to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes and is, in fact, one of the less well understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We examine the difficulties in choosing an appropriate vertical mixing parameterization and the methods available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described. We then discuss the role of vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.
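
    For concreteness, the sketch below shows the simplest common form such a parameterization takes in a coarse-vertical-resolution model: turbulent mixing represented as eddy diffusion of temperature with a prescribed diffusivity profile. The grid, diffusivities, and initial profile are illustrative assumptions, not taken from any particular model.

```python
import numpy as np

# Vertical mixing as eddy diffusion, dT/dt = d/dz (K dT/dz), integrated
# explicitly on a coarse grid of the kind used in ocean GCMs.
nz, dz, dt = 20, 50.0, 3600.0            # 20 levels, 50 m spacing, 1 h step
K = np.full(nz - 1, 1e-4)                # background diffusivity (m^2/s)
K[:4] = 1e-2                             # enhanced mixing near the surface

z = np.arange(nz) * dz
T = 4.0 + 16.0 * np.exp(-z / 300.0)      # warm surface, cold deep water (C)

assert (K * dt / dz**2).max() < 0.5      # explicit-scheme stability limit
for _ in range(24 * 30):                 # integrate one month
    F = K * np.diff(T) / dz              # diffusive flux at level interfaces
    dTdt = np.zeros(nz)
    dTdt[:-1] += F / dz                  # flux divergence warms/cools levels
    dTdt[1:] -= F / dz
    T += dt * dTdt

print("surface T after mixing:", round(T[0], 2), "C")
```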

  8. A Bayesian model for visual space perception

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1972-01-01

    A model for visual space perception is proposed that contains desirable features found in the theories of Gibson and Brunswik. The model is a Bayesian processor of proximal stimuli that contains three important elements: an internal model of the Markov process describing the knowledge of the distal world, the a priori distribution of the state of the Markov process, and an internal model relating state to proximal stimuli. The universality of the model is discussed and it is compared with signal detection theory models. Experimental results of Kinchla are used as a special case.

  9. Modeling interdependencies between business and communication processes in hospitals.

    PubMed

    Brigl, Birgit; Wendt, Thomas; Winter, Alfred

    2003-01-01

    The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, no tools are available that specialize in modeling information-driven business processes and their consequences for the communication between information processing tools. We therefore present an approach that facilitates the representation and analysis of business processes and the resulting communication processes between application components, together with their interdependencies. This approach aims not only to visualize those processes, but also to evaluate whether there are weaknesses in the information processing infrastructure that hinder the smooth implementation of the business processes.

  10. Toward a Model of Human Information Processing for Decision-Making and Skill Acquisition in Laparoscopic Colorectal Surgery.

    PubMed

    White, Eoin J; McMahon, Muireann; Walsh, Michael T; Coffey, J Calvin; O Sullivan, Leonard

    To create a human information-processing model for laparoscopic surgery, based on established literature and primary research, to enhance laparoscopic surgical education in this context. We reviewed the literature for the information-processing models most relevant to laparoscopic surgery. Our review highlighted the necessity for a model that accounts for dynamic environments, perception, allocation of attention resources between the actions of both hands of an operator, and skill acquisition and retention. The results of the literature review were augmented through intraoperative observations of 7 colorectal surgical procedures, supported by laparoscopic video analysis of 12 colorectal procedures. The Wickens human information-processing model was selected as the most relevant theoretical model, to which we make adaptations for this specific application. We expanded the perception subsystem of the model to involve all aspects of perception during laparoscopic surgery. We extended the decision-making system to include dynamic decision-making to account for case/patient-specific and surgeon-specific deviations. The response subsystem now includes dual-task performance and nontechnical skills, such as intraoperative communication. The memory subsystem is expanded to include skill acquisition and retention. Surgical decision-making during laparoscopic surgery is the result of a highly complex series of processes influenced not only by the operator's knowledge, but also by patient anatomy and interaction with the surgical team. Newer developments in simulation-based education must focus on the theoretically supported elements and events that underpin skill acquisition and affect the cognitive abilities of novice surgeons. The proposed human information-processing model builds on established literature regarding information processing, accounting for the dynamic environment of laparoscopic surgery. This revised model may be used as a foundation for a model describing robotic surgery. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  11. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors.

    PubMed

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-08-31

    Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto structural entities. A physical model in geotechnical engineering, which can accurately simulate construction processes and their effects on the stability of underground caverns while satisfying the similarity principles, is an actual physical entity. Using a physical model test of the underground caverns at the Shuangjiangkou Hydropower Station, FBG sensors were used to measure the small displacements of key monitoring points in the large-scale physical model during excavation. In building the test specimen, the most successful approach was to embed the FBG sensors in the physical model by making an opening and adding quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers. The experimental results are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout a physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for in situ engineering construction.

  12. Structural Stability Monitoring of a Physical Model Test on an Underground Cavern Group during Deep Excavations Using FBG Sensors

    PubMed Central

    Li, Yong; Wang, Hanpeng; Zhu, Weishen; Li, Shucai; Liu, Jian

    2015-01-01

    Fiber Bragg Grating (FBG) sensors are widely recognized as structural stability monitoring devices for all kinds of geo-materials, either embedded into or bonded onto structural entities. A physical model in geotechnical engineering, which can accurately simulate construction processes and their effects on the stability of underground caverns while satisfying the similarity principles, is an actual physical entity. Using a physical model test of the underground caverns at the Shuangjiangkou Hydropower Station, FBG sensors were used to measure the small displacements of key monitoring points in the large-scale physical model during excavation. In building the test specimen, the most successful approach was to embed the FBG sensors in the physical model by making an opening and adding quick-set silicone. The experimental results show that the FBG sensor has higher measuring accuracy than conventional sensors such as electrical resistance strain gages and extensometers. The experimental results are also in good agreement with the numerical simulation results. In conclusion, FBG sensors can effectively measure small displacements of monitoring points throughout a physical model test. The experimental results reveal the deformation and failure characteristics of the surrounding rock mass and provide guidance for in situ engineering construction. PMID:26404287

  13. Learning while (re)configuring: Business model innovation processes in established firms.

    PubMed

    Berends, Hans; Smits, Armand; Reymen, Isabelle; Podoynitsyna, Ksenia

    2016-08-01

    This study addresses the question of how established organizations develop new business models over time, using a process research approach to trace how four business model innovation trajectories unfold. With organizational learning as analytical lens, we discern two process patterns: "drifting" starts with an emphasis on experiential learning and shifts later to cognitive search; "leaping," in contrast, starts with an emphasis on cognitive search and shifts later to experiential learning. Both drifting and leaping can result in radical business model innovations, while their occurrence depends on whether a new business model takes off from an existing model and when it goes into operation. We discuss the implications of these findings for theory on business models and organizational learning.

  14. Learning while (re)configuring: Business model innovation processes in established firms

    PubMed Central

    Berends, Hans; Smits, Armand; Reymen, Isabelle; Podoynitsyna, Ksenia

    2016-01-01

    This study addresses the question of how established organizations develop new business models over time, using a process research approach to trace how four business model innovation trajectories unfold. With organizational learning as analytical lens, we discern two process patterns: “drifting” starts with an emphasis on experiential learning and shifts later to cognitive search; “leaping,” in contrast, starts with an emphasis on cognitive search and shifts later to experiential learning. Both drifting and leaping can result in radical business model innovations, while their occurrence depends on whether a new business model takes off from an existing model and when it goes into operation. We discuss the implications of these findings for theory on business models and organizational learning. PMID:28596704

  15. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
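
    The core of such an auto-calibration is a likelihood-based MCMC sampler. The toy sketch below (a plain Metropolis sampler on a one-parameter decay curve, not MCMC-DREAM or INCA-P) reproduces the qualitative finding: thinning daily synthetic observations to a fortnightly schedule widens the posterior credible interval of the calibrated parameter. All data and settings are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, t):
    return 20.0 * np.exp(-k * t) + 5.0            # synthetic "TDP" curve (ug/l)

t_daily = np.arange(0.0, 540.0)                   # ~18 months, daily sampling
obs_daily = model(0.01, t_daily) + rng.normal(0, 1.0, t_daily.size)
t_fort, obs_fort = t_daily[::14], obs_daily[::14] # fortnightly subsample

def posterior_sample(t, obs, n=20000, sigma=1.0):
    # Metropolis sampler with a flat prior on k > 0
    k = 0.02
    ll = -0.5 * np.sum((obs - model(k, t)) ** 2) / sigma**2
    samples = []
    for _ in range(n):
        k_new = k + rng.normal(0, 0.002)          # symmetric random-walk step
        if k_new > 0:
            ll_new = -0.5 * np.sum((obs - model(k_new, t)) ** 2) / sigma**2
            if np.log(rng.uniform()) < ll_new - ll:
                k, ll = k_new, ll_new
        samples.append(k)
    return np.array(samples[n // 2:])             # discard burn-in

for label, (t, obs) in {"daily": (t_daily, obs_daily),
                        "fortnightly": (t_fort, obs_fort)}.items():
    lo, hi = np.percentile(posterior_sample(t, obs), [2.5, 97.5])
    print(f"{label:11s}: 95% credible interval for k = [{lo:.4f}, {hi:.4f}]")
```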

  16. A Point-process Response Model for Spike Trains from Single Neurons in Neural Circuits under Optogenetic Stimulation

    PubMed Central

    Luo, X.; Gee, S.; Sohal, V.; Small, D.

    2015-01-01

    Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high frequency point process (neuronal spikes) while the input is another high frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, Point-process Responses for Optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. PMID:26411923
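
    A minimal sketch of the generative structure such a model assumes: spike intensity is a log-linear function of recent flashes filtered through a response kernel. The basis, weights, bin size, and rates below are invented, and spiking is approximated per-bin as Bernoulli; this is an illustration of the model class, not the fitted PRO model.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, n_bins = 0.001, 5000                      # 1 ms bins, 5 s recording
flashes = rng.random(n_bins) < 0.02           # input point process (flashes)

lags = np.arange(50)                          # 50 ms of flash history
basis = np.exp(-lags / 10.0)                  # one exponential response kernel

beta0, beta1 = np.log(5.0), 1.5               # baseline 5 Hz, flash gain
drive = np.convolve(flashes.astype(float), basis)[:n_bins]  # causal filter
rate = np.exp(beta0 + beta1 * drive).clip(max=900.0)  # keep rate*dt < 1

spikes = rng.random(n_bins) < rate * dt       # output point process (spikes)
ll = (np.sum(np.log(rate[spikes] * dt))       # Bernoulli log-likelihood
      + np.sum(np.log(1 - rate[~spikes] * dt)))
print(f"{spikes.sum()} spikes; log-likelihood {ll:.1f}")
```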

  17. Modeling the defrost process in complex geometries - Part 1: Development of a one-dimensional defrost model

    NASA Astrophysics Data System (ADS)

    van Buren, Simon; Hertle, Ellen; Figueiredo, Patric; Kneer, Reinhold; Rohlfs, Wilko

    2017-11-01

    Frost formation is a common, often undesired phenomenon in heat exchangers such as air coolers. Air coolers therefore have to be defrosted periodically, causing significant energy consumption. For design and optimization, prediction of defrosting by a CFD tool is desired. This paper presents a one-dimensional transient model suitable for use as a zero-dimensional wall function in CFD for modeling the defrost process at the fin and tube interfaces. In accordance with previous work, a multi-stage defrost model is introduced (e.g. [1, 2]). In the first instance, the multi-stage model is implemented and validated using MATLAB. The defrost process of a one-dimensional frost segment is investigated, with fixed boundary conditions at the frost interfaces. The simulation results verify the plausibility of the designed model, and the evaluation of the simulated defrost process shows the expected convergent behavior of the three-stage sequence.

  18. Simulation and prediction of the thuringiensin abiotic degradation processes in aqueous solution by a radial basis function neural network model.

    PubMed

    Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen

    2013-04-01

    The abiotic degradation of thuringiensin in aqueous solution under different conditions, spanning a pH range of 5.0-9.0 and a temperature range of 10-40°C, was systematically investigated with an exponential decay model and a radial basis function (RBF) neural network model. The half-lives of thuringiensin calculated with the exponential decay model ranged from 2.72 d to 16.19 d under the conditions mentioned above. Furthermore, an RBF model with an accuracy of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin is an easily degradable antibiotic, which can be an important factor in the evaluation of its safety. Copyright © 2012 Elsevier Ltd. All rights reserved.
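
    The exponential-decay half of the analysis reduces to fitting C(t) = C0 * exp(-k*t) and converting the rate constant to a half-life. A minimal sketch with invented data points (not the paper's measurements):

```python
import numpy as np

# Fit first-order decay by linear regression on log-concentration,
# then convert the rate constant k to a half-life.
t = np.array([0.0, 2.0, 4.0, 8.0, 16.0])        # days
c = np.array([10.0, 7.8, 6.1, 3.7, 1.4])        # concentration (mg/L)

slope, log_c0 = np.polyfit(t, np.log(c), 1)     # ln C = ln C0 - k t
k = -slope
half_life = np.log(2) / k
print(f"C0 = {np.exp(log_c0):.1f} mg/L, k = {k:.3f} /day, "
      f"half-life = {half_life:.2f} days")
```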

  19. Microphysics, Radiation and Surface Processes in the Goddard Cumulus Ensemble (GCE) Model

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Simpson, J.; Baker, D.; Braun, S.; Chou, M.-D.; Ferrier, B.; Johnson, D.; Khain, A.; Lang, S.; Lynn, B.

    2001-01-01

    The response of cloud systems to their environment is an important link in a chain of processes responsible for monsoons, frontal depression, El Nino Southern Oscillation (ENSO) episodes and other climate variations (e.g., 30-60 day intra-seasonal oscillations). Numerical models of cloud properties provide essential insights into the interactions of clouds with each other, with their surroundings, and with land and ocean surfaces. Significant advances are currently being made in the modeling of rainfall and rain-related cloud processes, ranging in scales from the very small up to the simulation of an extensive population of raining cumulus clouds in a tropical- or midlatitude-storm environment. The Goddard Cumulus Ensemble (GCE) model is a multi-dimensional nonhydrostatic dynamic/microphysical cloud resolving model. It has been used to simulate many different mesoscale convective systems that occurred in various geographic locations. In this paper, recent GCE model improvements (microphysics, radiation and surface processes) will be described as well as their impact on the development of precipitation events from various geographic locations. The performance of these new physical processes will be examined by comparing the model results with observations. In addition, the explicit interactive processes between cloud, radiation and surface processes will be discussed.

  20. A first-principle model of 300 mm Czochralski single-crystal Si production process for predicting crystal radius and crystal growth rate

    NASA Astrophysics Data System (ADS)

    Zheng, Zhongchao; Seto, Tatsuru; Kim, Sanghong; Kano, Manabu; Fujiwara, Toshiyuki; Mizuta, Masahiko; Hasebe, Shinji

    2018-06-01

    The Czochralski (CZ) process is the dominant method for manufacturing large cylindrical single-crystal ingots for the electronics industry. Although many models and control methods for the CZ process have been proposed, they have mostly been tested on small equipment, and only a few industrial applications have been reported. In this research, we constructed a first-principle model for controlling industrial CZ processes that produce 300 mm single-crystal silicon ingots. The developed model, which consists of energy balance, mass balance, hydrodynamic, and geometrical equations, calculates the crystal radius and the crystal growth rate as output variables from the heater input, the crystal pulling rate, and the crucible rise rate as input variables. To improve accuracy, we modeled the CZ process by considering factors such as changes in the positions of the crucible and the melt level. The model was validated with operation data from an industrial 300 mm CZ process. We compared the calculated and actual values of the crystal radius and the crystal growth rate, and the results demonstrated that the developed model simulates the industrial process with high accuracy.
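
    One small geometric/mass-balance ingredient of such a model can be sketched in a few lines: with a melt level that falls as the crystal grows (and neglecting meniscus effects), the pulling rate, crucible rise rate, and crystal/crucible area ratio jointly determine the growth rate. The relation below is a textbook-style simplification with illustrative numbers, not the paper's full energy-balance model.

```python
# Growth rate from the coupled melt-level/pulling mass balance:
# the interface tracks the melt level, so v_g = v_pull - dh/dt, and the
# level falls as melt is consumed: dh/dt = v_cruc - (rho_s/rho_l)(A_x/A_c)v_g.
rho_s, rho_l = 2330.0, 2570.0       # solid / liquid Si density (kg/m^3)
r_crystal, r_crucible = 0.15, 0.40  # crystal and crucible radii (m)
v_pull, v_cruc = 1.0, 0.1           # pulling and crucible rise rates (mm/min)

a_ratio = (r_crystal / r_crucible) ** 2
v_growth = (v_pull - v_cruc) / (1.0 - (rho_s / rho_l) * a_ratio)
print(f"crystal growth rate = {v_growth:.3f} mm/min")  # exceeds v_pull - v_cruc
```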

  1. Videogame Construction by Engineering Students for Understanding Modelling Processes: The Case of Simulating Water Behaviour

    ERIC Educational Resources Information Center

    Pretelín-Ricárdez, Angel; Sacristán, Ana Isabel

    2015-01-01

    We present some results of an ongoing research project where university engineering students were asked to construct videogames involving the use of physical systems models. The objective is to help them identify and understand the elements and concepts involved in the modelling process. That is, we use game design as a constructionist approach…

  2. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  3. The Use of Particle/Substrate Material Models in Simulation of Cold-Gas Dynamic-Spray Process

    NASA Astrophysics Data System (ADS)

    Rahmati, Saeed; Ghaei, Abbas

    2014-02-01

    Cold spray is a coating deposition method in which the solid particles are accelerated to the substrate using a low temperature supersonic gas flow. Many numerical studies have been carried out in the literature in order to study this process in more depth. Despite the inability of Johnson-Cook plasticity model in prediction of material behavior at high strain rates, it is the model that has been frequently used in simulation of cold spray. Therefore, this research was devoted to compare the performance of different material models in the simulation of cold spray process. Six different material models, appropriate for high strain-rate plasticity, were employed in finite element simulation of cold spray process for copper. The results showed that the material model had a considerable effect on the predicted deformed shapes.
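
    For reference, the Johnson-Cook flow stress in its standard form is sigma = (A + B*eps^n)(1 + C*ln(eps_dot/eps0_dot))(1 - T*^m). The sketch below evaluates it with commonly quoted constants for copper (illustrative, not this paper's calibration) and shows how weakly the logarithmic rate term grows at cold-spray strain rates, the root of the "inability" noted above.

```python
import math

# Commonly quoted Johnson-Cook constants for OFHC copper (illustrative)
A, B, n, C, m = 90e6, 292e6, 0.31, 0.025, 1.09   # Pa, Pa, -, -, -
eps0_dot, T_room, T_melt = 1.0, 293.0, 1356.0    # /s, K, K

def johnson_cook(eps, eps_dot, T):
    t_star = (T - T_room) / (T_melt - T_room)    # homologous temperature
    return ((A + B * eps ** n)
            * (1.0 + C * math.log(eps_dot / eps0_dot))
            * (1.0 - t_star ** m))

# Cold-spray impacts reach strain rates near 1e9 /s, far beyond where the
# logarithmic rate term was calibrated.
for rate in (1e3, 1e6, 1e9):
    sigma = johnson_cook(0.5, rate, 400.0)
    print(f"rate {rate:.0e}/s: sigma = {sigma / 1e6:.0f} MPa")
```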

  4. Optimal filtering and Bayesian detection for friction-based diagnostics in machines.

    PubMed

    Ray, L R; Townsend, J R; Ramasubramanian, A

    2001-01-01

    Non-model-based diagnostic methods typically rely on measured signals that must be empirically related to process behavior or incipient faults. The difficulty of interpreting a signal that is only indirectly related to the fundamental process behavior is significant. This paper presents an integrated non-model and model-based approach to detecting when process behavior varies from a proposed model. The method, which is based on nonlinear filtering combined with maximum likelihood hypothesis testing, is applicable to dynamic systems whose constitutive model is well known but whose process inputs are poorly known. Here, the method is applied to friction estimation and diagnosis during motion control in a rotating machine. A nonlinear observer estimates friction torque in a machine from shaft angular position measurements and the known input voltage to the motor. The resulting friction torque estimate can be analyzed directly for statistical abnormalities, or it can be compared directly to the friction torque outputs of an applicable friction process model in order to diagnose faults or model variations. Nonlinear estimation of friction torque provides a variable for diagnostic methods that is directly related to model variations or faults. The method is evaluated experimentally by its ability to detect normal-load variations in a closed-loop controlled, motor-driven inertia with bearing friction and an artificially induced external line contact. Results show an ability to detect statistically significant changes in friction characteristics induced by normal-load variations over a wide range of underlying friction behaviors.

  5. Predictive Models for Semiconductor Device Design and Processing

    NASA Technical Reports Server (NTRS)

    Meyyappan, Meyya; Arnold, James O. (Technical Monitor)

    1998-01-01

    The device feature size continues on a downward trend, with a simultaneous upward trend in wafer size to 300 mm. For this reason, predictive models are needed more than ever before. At NASA Ames, a Device and Process Modeling effort has recently been initiated to address these issues. Our activities cover sub-micron device physics, process and equipment modeling, computational chemistry and materials science. This talk outlines these efforts and emphasizes the interaction among the various components. The device physics component is largely based on integrating quantum effects into device simulators. We have two parallel efforts: one based on a quantum mechanics approach and a second based on semiclassical hydrodynamics with quantum correction terms. Under the first approach, three different quantum simulators are being developed and compared: a nonequilibrium Green's function (NEGF) approach, a Wigner function approach, and a density matrix approach. In this talk, results from the various codes will be presented. Our process modeling work focuses primarily on epitaxy and etching, using first-principles models that couple reactor-level and wafer-level features. For the latter, we are using a novel approach based on Level Set theory. Sample results from this effort will also be presented.

  6. Nonlinear dynamics that appears in the dynamical model of drying process of a polymer solution coated on a flat substrate

    NASA Astrophysics Data System (ADS)

    Kagami, Hiroyuki

    2007-01-01

    We have proposed, and subsequently refined, a dynamical model of the drying process of a polymer solution coated on a flat substrate for flat polymer film fabrication, and have presented the results at several meetings. Although the basic equations of the dynamical model have a characteristic nonlinearity, the character of this nonlinearity has not yet been studied in depth. In this paper, we first derive nonlinear equations from the dynamical model of the drying process of a polymer solution. We then present results of numerical simulations of the nonlinear equations and consider the roles of various parameters, some of which indirectly govern the strength of the non-equilibrium character of the process. Through this study, we approach the essential qualities of the nonlinearity in the non-equilibrium drying process.

  7. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.

    2000-01-01

    The paper presents results obtained by using the NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) finite element codes to predict doming of THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices, so finite element models were used to assess the effects of these parameters. NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) use different methods for modeling piezoelectric effects: in NASTRAN(Registered Trademark), a thermal analogy represents voltage at nodes as equivalent temperatures, while ANSYS(Registered Trademark) processes the voltage directly using piezoelectric finite elements. The results of the finite element models were validated against experimental results.

  8. Upscaling from research watersheds: an essential stage of trustworthy general-purpose hydrologic model building

    NASA Astrophysics Data System (ADS)

    McNamara, J. P.; Semenova, O.; Restrepo, P. J.

    2011-12-01

    Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too site-specific and not representative even of the larger watershed that contains it. Models developed from such partial observations may therefore not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is thus essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose process-based distributed hydrologic system. In its applications and further development, we evaluate the scaling of model concepts and parameters across a wide range of hydrologic landscapes. All models, whether lumped or distributed, are based on a discretization concept: it is common practice to discretize watersheds into so-called hydrologic units or hydrologic landscapes that are assumed to function homogeneously. If a model structure is fixed, differences in hydrologic functioning (differences between hydrologic landscapes) should be reflected in specific sets of model parameters. Research watersheds make it possible to combine processes in reasonable detail into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. By upscaling we mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results for runoff and for the state variables of soil and snow over the 2000-2009 simulation period. The parameters of the model were hand-adjusted based on physical reasoning, observational data and available understanding of the underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the use of the Hydrograph model, which requires only a modest amount of parameter calibration, may also serve as a quality control for observations. Based on the parameter values obtained and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and at Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.

  9. Red mud flocculation process in alumina production

    NASA Astrophysics Data System (ADS)

    Fedorova, E. R.; Firsov, A. Yu

    2018-05-01

    The thickening and washing of red mud is a bottleneck of alumina production. Existing automated control systems for the thickening process involve stabilizing the parameters of the primary technological circuits of the thickener. A current direction of research is the creation and improvement of models and systems for model-based control of the thickening process, but the known models do not fully account for perturbing effects, in particular the particle size distribution in the feed and the size distribution of floccules after aggregation in the feed barrel. This article covers the basic concepts and terms used in formulating the population balance algorithm. The population balance model is implemented in the MATLAB environment, and the result of the simulation is the particle size distribution after the flocculation process. The model makes it possible to predict the floccule size distribution after aggregation of red mud in the feed barrel. Red mud from Jamaican bauxite served as the industrial sample, and a Cytec Industries HX-3000 series flocculant at a concentration of 0.5% was used. The simulation used model constants obtained in a tubular tank at the CSIRO laboratories (Australia).
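
    The population balance idea can be illustrated with the simplest discrete (Smoluchowski) aggregation scheme, using a constant collision kernel in place of the shear- and flocculant-dependent kernel a real red-mud model would need; all values below are illustrative.

```python
import numpy as np

# Discrete population balance: class k holds flocs of k+1 primary particles;
# a collision of classes i and j forms a floc in class i+j+1.
n_classes, beta, dt, steps = 30, 1e-3, 0.1, 500
N = np.zeros(n_classes)
N[0] = 100.0                            # start with primary particles only

for _ in range(steps):
    dN = np.zeros(n_classes)
    for i in range(n_classes):
        for j in range(i, n_classes):
            # factor 0.5 avoids double-counting identical-class collisions
            rate = beta * N[i] * N[j] * (0.5 if i == j else 1.0)
            dN[i] -= rate * dt          # both collision partners are consumed
            dN[j] -= rate * dt
            if i + j + 1 < n_classes:
                dN[i + j + 1] += rate * dt
    N += dN

print("mean floc size (primary particles):",
      round(np.average(np.arange(1, n_classes + 1), weights=N), 2))
```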

  10. Cost Models for MMC Manufacturing Processes

    NASA Technical Reports Server (NTRS)

    Elzey, Dana M.; Wadley, Haydn N. G.

    1996-01-01

    Processes for the manufacture of advanced metal matrix composites are rapidly approaching maturity in the research laboratory, and there is growing interest in their transition to industrial production. However, research conducted to date has focused almost exclusively on overcoming the technical barriers to producing high-quality material, and little attention has been given to the economic feasibility of these laboratory approaches and to process cost issues. A quantitative cost modeling (QCM) approach was developed to address these issues. QCMs are cost analysis tools based on predictive process models relating process conditions to the attributes of the final product. An important attribute of the QCM approach is the ability to predict the sensitivity of material production costs to product quality and to quantitatively explore trade-offs between cost and quality. Applications of the cost models allow more efficient direction of future MMC process technology development and a more accurate assessment of MMC market potential. Cost models were developed for two state-of-the-art metal matrix composite (MMC) manufacturing processes: tape casting and plasma spray deposition. Quality and cost models are presented for both processes, and the resulting predicted quality-cost curves are presented and discussed.

  11. Continuation-like semantics for modeling structural process anomalies

    PubMed Central

    2012-01-01

    Background Biomedical ontologies usually encode knowledge that applies always or at least most of the time, that is, in normal circumstances. But for some applications, like phenotype ontologies, it is becoming increasingly important to represent information about aberrations from a norm. These aberrations may be modifications of physiological structures, but also modifications of biological processes. Methods To facilitate precise definitions of process-related phenotypes, such as delayed eruption of the primary teeth or disrupted ocular pursuit movements, I introduce a modeling approach that draws inspiration from the use of continuations in the analysis of programming languages and apply a similar idea to ontological modeling. This approach characterises processes by describing their outcome up to a certain point and the way they will continue in the canonical case. Definitions of process types are then given in terms of their continuations, and anomalous phenotypes are defined by their differences from the canonical definitions. Results The resulting model is capable of accurately representing structural process anomalies. It allows distinguishing between different kinds of anomaly (delays, interruptions), gives identity criteria for interrupted processes, and explains why normal and anomalous process instances can be subsumed under a common type, thus establishing the connection between canonical and anomalous process-related phenotypes. Conclusion This paper shows how to give semantically rich definitions of process-related phenotypes. These allow the application areas of phenotype ontologies to be expanded beyond literature annotation and the establishment of genotype-phenotype associations to the detection of anomalies in suitably encoded datasets. PMID:23046705

  12. Scalable and balanced dynamic hybrid data assimilation

    NASA Astrophysics Data System (ADS)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites depends on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent on the dependence patterns between the various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods, which necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization. Ensemble-based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small, which results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by propagating a Gaussian approximation of the current error covariance distribution, instead of a fixed set of ensemble members, analogously to the Extended Kalman Filter (EKF). Ensemble members are re-sampled from a new approximation of that Gaussian distribution every time a new set of observations is processed, which makes VEnKF a dynamic assimilation method. A smoothing step is then applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother (VEnKS). In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble, but now using past iterations as surrogate observations, until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from scalability issues similar to those of 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process: the latter is implemented as a wrapper code whose only link to the model is a call for many totally independent model runs, each of which may itself be parallel. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs, which requires a very efficient, low-latency communication network. However, the volume of data communicated is small, and the intervening minimization steps are only 3D-Var, so their computational load is negligible compared with the fully parallel model runs. We present example results of the fully scalable VEnKF with the 4D lake and shallow sea model COHERENS, assimilating simultaneously continuous in situ measurements at a single point and infrequent satellite images that cover a whole lake.
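
    A bare-bones sketch of the resample/forecast/update cycle described above, with a trivial two-state linear stand-in for the forecast model and a standard Kalman update in place of the variational minimization; every numerical value is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)

def forecast(x):                       # stand-in for one independent model run
    A = np.array([[0.95, 0.1], [0.0, 0.9]])
    return A @ x

mean, cov = np.zeros(2), np.eye(2)     # Gaussian state estimate
H = np.array([[1.0, 0.0]])             # observe the first state component
R = np.array([[0.1]])                  # observation error variance

for step in range(10):
    ens = rng.multivariate_normal(mean, cov, size=20)   # re-sample ensemble
    ens = np.array([forecast(x) for x in ens])          # "parallel" model runs
    mean, cov = ens.mean(axis=0), np.cov(ens.T) + 0.01 * np.eye(2)

    y = np.array([np.sin(0.3 * step)]) + rng.normal(0, 0.3, 1)  # synthetic obs
    S = H @ cov @ H.T + R
    K = cov @ H.T @ np.linalg.inv(S)                    # Kalman gain
    mean = mean + (K @ (y - H @ mean)).ravel()
    cov = (np.eye(2) - K @ H) @ cov

print("final mean:", mean.round(3))
```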

  13. Using gridded multimedia model to simulate spatial fate of Benzo[α]pyrene on regional scale.

    PubMed

    Liu, Shijie; Lu, Yonglong; Wang, Tieyu; Xie, Shuangwei; Jones, Kevin C; Sweetman, Andrew J

    2014-02-01

    Predicting the environmental multimedia fate of chemicals released into the environment is an essential step in assessing their human exposure and health impacts. Multimedia fate models have been widely applied to calculate the fate and distribution of chemicals in the environment, and their output can serve as input to a human exposure model. In this study, a grid-based multimedia fugacity model at regional scale was developed, together with a case study modeling the fate and transfer of Benzo[α]pyrene (BaP) in the Bohai coastal region, China. Based on the estimated emissions and an on-site survey in 2008, the BaP concentrations in air, vegetation, soil, fresh water, fresh water sediment and coastal water, as well as the transfer fluxes, were derived under the steady-state assumption. The model was validated through comparison between the measured and modeled concentrations of BaP, which indicated that the predicted concentrations in air, fresh water, soil and sediment generally agreed with field observations. Model predictions suggest that soil was the dominant sink of BaP in terrestrial systems. Flows from air to soil, vegetation and coastal water were the three major pathways of BaP inter-media transport. Most of the BaP entering the sea was transferred by air flow, which was also the crucial driving force in the spatial distribution of BaP. The Yellow River, Liaohe River and Daliao River played an important role in the spatial transformation processes of BaP. Compared with advection outflow, degradation was more important among the removal processes of BaP. Sensitivities of the model estimates to input parameters were tested; the results showed that emission rates, compartment dimensions, transport velocity and degradation rates of BaP were the most influential parameters for the model output. Monte Carlo simulation was carried out to determine parameter uncertainty, from which the coefficients of variation for the estimated BaP concentrations in air and soil were computed as 0.46 and 1.53, respectively. The modeled concentrations of BaP in the multimedia environment can be used in human exposure and risk assessment in the Bohai coastal region. The results also provide significant indicators of the likely dominant fate, the influence range of emissions, and the transport processes determining the behavior of BaP in the region. © 2013.

  14. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
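
    In practice the equivalence can be exploited by fitting the point process model as a weighted Poisson regression over presence points plus quadrature points (the Berman-Turner device). The sketch below uses simulated covariates and a crude uniform quadrature; it illustrates the fitting strategy generically and is not the authors' implementation.

    ```python
    # Poisson point process model fitted as a weighted Poisson regression.
    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(1)
    pres = rng.uniform(size=(50, 2))          # covariates at presence points
    quad = rng.uniform(size=(500, 2))         # covariates at quadrature points
    X = np.vstack([pres, quad])

    # Berman-Turner: weights approximate the integral of the intensity
    # over a unit-area region; y_i = indicator / weight.
    w = np.full(len(X), 1.0 / len(quad))
    y = np.concatenate([np.ones(len(pres)), np.zeros(len(quad))]) / w

    # Unlike MAXENT's scale-dependent intercept, the PPM intercept absorbs
    # the overall intensity. A ridge penalty is available via alpha > 0; the
    # LASSO variant discussed in the paper would need a different solver.
    ppm = PoissonRegressor(alpha=0.0).fit(X, y, sample_weight=w)
    print(ppm.intercept_, ppm.coef_)
    ```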

  15. Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.

    2017-12-01

    The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed - in simulations of half-hourly surface energy fluxes - by instantaneous, out-of-sample, and globally-stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can yield qualitatively different conclusions. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and are therefore of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct - the models use only a fraction of the information available to them from meteorological forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first-order requirement if we want to use intercomparison studies to provide feedback to model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.

  16. Business Process Design Method Based on Business Event Model for Enterprise Information System Integration

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takashi; Komoda, Norihisa

    Traditional business process design methods, of which the use case is the most typical, provide no useful framework for designing the activity sequence. As a result, design efficiency and quality vary widely with the designer's experience and skill. In this paper, to solve this problem, we propose a model of business events and their state transitions (a basic business event model) based on the language/action perspective, a result from the cognitive science domain. In business process design using this model, we decide event occurrence conditions so that all events synchronize with one another. We also propose a design pattern for deciding the event occurrence conditions (a business event improvement strategy). Lastly, we apply the business process design method based on the business event model and the business event improvement strategy to a credit card issuing process and evaluate its effect.

  17. A simplified computational memory model from information processing

    PubMed Central

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express neurons or brain cortices on the basis of biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information processing point of view. PMID:27876847

  18. Automatization of hydrodynamic modelling in a Floreon+ system

    NASA Astrophysics Data System (ADS)

    Ronovsky, Ales; Kuchar, Stepan; Podhoranyi, Michal; Vojtek, David

    2017-07-01

    The paper describes fully automatized hydrodynamic modelling as a part of the Floreon+ system. The main purpose of hydrodynamic modelling in disaster management is to provide an accurate overview of the hydrological situation in a given river catchment. Automatization of the process as a web service can provide immediate results based on extreme weather conditions, such as heavy rainfall, without the intervention of an expert. Such a service can be used by non-scientific users such as fire-fighter operators or representatives of a military service organizing evacuation during floods or river dam breaks. The paper describes the whole process, beginning with the definition of the schematization necessary for the hydrodynamic model, the gathering of the necessary data and its processing for a simulation, the model itself, and the post-processing of the results and their visualization on a web service. The process is demonstrated on real data collected during the floods in the Moravian-Silesian region in 2010.

  19. Physical and mathematical modeling of antimicrobial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Bürgermeister, Lisa; López, Fernando Romero; Schulz, Wolfgang

    2014-07-01

    Antimicrobial photodynamic therapy (aPDT) is a promising method to treat local bacterial infections. The therapy is painless and does not cause bacterial resistance. However, there are gaps in understanding the dynamics of the processes, especially in periodontal treatment. This work describes advances in the fundamental physical and mathematical modeling of aPDT used for the interpretation of experimental evidence. The result is a two-dimensional model of aPDT in a dental pocket phantom. In this model, the propagation of laser light and the kinetics of the chemical reactions are described as coupled processes. The laser light induces the chemical processes depending on its intensity. As a consequence of the chemical processes, the local optical properties and the distribution of laser light change, as do the reaction rates. The mathematical description of these coupled processes will help to develop treatment protocols and is the first step toward an inline feedback system for aPDT users.
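
    As a toy version of the coupling described above, the sketch below alternates a Beer-Lambert light field with first-order photochemical depletion of the photosensitizer, so that the reaction changes the local absorption and hence the light distribution; all coefficients are invented, not taken from the paper.

    ```python
    # 1-D toy of coupled light propagation and photochemical kinetics.
    import numpy as np

    nz, dz, dt = 100, 1e-3, 0.01     # depth grid (cm) and time step (s)
    c = np.full(nz, 1.0)             # photosensitizer concentration (a.u.)
    mu_bg, eps, k = 1.0, 5.0, 0.5    # background absorption, molar absorption, rate
    I0 = 1.0                         # incident laser intensity

    for _ in range(1000):
        mu = mu_bg + eps * c                          # optical properties follow chemistry
        I = I0 * np.exp(-np.cumsum(mu) * dz)          # Beer-Lambert light field
        c = np.clip(c - dt * k * I * c, 0.0, None)    # intensity-driven depletion
    ```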

  20. Studies in astronomical time series analysis: Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
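
    The AR-to-MA step is compact enough to sketch: estimate AR coefficients from sample autocovariances (Yule-Walker) and compute the impulse response of the AR filter, which is the truncated MA representation. This is a generic illustration, not Scargle's FORTRAN algorithm.

    ```python
    # Fit an AR model and transform it to an MA pulse shape.
    import numpy as np

    def fit_ar_yule_walker(x, p):
        """AR(p) coefficients from biased sample autocovariances."""
        x = np.asarray(x) - np.mean(x)
        r = np.array([x[:len(x) - k] @ x[k:] / len(x) for k in range(p + 1)])
        Rmat = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
        return np.linalg.solve(Rmat, r[1:])

    def ar_to_ma(phi, n_terms=20):
        """Impulse response: psi_0 = 1, psi_j = sum_k phi_k * psi_{j-k}."""
        psi = np.zeros(n_terms)
        psi[0] = 1.0
        for j in range(1, n_terms):
            for k, ph in enumerate(phi, start=1):
                if j >= k:
                    psi[j] += ph * psi[j - k]
        return psi

    rng = np.random.default_rng(2)
    x = rng.normal(size=2000)
    for t in range(2, len(x)):                # simulate an AR(2) series
        x[t] += 0.6 * x[t - 1] - 0.2 * x[t - 2]
    phi = fit_ar_yule_walker(x, p=2)
    print(phi, ar_to_ma(phi, 6))              # MA pulse shape for interpretation
    ```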

  1. Calibration and prediction of removal function in magnetorheological finishing.

    PubMed

    Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng

    2010-01-20

    A calibrated and predictive model of the removal function has been established based on an analysis of the magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in an MRF figuring process and to accurately predict the removal function for a workpiece to be polished whose material differs from that of the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, by applying this model to MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately, making the MRF figuring process deterministic and controllable. All the results indicate that the calibrated and predictive model of the removal function can improve finishing determinacy and increase the model's applicability in an MRF process.

  2. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

    Batch processes are typically characterized by nonlinearity and system uncertainty, so a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
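
    A rough sketch of the ensemble strategy follows, with random variable subsets standing in for the bootstrap/PMI selection and predictive-precision weights standing in for the full Bayesian posterior probabilities (both are simplifications of the method described).

    ```python
    # Local GPR ensemble soft sensor with precision-based combination weights.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 5))                       # process variables
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=200)

    subsets = [rng.choice(5, size=3, replace=False) for _ in range(4)]
    models = [GaussianProcessRegressor(alpha=1e-2).fit(X[:, s], y) for s in subsets]

    def ensemble_predict(x_new):
        preds = [m.predict(x_new[s][None, :], return_std=True)
                 for m, s in zip(models, subsets)]
        mu = np.array([p[0][0] for p in preds])
        sd = np.array([p[1][0] for p in preds])
        w = 1.0 / sd ** 2                     # stand-in for Bayesian weights
        return float((w / w.sum()) @ mu)

    print(ensemble_predict(X[0]))
    ```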

  3. A Mathematical Model for Continuous Fiber Reinforced Thermoplastic Composite in Melt Impregnation

    NASA Astrophysics Data System (ADS)

    Ren, Feng; Yu, Yang; Yang, Jianjun; Xin, Chunling; He, Yadong

    2017-06-01

    Through the combination of the Reynolds equation and Darcy's law, a mathematical model was established to calculate the pressure distribution in the wedge area, which enables forecasting the effect of processing parameters on the impregnation degree of the fiber bundle. Experiments were conducted to verify the capability of the proposed model, with satisfactory results, which means that the model is effective in predicting the influence of processing parameters on impregnation. From the mathematical model, it follows that the impregnation degree of the fiber bundle can be improved by increasing the processing temperature or the number and radius of pins, or by decreasing the pulling speed and the center distance of the pins, which provides a possible solution to the difficulty of impregnating with high-viscosity melt and to the optimization of the impregnation process.

  4. Flexible link functions in nonparametric binary regression with Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Lin, Lizhen; Dey, Dipak K

    2016-09-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model, allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. © 2015, The International Biometric Society.
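
    To make the flexibility argument concrete, the sketch below evaluates one common form of the GEV link, whose shape parameter xi controls skewness, next to the fixed-skewness probit link; the exact parameterization used by the authors may differ.

    ```python
    # GEV link (skewness controlled by xi) versus the probit link.
    import numpy as np
    from scipy.stats import genextreme, norm

    eta = np.linspace(-2.0, 2.0, 5)            # latent linear predictor
    for xi in (-0.5, 0.0, 0.5):                # xi = 0 gives the log-log family
        # scipy's shape c corresponds to -xi in the usual GEV convention
        p = 1.0 - genextreme.cdf(-eta, c=-xi)  # one common GEV-link form
        print(f"xi={xi:+.1f}", np.round(p, 3))
    print("probit ", np.round(norm.cdf(eta), 3))
    ```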

  5. Flexible Link Functions in Nonparametric Binary Regression with Gaussian Process Priors

    PubMed Central

    Li, Dan; Lin, Lizhen; Dey, Dipak K.

    2015-01-01

    In many scientific fields, it is a common practice to collect a sequence of 0-1 binary responses from a subject across time, space, or a collection of covariates. Researchers are interested in finding out how the expected binary outcome is related to covariates, and aim at better prediction of future 0-1 outcomes. Gaussian processes have been widely used to model nonlinear systems; in particular to model the latent structure in a binary regression model, allowing a nonlinear functional relationship between covariates and the expectation of binary outcomes. A critical issue in modeling binary response data is the appropriate choice of link functions. Commonly adopted link functions such as probit or logit links have fixed skewness and lack the flexibility to allow the data to determine the degree of the skewness. To address this limitation, we propose a flexible binary regression model which combines a generalized extreme value link function with a Gaussian process prior on the latent structure. Bayesian computation is employed in model estimation. Posterior consistency of the resulting posterior distribution is demonstrated. The flexibility and gains of the proposed model are illustrated through detailed simulation studies and two real data examples. Empirical results show that the proposed model outperforms a set of alternative models, which only have either a Gaussian process prior on the latent regression function or a Dirichlet prior on the link function. PMID:26686333

  6. Simulation of generation of new ideas for new product development and IT services

    NASA Astrophysics Data System (ADS)

    Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda

    2015-02-01

    This paper describes a dynamic model of the New Product Development (NPD) process. The model emerged from best practices observed in our research, conducted across a range of situations. It helps determine an IT company's NPD activities and place them within the frame of the overall NPD process [1]. It has been found to be a useful tool for organizing data on an IT company's NPD activities without enforcing an excessively restrictive research methodology on the NPD model. The framework that underpins the model will help to promote research into the methods undertaken within an IT company's NPD process, thus promoting understanding and improvement of the simulation process [2]. IT companies have tested many techniques and several different practices designed to improve the validity and efficacy of their NPD process [3]. Supported by the model, this research examines how widely accepted the stated tactics are and what impact these best tactics have on NPD performance. The main assumption of this study is that simulating the generation of new ideas [4] will lead to greater NPD effectiveness and more successful products in IT companies. With the model implementation, practices concerning the implementation strategies of NPD (product selection, objectives, leadership, marketing strategy and customer satisfaction) are all more widely accepted than best practices related to controlling the application of NPD (process control, measurements, results). In linking simulation with impact, our results state that product success depends on developing strong products and ensuring organizational emphasis through proper project selection. Project activities strengthen both product and project success. The success of IT products and services also depends on monitoring the NPD procedure through project management and ensuring team consistency with group rewards. Sharing experiences between projects can positively influence the NPD process.

  7. Evaluation of massless-spring modeling of suspension-line elasticity during the parachute unfurling process

    NASA Technical Reports Server (NTRS)

    Poole, L. R.; Huckins, E. K., III

    1972-01-01

    A general theory on mathematical modeling of elastic parachute suspension lines during the unfurling process was developed. Massless-spring modeling of suspension-line elasticity was evaluated in detail. For this simple model, equations which govern the motion were developed and numerically integrated. The results were compared with flight test data. In most regions, agreement was satisfactory. However, poor agreement was obtained during periods of rapid fluctuations in line tension.

  8. Diversity's Impact on the Executive Coaching Process

    ERIC Educational Resources Information Center

    Maltbia, Terrence E.; Power, Anne

    2005-01-01

    This paper presents a conceptual model intended to expand existing executive coaching processes used in organizations by building the strategic learning capabilities needed to integrate a diversity perspective into this emerging field of HRD practice. This model represents the early development of results from a Diversity Practitioner Study…

  9. Evaluating the Risks: A Bernoulli Process Model of HIV Infection and Risk Reduction.

    ERIC Educational Resources Information Center

    Pinkerton, Steven D.; Abramson, Paul R.

    1993-01-01

    A Bernoulli process model of human immunodeficiency virus (HIV) infection is used to evaluate the infection risks associated with various sexual behaviors (condom use, abstinence, or monogamy). Results suggest that infection is best mitigated through measures that decrease infectivity, such as condom use. (SLD)
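
    The underlying Bernoulli-process arithmetic is worth stating: if each contact is an independent trial with per-act transmission probability alpha, the cumulative risk over n acts is 1 - (1 - alpha)^n, so any measure that scales alpha down (e.g., condom use) compounds across acts. The numbers below are illustrative, not the paper's estimates.

    ```python
    # Cumulative infection risk under a Bernoulli process model.
    def cumulative_risk(alpha: float, n: int) -> float:
        """Risk of at least one transmission in n independent acts."""
        return 1.0 - (1.0 - alpha) ** n

    alpha = 0.001                                  # hypothetical per-act infectivity
    for label, reduction in [("no protection", 1.0), ("condom use", 0.1)]:
        print(label, round(cumulative_risk(alpha * reduction, n=500), 4))
    ```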

  10. Modeling nitrate-nitrogen removal process in first-flush reactor for stormwater treatment.

    PubMed

    Deng, Zhiqiang; Sun, Shaowei; Gang, Daniel Dianchen

    2012-08-01

    Stormwater runoff is one of the most common non-point sources of water pollution to rivers, lakes, estuaries, and coastal beaches. While most pollutants and nutrients, including nitrate-nitrogen, in stormwater are discharged into receiving waters during the first-flush period, no existing best management practices (BMPs) are specifically designed to capture and treat the first-flush portion of urban stormwater runoff. This paper presents a novel BMP device for highway and urban stormwater treatment, with emphasis on numerical modeling of the new BMP, called the first-flush reactor (FFR). A new model, called the VART-DN model, for simulation of the denitrification process in the designed first-flush reactor, was developed using the variable residence time (VART) model. The VART-DN model is capable of simulating the various processes and mechanisms responsible for denitrification in the FFR. Sensitivity analysis of the model parameters showed that the denitrification process is sensitive to the temperature correction factor (b), the maximum nitrate-nitrogen decay rate (K_max), the actual varying residence time (T_v), the constant decay rate of denitrifying bacteria (v_dec), temperature (T), the biomass inhibition constant (K_b), the maximum growth rate of denitrifying bacteria (v_max), the denitrifying bacteria concentration (X), the longitudinal dispersion coefficient (K_s), and the half-saturation constant of dissolved carbon for biomass (K_Car-X); a 10% increase in these parameter values changes the model root mean square error (RMSE) by -28.02, -16.16, -12.35, 11.44, -9.68, 10.61, -16.30, -9.27, 6.58 and 3.89%, respectively. The VART-DN model was tested using data from laboratory experiments conducted with highway stormwater and secondary wastewater. Model results for the denitrification of highway stormwater showed good agreement with observed data, with a simulation error of less than 9.0%. The RMSE and the coefficient of determination for the simulated denitrification of wastewater were 0.5167 and 0.6912, respectively, demonstrating the efficacy of the VART-DN model.

  11. Effect of processing parameters on FDM process

    NASA Astrophysics Data System (ADS)

    Chari, V. Srinivasa; Venkatesh, P. R.; Krupashankar, Dinesh, Veena

    2018-04-01

    This paper focuses on the effect of process parameters in fused deposition modeling (FDM). Infill, resolution, and temperature are the process variables considered for the experimental studies. Compression strength, hardness, and microstructure are the outcome parameters. The experimental study is based on Taguchi's L9 orthogonal array, which is used to build nine different models and to obtain the output results for the parameters under consideration. The material used for this experimental study is polylactic acid (PLA).
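
    For reference, a Taguchi L9 orthogonal array assigns three levels of each of three factors to nine runs, as sketched below; the level values attached to infill, resolution and temperature are hypothetical placeholders, not the paper's settings.

    ```python
    # Taguchi L9 orthogonal array (first three columns of the standard L9(3^4)).
    L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
          (2, 1, 2), (2, 2, 3), (2, 3, 1),
          (3, 1, 3), (3, 2, 1), (3, 3, 2)]
    levels = {
        "infill_pct":    {1: 20, 2: 50, 3: 80},     # hypothetical levels
        "resolution_mm": {1: 0.1, 2: 0.2, 3: 0.3},
        "temp_C":        {1: 190, 2: 200, 3: 210},
    }
    for run, (a, b, c) in enumerate(L9, start=1):
        print(f"run {run}: infill={levels['infill_pct'][a]}%, "
              f"resolution={levels['resolution_mm'][b]} mm, "
              f"temp={levels['temp_C'][c]} C")
    ```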

  12. Modeling of pulsed propellant reorientation

    NASA Technical Reports Server (NTRS)

    Patag, A. E.; Hochstein, J. I.; Chato, D. J.

    1989-01-01

    Optimization of the propellant reorientation process can provide increased payload capability and extend the service life of spacecraft. The use of pulsed propellant reorientation to optimize the reorientation process is proposed. The ECLIPSE code was validated for modeling the reorientation process and is used to study pulsed reorientation in small-scale and full-scale propellant tanks. A dimensional analysis of the process is performed and the resulting dimensionless groups are used to present and correlate the computational predictions for reorientation performance.

  13. Improving operational anodising process performance using simulation approach

    NASA Astrophysics Data System (ADS)

    Liong, Choong-Yeun; Ghazali, Syarah Syahidah

    2015-10-01

    The use of aluminium is very widespread, especially in the transportation, electrical and electronics, architectural, automotive and engineering sectors. The anodizing process is therefore important for making aluminium durable, attractive and weather resistant. This research focuses on the anodizing process operations in the manufacturing and supply of aluminium extrusions. The data required for the development of the model were collected from observations and interviews conducted in the study. To study the current system, the processes involved in anodizing are modeled using the Arena 14.5 simulation software. They comprise five main processes - degreasing, etching, desmut, anodizing and sealing - and 16 other processes. The results obtained were analyzed to identify the problems and bottlenecks that occurred and to propose improvement methods that could be implemented on the original model. Based on comparisons between the improvement methods, productivity could be increased by reallocating workers and reducing loading time.

  14. A functional-dynamic reflection on participatory processes in modeling projects.

    PubMed

    Seidl, Roman

    2015-12-01

    The participation of nonscientists in modeling projects/studies is increasingly employed to fulfill different functions. However, it is not well investigated whether and how explicitly these functions and the dynamics of a participatory process are reflected by modeling projects in particular. In this review study, I explore participatory modeling projects from a functional-dynamic process perspective. The main differences among projects relate to the functions of participation (most often, more than one per project can be identified) and to the degree of explicit reflection (i.e., awareness and anticipation) on the dynamic process perspective. Moreover, two main approaches are revealed: participatory modeling, covering diverse approaches, and companion modeling. It becomes apparent that the degree of reflection on the participatory process itself is not always explicit and perfectly visible in the descriptions of the modeling projects. Thus, the use of common protocols or templates is discussed to facilitate project planning, as well as the publication of project results. A generic template may help, not in providing details of a project or model development, but in explicitly reflecting on the participatory process. It can serve to systematize the particular project's approach to stakeholder collaboration, and thus quality management.

  15. Revisiting low-fidelity two-fluid models for gas–solids transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adeleke, Najeem, E-mail: najm@psu.edu; Adewumi, Michael, E-mail: m2a@psu.edu; Ityokumbul, Thaddeus

    Two-phase gas–solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors and food processing units. Systems automation and real time process optimization stand to benefit a great deal from availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas–solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for modulus of elasticity. A new set of Roe–Pike averages are presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux-limiter. The model is evaluated by the prediction of experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately even under flow regimes involving counter-current flow.

  16. Numerical Simulation of Rheological, Chemical and Hydromechanical Processes of Thrombolysis

    NASA Astrophysics Data System (ADS)

    Khramchenkov, E.; Khramchenkov, M.

    2015-04-01

    A mathematical model of clot lysis in blood vessels is developed on the basis of convection-diffusion equations. The fibrin of the clot is treated as a stationary solid phase, and plasminogen, plasmin and plasminogen activators as dissolved fluid phases. Numerical solution of the model yields predictions of the lysis process. An important influence of clot swelling on the lysis process is revealed.

  17. The Challenges to Coupling Dynamic Geospatial Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, N

    2006-06-23

    Many applications of modeling spatial dynamic systems focus on a single system and a single process, ignoring the geographic and systemic context of the processes being modeled. A solution to this problem is the coupled modeling of spatial dynamic systems. Coupled modeling is challenging for both technical and conceptual reasons. This paper explores the benefits and challenges of coupling or linking spatial dynamic models, from loose coupling, where information transfer between models is done by hand, to tight coupling, where two (or more) models are merged as one. To illustrate the challenges, a coupled model of Urbanization and Wildfire Risk is presented. This model, called Vesta, was applied to the Santa Barbara, California region (using real geospatial data), where Urbanization and Wildfires occur and recur, respectively. The preliminary results of the model coupling illustrate that coupled modeling can lead to insight into the consequences of processes acting on their own.

  18. Transactions in domain-specific information systems

    NASA Astrophysics Data System (ADS)

    Zacek, Jaroslav

    2017-07-01

    A substantial number of current information system (IS) implementations are based on a transaction approach, and most implementations are domain-specific (e.g., accounting IS, resource planning IS). A generic transaction model is therefore needed to build and verify domain-specific IS. The paper proposes a new transaction model for domain-specific ontologies. This model is based on a value-oriented business process modelling technique and is formalized with Petri net theory. The first part of the paper presents common business processes and analyses related to business process modeling. The second part defines the transaction model delimited by the REA enterprise ontology paradigm and introduces the states of the generic transaction model. The generic model proposal is defined and visualized with a Petri net modelling tool. The third part shows an application of the generic transaction model. The last part concludes the results and discusses the practical usability of the generic transaction model.
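
    A minimal sketch of this kind of Petri net formalization, with the standard enabling/firing rule, is given below; the places and transitions are illustrative and do not reproduce the paper's REA-based transaction model.

    ```python
    # Toy Petri net: places hold tokens; a transition fires when enabled.
    places = {"order_placed": 1, "payment_received": 0, "delivered": 0}
    transitions = {
        "pay":     ({"order_placed": 1}, {"payment_received": 1}),
        "deliver": ({"payment_received": 1}, {"delivered": 1}),
    }

    def enabled(t):
        pre, _ = transitions[t]
        return all(places[p] >= n for p, n in pre.items())

    def fire(t):
        assert enabled(t), f"{t} is not enabled"
        pre, post = transitions[t]
        for p, n in pre.items():
            places[p] -= n
        for p, n in post.items():
            places[p] += n

    fire("pay")
    fire("deliver")
    print(places)   # {'order_placed': 0, 'payment_received': 0, 'delivered': 1}
    ```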

  19. Numerical Modeling of Crystal of ZnSe by Physical Vapor Transport - Towards a more Comprehensive Formulations

    NASA Technical Reports Server (NTRS)

    Ramachandran, N.

    1999-01-01

    Crystal growth from the vapor phase has various advantages over melt growth. The main advantage is the lower processing temperature, which makes the process more amenable in instances where the melting temperature of the crystal is high. Other benefits stem from the inherent purification mechanism in the process, due to differences in the vapor pressures of the native elements and impurities, and from the enhanced interfacial morphological stability during growth. Further, the implementation of PVT growth in closed ampoules affords experimental simplicity with minimal need for complex process control, which makes it an ideal candidate for space investigations in systems where gravity tends to have undesirable effects on the growth process. Bulk growth of wide band gap II-VI semiconductors by physical vapor transport has been developed and refined over the past several years at NASA MSFC. Results from a modeling study of PVT crystal growth of ZnSe are reported in this paper. The PVT process is numerically investigated using both two-dimensional and fully three-dimensional formulations of the governing equations and associated boundary conditions. Both the incompressible Boussinesq approximation and the compressible model are tested to determine the influence of gravity on the process and to discern the differences between the two approaches. The influence of a residual gas is included in the models. The results show that the incompressible and compressible approximations provide comparable results and that the presence of a residual gas tends to measurably reduce the mass flux in the system. Detailed flow, thermal and concentration profiles will be provided in the final manuscript, along with computed heat and mass transfer rates. Comparisons with the 1-D model will also be provided. Numerical computations show only subtle effects of gravity on the process, although experimental evidence from vertically and horizontally grown samples shows dramatic gravitational effects. The shortcomings of the problem formulation will be discussed and a framework will be provided leading toward a more comprehensive model of PVT systems.

  20. [Research on optimal modeling strategy for licorice extraction process based on near-infrared spectroscopy technology].

    PubMed

    Wang, Hai-Xia; Suo, Tong-Chuan; Yu, He-Shui; Li, Zheng

    2016-10-01

    The manufacture of traditional Chinese medicine (TCM) products always involves processing complex raw materials and real-time monitoring of the manufacturing process. In this study, we investigated different modeling strategies for the extraction process of licorice. Near-infrared spectra associated with the extraction time were used to determine the states of the extraction processes. Three modeling approaches, i.e., principal component analysis (PCA), partial least squares regression (PLSR) and parallel factor analysis-PLSR (PARAFAC-PLSR), were adopted for the prediction of the real-time status of the process. The overall results indicated that PCA, PLSR and PARAFAC-PLSR can effectively detect errors in the extraction procedure and predict the process trajectories, which has important significance for the monitoring and control of extraction processes. Copyright© by the Chinese Pharmaceutical Association.
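
    As a sketch of the PLSR variant among the compared strategies, one can regress the extraction time on the spectra, so that a new spectrum's predicted time indicates the real-time state of the batch; the spectra below are simulated placeholders, not licorice data.

    ```python
    # PLSR as a process-state monitor: predict extraction time from a spectrum.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)
    t = np.linspace(0, 60, 40)                        # extraction time (min)
    spectra = (0.01 * np.outer(t, np.ones(200))       # time-dependent drift
               + rng.normal(scale=0.05, size=(40, 200)))

    pls = PLSRegression(n_components=3).fit(spectra, t)
    state = pls.predict(spectra[-1][None, :]).item()
    print(round(state, 1))                            # near 60: batch almost done
    ```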

  1. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    NASA Astrophysics Data System (ADS)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

    This work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling. Hence, this work is dedicated to investigating the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and single scaling with Standard Normal Variate (SNV). The combinations of these scaling methods have an impact on exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range of 4000-14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method, with the number in each class kept equal to 2/3 of the size of the smallest class. The t-statistic was employed as a variable selection method to identify the variables significant for the classification models. The data pre-processing was evaluated by means of the modified silhouette width (mSW), PCA and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantially different model performance, with the effects of row scaling, column standardisation and single scaling with SNV indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
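
    Two of the named pre-processing steps are easy to make concrete: Standard Normal Variate is a row-wise centring and scaling of each spectrum, and a Savitzky-Golay derivative can then be applied. The sketch below uses simulated spectra and generic filter parameters, not the settings of this work.

    ```python
    # SNV row scaling followed by a Savitzky-Golay first derivative.
    import numpy as np
    from scipy.signal import savgol_filter

    def snv(spectra):
        """Subtract each spectrum's mean and divide by its std (row scaling)."""
        mu = spectra.mean(axis=1, keepdims=True)
        sd = spectra.std(axis=1, keepdims=True)
        return (spectra - mu) / sd

    rng = np.random.default_rng(5)
    raw = rng.normal(loc=1.0, scale=0.2, size=(10, 500))   # 10 mock spectra
    deriv = savgol_filter(snv(raw), window_length=11, polyorder=2,
                          deriv=1, axis=1)
    print(deriv.shape)
    ```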

  2. Functional Fault Model Development Process to Support Design Analysis and Operational Assessment

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Maul, William A.; Hemminger, Joseph A.

    2016-01-01

    A functional fault model (FFM) is an abstract representation of the failure space of a given system. As such, it simulates the propagation of failure effects along paths between the origin of the system failure modes and points within the system capable of observing the failure effects. As a result, FFMs may be used to diagnose the presence of failures in the modeled system. FFMs necessarily contain a significant amount of information about the design, operations, and failure modes and effects. One of the important benefits of FFMs is that they may be qualitative, rather than quantitative and, as a result, may be implemented early in the design process when there is more potential to positively impact the system design. FFMs may therefore be developed and matured throughout the monitored system's design process and may subsequently be used to provide real-time diagnostic assessments that support system operations. This paper provides an overview of a generalized NASA process that is being used to develop and apply FFMs. FFM technology has been evolving for more than 25 years. The FFM development process presented in this paper was refined during NASA's Ares I, Space Launch System, and Ground Systems Development and Operations programs (i.e., from about 2007 to the present). Process refinement took place as new modeling, analysis, and verification tools were created to enhance FFM capabilities. In this paper, standard elements of a model development process (i.e., knowledge acquisition, conceptual design, implementation & verification, and application) are described within the context of FFMs. Further, newer tools and analytical capabilities that may benefit the broader systems engineering process are identified and briefly described. The discussion is intended as a high-level guide for future FFM modelers.

  3. Application of Mathematical and Three-Dimensional Computer Modeling Tools in the Planning of Processes of Fuel and Energy Complexes

    NASA Astrophysics Data System (ADS)

    Aksenova, Olesya; Nikolaeva, Evgenia; Cehlár, Michal

    2017-11-01

    This work investigates the effectiveness of mathematical and three-dimensional computer modeling tools in the planning of processes of fuel and energy complexes at the planning and design phase of a thermal power plant (TPP). A solution for the purification of gas emissions at the design phase of waste treatment systems is proposed, employing mathematical and three-dimensional computer modeling: the E-nets apparatus and the development of a 3D model of the future gas emission purification system. This allows the designed result to be visualized, an economically feasible technology to be selected and scientifically justified, and a high environmental and social effect of the developed waste treatment system to be ensured. The authors present the results of describing the planned technological processes and the gas emission purification system in terms of E-nets, using mathematical modeling in the Simulink application, which allowed a model of the device to be created from the library of standard blocks and calculations to be performed. A three-dimensional model of the gas emission purification system has been constructed; it allows the technological processes to be visualized and compared with the theoretical calculations at the design phase of a TPP and, if necessary, adjusted.

  4. The dynamics of decision making in risky choice: an eye-tracking analysis.

    PubMed

    Fiedler, Susann; Glöckner, Andreas

    2012-01-01

    In the last years, research on risky choice has moved beyond analyzing choices only. Models have been suggested that aim to describe the underlying cognitive processes and some studies have tested process predictions of these models. Prominent approaches are evidence accumulation models such as decision field theory (DFT), simple serial heuristic models such as the adaptive toolbox, and connectionist approaches such as the parallel constraint satisfaction (PCS) model. In two studies involving measures of attention and pupil dilation, we investigate hypotheses derived from these models in choices between two gambles with two outcomes each. We show that attention to an outcome of a gamble increases with its probability and its value and that attention shifts toward the subsequently favored gamble after about two thirds of the decision process, indicating a gaze-cascade effect. Information search occurs mostly within-gambles, and the direction of search does not change over the course of decision making. Pupil dilation, which reflects both cognitive effort and arousal, increases during the decision process and increases with mean expected value. Overall, the results support aspects of automatic integration models for risky choice such as DFT and PCS, but in their current specification none of them can account for the full pattern of results.

  5. Thalamocortical dynamics of the McCollough effect: boundary-surface alignment through perceptual learning.

    PubMed

    Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio

    2002-05-01

    This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.

  6. Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale

    NASA Astrophysics Data System (ADS)

    Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-08-01

    This study presents the simulation of hydrological processes and nutrient transport and turnover using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical and semi-distributed numerical model, and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based and fully distributed numerical model. This work shows that both models reproduce water and nitrate export at the watershed scale satisfactorily on an annual and daily basis, with MOHID providing slightly better results. At the watershed scale, SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID numerical model was the only one able to reproduce adequately the spatial variation of the soil hydrological conditions and the water table level fluctuation, it proved to be the only model capable of reproducing the spatial variation of the nutrient cycling processes that depend on the soil hydrological conditions, such as denitrification. This demonstrates the strength of fully distributed, physics-based models in simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.

  7. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  8. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    NASA Astrophysics Data System (ADS)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics is a subject related to students' daily experience. Before studying it formally in class, students therefore already have visualizations and prior knowledge of natural phenomena and can extend them on their own. The learning process in class should aim to detect, process, construct, and use students' mental models, so that these models agree with and are built on the right concepts. A previous study held in MAN 1 Muna showed that teachers did not pay attention to students' mental models in the learning process. As a consequence, the learning process did not try to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving-based learning model with a multiple representations approach. This study uses a pre-experimental one-group pretest-posttest design, conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data were collected using a problem-solving test on the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into 3 categories: High Mental Modelling Ability (H-MMA) for 7

  9. A Microstructure-Based Constitutive Model for Superplastic Forming

    NASA Astrophysics Data System (ADS)

    Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel

    2012-11-01

    A constitutive model is proposed for simulations of hot metal forming processes. The model is constructed from the dominant mechanisms that take part in hot forming, including intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macroscopic stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests, and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed to be very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. The bulge forming process and gas pressure tray forming are simulated, and the results are compared with experimental data.
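
    In the spirit of the mechanism partition described above, such a constitutive law can be sketched as a sum of strain-rate contributions with different stress exponents and grain-size dependences; the functional forms and constants below are generic placeholders, not the paper's fitted relationships.

    ```python
    # Generic additive strain-rate partition with grain-size dependence.
    import numpy as np

    R = 8.314                                   # gas constant, J/(mol K)

    def strain_rate(sigma_mpa, d_m, T_k):
        creep = 1e-10 * sigma_mpa**5 * np.exp(-3.0e5 / (R * T_k))            # power-law creep
        gbs   = 1e-16 * sigma_mpa**2 / d_m**2 * np.exp(-2.0e5 / (R * T_k))   # boundary sliding
        diff  = 1e-22 * sigma_mpa / d_m**3 * np.exp(-1.5e5 / (R * T_k))      # boundary diffusion
        return creep + gbs + diff

    for d in (5e-6, 20e-6):                     # grain sizes in metres
        print(d, strain_rate(sigma_mpa=10.0, d_m=d, T_k=1073.0))
    ```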

  10. Modelling rollover behaviour of exacavator-based forest machines

    Treesearch

    M.W. Veal; S.E. Taylor; Robert B. Rummer

    2003-01-01

    This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...

  11. Towards a more efficient and robust representation of subsurface hydrological processes in Earth System Models

    NASA Astrophysics Data System (ADS)

    Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.

    2017-12-01

    Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application to large- and global-scale studies. Here, we take a different approach to developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a robust and established three-dimensional hydrological model to develop a simpler parameterization that represents the aquifer to land surface interactions. The main goal of the developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness"), to allow easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of groundwater recharge and discharge, respectively. Preliminary results show that the proposed approach significantly reduces the computational demand while model deviations from the full 3D model remain small for these processes.

  12. Improved compliance by BPM-driven workflow automation.

    PubMed

    Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin

    2014-12-01

    Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve and provide standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs for quality assurance. BPMN 2.0, as an automation language controlling every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language for documenting all methods in laboratories (i.e., screening procedures or analytical processes). This means that, with the BPM standard, a method for sharing process knowledge among laboratories is also available. © 2014 Society for Laboratory Automation and Screening.

  13. Impact of selected troposphere models on Precise Point Positioning convergence

    NASA Astrophysics Data System (ADS)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various factors that influence PPP convergence, the tropospheric delay is one of the most important. Numerous models of the tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final value to the estimation process. Here we present a comparison of several PPP result sets, each based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to allow comparison of the impact of the applied nominal values. The worst case initializes the zenith wet delay with a zero value (ZERO-WET). The impact of all possible models for the tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For each station, several days with the most active troposphere were selected. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most cases all solutions converge to the same values during the first hour of processing. Finally, the results were compared against results obtained during calm tropospheric conditions.

  14. Modeling critical zone processes in intensively managed environments

    NASA Astrophysics Data System (ADS)

    Kumar, Praveen; Le, Phong; Woo, Dong; Yan, Qina

    2017-04-01

    Processes in the Critical Zone (CZ), which sustain terrestrial life, are tightly coupled across hydrological, physical, biochemical, and many other domains over both short and long timescales. In addition, vegetation acclimation resulting from elevated atmospheric CO2 concentration, along with the response to increased temperature and altered rainfall patterns, is expected to result in emergent behaviors in ecologic and hydrologic functions, subsequently controlling CZ processes. We hypothesize that the interplay between micro-topographic variability and these emergent behaviors will shape complex responses of a range of ecosystem dynamics within the CZ. Here, we develop a modeling framework ('Dhara') that explicitly incorporates micro-topographic variability based on lidar topographic data, coupling multi-layer modeling of the soil-vegetation continuum with 3-D surface-subsurface transport processes to study ecological and biogeochemical dynamics. We further couple a C-N model with a physically based hydro-geomorphologic model to quantify (i) how topographic variability controls the spatial distribution of soil moisture, temperature, and biogeochemical processes, and (ii) how farming activities modify the interaction between soil erosion and soil organic carbon (SOC) dynamics. To address the intensive computational demand of high-resolution modeling at the lidar data scale, we use a hybrid CPU-GPU parallel computing architecture run on large supercomputing systems for simulations. Our findings indicate that rising CO2 concentration and air temperature have opposing effects on soil moisture, surface water and ponding in topographic depressions. Further, the relatively higher soil moisture and lower soil temperature contribute to decreased soil microbial activity in the low-lying areas due to anaerobic conditions and reduced temperatures. The decrease in microbially mediated processes reduces nitrification rates, resulting in relatively lower nitrate concentrations. Results from the geomorphologic model also suggest that soil erosion and deposition play a dominant role in SOC dynamics both above and below ground. In addition, tillage can change the amplitude and frequency of C-N oscillations. This work sheds light on developing practical means for reducing soil erosion and carbon loss when the landscape is affected by human activities.

  15. A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges

    PubMed Central

    Wang, Xu; Sun, Baitao

    2014-01-01

    Load combinations of earthquakes and heavy trucks are an important element of multihazard bridge design. Current load resistance factor design (LRFD) specifications usually treat extreme hazards separately and lack a probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with distinct characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castanheta model, which accounts for load duration and occurrence probability, describes well the conversion of random processes to random variables and their combination, but it imposes strict constraints on the selection of time intervals to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with the instantaneous (or mean) value of the other load, which appears more rational, but the results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castanheta model and Turkstra's rule. The modified model is based on conditional probability, which can convert random processes to random variables relatively easily and account for the non-maximum contribution in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
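
    For reference, Turkstra's rule as described in this abstract can be written in its standard form (notation ours, not reproduced from the paper):

        \[
        X_{\max} \;\approx\; \max_{i}\Big\{ \max_{t \in [0,T]} X_i(t) \;+\; \sum_{j \neq i} X_j(t^\ast) \Big\},
        \]

    where T is the service life, the leading load X_i attains its lifetime maximum, and X_j(t^\ast) is the instantaneous (or mean) value of each accompanying load at that time. Because the accompanying loads are never taken at their own maxima, the combined value can be underestimated, which is the unconservatism noted above.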

  16. Multi-scale coupled modelling of waves and currents on the Catalan shelf.

    NASA Astrophysics Data System (ADS)

    Grifoll, M.; Warner, J. C.; Espino, M.; Sánchez-Arcilla, A.

    2012-04-01

    Catalan shelf circulation is characterized by a background along-shelf flow to the southwest (including some meso-scale features) plus episodic storm-driven patterns. To investigate these dynamics, a coupled multi-scale modeling system is applied to the Catalan shelf (North-western Mediterranean Sea). The implementation consists of a set of increasing-resolution nested models, based on the circulation model ROMS and the wave model SWAN as part of the COAWST modeling system, covering the slope and shelf region (~1 km horizontal resolution) down to a local area around Barcelona city (~40 m). The system is initialized with MyOcean products in the coarsest outer domain, and uses atmospheric forcing from other sources for the increasing-resolution inner domains. Results of the finer-resolution domains exhibit improved agreement with observations relative to the coarser model results. Several hydrodynamic configurations were simulated to determine the dominant forcing mechanisms controlling coastal-scale processes. The numerical results reveal that the short-term (hours to days) inner-shelf variability is strongly influenced by local wind variability, while sea-level slope, baroclinic effects, radiation stresses and regional circulation constitute second-order processes. Additional analysis identifies the significance of shelf/slope exchange fluxes, river discharge and the effect of the spatial resolution of the atmospheric fluxes.

  17. Numerical analysis of the heating phase and densification mechanism in polymers selective laser melting process

    NASA Astrophysics Data System (ADS)

    Mokrane, Aoulaiche; Boutaous, M'hamed; Xin, Shihe

    2018-05-01

    The aim of this work is to model the SLS process at the scale of the part in a PA12 polymer powder bed. The powder bed is considered a continuous medium with homogenized properties, while the multiple physical phenomena occurring during the process are resolved and the influence of process parameters on the quality of the final product is studied. A thermal model based on an enthalpy approach is presented, with details on the multiphysical couplings that determine the thermal history (laser absorption, melting, coalescence, densification and volume shrinkage) and on the numerical implementation using the finite volume (FV) method. The simulations were carried out in 3D with an in-house FORTRAN code. After validation of the model against results from the literature, a parametric analysis is proposed. Original results, such as the densification process and the thermal history with the evolution of the material from the granular solid state to the homogeneous melted state, are discussed with regard to the physical phenomena involved.
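
    To illustrate the enthalpy approach mentioned above (a minimal 1D sketch of the general technique, not the authors' code; all material parameters are illustrative rather than PA12 values from the paper), the temperature field can be advanced by updating a volumetric enthalpy and recovering temperature through an enthalpy-temperature relation with a melting plateau:

        import numpy as np

        # Minimal 1D enthalpy-method conduction sketch (explicit scheme).
        # All parameters are illustrative, not values from the paper.
        L = 1e-3          # domain depth (m)
        N = 100           # grid cells
        dx = L / N
        rho, k, cp = 1000.0, 0.25, 2000.0   # density, conductivity, heat capacity
        Tm, Lf = 450.0, 1.0e5               # melting temperature (K), latent heat (J/kg)
        dt = 0.2 * rho * cp * dx**2 / k     # stable explicit time step

        T = np.full(N, 300.0)               # initial temperature (K)
        H = rho * cp * T                    # volumetric enthalpy (no melt initially)

        def temperature(H):
            """Invert the enthalpy-temperature relation with an isothermal melting plateau."""
            Hs = rho * cp * Tm              # enthalpy at the solidus
            Hl = Hs + rho * Lf              # enthalpy at the liquidus
            T = np.where(H < Hs, H / (rho * cp), Tm)                 # solid or melting
            return np.where(H > Hl, Tm + (H - Hl) / (rho * cp), T)   # liquid

        q_laser = 5e5                       # absorbed surface flux (W/m^2), illustrative
        for step in range(2000):
            flux = -k * np.diff(T) / dx     # conductive fluxes between neighboring cells
            dH = np.zeros(N)
            dH[:-1] -= flux / dx            # flux leaving each cell to the right
            dH[1:] += flux / dx             # flux entering each cell from the left
            dH[0] += q_laser / dx           # laser heating deposited in the surface cell
            H += dt * dH
            T = temperature(H)

    The plateau in the enthalpy-temperature relation is what lets a single state variable carry both sensible and latent heat, which is the main attraction of the approach for melting problems.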

  18. Brugga basin's TACD Model Adaptation to current GIS PCRaster 4.1

    NASA Astrophysics Data System (ADS)

    Lopez Rozo, Nicolas Antonio; Corzo Perez, Gerald Augusto; Santos Granados, Germán Ricardo

    2017-04-01

    The process-oriented catchment model TACD (Tracer-Aided Catchment model - Distributed) was developed for the Brugga Basin (Black Forest, Germany) with a modular structure in the Geographic Information System PCRaster Version 2, in order to dynamically model the natural processes of a complex basin, such as rainfall, air temperature, solar radiation, evapotranspiration and flow routing, among others. Further research and application of this model has been done, such as adapting it to other meso-scale basins and adding erosion processes to the hydrological model. However, the TACD model is computationally intensive, which makes it inefficient for large, finely discretized river basins. Moreover, the current version is not compatible with the latest PCRaster Version 4.1, which offers new capabilities on 64-bit hardware architectures, improvements in hydraulic calculations and map creation, and various bug fixes. The current work studied and adapted the TACD model to the latest GIS PCRaster Version 4.1. This was done by editing the original scripts and replacing deprecated functionalities without losing the correctness of the TACD model. The correctness of the adapted model was verified by using the original study case of the Brugga Basin and comparing the adapted model results with the original model results by Stefan Roser in 2001. Small differences were found, due to the fact that some hydraulic and hydrological routines have been optimized since version 2 of GIS PCRaster. Therefore, the hydraulic and hydrological processes are well represented. With this new working model, further research and development on current topics like uncertainty analysis, GCM downscaling techniques and spatio-temporal modelling are encouraged.
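
    For context, PCRaster 4.x dynamic models are typically written against its Python modelling framework. The following is a minimal sketch of the structure such a module takes (our illustration only, not TACD code; the map names and the degree-day snow example are hypothetical):

        from pcraster import setclone, scalar, ifthenelse, max
        from pcraster.framework import DynamicModel, DynamicFramework

        class SnowModule(DynamicModel):
            """Illustrative PCRaster 4.x dynamic-model skeleton (not the TACD code)."""
            def __init__(self):
                DynamicModel.__init__(self)
                setclone("clone.map")                 # hypothetical clone map

            def initial(self):
                self.snow = scalar(0.0)               # snow storage (mm)

            def dynamic(self):
                precip = self.readmap("precip")       # hypothetical per-timestep map stacks
                temp = self.readmap("temp")
                snowfall = ifthenelse(temp < scalar(0.0), precip, scalar(0.0))
                melt = max(scalar(0.0), scalar(2.0) * temp)   # degree-day melt, illustrative factor
                self.snow = max(self.snow + snowfall - melt, scalar(0.0))
                self.report(self.snow, "snow")        # writes snow0000.001, snow0000.002, ...

        DynamicFramework(SnowModule(), lastTimeStep=365, firstTimestep=1).run()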

  19. Modeling the VARTM Composite Manufacturing Process

    NASA Technical Reports Server (NTRS)

    Song, Xiao-Lan; Loos, Alfred C.; Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal

    2004-01-01

    A comprehensive simulation model of the Vacuum Assisted Resin Transfer Molding (VARTM) composite manufacturing process has been developed. For isothermal resin infiltration, the model incorporates submodels which describe cure of the resin, changes in resin viscosity due to cure, resin flow through the reinforcement preform and distribution medium, and compaction of the preform during infiltration. The accuracy of the model was validated by measuring the flow patterns during resin infiltration of flat preforms. The modeling software was used to evaluate the effects of the distribution medium on resin infiltration of a flat preform. Different distribution medium configurations were examined using the model and the results were compared with data collected during resin infiltration of a carbon fabric preform. The results of the simulations show that the approach used to model the distribution medium can significantly affect the predicted resin infiltration times. Resin infiltration into the preform can be accurately predicted only when the distribution medium is modeled correctly.

  20. Improvement of the model for surface process of tritium release from lithium oxide

    NASA Astrophysics Data System (ADS)

    Yamaki, Daiju; Iwamoto, Akira; Jitsukawa, Shiro

    2000-12-01

    Among the various tritium transport processes in lithium ceramics, the importance and the detailed mechanism of surface reactions remain to be elucidated. A dynamic adsorption and desorption model for tritium desorption from lithium ceramics, especially Li2O, was constructed. From the experimental results, it was considered that both H2 and H2O are dissociatively adsorbed on Li2O and generate OH- on the surface. In the first model, developed in 1994, it was assumed that the dissociative adsorption of either H2 or H2O on Li2O generates two OH- groups on the surface. However, recent calculation results show that the generation of one OH- and one H- is more stable than that of two OH- groups for the dissociative adsorption of H2. Therefore, the assumption of H2 adsorption and desorption in the first model was revised, and the tritium release behavior from the Li2O surface was evaluated again using the improved model. The tritium residence time on the Li2O surface was calculated using the improved model, and the results were compared with the experimental results. The calculation results using the improved model agree better with the experimental results than those using the first model.

  1. Mathematical Modeling of Decarburization in Levitated Fe-Cr-C Droplets

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Shi, Zhe; Yang, Yindong; Li, Donghui; Zhang, Guifang; McLean, Alexander; Chattopadhyay, Kinnor

    2018-04-01

    Using carbon dioxide to replace oxygen as an alternative oxidant gas has proven to be a viable solution in the decarburization process, with potential for industrial applications. In a recent study, the transport phenomena governing the carbon dioxide decarburization process were examined through the use of electromagnetic levitation (EML). CO2/CO mass transfer was found to be the principal rate-controlling step; as a result, gas diffusion has gained significant attention. In the present study, gas diffusion during the decarburization process was investigated using computational fluid dynamics (CFD) modeling coupled with chemical reactions. The resulting model was verified against experimental data from a published study and employed to provide insight into phenomena typically unobservable through experiments. Based on the results, a new correction of the Frössling equation was presented which better represents the mass transfer phenomena at the metal-gas interface within the range of this research.
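
    For reference, the classical Frössling correlation that the study corrects relates the Sherwood number to the Reynolds and Schmidt numbers (the corrected coefficients derived in the paper are not reproduced here):

        \[
        \mathrm{Sh} \;=\; 2 \;+\; 0.552\,\mathrm{Re}^{1/2}\,\mathrm{Sc}^{1/3},
        \]

    where the constant term 2 corresponds to pure diffusion around a sphere in a stagnant gas and the second term captures the convective enhancement of mass transfer at the droplet surface.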

  2. Modeling of Thermal Conductivity of CVI-Densified Composites at Fiber and Bundle Level

    PubMed Central

    Guan, Kang; Wu, Jianqing; Cheng, Laifei

    2016-01-01

    The evolution of the thermal conductivities of unidirectional, 2D woven and 3D braided composites during the CVI (chemical vapor infiltration) process has been numerically studied by the finite element method. The results show that the dual-scale pores play an important role in the thermal conduction of CVI-densified composites. Based on these results, two thermal conductivity models applicable to the CVI process have been developed. The sensitivity analysis demonstrates that the parameter with the most influence on the thermal conductivity of CVI-densified composites is the matrix-crack density, followed by the volume fraction of the bundle and the thermal conductance of the matrix cracks, and finally by the micro-porosity inside the bundles and the macro-porosity between the bundles. The obtained results are consistent with the reported data, so our models could be useful for designing the processing and performance of CVI-densified composites. PMID:28774130

  3. Forest Canopy Processes in a Regional Chemical Transport Model

    NASA Astrophysics Data System (ADS)

    Makar, Paul; Staebler, Ralf; Akingunola, Ayodeji; Zhang, Junhua; McLinden, Chris; Kharol, Shailesh; Moran, Michael; Robichaud, Alain; Zhang, Leiming; Stroud, Craig; Pabla, Balbir; Cheung, Philip

    2016-04-01

    Forest canopies have typically been absent or highly parameterized in regional chemical transport models. Some forest-related processes are often considered - for example, biogenic emissions from the forests are included as a flux lower boundary condition on vertical diffusion, as is deposition to vegetation. However, real forest canopies comprise a much more complicated set of processes, at scales below the "transport model-resolved scale" of the vertical levels usually employed in regional transport models. Advective and diffusive transport within the forest canopy typically scales with the height of the canopy, and the former process tends to dominate over the latter. Emissions of biogenic hydrocarbons arise from the foliage, which may be located tens of metres above the surface, while emissions of biogenic nitric oxide from decaying plant matter are located at the surface - in contrast to the surface flux boundary condition usually employed in chemical transport models. Deposition, similarly, is usually parameterized as a flux boundary condition, but may be differentiated between fluxes to vegetation and fluxes to the surface when the canopy scale is considered. The chemical environment also changes within forest canopies: shading, temperature, and relative humidity changes with height within the canopy may influence chemical reaction rates. These processes have been observed in a host of measurement studies, and have been simulated using site-specific one-dimensional forest canopy models. Their influence on regional-scale chemistry has been unknown, until now. In this work, we describe the results of the first attempt to include complex canopy processes within a regional chemical transport model (GEM-MACH). The original model core was subdivided into "canopy" and "non-canopy" subdomains. In the former, three additional near-surface layers based on spatially and seasonally varying satellite-derived canopy height and leaf area index were added to the original model structure. The process methodology for deposition, biogenic emissions, shading, vertical diffusion, advection, the chemical reactive environment and particle microphysics was modified to account for expected conditions within the forest canopy and the additional layers. The revised and original models were compared for a 10 km resolution domain covering North America, for a one-month simulation. The canopy processes were found to have a very significant impact on model results. We will present a comparison to network observations which suggests that forest canopy processes may account for previously unexplained local and regional biases in model ozone predictions noted in GEM-MACH and other models. The impact of the canopy processes on NO2, PM2.5, and SO2 performance will also be presented and discussed.

  4. How sensitive are estimates of carbon fixation in agricultural models to input data?

    PubMed Central

    2012-01-01

    Background Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess the effect of input dataset quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) model and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models respond with a congruent pattern to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when major input datasets are changed. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison. PMID:22296931

  5. Fault detection of Tennessee Eastman process based on topological features and SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Huiyang; Hu, Yanzhu; Ai, Xinbo; Hu, Yu; Meng, Zhen

    2018-03-01

    Fault detection in industrial processes is a popular research topic. Although the distributed control system (DCS) has been introduced to monitor the state of industrial processes, it still cannot satisfy all the requirements for fault detection across industrial systems. In this paper, we propose a novel method, based on topological features and the support vector machine (SVM), for fault detection in industrial processes. The proposed method takes the global information of the measured variables into account through a complex network model and predicts whether a system has generated faults using an SVM. The method can be divided into four steps: network construction, network analysis, model training and model testing. Finally, we apply the model to the Tennessee Eastman process (TEP). The results show that the method works well and can be a useful supplement for fault detection in industrial processes.
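
    A minimal sketch of this kind of pipeline follows (our illustration, not the authors' code; the correlation-threshold network construction and the particular feature set are assumptions):

        import numpy as np
        import networkx as nx
        from sklearn.svm import SVC

        def topological_features(window, threshold=0.7):
            """Build a correlation network over measured variables and extract
            simple topological features (assumed feature set, for illustration)."""
            corr = np.corrcoef(window.T)                  # variables x variables
            adj = (np.abs(corr) > threshold).astype(int)
            np.fill_diagonal(adj, 0)
            g = nx.from_numpy_array(adj)
            return [
                nx.density(g),
                nx.average_clustering(g),
                np.mean([d for _, d in g.degree()]),
            ]

        # windows: list of (samples x variables) arrays; labels: 0 = normal, 1 = faulty
        def train_detector(windows, labels):
            X = np.array([topological_features(w) for w in windows])
            clf = SVC(kernel="rbf", C=1.0, gamma="scale")
            clf.fit(X, labels)
            return clf

    The point of the network step is that the SVM then classifies the state of the whole variable-interaction structure rather than any single measured signal.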

  6. Absorptivity Measurements and Heat Source Modeling to Simulate Laser Cladding

    NASA Astrophysics Data System (ADS)

    Wirth, Florian; Eisenbarth, Daniel; Wegener, Konrad

    The laser cladding process gains importance, as it allows not only the application of surface coatings but also the additive manufacturing of three-dimensional parts. In both cases, process simulation can contribute to process optimization. Heat source modeling is one of the main issues for an accurate model and simulation of the laser cladding process. While the laser beam intensity distribution is readily known, the other two main contributions to the process heat input are non-trivial: the absorptivity of the applied materials and the powder attenuation. Therefore, calorimetry measurements were carried out. The measurement method and the measurement results for laser cladding of Stellite 6 on structural steel S 235 and for the processing of Inconel 625 are presented, both using a CO2 laser and a high power diode laser (HPDL). Additionally, a heat source model is deduced.

  7. The Bilingual Language Interaction Network for Comprehension of Speech*

    PubMed Central

    Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602

  8. Aircraft Flight Modeling During the Optimization of Gas Turbine Engine Working Process

    NASA Astrophysics Data System (ADS)

    Tkachenko, A. Yu; Kuz'michev, V. S.; Krupenich, I. N.

    2018-01-01

    The article describes a method for simulating the flight of an aircraft along a predetermined path, establishing a functional connection between the parameters of the working process of the gas turbine engine and the efficiency criteria of the aircraft. This connection is necessary for solving the optimization tasks of the conceptual design stage of the engine according to the systems approach. The engine thrust level, in turn, influences the operation of the aircraft, making accurate simulation of the aircraft behavior during flight necessary for obtaining a correct solution. The described mathematical model of aircraft flight provides the functional connection between the airframe characteristics, the working process of the gas turbine engines (propulsion system), ambient and flight conditions, and flight profile features. This model provides accurate results of flight simulation and the resulting aircraft efficiency criteria required for optimization of the working process and control function of a gas turbine engine.

  9. Gaia DR2 documentation Chapter 3: Astrometry

    NASA Astrophysics Data System (ADS)

    Hobbs, D.; Lindegren, L.; Bastian, U.; Klioner, S.; Butkevich, A.; Stephenson, C.; Hernandez, J.; Lammers, U.; Bombrun, A.; Mignard, F.; Altmann, M.; Davidson, M.; de Bruijne, J. H. J.; Fernández-Hernández, J.; Siddiqui, H.; Utrilla Molina, E.

    2018-04-01

    This chapter of the Gaia DR2 documentation describes the models and processing steps used for the astrometric core solution, namely the Astrometric Global Iterative Solution (AGIS). The inputs to this solution rely heavily on the basic observables (or astrometric elementaries), which were pre-processed and discussed in Chapter 2 and whose results were published in Fabricius et al. (2016). The models comprise reference systems and time scales; assumed linear stellar motion and relativistic light deflection; and fundamental constants and coordinate system transformations. Higher-level inputs such as planetary and solar system ephemerides, Gaia tracking and orbit information, initial quasar catalogues and BAM data are all needed for the processing described here. The astrometric calibration models are outlined, followed by the detailed processing steps which give AGIS its name. We also present a basic quality assessment and validation of the scientific results (for details, see Lindegren et al. 2018).

  10. Better models are more effectively connected models

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity can be represented in models: either by allowing it to emerge from model behaviour or by parameterizing it inside model structures; and on the appropriate scale at which processes should be represented explicitly or implicitly. It will also explore how modellers themselves approach connectivity through the results of a community survey. Finally, it will present the outline of an international modelling exercise aimed at assessing how different modelling concepts can capture connectivity in real catchments.

  11. Standard model light-by-light scattering in SANC: Analytic and numeric evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardin, D. Yu., E-mail: bardin@nu.jinr.ru; Kalinovskaya, L. V., E-mail: kalinov@nu.jinr.ru; Uglov, E. D., E-mail: corner@nu.jinr.r

    2010-11-15

    The implementation of the Standard Model process γγ → γγ through fermion and boson loops into the framework of the SANC system, together with the additional precomputation modules used for the calculation of massive box diagrams, is described. The computation of this process takes into account the nonzero mass of the loop particles. The covariant and helicity amplitudes for this process, some particular cases of the D₀ and C₀ Passarino-Veltman functions, and numerical results of the corresponding SANC module evaluation are presented. Whenever possible, the results are compared with those existing in the literature.

  12. Numerical simulation and experimental study on farmland nitrogen loss to surface runoff in a raindrop driven process

    NASA Astrophysics Data System (ADS)

    Li, Jiayun; Tong, Juxiu; Xia, Chuanan; Hu, Bill X.; Zhu, Hao; Yang, Rui; Wei, Wenshuo

    2017-06-01

    It has been widely recognized that surface runoff from agricultural fields is an important non-point pollution source. However, the amount of chemical transfer in this process is very difficult to quantify in the field, since some variables and natural factors are hard to control, such as rainfall intensity, temperature, wind speed and soil spatial heterogeneity, which may significantly affect the experimental results. Therefore, a physically based nitrogen transport model was developed and tested in so-called semi-field experiments (i.e., artificial rainfall was used instead of natural rainfall, but other conditions were natural) in this paper. Our model integrates the raindrop-driven process and the diffusion effect with simplified nitrogen chain reactions. In this model, chemicals in the soil surface layer, or the 'exchange layer', are transferred into the surface runoff layer by raindrop impact. The raindrops also play a significant role in the diffusion process between the exchange layer and the underlying soil. The mathematical model was solved numerically using modified Hydrus-1d source code, and the model simulations agreed well with the experimental data. The modeling results indicate that the depth of the exchange layer and the raindrop-induced water transfer rate are two important parameters for the simulation results. Variation of the water transfer rate, er, can strongly influence the peak values of the NO3⁻-N and NH4⁺-N concentration breakthrough curves. The concentration of NO3⁻-N is more sensitive to the exchange layer depth, de, than that of NH4⁺-N. In general, the developed model describes well the nitrogen loss into surface runoff in a raindrop-driven process. Since the raindrop splash erosion process may aggravate the loss of chemical fertilizer, choosing an appropriate fertilization time and application method is very important to prevent pollution.

  13. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
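
    As a schematic of this kind of automated model-to-test correlation (our illustration with hypothetical parameters and data, not the JWST toolchain), the tuning step can be cast as a bounded least-squares minimization over the model inputs:

        import numpy as np
        from scipy.optimize import least_squares

        # Hypothetical reduced thermal model: predicted node temperatures as a
        # function of tunable parameters (e.g., contact conductance, emissivity).
        def thermal_model(params, environment):
            g_contact, emissivity = params
            # Placeholder physics: any model mapping parameters -> temperatures.
            return environment * (1.0 + 0.1 * g_contact) - 5.0 * emissivity

        test_environment = np.array([100.0, 120.0, 140.0])   # hypothetical test cases (K)
        test_data = np.array([108.0, 129.0, 151.0])          # measured temperatures (K)

        def residuals(params):
            """Discrepancy between model predictions and thermal test data."""
            return thermal_model(params, test_environment) - test_data

        # A bounded least-squares search replaces manual tuning with an automated
        # minimization of the model-test discrepancy.
        fit = least_squares(residuals, x0=[1.0, 0.5], bounds=([0.0, 0.0], [10.0, 1.0]))
        print(fit.x, fit.cost)

    The precomputed model sensitivities mentioned in the abstract correspond to the Jacobian of these residuals with respect to the parameters, which such solvers use to drive the search.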

  14. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  15. Web Based Semi-automatic Scientific Validation of Models of the Corona and Inner Heliosphere

    NASA Astrophysics Data System (ADS)

    MacNeice, P. J.; Chulaki, A.; Taktakishvili, A.; Kuznetsova, M. M.

    2013-12-01

    Validation is a critical step in preparing models of the corona and inner heliosphere for future roles supporting either or both the scientific research community and the operational space weather forecasting community. Validation of forecasting quality tends to focus on a short list of key features in the model solutions, with an unchanging order of priority. Scientific validation exposes a much larger range of physical processes and features, and as the models evolve to better represent features of interest, the research community tends to shift its focus to other areas which are less well understood and modeled. Given the more comprehensive and dynamic nature of scientific validation, and the limited resources available to the community to pursue it, it is imperative that the community establish a semi-automated process which engages the model developers directly in an ongoing and evolving validation process. In this presentation we describe the ongoing design and development of a web-based facility to enable this type of validation of models of the corona and inner heliosphere, report on the growing list of model results being generated, and discuss strategies we have been developing to account for model results that incorporate adaptively refined numerical grids.

  16. Online Deviation Detection for Medical Processes

    PubMed Central

    Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.

    2014-01-01

    Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343

  17. Value of the distant future: Model-independent results

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.

    2017-01-01

    This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. Obtained analytical results allowing simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
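
    To make the setting concrete (notation ours; the paper's exact expressions are not reproduced), the long-run discount factor and the assumed exponentially decaying memory in rate fluctuations can be written as

        \[
        D(t) \;=\; \mathbb{E}\!\left[\exp\!\Big(-\!\int_0^t r(s)\,\mathrm{d}s\Big)\right],
        \qquad
        \operatorname{Cov}\big[\delta r(s),\,\delta r(s')\big] \;=\; \sigma^2 e^{-|s-s'|/\tau},
        \]

    so that for horizons t much longer than the correlation time τ the effective discount rate -ln D(t)/t falls below the mean short rate by a term of order σ²τ, which is the mechanism behind the declining long-term tails of the discount curve described above.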

  18. Factors Involved in Juveniles' Decisions about Crime.

    ERIC Educational Resources Information Center

    Cimler, Edward; Beach, Lee Roy

    1981-01-01

    Investigated whether delinquency is the result of a rational decision. The Subjective Expected Utility (SEU) model from decision theory was used with male juvenile offenders (N=45) as the model of the decision process. Results showed that the SEU model predicted 62.7 percent of the subjects' decisions. (Author/RC)

  19. Improved workflow modelling using role activity diagram-based modelling with application to a radiology service case study.

    PubMed

    Shukla, Nagesh; Keast, John E; Ceglarek, Darek

    2014-10-01

    The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most of the workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and, (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Improved simulation of regional CO2 surface concentrations using GEOS-Chem and fluxes from VEGAS

    NASA Astrophysics Data System (ADS)

    Chen, Z. H.; Zhu, J.; Zeng, N.

    2013-08-01

    CO2 measurements have been combined with simulated CO2 distributions from a transport model in order to produce optimal estimates of CO2 surface fluxes in inverse modeling. However, one persistent problem in using model-observation comparisons for this goal relates to the issue of compatibility. Observations at a single station reflect all underlying processes at various scales. These processes usually cannot be fully resolved by model simulations at the grid points nearest the station, due to a lack of spatial or temporal resolution or missing processes in the model. In this study the stations in each region were grouped based on the amplitude and phase of the seasonal cycle at each station. The regionally averaged CO2 over all stations in a region represents the regional CO2 concentration of that region. The regional CO2 concentrations from model simulations and observations were used to evaluate the regional model results. The difference in regional CO2 concentration between observations and modeled results reflects the uncertainty in the large-scale flux of the region containing the grouped stations. We compared the regional CO2 concentrations between model results with biospheric fluxes from the Carnegie-Ames-Stanford Approach (CASA) and VEgetation-Global-Atmosphere-Soil (VEGAS) models, and used observations from GLOBALVIEW-CO2 to evaluate the regional model results. The results show that the largest difference of the regionally averaged values between simulations with fluxes from VEGAS and observations is less than 5 ppm for the North American boreal, North American temperate, Eurasian boreal, Eurasian temperate and European regions, which is smaller than the largest difference between CASA simulations and observations (more than 5 ppm). There is still a large difference between both model results and observations for the regional CO2 concentration in the North Atlantic, Indian Ocean, and South Pacific tropics. The regionally averaged CO2 concentrations will be helpful for comparing modeled and observed CO2 concentrations and for evaluating regional surface fluxes from different methods.

  1. Non-equilibrium synergistic effects in atmospheric pressure plasmas.

    PubMed

    Guo, Heng; Zhang, Xiao-Ning; Chen, Jian; Li, He-Ping; Ostrikov, Kostya Ken

    2018-03-19

    Non-equilibrium is one of the important features of atmospheric gas discharge plasmas. It involves complicated physical-chemical processes and plays a key role in practical plasma processing. In this report, a novel complete non-equilibrium model is developed to reveal the non-equilibrium synergistic effects in atmospheric-pressure low-temperature plasmas (AP-LTPs). It combines a thermal-chemical non-equilibrium fluid model for the quasi-neutral plasma region with a simplified sheath model for the electrode sheath region. The free-burning argon arc is selected as a model system because both electrical-thermal-chemical equilibrium and non-equilibrium regions are involved simultaneously in this arc plasma system. The modeling results indicate for the first time that it is the strong and synergistic interactions among the mass, momentum and energy transfer processes that determine the self-consistent non-equilibrium characteristics of the AP-LTPs. An energy transfer process related to the non-uniform spatial distribution of the electron-to-heavy-particle temperature ratio has also been discovered for the first time. It has a significant influence on self-consistently predicting the transition region between the "hot" and "cold" equilibrium regions of an AP-LTP system. The modeling results provide instructive guidance for predicting and possibly controlling the non-equilibrium particle-energy transport processes in various AP-LTPs in the future.

  2. Analyzing the Impact of a Data Analysis Process to Improve Instruction Using a Collaborative Model

    ERIC Educational Resources Information Center

    Good, Rebecca B.

    2006-01-01

    The Data Collaborative Model (DCM) assembles assessment literacy, reflective practices, and professional development into a four-component process. The sub-components include assessing students, reflecting over data, professional dialogue, professional development for the teachers, interventions for students based on data results, and re-assessing…

  3. Spatial perspectives in state-and-transition models: A missing link to land management?

    USDA-ARS?s Scientific Manuscript database

    Conceptual models of alternative states and thresholds are based largely on observations of ecosystem processes at a few points in space. Because the distribution of alternative states in spatially-structured ecosystems is the result of variations in pattern-process interactions at different scales,...

  4. A numerical study of zone-melting process for the thermoelectric material of Bi2Te3

    NASA Astrophysics Data System (ADS)

    Chen, W. C.; Wu, Y. C.; Hwang, W. S.; Hsieh, H. L.; Huang, J. Y.; Huang, T. K.

    2015-06-01

    In this study, a numerical model was established using the commercial software package ProCAST to simulate the temperature variation/distribution and the subsequent microstructure of Bi2Te3 fabricated by the zone-melting technique. An experiment was then conducted to measure the temperature variation/distribution during the zone-melting process in order to validate the numerical system, and the effects of processing parameters such as heater moving speed and heater temperature on the crystallization microstructure were numerically evaluated. In the experiment, the Bi2Te3 powder is filled into a 30 mm diameter quartz cylinder and the heater is set to 800°C with a moving speed of 12.5 mm/hr. A thermocouple is inserted in the Bi2Te3 powder to measure the temperature variation/distribution of the zone-melting process. The temperature variation/distribution measured by experiment is compared to the results of the numerical simulation, and the comparison shows that the model and the experiment are well matched. The model is then used to evaluate crystal formation for the 30 mm diameter Bi2Te3 process; it is found that columnar crystals are obtained when the moving speed is slower than 17.5 mm/hr. Finally, the model is used to predict crystal formation in the zone-melting process for Bi2Te3 with a 45 mm diameter. The results show that it is difficult to grow columnar crystals when the diameter reaches 45 mm.

  5. Modeling of Ti-W Solidification Microstructures Under Additive Manufacturing Conditions

    NASA Astrophysics Data System (ADS)

    Rolchigo, Matthew R.; Mendoza, Michael Y.; Samimi, Peyman; Brice, David A.; Martin, Brian; Collins, Peter C.; LeSar, Richard

    2017-07-01

    Additive manufacturing (AM) processes have many benefits for the fabrication of alloy parts, including the potential for greater microstructural control and targeted properties than traditional metallurgy processes. To accelerate utilization of this process to produce such parts, an effective computational modeling approach to identify the relationships between material and process parameters, microstructure, and part properties is essential. Development of such a model requires accounting for the many factors in play during this process, including laser absorption, material addition and melting, fluid flow, various modes of heat transport, and solidification. In this paper, we start with a more modest goal, to create a multiscale model for a specific AM process, Laser Engineered Net Shaping (LENS™), which couples a continuum-level description of a simplified beam melting problem (coupling heat absorption, heat transport, and fluid flow) with a Lattice Boltzmann-cellular automata (LB-CA) microscale model of combined fluid flow, solute transport, and solidification. We apply this model to a binary Ti-5.5 wt pct W alloy and compare calculated quantities, such as dendrite arm spacing, with experimental results reported in a companion paper.

  6. A microscopic lane changing process model for multilane traffic

    NASA Astrophysics Data System (ADS)

    Lv, Wei; Song, Wei-guo; Liu, Xiao-dong; Ma, Jian

    2013-03-01

    In previous simulations, lane-changing behavior is usually assumed to be an instantaneous action. However, in real traffic, lane changing is a continuous process which can seriously affect the following cars. In this paper, a microscopic lane-changing process (LCP) model is clearly described. A new idea of reducing the lane-changing process to the car-following framework by controlling fictitious cars is presented. To verify the model, results for flow, lane-changing frequency and single-car velocity are extracted from experimental observations and compared with the corresponding simulations. It is found that the LCP model agrees well with actual traffic flow and that lane-changing behavior may induce a 12%-18% reduction of traffic flow. The results also suggest that most of the drivers on the two studied urban roads are conservative rather than aggressive in changing lanes. Investigation of lane-changing frequency shows that the largest lane-changing frequency occurs at a medium density range from 15 vehs/(km lane) to 35 vehs/(km lane). It also implies that the lane-changing process might strengthen velocity variation at medium density and weaken velocity variation at high density. It is hoped that the idea of this study may be helpful in promoting the modeling and simulation of traffic flow.

  7. Efficient development and processing of thermal math models of very large space truss structures

    NASA Technical Reports Server (NTRS)

    Warren, Andrew H.; Arelt, Joseph E.; Lalicata, Anthony L.

    1993-01-01

    As a spacecraft moves along its orbit, the truss members are subjected to direct and reflected solar, albedo and planetary infrared (IR) heating, as well as IR heating and shadowing from other spacecraft components. This is a transient process with continuously changing heating loads and shadowing effects. The resulting nonuniform temperature distribution may cause nonuniform thermal expansion, deflection and stress in the truss elements, truss warping and thermal distortions. There are three challenges in the thermal-structural analysis of large truss structures: developing the thermal and structural math models, processing the models, and transferring data between the models. All three tasks require considerable time and computer resources because of the very large number of components involved. To address these challenges, a series of techniques for automated thermal math modeling and efficient processing of very large space truss structures were developed. In the process, the finite element and finite difference methods are interfaced. A very substantial reduction in the quantity of computations was achieved while assuring the desired accuracy of the results. The techniques are illustrated on the thermal analysis of a segment of the Space Station main truss.

  8. Elucidation of Heterogeneous Processes Controlling Boost Phase Signatures

    DTIC Science & Technology

    1990-09-12

    A three-year research program to develop efficient theoretical methods to study collisional processes involved in radiative signature modeling. ... For strategic defense, it is important to be able to effectively model radiative signatures arising from ... Thus our computational work was on problems or models for which exact results for making comparisons were available. Our key validations were ...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J.; Moon, T.J.; Howell, J.R.

    This paper presents an analysis of the heat transfer occurring during an in-situ curing process in which infrared energy is applied to the surface of a polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites undergo an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made, leading to a two-dimensional thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables, such as infrared energy input and winding velocity, to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using the regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.

  10. A model of the human observer and decision maker

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1981-01-01

    The decision process is described in terms of classical sequential decision theory by considering the hypothesis that an abnormal condition has occurred by means of a generalized likelihood ratio test. For this, a sufficient statistic is provided by the innovation sequence, which is the result of the perception and information processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model for single-variable failure detection tasks resulted in a very good fit to the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
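
    For orientation, a sequential test of this kind (written here in generic SPRT form; notation ours, not from the report) accumulates the log-likelihood ratio of the innovations ν_k under the failed and normal hypotheses and decides when it crosses a threshold:

        \[
        \Lambda_n \;=\; \sum_{k=1}^{n} \ln \frac{p(\nu_k \mid H_1)}{p(\nu_k \mid H_0)},
        \qquad
        \text{decide } H_1 \text{ if } \Lambda_n \ge \ln\frac{1-\beta}{\alpha},
        \quad
        \text{decide } H_0 \text{ if } \Lambda_n \le \ln\frac{\beta}{1-\alpha},
        \]

    where α and β are the admissible false-alarm and miss probabilities. The thresholds set the speed/accuracy trade-off: tighter error probabilities push the decision boundaries apart and lengthen the average time to a decision.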

  11. An extended car-following model considering the acceleration derivative in some typical traffic environments

    NASA Astrophysics Data System (ADS)

    Zhou, Tong; Chen, Dong; Liu, Weining

    2018-03-01

    Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed by considering the derivative of the vehicle's acceleration. The stability condition is given by applying control theory. Considering some typical traffic environments, the results of theoretical analysis and numerical simulation show that the extended model reproduces the acceleration of a string of vehicles more realistically than previous models in the starting process, the stopping process and sudden braking. Meanwhile, traffic jams occur more easily when the coefficient of the vehicle's acceleration derivative increases, as shown by the space-time evolution of the flow. The results confirm that the vehicle's acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.
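
    As a sketch of this model class (a generic full-velocity-difference form with added acceleration terms; the exact formulation and coefficients of the paper are not reproduced here), the acceleration of vehicle n following vehicle n-1 can be written as

        \[
        a_n(t) \;=\; \kappa\big[V(\Delta x_n(t)) - v_n(t)\big] \;+\; \lambda\,\Delta v_n(t) \;+\; \gamma\,a_{n-1}(t) \;+\; \mu\,\dot a_{n-1}(t),
        \]

    where V(·) is the optimal velocity function, Δx_n and Δv_n are the headway and velocity difference to the preceding vehicle, and γ and μ weight the preceding vehicle's acceleration and its derivative, the last term being the kind of extension studied here.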

  12. Reduced-order model for dynamic optimization of pressure swing adsorption processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, A.; Biegler, L.; Zitney, S.

    2007-01-01

    Over the past decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas and liquid separation techniques, especially for high-purity hydrogen purification from refinery gases. The separation processes are based on solid-gas equilibrium and operate under periodic transient conditions. Models for PSA processes are therefore multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep concentration and temperature fronts moving with time. As a result, the optimization of such systems for either design or operation represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. Initially, a representative ensemble of solutions of the dynamic PDE system is constructed by solving a higher-order discretization of the model using the method of lines, a two-stage approach that discretizes the PDEs in space and then integrates the resulting DAEs over time. Next, the ROM method applies the Karhunen-Loeve expansion to derive a small set of empirical eigenfunctions (POD modes), which are used as basis functions within a Galerkin projection framework to derive a low-order DAE system that accurately describes the dominant dynamics of the PDE system. The proposed method leads to a DAE system of significantly lower order, replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the number of states has been achieved. The gas-phase mole fraction, solid-state loading and temperature profiles from the low-order ROM and from the high-order simulations have been compared. Moreover, the profiles for a different set of inputs and parameter values fed to the same ROM were compared with the accurate profiles from the high-order simulations. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization. Moreover, deviations of the ROM for different sets of inputs and parameters suggest that a recalibration of the model is required for optimization studies. Results for these will also be presented.
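
    The POD step described above is, in essence, a singular value decomposition of a snapshot matrix followed by Galerkin projection. A minimal sketch of that pipeline (our illustration on a generic linear system, not the PSA model) is:

        import numpy as np

        # Snapshot matrix: each column is the high-order state at one time instant,
        # collected from a full-order simulation (random data stands in for it here).
        n_states, n_snapshots = 2000, 200
        X = np.random.rand(n_states, n_snapshots)

        # POD modes = left singular vectors; keep enough modes to capture ~99.9%
        # of the snapshot "energy" (squared singular values).
        U, s, _ = np.linalg.svd(X, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1
        Phi = U[:, :r]                      # reduced basis, n_states x r

        # Galerkin projection of a full-order linear model dx/dt = A x + b:
        A = -np.eye(n_states)               # placeholder full-order operator
        b = np.ones(n_states)
        A_r = Phi.T @ A @ Phi               # r x r reduced operator
        b_r = Phi.T @ b                     # reduced forcing

        # Integrate the reduced system and lift back to the full space.
        x_r = np.zeros(r)
        dt = 1e-2
        for _ in range(100):
            x_r = x_r + dt * (A_r @ x_r + b_r)
        x_approx = Phi @ x_r                # approximate full-order state

    In the PSA setting the projected operator is nonlinear and step-dependent, which is why the study builds separate ROMs with separate mode sets for each operating step.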

  13. An Overview of NASA's IM&S Verification and Validation Process Plan and Specification for Space Exploration

    NASA Technical Reports Server (NTRS)

    Gravitz, Robert M.; Hale, Joseph

    2006-01-01

    NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers with information on a model's fidelity, credibility, and quality. This information will allow decision-makers to understand the risks involved in using a model's results in the decision-making process. This presentation will discuss NASA's approach to verification and validation (V&V) of models and simulations supporting space exploration, describing NASA's V&V process and the associated M&S V&V activities required to support decision-making. The M&S V&V Plan and V&V Report templates for ESMD will also be illustrated.

  14. Modeling of acetone biofiltration process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsiu-Mu Tang; Shyh-Jye Hwang; Wen-Chuan Wang

    1996-12-31

    The objective of this research was to investigate the kinetic behavior of the biofiltration process for the removal of acetone, which was used as a model compound for highly water-soluble gas pollutants. A mathematical model was developed by taking into account diffusion and biodegradation of acetone and oxygen in the biofilm, mass transfer resistance in the gas film, and the flow pattern of the bulk gas phase. The simulated results obtained by the proposed model indicated that mass transfer resistance in the gas phase was negligible for this biofiltration process. Analysis of the relative importance of the various rate steps indicated that the overall acetone removal process was primarily limited by the oxygen diffusion rate. 11 refs., 6 figs., 1 tab.
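
    The competition between diffusion and reaction inside the biofilm can be made concrete with a standard simplification: collapsing the kinetics to first order in a flat slab, for which the Thiele modulus and effectiveness factor have closed forms. This is a sketch under assumed constants, not the paper's coupled acetone/oxygen Monod model.

    ```python
    import numpy as np

    # Standard flat-slab biofilm analysis with hypothetical first-order kinetics,
    # a simplification of the paper's coupled acetone/oxygen Monod model.
    D = 2.0e-9    # effective diffusivity in the biofilm, m^2/s (assumed)
    k = 5.0       # first-order degradation rate constant, 1/s (assumed)
    L = 100e-6    # biofilm thickness, m (assumed)

    phi = L * np.sqrt(k / D)       # Thiele modulus: reaction rate vs. diffusion rate
    eta = np.tanh(phi) / phi       # effectiveness factor of the biofilm

    # Dimensionless concentration profile S(z)/S_i across the biofilm, with a
    # zero-flux condition at the attachment surface (z = L).
    z = np.linspace(0.0, L, 50)
    profile = np.cosh(phi * (1.0 - z / L)) / np.cosh(phi)

    print(f"Thiele modulus = {phi:.2f}, effectiveness factor = {eta:.2f}")
    # phi well above 1 indicates diffusion-limited removal, mirroring the paper's
    # conclusion that oxygen diffusion limits the overall process.
    ```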

  15. Modeling laser velocimeter signals as triply stochastic Poisson processes

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1976-01-01

    Previous models of laser Doppler velocimeter (LDV) systems have not adequately described dual-scatter signals in a manner useful for analysis and simulation of low-level photon-limited signals. At low photon rates, an LDV signal at the output of a photomultiplier tube is a compound nonhomogeneous filtered Poisson process, whose intensity function is another (slower) Poisson process with the nonstationary rate and frequency parameters controlled by a random flow (slowest) process. In the present paper, generalized Poisson shot noise models are developed for low-level LDV signals. Theoretical results useful in detection error analysis and simulation are presented, along with measurements of burst amplitude statistics. Computer generated simulations illustrate the difference between Gaussian and Poisson models of low-level signals.
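
    One layer of this hierarchy is easy to simulate. The sketch below draws a nonhomogeneous Poisson photon stream whose rate follows a single Doppler burst, using Lewis-Shedler thinning; all burst parameters are made up. For the triply stochastic case, the Doppler frequency f_d would itself be drawn from a random flow-velocity distribution before each burst is generated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Doppler burst intensity: Gaussian envelope times a fringe modulation.
    # All parameter values are illustrative, not taken from the paper.
    f_d, t0, sigma, peak_rate = 1.0e6, 50e-6, 10e-6, 5.0e7  # Hz, s, s, photons/s

    def intensity(t):
        """Instantaneous photon rate of a single dual-scatter burst."""
        envelope = np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
        return peak_rate * envelope * (1 + np.cos(2 * np.pi * f_d * t)) / 2

    # Lewis-Shedler thinning for a nonhomogeneous Poisson photon stream.
    T, lam_max = 100e-6, peak_rate   # intensity(t) never exceeds lam_max
    t, arrivals = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t >= T:
            break
        if rng.uniform() < intensity(t) / lam_max:
            arrivals.append(t)

    print(f"{len(arrivals)} photon events in {T * 1e6:.0f} microseconds")
    ```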

  16. Multiobjective optimization and multivariable control of the beer fermentation process with the use of evolutionary algorithms.

    PubMed

    Andrés-Toro, B; Girón-Sierra, J M; Fernández-Blanco, P; López-Orozco, J A; Besada-Portas, E

    2004-04-01

    This paper describes empirical research on the modeling, optimization, and supervisory control of beer fermentation. Conditions in the laboratory were made as similar as possible to brewery industry conditions. Since mathematical models that consider realistic industrial conditions were not available, a new mathematical model incorporating industrial conditions was first developed. Batch fermentations are multiobjective dynamic processes that must be guided along optimal paths to obtain good results. The paper describes a direct way to apply a Pareto-set approach with multiobjective evolutionary algorithms (MOEAs), and optimal ways to drive these processes were successfully found. Once obtained, the mathematical fermentation model was used to optimize the fermentation process through rule-based intelligent control.
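
    To give a flavor of the Pareto-set approach, here is a minimal (mu + lambda) multiobjective evolutionary loop with nondominated filtering. The temperature-profile encoding and both objective proxies are invented stand-ins, not the paper's fermentation model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented stand-ins: the decision variable is a 24-point temperature profile;
    # the two objectives crudely proxy ethanol yield (maximized via its negative)
    # and off-flavour production (minimized). Not the paper's fermentation model.
    def objectives(profile):
        ethanol = np.sum(np.clip(profile - 8.0, 0.0, 6.0))        # yield proxy
        off_flavours = np.sum(np.maximum(profile - 14.0, 0.0))    # penalty proxy
        return np.array([-ethanol, off_flavours])                 # both minimized

    def dominates(a, b):
        """Pareto dominance: a is no worse everywhere and better somewhere."""
        return np.all(a <= b) and np.any(a < b)

    pop = rng.uniform(8.0, 16.0, size=(60, 24))       # initial random profiles
    for _ in range(100):                              # simple (mu + lambda) MOEA
        children = np.clip(pop + rng.normal(0.0, 0.3, pop.shape), 8.0, 16.0)
        merged = np.vstack([pop, children])
        scores = np.array([objectives(p) for p in merged])
        front = [i for i in range(len(merged))
                 if not any(dominates(scores[j], scores[i]) for j in range(len(merged)))]
        if len(front) >= 60:
            keep = front[:60]
        else:                                         # pad with random dominated ones
            rest = [i for i in range(len(merged)) if i not in front]
            keep = front + list(rng.choice(rest, 60 - len(front), replace=False))
        pop = merged[np.array(keep)]

    print("nondominated profiles kept:", min(len(front), 60))
    ```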

  17. Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.

    PubMed

    Hao, J T; Li, M L; Tang, F L

    2008-01-01

    Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery, and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. First, to obtain a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Second, for the binary segmentation, an improved Iterated Conditional Modes (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm obtains more satisfactory segmentation results while requiring less processing time than conventional approaches.
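
    The core ICM update is simple enough to sketch. Below is a toy two-class version on a synthetic bright-vessel image: a per-class Gaussian observation model plus a Potts smoothness prior, with synchronous label updates for brevity (sequential or checkerboard sweeps converge more reliably). Neither the paper's localized observation model nor its accelerated ICM variant is reproduced, and all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic TOF-MRA-like slice: a bright, curved "vessel" on noisy background.
    img = rng.normal(0.2, 0.1, (64, 64))
    rows = np.arange(64)
    img[rows, (20 + 10 * np.sin(rows / 8)).astype(int)] += 0.7

    # Two-class Gaussian observation model plus a Potts smoothness prior;
    # ICM greedily picks the label minimizing the local posterior energy.
    means, var, beta = np.array([0.2, 0.9]), 0.02, 1.5
    labels = (img > 0.5).astype(int)                  # initial guess by thresholding

    def same_label_neighbours(lab, k):
        """Count of 4-neighbours carrying label k (np.roll wraps at the border)."""
        count = np.zeros(lab.shape)
        for ax, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
            count += (np.roll(lab, shift, axis=ax) == k)
        return count

    for _ in range(10):                               # ICM sweeps
        energies = []
        for k in (0, 1):
            data_term = (img - means[k]) ** 2 / (2 * var)
            prior_term = -beta * same_label_neighbours(labels, k)
            energies.append(data_term + prior_term)
        # Synchronous update for brevity; sequential sweeps converge more reliably.
        labels = np.argmin(np.stack(energies), axis=0)

    print("vessel pixels:", int(labels.sum()))
    ```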

  18. Optimizing pulsed Nd:YAG laser beam welding process parameters to attain maximum ultimate tensile strength for thin AISI316L sheet using response surface methodology and simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Torabi, Amir; Kolahan, Farhad

    2018-07-01

    Pulsed laser welding is a powerful technique especially suitable for joining thin sheet metals. In this study, based on experimental data, pulsed laser welding of thin AISI 316L austenitic stainless steel sheet has been modeled and optimized. The experimental data required for modeling were gathered according to a Central Composite Design matrix in Response Surface Methodology (RSM) with full replication of 31 runs. Ultimate Tensile Strength (UTS) is considered the main quality measure in laser welding. The important process parameters, including peak power, pulse duration, pulse frequency, and welding speed, are selected as input process parameters. The relation between the input parameters and the output response is established via a full quadratic response surface regression at a 95% confidence level. The adequacy of the regression model was verified using Analysis of Variance. The main effects of each factor and its interactions with the other factors were analyzed graphically in contour and surface plots. Next, to maximize joint UTS, the best combinations of parameter levels were determined using RSM. Moreover, the mathematical model is embedded in a Simulated Annealing (SA) optimization algorithm to determine the optimal values of the process parameters. The results obtained by the SA and RSM optimization techniques are in good agreement. The optimal settings of 1800 W peak power, 4.5 ms pulse duration, 4.2 Hz frequency, and 0.5 mm/s welding speed yield a welded joint with 96% of the base-metal UTS. Computational results clearly demonstrate that the proposed modeling and optimization procedures perform well for the pulsed laser welding process.
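
    The SA stage can be sketched independently of the fitted regression. In the snippet below, a hypothetical concave quadratic stands in for the paper's response surface model of UTS; the annealing loop itself (random perturbation, Metropolis acceptance, geometric cooling) is the generic technique. The bounds, coefficients, and assumed optimum location are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical concave quadratic standing in for the fitted RSM model of UTS.
    # x = [peak power (W), pulse duration (ms), frequency (Hz), speed (mm/s)];
    # bounds, coefficients, and the optimum location are all invented.
    lo = np.array([1200.0, 3.0, 2.0, 0.3])
    hi = np.array([2000.0, 6.0, 6.0, 1.0])

    def uts(x):
        z = (x - lo) / (hi - lo)                      # scale parameters to [0, 1]
        opt = np.array([0.75, 0.5, 0.55, 0.29])       # assumed optimum (scaled)
        return 600.0 - 400.0 * np.sum((z - opt) ** 2) # predicted UTS, MPa

    # Simulated annealing: random perturbation, Metropolis acceptance of worse
    # moves with probability exp(delta / T), geometric cooling.
    x = lo + rng.uniform(size=4) * (hi - lo)
    best, T = x.copy(), 50.0
    for _ in range(5000):
        cand = np.clip(x + rng.normal(0.0, 0.05, 4) * (hi - lo), lo, hi)
        delta = uts(cand) - uts(x)
        if delta > 0 or rng.uniform() < np.exp(delta / T):
            x = cand
        if uts(x) > uts(best):
            best = x.copy()
        T *= 0.999

    print("best settings:", np.round(best, 2), "| predicted UTS:", round(uts(best), 1))
    ```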

  19. A Prototype for the Support of Integrated Software Process Development and Improvement

    NASA Astrophysics Data System (ADS)

    Porrawatpreyakorn, Nalinpat; Quirchmayr, Gerald; Chutimaskul, Wichian

    An efficient software development process is one of the key success factors for quality software. Both the appropriate establishment and the continuous improvement of integrated project management and of the software development process result in efficiency. This paper therefore proposes a software process maintenance framework consisting of two core components: an integrated PMBOK-Scrum model describing how to establish a comprehensive set of project management and software engineering processes, and a software development maturity model advocating software process improvement. In addition, a prototype tool to support the framework is introduced.

  20. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y.S. Wu

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas chemistry, mineral dissolution/precipitation, and the resulting impact to UZ hydrologic properties, flow and transport. The mountain-scale THM model addresses changes in permeability due to mechanical and thermal disturbances in stratigraphic units above and below the repository host rock. The THM model focuses on evaluating the changes in UZ flow fields arising out of thermal stress and rock deformation during and after the thermal period (the period during which temperatures in the mountain are significantly higher than ambient temperatures).
