NASA Astrophysics Data System (ADS)
Dahdouh, S.; Varsier, N.; Nunez Ochoa, M. A.; Wiart, J.; Peyman, A.; Bloch, I.
2016-02-01
Numerical dosimetry studies require the development of accurate numerical 3D models of the human body. This paper proposes a novel method for building 3D heterogeneous young children models combining results obtained from a semi-automatic multi-organ segmentation algorithm and an anatomy deformation method. The data consist of 3D magnetic resonance images, which are first segmented to obtain a set of initial tissues. A deformation procedure guided by the segmentation results is then developed in order to obtain five young children models ranging in age from 5 to 37 months. By constraining the deformation of an older child model toward a younger one using segmentation results, we ensure the anatomical realism of the models. Using the proposed framework, five models, each containing thirteen tissues, are built. Three of these models are used in a prospective dosimetry study to analyze young children's exposure to radiofrequency electromagnetic fields. The results suggest a relationship between age and whole-body exposure. They also highlight the need for dedicated measurements of the dielectric properties of child tissues.
The sensitivity of ecosystem service models to choices of input data and spatial resolution
Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge
2018-01-01
Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.
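The rank-based comparison described above can be illustrated with a minimal sketch. The values below are invented for illustration (they are not the study's data), and the helper functions assume no tied values:

```python
# Spearman's rank correlation between two sets of per-subwatershed
# ecosystem-service estimates: means differ, but the spatial ranking of
# high/low values is what rho measures.

def ranks(values):
    """Ranks starting at 1 (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

coarse = [12.0, 45.0, 33.0, 8.0, 27.0, 51.0]   # e.g. from 300 m inputs
fine   = [15.0, 60.0, 41.0, 9.0, 30.0, 70.0]   # e.g. from 30 m inputs

# The two runs disagree on mean values but rank every unit identically,
# so rho is exactly 1: consistent spatial patterns of high and low values.
print(f"Spearman's rho = {spearman_rho(coarse, fine):.2f}")  # → 1.00
```

A high rho with differing means is exactly the situation the abstract reports: absolute estimates shift with data resolution, but the spatial pattern is preserved.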
Comparing toxicologic and epidemiologic studies: methylene chloride--a case study.
Stayner, L T; Bailer, A J
1993-12-01
Exposure to methylene chloride induces lung and liver cancers in mice. The mouse bioassay data have been used as the basis for several cancer risk assessments. The results from epidemiologic studies of workers exposed to methylene chloride have been mixed with respect to demonstrating an increased cancer risk. The results from a negative epidemiologic study of Kodak workers have been used by two groups of investigators to test the predictions from the EPA risk assessment models. These two groups used very different approaches to this problem, which resulted in opposite conclusions regarding the consistency between the animal model predictions and the Kodak study results. The results from the Kodak study are used to test the predictions from OSHA's multistage models of liver and lung cancer risk. Confidence intervals for the standardized mortality ratios (SMRs) from the Kodak study are compared with the predicted confidence intervals derived from OSHA's risk assessment models. Adjustments for the "healthy worker effect," differences in length of follow-up, and dosimetry between animals and humans were incorporated into these comparisons. Based on these comparisons, we conclude that the negative results from the Kodak study are not inconsistent with the predictions from OSHA's risk assessment model.
Integrating Human Factors into Crew Exploration Vehicle (CEV) Design
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Holden, Kritina; Baggerman, Susan; Campbell, Paul
2007-01-01
The purpose of this design process is to apply Human Engineering (HE) requirements and guidelines to hardware/software and to provide HE design, analysis and evaluation of crew interfaces. The topics include: 1) Background/Purpose; 2) HE Activities; 3) CASE STUDY: Net Habitable Volume (NHV) Study; 4) CASE STUDY: Human Modeling Approach; 5) CASE STUDY: Human Modeling Results; 6) CASE STUDY: Human Modeling Conclusions; 7) CASE STUDY: Human-in-the-Loop Evaluation Approach; 8) CASE STUDY: Unsuited Evaluation Results; 9) CASE STUDY: Suited Evaluation Results; 10) CASE STUDY: Human-in-the-Loop Evaluation Conclusions; 11) Near-Term Plan; and 12) In Conclusion
Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no
Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences than previous studies, which considered only DNA homopolymers and DNA sequences containing equal numbers of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation, for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next-generation models with higher predictive power than present ones.
Elçi, A; Karadaş, D; Fistikoğlu, O
2010-01-01
A numerical modeling case study of groundwater flow in an area prone to diffuse pollution is presented. The study area lies within the metropolitan borders of the city of Izmir, Turkey. The application was unconventional in that the groundwater recharge parameter was estimated using a lumped, transient, water-budget-based precipitation-runoff model executed independently of the groundwater flow model. The recharge rate obtained from the calibrated precipitation-runoff model was used as input to the groundwater flow model, which was then calibrated to measured water table elevations. Overall, the flow model results were consistent with field observations and the model statistics were satisfactory. The water budget results revealed that groundwater recharge comprised about 20% of the total water input for the study area; recharge was the second-largest component in the budget after leakage from streams into the subsurface. It was concluded that the modeling results can be used as input for contaminant transport modeling studies to evaluate the vulnerability of the study area's water resources to diffuse pollution.
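The kind of lumped water-budget bookkeeping behind such a recharge estimate can be sketched as follows. All monthly values are invented for illustration (they are not the study's data), and storage change is ignored for simplicity:

```python
# Illustrative monthly water budget (mm): whatever precipitation neither
# evaporates nor runs off is treated as potential groundwater recharge.
precip = [80, 95, 60, 30, 10, 5, 2, 4, 15, 45, 70, 90]
et     = [20, 25, 35, 45, 55, 60, 65, 60, 45, 30, 22, 18]
runoff = [30, 35, 15,  5,  0,  0,  0,  0,  2, 10, 25, 35]

# Negative monthly balances are clipped to zero (a soil-moisture deficit,
# not negative recharge).
recharge = [max(p - e - q, 0.0) for p, e, q in zip(precip, et, runoff)]

annual_recharge = sum(recharge)
annual_precip = sum(precip)
print(f"annual recharge = {annual_recharge} mm "
      f"({100 * annual_recharge / annual_precip:.0f}% of precipitation)")
```

A real precipitation-runoff model would track soil-moisture storage between months and be calibrated against observed streamflow; this sketch only shows the budget arithmetic that yields a recharge fraction of total water input.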
Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling
NASA Technical Reports Server (NTRS)
Campbell, Kenyth
2012-01-01
The dynamics of group affiliation and group dispersion are most often studied so that political candidates can better understand how to conduct their campaigns efficiently. While political campaigning in the United States is heavily analyzed by politicians, group/party affiliation is an area of study in its own right that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field well suited to aggregating individual behaviors in order to observe large swaths of a population. In this study, agent-based modeling was used to examine a community of agents and determine which factors affect the group/party affiliation patterns that emerge. The agent-based model used in this experiment included many factors, but two main factors drove the results. The results show that it is possible to use agent-based modeling to explore group/party affiliation and to construct a model that mimics real-world events. More importantly, the model allows results found in a smaller community to be carried over into larger experiments to determine whether they persist at a much larger scale.
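A minimal voter-style sketch conveys the flavor of such a model: agents on a ring repeatedly copy the affiliation of a random neighbor. Everything here (party labels, topology, parameters) is illustrative and is not the model used in the study:

```python
import random

random.seed(42)  # reproducible run

# Agents on a ring, each starting with a random party affiliation.
N_AGENTS, STEPS = 100, 5000
parties = [random.choice(["A", "B"]) for _ in range(N_AGENTS)]

for _ in range(STEPS):
    i = random.randrange(N_AGENTS)
    neighbor = (i + random.choice([-1, 1])) % N_AGENTS
    parties[i] = parties[neighbor]   # social influence: adopt a neighbor's party

share_a = parties.count("A") / N_AGENTS
print(f"share of party A after {STEPS} steps: {share_a:.2f}")
```

Repeated copying drives the population toward local consensus blocs, and the same update rule scales to larger populations, which is the translation-to-larger-experiments property the abstract highlights.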
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration, the copula models are fitted to Malaysian motor insurance claims data. We consider copula models from the Archimedean and Elliptical families and marginal distributions from Gamma and Inverse Gaussian regression models. Simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories jointly, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under it most closely approximate the actual claims experience relative to the other copula models.
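The dependence structure a Frank copula imposes between claim categories can be sketched with the standard conditional-inversion sampler. The parameter value and sample size below are illustrative, not fitted to the Malaysian data:

```python
import math
import random

random.seed(1)

def frank_sample(theta, n):
    """Draw n pairs (u, v) from a bivariate Frank copula by conditional
    inversion (standard algorithm; requires theta != 0)."""
    pairs = []
    for _ in range(n):
        u, w = random.random(), random.random()
        num = w * (1.0 - math.exp(-theta))
        den = w * (math.exp(-theta * u) - 1.0) - math.exp(-theta * u)
        v = -math.log(1.0 + num / den) / theta
        pairs.append((u, v))
    return pairs

# Positive theta induces positive dependence between the uniform scores of
# the two claim categories; feeding (u, v) through the inverse CDFs of the
# fitted Gamma / Inverse Gaussian marginals would yield dependent claims.
sample = frank_sample(theta=5.0, n=2000)
mean_uv = sum(u * v for u, v in sample) / len(sample)
print(f"E[UV] = {mean_uv:.3f}  (0.25 would indicate independence)")
```

E[UV] above 0.25 reflects the positive dependence; under an independent model the two categories would be simulated with no such coupling, which is exactly the difference the premium comparison exploits.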
Peñaloza-Ramos, Maria Cristina; Jowett, Sue; Sutton, Andrew John; McManus, Richard J; Barton, Pelham
2018-03-01
Management of hypertension can lead to significant reductions in blood pressure, thereby reducing the risk of cardiovascular disease. Modeling the course of cardiovascular disease is not without complications, and uncertainty surrounding the structure of a model will almost always arise once a model structure is chosen. The objective was to provide a practical illustration of the impact on cost-effectiveness results of changing or adapting model structures in a previously published cost-utility analysis of a primary care intervention for the management of hypertension, Targets and Self-Management for the Control of Blood Pressure in Stroke and at Risk Groups (TASMIN-SR). The case study assessed the structural uncertainty arising from model structure and from the exclusion of secondary events. Four alternative model structures were implemented. Long-term cost-effectiveness was estimated and the results compared with those from the TASMIN-SR model. The main cost-effectiveness results obtained in the TASMIN-SR study did not change with the implementation of alternative model structures. Choice of model type was limited to a cohort Markov model, and because of the lack of epidemiological data, only model 4 captured structural uncertainty arising from the exclusion of secondary events in the case study model. The results indicate that the main conclusions drawn from the TASMIN-SR cost-effectiveness model were robust to changes in model structure and the inclusion of secondary events. Even though one of the models produced results that differed from those of TASMIN-SR, the fact that the main conclusions were identical suggests that a more parsimonious model may have sufficed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
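The contrast between an equal-weight average and a fitted weighted average can be sketched in a few lines. The "observations" and model outputs below are invented, not DMIP data, and the weighting scheme shown (in-sample least squares) is only one simple realization of a WAM-style combination:

```python
import numpy as np

# Hypothetical streamflow observations and three model predictions.
obs = np.array([10.0, 14.0, 9.0, 20.0, 16.0, 12.0])
preds = np.array([
    [12.0, 15.0, 10.0, 22.0, 18.0, 13.0],   # model 1 (biased high)
    [ 8.0, 12.0,  7.0, 17.0, 14.0, 10.0],   # model 2 (biased low)
    [11.0, 13.0, 10.0, 21.0, 15.0, 13.0],   # model 3
])

# Simple Multi-model Average: equal weights, no bias correction.
sma = preds.mean(axis=0)

# Weighted-average combination: least-squares weights fitted to observations.
weights, *_ = np.linalg.lstsq(preds.T, obs, rcond=None)
wam = preds.T @ weights

rmse = lambda x: float(np.sqrt(np.mean((x - obs) ** 2)))
print(f"RMSE  SMA: {rmse(sma):.3f}   WAM: {rmse(wam):.3f}")
```

Because the equal-weight combination is one point in the space the least-squares fit searches, the fitted combination can never do worse in-sample; out-of-sample skill, as the study notes, depends on bias correction and the training period.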
Oxlade, Olivia; Pinto, Marcia; Trajman, Anete; Menzies, Dick
2013-01-01
Introduction Cost-effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEAs that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. Methods A systematic review of studies was conducted with particular focus on study quality and the variability in the inputs of the models used to assess cost-effectiveness. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategy was developed and used to quantify the impact on predicted results of observed differences in model inputs taken from the studies identified. Results Thirteen studies were ultimately included in the review. Several methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality-Adjusted Life Year) as the effectiveness outcome, and how authors chose to present and interpret study results. When the IGRA and TST strategies were compared using our common decision analysis model, predicted effectiveness largely overlapped. Implications Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies assessing the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed to help authors standardize modelling approaches, inputs, assumptions, and how results are presented and interpreted. PMID:23505412
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez, R.R.; McLellan, T.M.; Withey, W.R.
This report presents the results of TTCP-UTP6 efforts on modeling aspects that need to be considered when chemical protective ensembles are worn in warm environments. Since 1983, a significant database has been collected from human experimental studies across a wide range of clothing systems, from which predictive modeling equations have been developed for individuals working in temperate and hot environments; however, few comparisons of the results from various model outputs have ever been carried out. This initial comparison study was part of a key technical area (KTA) project for The Technical Cooperation Program (TTCP) UTP-6 working party. A modeling workshop was conducted in Toronto, Canada on 9-10 June 1994 to discuss the data reduction and results acquired in an initial TTCP clothing analysis study using various chemical protective garments. To our knowledge, no comprehensive study to date has focused on comparing experimental results, obtained with an international standardized heat stress procedure, against physiological outputs from various model predictions for individuals dressed in chemical protective clothing systems. This is the major focus of this TTCP key technical study. This technical report covers one aspect of the working party's results.
Animal models for microbicide safety and efficacy testing.
Veazey, Ronald S
2013-07-01
Early studies have cast doubt on the utility of animal models for predicting success or failure of HIV-prevention strategies, but results of multiple human phase 3 microbicide trials, and interrogations into the discrepancies between human and animal model trials, indicate that animal models were, and are, predictive of safety and efficacy of microbicide candidates. Recent studies have shown that topically applied vaginal gels, and oral prophylaxis using single or combination antiretrovirals are indeed effective in preventing sexual HIV transmission in humans, and all of these successes were predicted in animal models. Further, prior discrepancies between animal and human results are finally being deciphered as inadequacies in study design in the model, or quite often, noncompliance in human trials, the latter being increasingly recognized as a major problem in human microbicide trials. Successful microbicide studies in humans have validated results in animal models, and several ongoing studies are further investigating questions of tissue distribution, duration of efficacy, and continued safety with repeated application of these, and other promising microbicide candidates in both murine and nonhuman primate models. Now that we finally have positive correlations with prevention strategies and protection from HIV transmission, we can retrospectively validate animal models for their ability to predict these results, and more importantly, prospectively use these models to select and advance even safer, more effective, and importantly, more durable microbicide candidates into human trials.
Global attractivity of an almost periodic N-species nonlinear ecological competitive model
NASA Astrophysics Data System (ADS)
Xia, Yonghui; Han, Maoan; Huang, Zhenkun
2008-01-01
By using a comparison theorem and constructing a suitable Lyapunov functional, we study an almost periodic nonlinear N-species competitive Lotka-Volterra model. A set of sufficient conditions is obtained for the existence and global attractivity of a unique positive almost periodic solution of the model. As applications, some special competition models are revisited; the new results improve and generalize earlier ones. Examples and their simulations show the feasibility of the main results.
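The abstract does not reproduce the system itself; a standard form of an almost periodic N-species competitive Lotka-Volterra model (an assumed generic form, not necessarily the paper's exact system, which may include delays or other terms) is

```latex
\dot{x}_i(t) = x_i(t)\left[r_i(t) - \sum_{j=1}^{N} a_{ij}(t)\,x_j(t)\right],
\qquad i = 1,\dots,N,
```

where the growth rates $r_i(t)$ and competition coefficients $a_{ij}(t)$ are continuous, positive, almost periodic functions. Sufficient conditions for global attractivity in this class of models typically require bounds under which intraspecific competition ($a_{ii}$) dominates interspecific competition ($a_{ij}$, $j \ne i$).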
Sarnadskiĭ, V N
2007-01-01
The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.
Delta 2 Explosion Plume Analysis Report
NASA Technical Reports Server (NTRS)
Evans, Randolph J.
2000-01-01
A Delta II rocket exploded seconds after liftoff from Cape Canaveral Air Force Station (CCAFS) on 17 January 1997. The cloud produced by the explosion provided an opportunity to evaluate the models which are used to track potentially toxic dispersing plumes and clouds at CCAFS. The primary goal of this project was to conduct a case study of the dispersing cloud and the models used to predict the dispersion resulting from the explosion. The case study was conducted by comparing mesoscale and dispersion model results with available meteorological and plume observations. This study was funded by KSC under Applied Meteorology Unit (AMU) option hours. The models used in the study are part of the Eastern Range Dispersion Assessment System (ERDAS) and include the Regional Atmospheric Modeling System (RAMS), HYbrid Particle And Concentration Transport (HYPACT), and Rocket Exhaust Effluent Dispersion Model (REEDM). The primary observations used for explosion cloud verification of the study were from the National Weather Service's Weather Surveillance Radar 1988-Doppler (WSR-88D). Radar reflectivity measurements of the resulting cloud provided good estimates of the location and dimensions of the cloud over a four-hour period after the explosion. The results indicated that RAMS and HYPACT models performed reasonably well. Future upgrades to ERDAS are recommended.
Modeling the Inner Magnetosphere: Radiation Belts, Ring Current, and Composition
NASA Technical Reports Server (NTRS)
Glocer, Alex
2011-01-01
The space environment is a complex system defined by regions of differing length scales, characteristic energies, and physical processes. It is often difficult, or impossible, to treat all aspects of the space environment relevant to a particular problem with a single model. In our studies, we utilize several models working in tandem to examine this highly interconnected system. The methodology and results are presented for three focused topics: 1) rapid radiation belt electron enhancements, 2) a ring current study of Energetic Neutral Atoms (ENAs), Dst, and plasma composition, and 3) examination of the outflow of ionospheric ions. In the first study, we use a coupled MHD magnetosphere - kinetic radiation belt model to explain recent Akebono/RDM observations of greater than 2.5 MeV radiation belt electron enhancements occurring on timescales of less than a few hours. In the second study, we present initial results of a ring current study using a kinetic ring current model newly coupled with an MHD magnetosphere model. Results of a Dst study for four geomagnetic events are shown. Moreover, direct comparisons with TWINS ENA images are used to infer the role that composition plays in the ring current. In the final study, we directly model the transport of plasma from the ionosphere to the magnetosphere, focusing especially on the role of photoelectrons and wave-particle interactions. The modeling methodology for each of these studies is detailed along with the results.
NASA Astrophysics Data System (ADS)
Koksbang, S. M.
2017-03-01
Light propagation in two Swiss-cheese models based on anisotropic Szekeres structures is studied and compared with light propagation in Swiss-cheese models based on the Szekeres models' underlying Lemaitre-Tolman-Bondi models. The study shows that the anisotropy of the Szekeres models has only a small effect on quantities such as redshift-distance relations, projected shear and expansion rate along individual light rays. The average angular diameter distance to the last scattering surface is computed for each model. Contrary to earlier studies, the results obtained here are (mostly) in agreement with perturbative results. In particular, a small negative shift, δD_A := (D_A - D_A,bg)/D_A,bg, in the angular diameter distance relative to the background is obtained upon line-of-sight averaging in three of the four models. The results are, however, not statistically significant. In the fourth model, there is a small positive shift which has an especially small statistical significance. The line-of-sight averaged inverse magnification at z = 1100 is consistent with 1 to a high level of confidence for all models, indicating that the area of the surface corresponding to z = 1100 is close to that of the background.
Empirical studies of software design: Implications for SSEs
NASA Technical Reports Server (NTRS)
Krasner, Herb
1988-01-01
Implications for Software Engineering Environments (SEEs) are presented in viewgraph format for characteristics of projects studied; significant problems and crucial problem areas in software design for large systems; layered behavioral model of software processes; implications of field study results; software project as an ecological system; results of the LIFT study; information model of design exploration; software design strategies; results of the team design study; and a list of publications.
Yamazaki, Shinji; Johnson, Theodore R; Smith, Bill J
2015-10-01
An orally available multiple tyrosine kinase inhibitor, crizotinib (Xalkori), is a CYP3A substrate, moderate time-dependent inhibitor, and weak inducer. The main objectives of the present study were to: 1) develop and refine a physiologically based pharmacokinetic (PBPK) model of crizotinib on the basis of clinical single- and multiple-dose results, 2) verify the crizotinib PBPK model from crizotinib single-dose drug-drug interaction (DDI) results with multiple-dose coadministration of ketoconazole or rifampin, and 3) apply the crizotinib PBPK model to predict crizotinib multiple-dose DDI outcomes. We also focused on gaining insights into the underlying mechanisms mediating crizotinib DDIs using a dynamic PBPK model, the Simcyp population-based simulator. First, PBPK model-predicted crizotinib exposures adequately matched clinically observed results in the single- and multiple-dose studies. Second, the model-predicted crizotinib exposures sufficiently matched clinically observed results in the crizotinib single-dose DDI studies with ketoconazole or rifampin, resulting in the reasonably predicted fold-increases in crizotinib exposures. Finally, the predicted fold-increases in crizotinib exposures in the multiple-dose DDI studies were roughly comparable to those in the single-dose DDI studies, suggesting that the effects of crizotinib CYP3A time-dependent inhibition (net inhibition) on the multiple-dose DDI outcomes would be negligible. Therefore, crizotinib dose-adjustment in the multiple-dose DDI studies could be made on the basis of currently available single-dose results. Overall, we believe that the crizotinib PBPK model developed, refined, and verified in the present study would adequately predict crizotinib oral exposures in other clinical studies, such as DDIs with weak/moderate CYP3A inhibitors/inducers and drug-disease interactions in patients with hepatic or renal impairment. 
2016-09-28
…previous research and modeling results. The OMS and Perception Toolbox were used to perform a case study of an F18 mishap. Model results imply that… The report also covers Coriolis head movement during a coordinated turn.
When the Test of Mediation is More Powerful than the Test of the Total Effect
O'Rourke, Holly P.; MacKinnon, David P.
2014-01-01
Although previous research has studied power in mediation models, the extent to which the inclusion of a mediator will increase power has not been investigated. First, a study compared analytical power of the mediated effect to the total effect in a single mediator model to identify the situations in which the inclusion of one mediator increased statistical power. Results from the first study indicated that including a mediator increased statistical power in small samples with large coefficients and in large samples with small coefficients, and when coefficients were non-zero and equal across models. Next, a study identified conditions where power was greater for the test of the total mediated effect compared to the test of the total effect in the parallel two mediator model. Results indicated that including two mediators increased power in small samples with large coefficients and in large samples with small coefficients, the same pattern of results found in the first study. Finally, a study assessed analytical power for a sequential (three-path) two mediator model and compared power to detect the three-path mediated effect to power to detect both the test of the total effect and the test of the mediated effect for the single mediator model. Results indicated that the three-path mediated effect had more power than the mediated effect from the single mediator model and the test of the total effect. Practical implications of these results for researchers are then discussed. PMID:24903690
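The single-mediator comparison can be illustrated with a small Monte Carlo sketch (the paper's analyses are analytical; the parameter values, sample size, and helper function here are illustrative). With paths a = b = 0.3 and no direct effect (c' = 0), the total effect equals a*b, and because X then affects Y only through M, simple regressions suffice for every path:

```python
import math
import random

random.seed(7)

def slope_z(x, y):
    """z = slope / se for a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    rss = sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))
    return b / math.sqrt(rss / (n - 2) / sxx)

a, b, n, reps, crit = 0.3, 0.3, 100, 500, 1.96
hits_med = hits_tot = 0
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    m = [a * xi + random.gauss(0, 1) for xi in x]
    y = [b * mi + random.gauss(0, 1) for mi in m]   # c' = 0 by construction
    # joint-significance test of the mediated effect: both paths significant
    if abs(slope_z(x, m)) > crit and abs(slope_z(m, y)) > crit:
        hits_med += 1
    # test of the total effect: regression of y on x
    if abs(slope_z(x, y)) > crit:
        hits_tot += 1

print(f"power(mediated) ~ {hits_med / reps:.2f}, "
      f"power(total) ~ {hits_tot / reps:.2f}")
```

Each path coefficient (0.3) is individually easy to detect, while the total effect (0.09) is small, so the joint-significance test of mediation is far more powerful than the test of the total effect, mirroring the pattern the abstract describes.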
Fellnhofer, Katharina
2017-01-01
Relying on Bandura's (1986) social learning theory, Ajzen's (1988) theory of planned behaviour (TPB), and Dyer's (1994) model of entrepreneurial careers, this study aims to highlight the potential of entrepreneurial role models for entrepreneurship education. The results suggest that entrepreneurial courses would greatly benefit from real-life experiences, whether positive or negative. The results of a regression analysis based on 426 individuals, primarily from Austria, Finland, and Greece, show that role models increase learners' entrepreneurial perceived behavioural control (PBC) by increasing their self-efficacy. This study can inform the research and business communities and governments about the importance of integrating entrepreneurs into education to stimulate entrepreneurial PBC. The study's approach is the first of its kind, and its results warrant more in-depth studies of storytelling by entrepreneurial role models in the context of multimedia entrepreneurship education.
Gilmartin, Heather M.; Sousa, Karen H.; Battaglia, Catherine
2016-01-01
Background The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Objectives Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent-variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. Methods A secondary data analysis was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness-Refined study. The sample was randomly split into exploration and validation subsets. Results The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and on CLABSIs (reflective = -.28; composite = -.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. Discussion There were small statistical but large conceptual differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models fit the data well. Lessons learned are presented. Comparing modeling approaches is recommended when modeling variables that have never been modeled before, or whose directionality is ambiguous, to increase transparency and bring confidence to study findings. PMID:27579507
Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason P.; Modenese, Luca; Reinbolt, Jeffrey A.; Thelen, Darryl G.; Umberger, Brian R.
2016-01-01
Objective The overall goal of this document is to demonstrate that dissemination of models and analyses for assessing the reproducibility of simulation results can be incorporated in the scientific review process in biomechanics. Methods As part of a special issue on model sharing and reproducibility in IEEE Transactions on Biomedical Engineering, two manuscripts on computational biomechanics were submitted: A. Rajagopal et al., IEEE Trans. Biomed. Eng., 2016 and A. Schmitz and D. Piovesan, IEEE Trans. Biomed. Eng., 2016. Models used in these studies were shared with the scientific reviewers and the public. In addition to the standard review of the manuscripts, the reviewers downloaded the models and performed simulations that reproduced results reported in the studies. Results There was general agreement between the simulation results of the authors and those of the reviewers. Discrepancies were resolved during the necessary revisions. The manuscripts and instructions for download and simulation were updated in response to the reviewers' feedback; these changes might otherwise have been missed if explicit model sharing and simulation reproducibility analysis had not been part of the review process. An increased burden on the authors and the reviewers, to facilitate model sharing and to repeat simulations, was noted. Conclusion When the authors of computational biomechanics studies provide access to models and data, the scientific reviewers can download and thoroughly explore the model, perform simulations, and evaluate simulation reproducibility beyond the traditional manuscript-only review process. Significance Model sharing and reproducibility analysis in scholarly publishing will result in a more rigorous review process, which will enhance the quality of modeling and simulation studies and inform future users of computational models. PMID:28072567
NASA Astrophysics Data System (ADS)
Ma, W.; Ma, Y.; Hu, Z.; Zhong, L.
2017-12-01
In this study, a land-atmosphere model was initialized by ingesting AMSR-E soil moisture products, and the results were compared with the default model configuration and with long-term in situ CAMP/Tibet observations. First, the field observation sites, operated under ITPCAS (Institute of Tibetan Plateau Research, Chinese Academy of Sciences), are introduced. The differences between the AMSR-E-initialized model runs, the default model configuration, and the in situ data showed an apparent inconsistency in the model-simulated land surface heat fluxes, indicating that the simulated soil moisture was sensitive to the specific model configuration. To evaluate and verify the model's stability, a long-term modeling study with AMSR-E soil moisture data ingestion was performed: based on test simulations, AMSR-E data were assimilated into an atmospheric model for July and August 2007. The resulting land surface fluxes agreed well with both the in situ data and the results of the default model configuration. Therefore, the simulation can be used to retrieve land surface heat fluxes from an atmospheric model over the Tibetan Plateau.
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time series models to the field of earthquake seismology and examined the out-of-sample forecast accuracy of the linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with that of the linear AR model. Unlike previous studies, which typically specify threshold models using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance against the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models; however, in some cases, threshold models specified with external threshold variables produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not improve out-of-sample forecasting performance over the linear AR model. Overall, the AR model is the best device for modeling and forecasting the raw seismic data of the Hindu Kush region.
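The linear benchmark in comparisons like the one above, the AR(p) model, can be fit by ordinary least squares on lagged values. A minimal sketch, not the authors' code; the lag order and the geometric test series are illustrative:

```python
import numpy as np

def fit_ar(x, p):
    """Fit an AR(p) model y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p}
    by ordinary least squares; returns (intercept, lag coefficients)."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return beta[0], beta[1:]

def forecast_ar(x, intercept, coeffs):
    """One-step-ahead out-of-sample forecast from the last p observations."""
    p = len(coeffs)
    return intercept + coeffs @ x[-1:-p - 1:-1]
```

Fitting this to a series generated by a known AR(1) process recovers the coefficient exactly, which is a quick sanity check before comparing against nonlinear alternatives.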
Characteristics of 3-D transport simulations of the stratosphere and mesosphere
NASA Technical Reports Server (NTRS)
Fairlie, T. D. A.; Siskind, D. E.; Turner, R. E.; Fisher, M.
1992-01-01
A 3D mechanistic, primitive-equation model of the stratosphere and mesosphere is coupled to an offline spectral transport model. The dynamics model is initialized with and forced by observations so that the coupled models may be used to study specific episodes. Results are compared with those obtained by transport online in the dynamics model. Although some differences are apparent, the results suggest that coupling of the models to a comprehensive photochemical package will provide a useful tool for studying the evolution of constituents in the middle atmosphere during specific episodes.
Safety analytics for integrating crash frequency and real-time risk modeling for expressways.
Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung
2017-07-01
To identify crash contributing factors, there have been numerous crash frequency and real-time safety studies, but such studies have been conducted independently: until now, no researcher has simultaneously analyzed crash frequency and real-time crash risk to test whether integrating them could better explain crash occurrence. Therefore, this study aims at integrating crash frequency and real-time safety analyses using expressway data. A Bayesian integrated model and a non-integrated model were built: the integrated model linked the crash frequency and real-time models by adding the logarithm of the estimated expected crash frequency to the real-time model; the non-integrated model estimated the crash frequency and the real-time crash risk independently. The results showed that the integrated model outperformed the non-integrated model, as it provided much better model results for both the crash frequency and the real-time models. This result indicated that the added component, the logarithm of the expected crash frequency, successfully linked the two models and provided useful information to them. This study also uncovered a few variables that are not typically included in crash frequency analysis. For example, the average daily standard deviation of speed, aggregated from speeds at 1-min intervals, had a positive effect on crash frequency. In conclusion, this study suggests a methodology to improve crash frequency and real-time models by integrating them, which may inspire future researchers to better understand crash mechanisms.
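The linkage described above, adding the logarithm of the expected crash frequency to the real-time model's linear predictor, can be sketched as follows. This is a toy illustration with hypothetical predictor vector `x`, coefficients `beta`, and scale parameter `gamma`, not the paper's fitted Bayesian model:

```python
import numpy as np

def real_time_crash_prob(x, beta, expected_freq, gamma=1.0):
    """Real-time crash probability with the segment's expected crash
    frequency entering the logistic linear predictor as a log term,
    a sketch of the integrated-model linkage."""
    eta = x @ beta + gamma * np.log(expected_freq)
    return 1.0 / (1.0 + np.exp(-eta))
```

With all real-time covariates at zero and an expected frequency of 1, the log term vanishes and the probability is 0.5; a higher expected frequency shifts the real-time risk upward, which is the information transfer the integrated model exploits.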
Staffaroni, Adam M; Eng, Megan E; Moses, James A; Zeiner, Harriet Katz; Wickham, Robert E
2018-05-01
A growing body of research supports the validity of 5-factor models for interpreting the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). The majority of these studies have utilized the WAIS-IV normative or clinical sample, the latter of which differs in its diagnostic composition from the referrals seen at outpatient neuropsychology clinics. To address this concern, 2 related studies were conducted on a sample of 322 American military Veterans who were referred for outpatient neuropsychological assessment. In Study 1, 4 hierarchical models with varying indicator configurations were evaluated: 3 extant 5-factor models from the literature and the traditional 4-factor model. In Study 2, we evaluated 3 variations in correlation structure in the models from Study 1: indirect hierarchical (i.e., higher-order g), bifactor (direct hierarchical), and oblique models. The results from Study 1 suggested that both 4- and 5-factor models showed acceptable fit. The results from Study 2 showed that bifactor and oblique models offer improved fit over the typically specified indirect hierarchical model, and the oblique models outperformed the orthogonal bifactor models. An exploratory analysis found improved fit when bifactor models were specified with oblique rather than orthogonal latent factors.
RESULTS FROM THE NORTH AMERICAN MERCURY MODEL INTER-COMPARISON STUDY (NAMMIS)
A North American Mercury Model Intercomparison Study (NAMMIS) has been conducted to build upon the findings from previous mercury model intercomparisons in Europe. In the absence of mercury measurement networks sufficient for model evaluation, model developers continue to rely on...
Developing a new solar radiation estimation model based on Buckingham theorem
NASA Astrophysics Data System (ADS)
Ekici, Can; Teke, Ismail
2018-06-01
While solar radiation can be expressed physically on cloudless days, doing so becomes difficult under cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models, which estimate solar radiation from other meteorological parameters measured at the stations, are used. In this study, a solar radiation estimation model was derived using the Buckingham theorem, expressing solar radiation through dimensionless pi parameters. The derived model was compared with temperature-based models from the literature, namely the Allen, Hargreaves, Chen, and Bristow-Campbell models, using the MPE, RMSE, MBE, and NSE error analysis methods and data obtained from North Dakota's agricultural climate network. In these comparisons, the model obtained within the scope of the study gave better results: it was particularly accurate in terms of short-term performance, as the RMSE analysis shows, and it also gave good results in terms of long-term performance and percentage errors. The Buckingham theorem was thus found useful for estimating solar radiation.
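The core of the Buckingham-theorem step is finding dimensionless pi groups as the null space of the variables' dimension matrix. A sketch under assumed variables and dimensions (the abstract does not list the study's actual variable set, so the columns below are illustrative):

```python
import numpy as np

# Rows: base dimensions (M, L, T, Theta); columns: assumed variables
# H (surface radiation, W/m^2 -> M T^-3), H0 (extraterrestrial
# radiation, M T^-3), Tmax and Tmin (temperatures, Theta).
D = np.array([
    [1, 1, 0, 0],    # M
    [0, 0, 0, 0],    # L
    [-3, -3, 0, 0],  # T
    [0, 0, 1, 1],    # Theta
], dtype=float)

def pi_exponents(D, tol=1e-10):
    """Null-space basis of the dimension matrix via SVD: each column
    is the exponent vector of one dimensionless pi group."""
    _, s, vt = np.linalg.svd(D)
    rank = int((s > tol).sum())
    return vt[rank:].T

P = pi_exponents(D)
# Buckingham theorem: number of pi groups = n_variables - rank(D);
# here the two groups correspond to H/H0 and Tmax/Tmin.
```

Each null-space column gives exponents whose product of variables is dimensionless, which is exactly the set of pi parameters a Buckingham-based regression would be built on.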
Fellnhofer, Katharina
2017-01-01
Relying on Bandura's (1986) social learning theory, Ajzen's (1988) theory of planned behaviour (TPB), and Dyer's (1994) model of entrepreneurial careers, this study aims to highlight the potential of entrepreneurial role models for entrepreneurship education. The results suggest that entrepreneurial courses would greatly benefit from real-life experiences, whether positive or negative. The results of regression analysis based on 426 individuals, primarily from Austria, Finland, and Greece, show that role models increase learners' entrepreneurial perceived behaviour control (PBC) by increasing their self-efficacy. This study can inform the research and business communities and governments about the importance of integrating entrepreneurs into education to stimulate entrepreneurial PBC. This study is the first of its kind using its approach, and its results warrant more in-depth studies of storytelling by entrepreneurial role models in the context of multimedia entrepreneurship education. PMID:29104604
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of the multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way and used the maximum likelihood version of BMA, i.e., MLBMA, to improve computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of the model boundary, and the surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive log score results show that MLBMA generally outperforms the best individual model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance: retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling.
Limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
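In MLBMA, posterior model probabilities are typically computed from information-criterion differences (e.g., KIC or BIC values) together with the prior model probabilities the abstract mentions. A hedged sketch of that weighting step, using standard definitions rather than the study's code:

```python
import numpy as np

def mlbma_weights(kic, prior=None):
    """Posterior model probabilities from KIC (or BIC) values:
    p(Mk | D) proportional to prior_k * exp(-delta_KIC_k / 2).
    Subtracting the minimum KIC first avoids numerical underflow."""
    kic = np.asarray(kic, dtype=float)
    if prior is None:
        prior = np.full(kic.shape, 1.0 / len(kic))  # uniform prior
    w = np.asarray(prior) * np.exp(-(kic - kic.min()) / 2.0)
    return w / w.sum()
```

Assigning smaller priors to structurally correlated models, one of the two strategies described above, enters directly through the `prior` argument.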
Heart rate prediction for coronary artery disease patients (CAD): Results of a clinical pilot study.
Müller-von Aschwege, Frerk; Workowski, Anke; Willemsen, Detlev; Müller, Sebastian M; Hein, Andreas
2015-01-01
This paper describes the results of a pilot study with cardiac patients based on information that can be derived from a smartphone. The idea behind the study is to design a model for estimating the heart rate of a patient before an outdoor walking session for track planning, as well as to use the model for guidance during an outdoor session. The model allows estimation of the heart rate several minutes in advance, so as to guide the patient and avoid overstrain before it occurs. This paper describes the first results of the clinical pilot study with cardiac patients taking β-blockers. Nine patients were tested on a treadmill and during three outdoor sessions each. Three levels of model improvement were tested by cross-validation. The overall result is an average Median Absolute Deviation (MAD) of 4.26 BPM between the measured heart rate and the smartphone-sensor-based model estimate.
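The error measure quoted above, the median absolute deviation between measured and model-estimated heart rate, is straightforward to compute; a minimal sketch with made-up numbers:

```python
import numpy as np

def median_absolute_deviation(measured, predicted):
    """Median of the absolute differences between measured and
    model-estimated heart rate (BPM), the MAD figure used above."""
    diffs = np.abs(np.asarray(measured, float) - np.asarray(predicted, float))
    return float(np.median(diffs))
```

Unlike a mean-based error, the median makes this measure robust to occasional large sensor glitches, which is a sensible choice for smartphone-derived heart rate data.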
Modeling sediment transport after ditch network maintenance of a forested peatland
NASA Astrophysics Data System (ADS)
Haahti, K.; Marttila, H.; Warsta, L.; Kokkonen, T.; Finér, L.; Koivusalo, H.
2016-11-01
Elevated suspended sediment (SS) loads released from peatlands after drainage operations and the resulting negative effect on the ecological status of the receiving water bodies have been widely recognized. Understanding the processes controlling erosion and sediment transport within the ditch network forms a prerequisite for adequate sediment control. While numerous experimental studies have been reported in this field, model-based assessments are rare. This study presents a modeling approach to investigate sediment transport in a peatland ditch network. The transport model describes bed erosion, rain-induced bank erosion, floc deposition, and consolidation of the bed. Coupled to a distributed hydrological model, sediment transport was simulated in a 5.2 ha forestry-drained peatland catchment for 2 years after ditch cleaning. Comparing simulation results to measured SS concentrations suggested that the loose peat material, produced during excavation, contributed markedly to elevated SS concentrations immediately after ditch cleaning. Both snowmelt and summer rainstorms contributed critically to annual loads. Springtime peat erosion during snowmelt was driven by ditch flow, whereas during summer rainfalls, bank erosion by raindrop impact was identified as an important process. Relating modeling results to observed spatial topographic changes in the ditch network was challenging, and the results were difficult to verify. Nevertheless, the model has potential to identify risk areas for erosion. The results demonstrate that modeling is effective in separating the importance of different processes and complements purely experimental approaches. Modeling results can aid planning and designing efficient sediment control measures and guide the focus of experimental studies.
Modelling of thermal stresses in bearing steel structure generated by electrical current impulses
NASA Astrophysics Data System (ADS)
Birjukovs, M.; Jakovics, A.; Holweger, W.
2018-05-01
This work studies one particular candidate mechanism for white etching crack (WEC) initiation in wind turbine gearbox bearings: discharge current impulses flowing through the bearing steel, with the associated thermal stresses and material fatigue. Using data and results from previously published works, the authors develop a series of models that are used to simulate these processes under various conditions and local microstructure configurations, as well as to verify the results of the previous numerical studies. The presented models show that the resulting stresses are several orders of magnitude below the fatigue limit and yield strength for the parameters used herein. Analysis of the models provided by Scepanskis, M. et al. also indicates that certain effects predicted in their previous work resulted from a physically unfounded assumption about material thermodynamic properties and from numerical model implementation issues.
MIXING STUDY FOR JT-71/72 TANKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2013-11-26
All modeling calculations for the mixing operations of miscible fluids contained in the HB-Line tanks, JT-71/72, were performed using a three-dimensional Computational Fluid Dynamics (CFD) approach. The CFD modeling results were benchmarked against literature results and previous SRNL test results to validate the model. Final performance calculations were then performed with the validated model to quantify the mixing time for the HB-Line tanks. The mixing study results for the JT-71/72 tanks show that, for the cases modeled, the mixing time required for blending of the tank contents is no more than 35 minutes, well below the 2.5 hours of recirculation pump operation. Therefore, the results demonstrate that 2.5 hours of mixing by one recirculation pump is adequate to achieve well-mixed tank contents.
NASA Astrophysics Data System (ADS)
Dikmen, Erkan; Ayaz, Mahir; Gül, Doğan; Şahin, Arzu Şencan
2017-07-01
The determination of the drying behavior of herbal plants is a complex process. In this study, a gene expression programming (GEP) model was used to determine the drying behavior of herbal plants, namely fresh sweet basil, parsley, and dill leaves. Time and drying temperature are the input parameters for estimating the moisture ratio of the herbal plants. The results of the GEP model are compared with experimental drying data. Statistical measures, namely the mean absolute percentage error, root-mean-squared error, and R-square, are used to quantify the difference between the values predicted by the GEP model and those observed in the experimental study. The results of the GEP model and the experimental study were found to be in moderately good agreement, showing that the GEP model can be considered an efficient modelling technique for predicting the moisture ratio of herbal plants.
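The three statistics used in the comparison can be sketched with their standard textbook definitions (the study's exact sign and scaling conventions are assumptions):

```python
import numpy as np

def error_metrics(observed, predicted):
    """Mean absolute percentage error (MAPE), root-mean-squared error
    (RMSE) and R-square between observed and predicted moisture ratios."""
    o = np.asarray(observed, float)
    p = np.asarray(predicted, float)
    mape = float(np.mean(np.abs((p - o) / o)) * 100.0)
    rmse = float(np.sqrt(np.mean((p - o) ** 2)))
    r2 = 1.0 - float(np.sum((o - p) ** 2)) / float(np.sum((o - o.mean()) ** 2))
    return {"MAPE": mape, "RMSE": rmse, "R2": r2}
```

A perfect model gives MAPE = 0, RMSE = 0 and R² = 1, which makes these metrics easy to sanity-check before applying them to real drying curves.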
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have attempted to determine whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not identical. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at the daily and hourly levels; the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency.
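The Poisson-lognormal specification used for the daily and hourly frequency models can be illustrated generatively: counts are Poisson with a rate that carries a normally distributed error on the log scale. A toy simulation sketch with illustrative coefficients, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_poisson_lognormal(x, beta, sigma, rng):
    """Draw crash counts from a Poisson-lognormal model:
    y ~ Poisson(exp(x @ beta + eps)), eps ~ Normal(0, sigma^2).
    The lognormal error term captures overdispersion beyond Poisson."""
    eps = rng.normal(0.0, sigma, size=len(x))
    lam = np.exp(x @ beta + eps)
    return rng.poisson(lam)
```

With `sigma = 0` the model collapses to plain Poisson regression; a positive `sigma` inflates the variance relative to the mean, which is the reason this family is preferred for crash counts.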
Selecting global climate models for regional climate change studies
Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.
2009-01-01
Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Niño/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any one individual model, already found in global studies examining the mean climate, holds in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures. PMID:19439652
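The error-cancellation argument for the multimodel mean is easy to demonstrate with synthetic data: when model errors are independent, the ensemble average has a smaller RMSE than any single member. A minimal sketch with made-up numbers, not the study's 21-model ensemble:

```python
import numpy as np

rng = np.random.default_rng(42)

truth = np.zeros(500)
# Ten synthetic "models", each equal to the truth plus an
# independent unit-variance error field.
models = truth + rng.normal(0.0, 1.0, size=(10, 500))

def rmse(err):
    return float(np.sqrt(np.mean(err ** 2)))

individual = [rmse(m - truth) for m in models]   # each close to 1.0
ensemble = rmse(models.mean(axis=0) - truth)     # shrinks ~ 1/sqrt(10)
```

The ensemble-mean error shrinks roughly as 1/sqrt(n_models); in practice real climate model errors are partially correlated, so the gain is smaller but, as the study finds, still decisive.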
Finite Element Modeling of the Buckling Response of Sandwich Panels
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Moore, David F.; Knight, Norman F., Jr.; Rankin, Charles C.
2002-01-01
A comparative study of different modeling approaches for predicting sandwich panel buckling response is described. The study considers sandwich panels with anisotropic face sheets and a very thick core. Results from conventional analytical solutions for sandwich panel overall buckling and face-sheet-wrinkling type modes are compared with solutions obtained using different finite element modeling approaches. Finite element solutions are obtained using layered shell element models, with and without transverse shear flexibility; layered shell/solid element models, with shell elements for the face sheets and solid elements for the core; and sandwich models using a recently developed specialty sandwich element. Convergence characteristics of the shell/solid and sandwich element modeling approaches, with respect to in-plane and through-the-thickness discretization, are demonstrated. Results of the study indicate that the specialty sandwich element provides an accurate and effective modeling approach for predicting both overall and localized sandwich panel buckling response. Furthermore, results indicate that anisotropy of the face sheets, along with the ratio of principal elastic moduli, affects the buckling response, and these effects may not be represented accurately by analytical solutions. Modeling recommendations are also provided.
Bossert, Jennifer M.; Marchant, Nathan J.; Calu, Donna J.; Shaham, Yavin
2013-01-01
Background and Rationale Results from many clinical studies suggest that drug relapse and craving are often provoked by acute exposure to the self-administered drug or related drugs, drug-associated cues or contexts, or certain stressors. During the last two decades, this clinical scenario has been studied in laboratory animals by using the reinstatement model. In this model, reinstatement of drug seeking by drug priming, drug cues or contexts, or certain stressors is assessed following drug self-administration training and subsequent extinction of the drug-reinforced responding. Objective In this review, we first summarize recent (2009-present) neurobiological findings from studies using the reinstatement model. We then discuss emerging research topics, including the impact of interfering with putative reconsolidation processes on cue- and context-induced reinstatement of drug seeking, and similarities and differences in mechanisms of reinstatement across drug classes. We conclude by discussing results from recent human studies that were inspired by results from rat studies using the reinstatement model. Conclusions Main conclusions from the studies reviewed highlight: (1) the ventral subiculum and lateral hypothalamus as emerging brain areas important for reinstatement of drug seeking, (2) the existence of differences in brain mechanisms controlling reinstatement of drug seeking across drug classes, (3) the utility of the reinstatement model for assessing the effect of reconsolidation-related manipulations on cue-induced drug seeking, and (4) the encouraging pharmacological concordance between results from rat studies using the reinstatement model and human laboratory studies on cue- and stress-induced drug craving. PMID:23685858
Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi
2017-01-01
Background Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies. Methods We propose a 2‐stage method for meta‐analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes. Conclusions Where evidence of association between longitudinal and time‐to‐event outcomes exists, results from joint models over standalone analyses should be pooled in 2‐stage meta‐analyses. PMID:29250814
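Stage 2 of the proposed 2-stage method pools per-study association estimates from the stage-1 joint models. A standard fixed-effect inverse-variance pooling sketch (the paper's exact pooling model is not given in the abstract, so this form is an assumption):

```python
import numpy as np

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of per-study estimates:
    each study is weighted by 1/SE^2, and the pooled standard error
    is the square root of the inverse total weight."""
    est = np.asarray(estimates, float)
    se = np.asarray(std_errors, float)
    w = 1.0 / se ** 2
    pooled = float(np.sum(w * est) / np.sum(w))
    pooled_se = float(np.sqrt(1.0 / np.sum(w)))
    return pooled, pooled_se
```

A random-effects variant would add a between-study variance component to each weight; the fixed-effect form above is the simplest stage-2 combiner.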
NASA Astrophysics Data System (ADS)
Javad Azarhoosh, Mohammad; Halladj, Rouein; Askari, Sima
2017-10-01
In this study, a new kinetic model for methanol-to-light-olefins (MTO) reactions over a hierarchical SAPO-34 catalyst, based on the Langmuir-Hinshelwood-Hougen-Watson (LHHW) mechanism, was presented, and the kinetic parameters were obtained using a genetic algorithm (GA) and genetic programming (GP). Several kinetic models for the MTO reactions have been presented previously. However, due to the complexity of the reactions, most reactions are treated as lumped and elementary, which cannot be considered a completely accurate kinetic model of the process. Therefore, in this study, the LHHW mechanism is used for the kinetic models of the MTO reactions. Because of the non-linearity of the kinetic models and the existence of many local optima, evolutionary algorithms (GA and GP) are used to estimate the kinetic parameters in the rate equations. By coupling the reactor-modelling code with the GA and GP codes in MATLAB R2013a, the kinetic model parameters were optimized so that the difference between the kinetic model results and the experimental results was minimized, yielding the best kinetic parameters for the MTO process reactions. A comparison of the model results with the experimental results showed that the present model possesses good accuracy.
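A minimal illustration of GA-based kinetic parameter estimation: a toy real-coded genetic algorithm fitting a single-site LHHW-type rate law to synthetic rate data. The rate law, parameter bounds, and GA operators here are simplified assumptions, not the paper's full MTO kinetic network:

```python
import numpy as np

rng = np.random.default_rng(1)

def lhhw_rate(C, k, K):
    """Single-site LHHW-type rate law (illustrative):
    r = k*C / (1 + K*C), with rate constant k and adsorption constant K."""
    return k * C / (1.0 + K * C)

def fit_ga(C, r_obs, pop_size=60, n_gen=80):
    """Tiny real-coded GA estimating (k, K) by minimizing the sum of
    squared rate residuals: elitism keeps the best half, children are
    averages of random elite pairs plus Gaussian mutation."""
    pop = rng.uniform(0.01, 5.0, size=(pop_size, 2))
    n_child = pop_size - pop_size // 2
    for _ in range(n_gen):
        sse = np.array([np.sum((lhhw_rate(C, k, K) - r_obs) ** 2)
                        for k, K in pop])
        elite = pop[np.argsort(sse)[:pop_size // 2]]
        parents = elite[rng.integers(0, len(elite), size=(n_child, 2))]
        children = parents.mean(axis=1) + rng.normal(0.0, 0.05, size=(n_child, 2))
        pop = np.vstack([elite, np.clip(children, 1e-6, 10.0)])
    sse = np.array([np.sum((lhhw_rate(C, k, K) - r_obs) ** 2) for k, K in pop])
    return pop[np.argmin(sse)]
```

Run against synthetic data generated with known parameters, the GA should drive the residual sum of squares close to zero, mirroring the paper's strategy of letting an evolutionary search handle the many local optima of nonlinear rate expressions.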
Modeling and experimental study of resistive switching in vertically aligned carbon nanotubes
NASA Astrophysics Data System (ADS)
Ageev, O. A.; Blinov, Yu F.; Ilina, M. V.; Ilin, O. I.; Smirnov, V. A.
2016-08-01
A model of resistive switching in vertically aligned carbon nanotubes (VA CNTs), taking into account the processes of deformation, polarization, and piezoelectric charge accumulation, has been developed. The origin of hysteresis in the VA CNT-based structure is described. Based on the modeling results, a VA CNT-based structure has been fabricated; the ratio of its high-resistance to low-resistance state amounts to 48. The correlation of the modeling results with the experimental studies is shown. The results can be used in the development of nanoelectronic devices based on VA CNTs, including nonvolatile resistive random-access memory.
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are an inevitable part of any prediction exercise. One approach currently gaining attention for reducing model errors is to optimally combine multiple models to develop improved predictions. The rationale behind this approach lies primarily in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions have improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated under two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with that of the individual model predictions using correlation, root mean square error, and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions multimodel predictions yield improvements, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with homoscedastic or heteroscedastic errors. Results show that streamflow simulated from individual models performed better than the multimodel when there was almost no model error; under increased model error, the multimodel consistently performed better than the single-model predictions in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow prediction in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach predicts the observed streamflow better than the single-model predictions.
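The optimal-weights premise described above can be sketched as a least-squares combination of model predictions, a simplified stand-in for the predictor-state-contingent weighting the study proposes:

```python
import numpy as np

def combine_models(preds, obs):
    """Least-squares combination weights: find w minimizing
    ||preds @ w - obs||^2, where each column of `preds` holds one
    model's streamflow predictions."""
    w, *_ = np.linalg.lstsq(preds, obs, rcond=None)
    return w

# The combined prediction is then preds @ w.
```

The study's refinement is to let these weights vary with the predictor state rather than being global constants; the sketch recovers exact weights when the observations are a fixed linear blend of the models.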
An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit
Wong, Rowena Syn Yin; Ismail, Noor Azina
2016-01-01
Background and Objectives There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe the application of a Bayesian approach in modeling in-ICU deaths in a Malaysian ICU. Methods This was a prospective study in a mixed medical-surgical ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination, and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum likelihood method. Results The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with a low co-morbidity load, and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under the receiver operating characteristic curve (AUC) values of approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models except model M3. Model M1 was identified as the model with the best overall performance in this study. Conclusion Four prediction models were proposed, and the best model was chosen based on its overall performance in this study.
This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes. PMID:27007413
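The discrimination measure named in the abstract, the area under the ROC curve (AUC), can be computed directly from predicted risks and observed outcomes via the rank-sum identity. The scores and labels below are illustrative, not the study's data.

```python
# Sketch of AUC via the Mann-Whitney identity:
# AUC = P(score of a random positive case > score of a random negative case),
# counting ties as one half.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

perfect = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])   # perfectly separated risks
mixed = auc([0.2, 0.9, 0.3, 0.8], [1, 1, 0, 0])     # partially overlapping risks
```

An AUC near 0.8, as reported for the study's models, means a randomly chosen death has a higher predicted risk than a randomly chosen survivor about 80% of the time.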
Fakhim, Babak; Hassani, Abolfazl; Rashidi, Alimorad; Ghodousi, Parviz
2013-01-01
In this study, the feasibility of using artificial neural network (ANN) modeling to predict the effect of MWCNT on the amount of cement hydration products, and on the quality of the hydration-product microstructure of cement paste, was investigated. To determine the amount of cement hydration products, thermogravimetric analysis (TGA) was used. Two critical parameters of the TGA test are PHP loss and CH loss. To model the TGA test results, ANN modeling was performed on these parameters separately. In this study, 60% of the data were used for model calibration and the remaining 40% for model verification. The ANN model with the highest efficiency coefficient and the lowest root mean square error was chosen. The TGA test results implied that cement hydration is enhanced in the presence of the optimum percentage (0.3 wt%) of MWCNT. Moreover, since the efficiency coefficient of the modeling results for CH and PHP loss in both the calibration and verification stages was more than 0.96, it was concluded that the ANN can be used as an accurate tool for modeling TGA results. Another finding of this study was that the ANN prediction at later ages was more precise. PMID:24489487
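The two selection criteria named in the abstract, the efficiency coefficient (Nash-Sutcliffe form) and root mean square error, are straightforward to compute on a calibration or verification set. The observed and predicted values below are synthetic placeholders.

```python
import math

# Sketch of the two model-selection metrics: an efficiency coefficient
# of 1 means perfect prediction; RMSE measures the typical error size.

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def efficiency(obs, pred):
    """Nash-Sutcliffe-style efficiency: 1 - residual SS / total SS."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.1, 1.9, 3.0, 4.2, 4.8]
eff = efficiency(obs, pred)
err = rmse(obs, pred)
```

For these synthetic values the efficiency is 0.99, i.e. above the 0.96 threshold the study treats as accurate.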
Five regional scale models with a horizontal domain covering the European continent and its surrounding seas, two hemispheric and one global scale model participated in the atmospheric Hg modelling intercomparison study. The models were compared between each other and with availa...
Quadrupedal rodent gait compensations in a low dose monoiodoacetate model of osteoarthritis.
Lakes, Emily H; Allen, Kyle D
2018-06-01
Rodent gait analysis provides robust, quantitative results for preclinical musculoskeletal and neurological models. In prior work, surgical models of osteoarthritis have been found to result in a hind limb shuffle-stepping gait compensation, while a high-dose monoiodoacetate (MIA, 3 mg) model resulted in a hind limb antalgic gait. However, it is unknown whether the antalgic gait caused by MIA is associated with the severity of degeneration from the high dosage or with the whole-joint degeneration associated with glycolysis inhibition. This study evaluates rodent gait changes resulting from a low-dose, 1 mg unilateral intra-articular injection of MIA compared with saline-injected and naïve rats. Spatiotemporal and dynamic gait parameters were collected from a total of 42 male Lewis rats spread across 3 time points: 1, 2, and 4 weeks post-injection. To provide a detailed analysis of this low-dose MIA model, gait analysis was used to quantify both fore and hind limb gait parameters. Our data indicate that 1 mg of MIA caused relatively minor degeneration and a shuffle-step gait compensation similar to that observed in prior surgical models, and different from the compensation seen in the previously studied 3 mg model. Additionally, this study provides a detailed 4-limb analysis of rodent gait that includes spatiotemporal and dynamic data from the same gait trial. These data highlight the importance of measuring dynamic data in combination with spatiotemporal data, since compensatory gait patterns may not be captured by spatial, temporal, or dynamic characterizations alone. Copyright © 2018 Elsevier B.V. All rights reserved.
Kolls, Brad J; Lai, Amy H; Srinivas, Anang A; Reid, Robert R
2014-06-01
The purpose of this study was to determine the relative cost reductions within different staffing models for continuous video-electroencephalography (cvEEG) service by introducing a template system for 10/20 lead application. We compared six staffing models using decision tree modeling based on historical service line utilization data from the cvEEG service at our center. Templates were integrated into technologist-based service lines in six different ways. The six models studied were templates for all studies, templates for intensive care unit (ICU) studies, templates for on-call studies, templates for studies of ≤ 24-hour duration, technologists for on-call studies, and technologists for all studies. Cost was linearly related to the study volume for all models with the "templates for all" model incurring the lowest cost. The "technologists for all" model carried the greatest cost. Direct cost comparison shows that any introduction of templates results in cost savings, with the templates being used for patients located in the ICU being the second most cost efficient and the most practical of the combined models to implement. Cost difference between the highest and lowest cost models under the base case produced an annual estimated savings of $267,574. Implementation of the ICU template model at our institution under base case conditions would result in a $205,230 savings over our current "technologist for all" model. Any implementation of templates into a technologist-based cvEEG service line results in cost savings, with the most significant annual savings coming from using the templates for all studies, but the most practical implementation approach with the second highest cost reduction being the template used in the ICU. The lowered costs determined in this work suggest that a template-based cvEEG service could be supported at smaller centers with significantly reduced costs and could allow for broader use of cvEEG patient monitoring.
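The abstract states that cost was linearly related to study volume for every staffing model, so each model reduces to cost = fixed + rate × volume and the cheapest model can be found by direct evaluation. The fixed costs and per-study rates below are hypothetical, not the study's figures.

```python
# Hedged sketch of the linear cost comparison across staffing models.
# All dollar figures are illustrative assumptions, not the study's data.

STAFFING_MODELS = {
    "templates_for_all": (20000.0, 50.0),      # (fixed cost, cost per study)
    "templates_icu_only": (20000.0, 90.0),
    "technologists_for_all": (20000.0, 250.0),
}

def annual_cost(model, volume):
    fixed, per_study = STAFFING_MODELS[model]
    return fixed + per_study * volume

def cheapest(volume):
    """Return the staffing model with the lowest cost at this volume."""
    return min(STAFFING_MODELS, key=lambda m: annual_cost(m, volume))

savings = annual_cost("technologists_for_all", 1000) - annual_cost("templates_for_all", 1000)
```

Because every cost curve is linear with the same sign of slope ordering, the ranking of models is the same at any positive volume, which matches the abstract's finding that "templates for all" is always lowest.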
Yavari, Fatemeh; Mahdavi, Shirin; Towhidkhah, Farzad; Ahmadi-Pajouh, Mohammad-Ali; Ekhtiari, Hamed; Darainy, Mohammad
2016-04-01
Despite several pieces of evidence suggesting that the human brain employs internal models for motor control and learning, the location of these models in the brain is not yet clear. In this study, we used transcranial direct current stimulation (tDCS) to manipulate right cerebellar function while subjects adapted to a visuomotor task. We investigated the effect of this manipulation on the internal forward and inverse models by measuring two kinds of behavior: generalization of training in one direction to neighboring directions (as a proxy for the inverse model) and localization of the hand position after movement without visual feedback (as a proxy for the forward model). The experimental results showed no effect of cerebellar tDCS on generalization, but a significant effect on localization. These observations support the idea that the cerebellum is a possible brain region for internal forward, but not inverse, model formation. We also used a realistic human head model to calculate the current density distribution in the brain; the result confirmed the passage of current through the cerebellum. Moreover, to further explain some of the observed experimental results, we modeled the visuomotor adaptation process with the help of a biologically inspired method known as population coding, incorporating the effect of tDCS in the model. The results of this modeling study closely match our experimental data and provide further evidence that tDCS manipulates the forward model's function in the cerebellum.
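The population-coding idea can be illustrated with a standard textbook construction: a bank of direction-tuned units with cosine tuning curves, decoded by a population vector. This is a generic sketch of the method, not the authors' specific model; all parameters are illustrative.

```python
import math

# Hedged sketch of population coding for movement direction:
# each unit responds with a rectified cosine tuning curve around its
# preferred direction, and the encoded direction is read out with a
# population vector (vector sum of preferred directions weighted by rate).

def responses(preferred_dirs, stimulus):
    """Rectified cosine tuning around each unit's preferred direction."""
    return [max(0.0, math.cos(stimulus - d)) for d in preferred_dirs]

def population_vector(preferred_dirs, rates):
    """Decode direction as the angle of the rate-weighted vector sum."""
    x = sum(r * math.cos(d) for d, r in zip(preferred_dirs, rates))
    y = sum(r * math.sin(d) for d, r in zip(preferred_dirs, rates))
    return math.atan2(y, x)

# 16 units with preferred directions uniformly spaced around the circle.
dirs = [2 * math.pi * k / 16 for k in range(16)]
decoded = population_vector(dirs, responses(dirs, math.pi / 3))
```

With uniformly spaced preferred directions the population vector recovers the stimulus direction; a tDCS-like perturbation could then be modeled as a gain change on a subset of units.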
Employing a Modified Diffuser Momentum Model to Simulate Ventilation of the Orion CEV
NASA Technical Reports Server (NTRS)
Straus, John; Lewis, John F.
2011-01-01
The Ansys CFX CFD modeling tool was used to support the design efforts of the ventilation system for the Orion CEV. CFD modeling was used to establish the flow field within the cabin for several supply configurations. A mesh and turbulence model sensitivity study was performed before the design studies. Results were post-processed for comparison with performance requirements. Most configurations employed straight vaned diffusers to direct and throw the flow. To manage the size of the models, the diffuser vanes were not resolved. Instead, a momentum model was employed to account for the effect of the diffusers. The momentum model was tested against a separate, vane-resolved side study. Results are presented for a single diffuser configuration for a low supply flow case.
Investigations for Supersonic Transports at Transonic and Supersonic Conditions
NASA Technical Reports Server (NTRS)
Rivers, S. Melissa B.; Owens, Lewis R.; Wahls, Richard A.
2007-01-01
Several computational studies were conducted as part of NASA's High-Speed Research Program. Results of turbulence model comparisons from two studies on supersonic transport configurations performed during the program are given. The effects of grid topology and the representation of the actual wind tunnel model geometry are also investigated. Results are presented for both transonic conditions at Mach 0.90 and supersonic conditions at Mach 2.48. A feature of these two studies was the availability of higher Reynolds number wind tunnel data with which to compare the computational results. The transonic wind tunnel data was obtained in the National Transonic Facility at NASA Langley, and the supersonic data was obtained in the Boeing Polysonic Wind Tunnel. The computational data was acquired using a state of the art Navier-Stokes flow solver with a wide range of turbulence models implemented. The results show that the computed forces compare reasonably well with the experimental data, with the Baldwin-Lomax with Degani-Schiff modifications and the Baldwin-Barth models showing the best agreement for the transonic conditions and the Spalart-Allmaras model showing the best agreement for the supersonic conditions. The transonic results were more sensitive to the choice of turbulence model than were the supersonic results.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
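The core of the DINA model the study extends is a two-parameter item response rule: an examinee who has mastered every attribute an item requires answers correctly unless they slip, while everyone else can only guess. A minimal sketch, with illustrative slip and guess values:

```python
# Sketch of the deterministic input, noisy 'and' gate (DINA) model.
# eta (the ideal response) is 1 only if the examinee masters all
# attributes required by the item's Q-vector; the response probability
# is then 1 - slip, otherwise guess.

def dina_prob(alpha, q, slip, guess):
    """alpha: attribute-mastery vector (0/1); q: item Q-vector (0/1)."""
    eta = all(a >= needed for a, needed in zip(alpha, q))
    return (1.0 - slip) if eta else guess

# Item requires attributes 1 and 2 (not 3).
p_master = dina_prob([1, 1, 0], [1, 1, 0], slip=0.1, guess=0.2)
p_nonmaster = dina_prob([1, 0, 0], [1, 1, 0], slip=0.1, guess=0.2)
```

The study's extension adds a response-time component estimated jointly with this response model via MCMC; the sketch covers only the response side.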
Comparative recruitment dynamics of Alewife and Bloater in Lakes Michigan and Huron
Collingsworth, Paris D.; Bunnell, David B.; Madenjian, Charles P.; Riley, Stephen C.
2014-01-01
The predictive power of recruitment models often relies on the identification and quantification of external variables, in addition to stock size. In theory, the identification of climatic, biotic, or demographic influences on reproductive success assists fisheries management by identifying factors that have a direct and reproducible influence on the population dynamics of a target species. More often, models are constructed as one-time studies of a single population whose results are not revisited when further data become available. Here, we present results from stock recruitment models for Alewife Alosa pseudoharengus and Bloater Coregonus hoyi in Lakes Michigan and Huron. The factors that explain variation in Bloater recruitment were remarkably consistent across populations and with previous studies that found Bloater recruitment to be linked to population demographic patterns in Lake Michigan. Conversely, our models were poor predictors of Alewife recruitment in Lake Huron but did show some agreement with previously published models from Lake Michigan. Overall, our results suggest that external predictors of fish recruitment are difficult to discern using traditional fisheries models, and reproducing the results from previous studies may be difficult particularly at low population sizes.
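Stock-recruitment models of the kind described here are often Ricker curves extended with external covariates. The sketch below shows that generic form under assumed parameter values; it is not the paper's fitted model.

```python
import math

# Hedged sketch of a Ricker stock-recruitment curve with an optional
# external covariate X: R = a * S * exp(-b * S + c * X).
# All parameter values are illustrative.

def ricker(stock, a, b, c=0.0, covariate=0.0):
    return a * stock * math.exp(-b * stock + c * covariate)

# The curve exhibits overcompensation: recruitment peaks at S = 1/b
# and declines at larger stock sizes.
peak_stock = 1.0 / 0.002
r_low = ricker(100.0, a=2.0, b=0.002)
r_peak = ricker(peak_stock, a=2.0, b=0.002)
r_high = ricker(2000.0, a=2.0, b=0.002)
```

Adding climatic or demographic covariates through the exponent (the `c * X` term) is how external predictors like those discussed in the abstract typically enter such models.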
Evans, Alistair R.; McHenry, Colin R.
2015-01-01
The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of the input geometry, and influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity analyses of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower-resolution surfaces in a comparative FEA study. As surface mesh resolution links input geometry with the resulting solid mesh, the implication of these results is that low-resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Currency availability at Bank Indonesia (BI) can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency at each Representative Office (RO) of BI in East Java using a hybrid of exponential smoothing, based on a state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first examines the hybrid model using simulated data containing trend, seasonal, and calendar variation patterns. The second applies the hybrid model to forecasting the inflow and outflow of currency at each RO of BI in East Java. The first set of results indicates that the exponential smoothing model cannot capture the calendar variation pattern; it yields RMSE values 10 times the standard deviation of the error. The second set of results indicates that the hybrid model can capture trend, seasonal, and calendar variation patterns; it yields RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang, and Jember, and the outflow of currency in Surabaya and Kediri. The time series regression model yields better forecasts for the other three variables: the outflow of currency in Malang and Jember, and the inflow of currency in Kediri.
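The hybrid idea can be sketched in its simplest form: remove an additive calendar effect from the flagged periods, apply exponential smoothing to the adjusted series, then add the effect back when the forecast period is itself a calendar-variation period. The data, flags, and effect size below are synthetic assumptions, not the paper's state-space formulation.

```python
# Hedged sketch of a hybrid "smoothing + calendar variation" forecast.
# A plain smoother treats the holiday spike as level change; subtracting
# a calendar effect first lets the smoother track the underlying series.

def ses(series, alpha=0.5):
    """Simple exponential smoothing; returns the one-step-ahead level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def hybrid_forecast(series, calendar_flags, effect, alpha=0.5, next_flag=0):
    adjusted = [y - effect * f for y, f in zip(series, calendar_flags)]
    return ses(adjusted, alpha) + effect * next_flag

data = [10.0, 11.0, 30.0, 12.0]   # synthetic series with a holiday spike
flags = [0, 0, 1, 0]              # 1 marks the calendar-variation period
fc = hybrid_forecast(data, flags, effect=19.0, next_flag=0)
```

This mirrors the abstract's finding: smoothing alone cannot represent the calendar spike, but the hybrid handles it by modeling the spike separately.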
Assessment of CMIP5 historical simulations of rainfall over Southeast Asia
NASA Astrophysics Data System (ADS)
Raghavan, Srivatsan V.; Liu, Jiandong; Nguyen, Ngoc Son; Vu, Minh Tue; Liong, Shie-Yui
2018-05-01
We present preliminary analyses of the historical (1986-2005) climate simulations of a ten-member subset of the Coupled Model Intercomparison Project Phase 5 (CMIP5) global climate models over Southeast Asia. The objective of this study was to evaluate the general circulation models' performance in simulating the mean state of climate over this less-studied, climate-vulnerable region, with a focus on precipitation. Results indicate that most of the models are unable to reproduce the observed state of climate over Southeast Asia. Though the multi-model ensemble mean represents the observations better, the uncertainties in the individual models are far too high. No particular model performed well in simulating the historical climate of Southeast Asia. There seems to be no significant influence of the models' spatial resolution on the quality of simulation, despite the view that higher-resolution models fare better. The results emphasize the need for careful consideration of models for impact studies and for improving the ability of the next generation of models to simulate regional climates.
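The abstract's central comparison, individual models versus the multi-model ensemble mean, can be demonstrated with a toy calculation: when individual model errors partially cancel, the ensemble mean scores a lower RMSE than any member. The rainfall values below are synthetic.

```python
import math

# Toy illustration of why a multi-model ensemble mean can outperform
# its members: opposite-signed biases cancel in the average.

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [5.0, 7.0, 6.0]             # synthetic observed rainfall
models = [
    [4.0, 8.0, 5.0],              # member biased low, high, low
    [6.0, 6.0, 7.0],              # member biased high, low, high
]
ensemble = [sum(vals) / len(vals) for vals in zip(*models)]
errors = {f"model_{i}": rmse(obs, m) for i, m in enumerate(models)}
errors["ensemble"] = rmse(obs, ensemble)
```

Cancellation is not guaranteed in practice; when members share a common bias, the ensemble mean inherits it, which is consistent with the study's caution about individual-model uncertainty.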
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information were calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Validation of numerical models for flow simulation in labyrinth seals
NASA Astrophysics Data System (ADS)
Frączek, D.; Wróblewski, W.
2016-10-01
CFD results were compared with experimental results for the flow through a labyrinth seal. RANS turbulence models (k-epsilon, k-omega, SST and SST-SAS) were selected for the study. Steady and transient results were analyzed. ANSYS CFX was used for the numerical computations. The analysis included flow through a sealing section with a honeycomb land. Leakage flows and velocity profiles in the seal were compared. In addition to the comparison of computational models, the divergence between modeling and experimental results was determined. Tips for modeling these problems were formulated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-01
Transposition models are widely used in the solar energy industry to simulate solar radiation on inclined photovoltaic (PV) panels. These transposition models have been developed using various assumptions about the distribution of the diffuse radiation, and most of the parameterizations in these models have been developed using hourly ground data sets. Numerous studies have compared the performance of transposition models, but this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty using high-resolution ground measurements in the plane of array. Our results suggest that the amount of aerosol optical depth can affect the accuracy of isotropic models. The choice of empirical coefficients and the use of decomposition models can both result in uncertainty in the output from the transposition models. It is expected that the results of this study will ultimately lead to improvements of the parameterizations as well as the development of improved physical models.
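The simplest of the transposition models discussed, the isotropic-sky model, sums a beam term, an isotropic sky-diffuse term, and a ground-reflected term to give plane-of-array (POA) irradiance. The sketch below uses the standard textbook form; the input irradiance values and albedo are illustrative.

```python
import math

# Sketch of the isotropic transposition model:
# POA = DNI*cos(AOI) + DHI*(1+cos(tilt))/2 + GHI*albedo*(1-cos(tilt))/2

def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    tilt = math.radians(tilt_deg)
    beam = dni * max(0.0, math.cos(math.radians(aoi_deg)))
    sky_diffuse = dhi * (1.0 + math.cos(tilt)) / 2.0        # isotropic sky
    ground = ghi * albedo * (1.0 - math.cos(tilt)) / 2.0    # ground reflection
    return beam + sky_diffuse + ground

poa = poa_isotropic(dni=800.0, dhi=100.0, ghi=600.0, tilt_deg=30.0, aoi_deg=20.0)
flat = poa_isotropic(dni=800.0, dhi=100.0, ghi=600.0, tilt_deg=0.0, aoi_deg=0.0)
```

The isotropic-sky assumption is exactly where the paper locates part of the model uncertainty: anisotropic models (e.g. Perez-type) replace the sky-diffuse term with empirically parameterized circumsolar and horizon-brightening components.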
An Exploratory Study of the Role of Human Resource Management in Models of Employee Turnover
ERIC Educational Resources Information Center
Ozolina-Ozola, Iveta
2016-01-01
The purpose of this paper is to present the study results of the human resource management role in the voluntary employee turnover models. The mixed methods design was applied. On the basis of the results of the search and evaluation of publications, the 16 models of employee turnover were selected. Applying the method of content analysis, the…
Lorjaroenphon, Yaowapa; Cadwallader, Keith R
2015-01-28
Thirty aroma-active components of a cola-flavored carbonated beverage were quantitated by stable isotope dilution assays, and their odor activity values (OAVs) were calculated. The OAV results revealed that 1,8-cineole, (R)-(-)-linalool, and octanal made the greatest contribution to the overall aroma of the cola. A cola aroma reconstitution model was constructed by adding 20 high-purity standards to an aqueous sucrose-phosphoric acid solution. The results of headspace solid-phase microextraction and sensory analyses were used to adjust the model to better match authentic cola. The rebalanced model was used as a complete model for the omission study. Sensory results indicated that omission of a group consisting of methyleugenol, (E)-cinnamaldehyde, eugenol, and (Z)- and (E)-isoeugenols differed from the complete model, while omission of the individual components of this group did not differ from the complete model. These results indicate that a balance of numerous odorants is responsible for the characteristic aroma of cola-flavored carbonated beverages.
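The odor activity value (OAV) used to rank the cola odorants is simply the ratio of a compound's concentration to its odor detection threshold. The concentrations and thresholds below are hypothetical placeholders, not the study's measured values.

```python
# Sketch of the OAV calculation: OAV = concentration / odor threshold.
# Compounds with OAV > 1 are considered potential aroma contributors;
# larger OAVs indicate larger contributions.

def odor_activity_value(concentration_ug_l, threshold_ug_l):
    return concentration_ug_l / threshold_ug_l

# (concentration, threshold) pairs in the same units -- hypothetical values.
compounds = {
    "1,8-cineole": (50.0, 1.1),
    "(R)-(-)-linalool": (30.0, 1.5),
    "octanal": (10.0, 0.7),
}
oavs = {name: odor_activity_value(c, t) for name, (c, t) in compounds.items()}
ranked = sorted(oavs, key=oavs.get, reverse=True)
```

Ranking by OAV rather than raw concentration is what lets a low-concentration, low-threshold compound outrank an abundant but weakly odorous one.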
Eating disorders among fashion models: a systematic review of the literature.
Zancu, Simona Alexandra; Enea, Violeta
2017-09-01
In the light of recent concerns regarding eating disorders among fashion models and the professional regulation of the fashion model occupation, an examination of the scientific evidence on this issue is necessary. The article reviews findings on the prevalence of eating disorders and body image concerns among professional fashion models. A systematic literature search was conducted using ProQuest, EBSCO, PsycINFO, SCOPUS, and Gale Cengage electronic databases. The search yielded very few studies on fashion models and eating disorders published between 1980 and 2015; seven articles were included in this review. Overall, the results of these studies do not indicate a higher prevalence of eating disorders among fashion models compared with non-models. Fashion models have a positive body image and generally do not report more dysfunctional eating behaviors than controls. However, fashion models are on average slightly underweight, with significantly lower BMI than controls, give higher importance to appearance and a thin body shape, and thus have a higher prevalence of partial-syndrome eating disorders than controls. Despite public concerns, research on eating disorders among professional fashion models is extremely scarce, and the results cannot be generalized to all models. The existing research fails to clarify the matter of eating disorders among fashion models, and given the small number of studies, further research is needed.
Evaluation of Lightning Induced Effects in a Graphite Composite Fairing Structure. Parts 1 and 2
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Defining the electromagnetic environment inside a graphite composite fairing due to lightning is of interest to spacecraft developers. This paper is the first in a two-part series and studies the shielding effectiveness of a graphite composite model fairing using derived equivalent properties. A frequency domain Method of Moments (MoM) model is developed and comparisons are made with shielding test results obtained using a vehicle-like composite fairing. The comparison results show that the analytical models can adequately predict the test results. Both measured and model data indicate that graphite composite fairings provide significant attenuation to magnetic fields as frequency increases. Diffusion effects are also discussed. Part 2 examines the time-domain effects through the development of loop-based induced-field testing, and a Transmission-Line-Matrix (TLM) model is developed in the time domain to study how the composite fairing affects lightning-induced magnetic fields. Comparisons are made with shielding test results obtained using a vehicle-like composite fairing in the time domain. The comparison results show that the analytical models can adequately predict the test and industry results.
Turbulence Model Comparisons for Supersonic Transports at Transonic and Supersonic Conditions
NASA Technical Reports Server (NTRS)
Rivers, S. M. B.; Wahls, R. A.
2003-01-01
Results of turbulence model comparisons from two studies on supersonic transport configurations performed during the NASA High-Speed Research Program are given. Results are presented for both transonic conditions at Mach 0.90 and supersonic conditions at Mach 2.48. A feature of these two studies was the availability of higher Reynolds number wind tunnel data with which to compare the computational results. The transonic wind tunnel data was obtained in the National Transonic Facility at NASA Langley, and the supersonic data was obtained in the Boeing Polysonic Wind Tunnel. The computational data was acquired using a state of the art Navier-Stokes flow solver with a wide range of turbulence models implemented. The results show that the computed forces compare reasonably well with the experimental data, with the Baldwin-Lomax with Degani-Schiff modifications and the Baldwin-Barth models showing the best agreement for the transonic conditions and the Spalart-Allmaras model showing the best agreement for the supersonic conditions. The transonic results were more sensitive to the choice of turbulence model than were the supersonic results.
On the application of the PFEM to droplet dynamics modeling in fuel cells
NASA Astrophysics Data System (ADS)
Ryzhakov, Pavel B.; Jarauta, Alex; Secanell, Marc; Pons-Prats, Jordi
2017-07-01
The Particle Finite Element Method (PFEM) is used to develop a model to study two-phase flow in fuel cell gas channels. First, the PFEM is used to develop the model of free and sessile droplets. The droplet model is then coupled to an Eulerian, fixed-grid, model for the airflow. The resulting coupled PFEM-Eulerian algorithm is used to study droplet oscillations in an air flow and droplet growth in a low-temperature fuel cell gas channel. Numerical results show good agreement with predicted frequencies of oscillation, contact angle, and deformation of injected droplets in gas channels. The PFEM-based approach provides a novel strategy to study droplet dynamics in fuel cells.
Future-year ozone prediction for the United States using updated models and inputs.
Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun
2017-08-01
The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) emissions modeling tools, inventories, and meteorology to conduct ozone source attribution and sensitivity studies. Results show that future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations and were important for many eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.
2011-01-01
Background Valve dysfunction is a common cardiovascular pathology. Despite significant clinical research, there is little formal study of how valve dysfunction affects overall circulatory dynamics. Validated models would offer the ability to better understand these dynamics and thus optimize diagnosis, as well as surgical and other interventions. Methods A cardiovascular and circulatory system (CVS) model has already been validated in silico, and in several animal model studies. It accounts for valve dynamics using Heaviside functions to simulate a physiologically accurate "open on pressure, close on flow" law. However, it does not consider real-time valve opening dynamics and therefore does not fully capture valve dysfunction, particularly where the dysfunction involves partial closure. This research describes an updated version of this previous closed-loop CVS model that includes the progressive opening of the mitral valve, and is defined over the full cardiac cycle. Results Simulations of the cardiovascular system with a healthy mitral valve are performed, and the global hemodynamic behaviour is studied and compared with previously validated results. The error between the resulting pressure-volume (PV) loops of the already validated CVS model and the new CVS model that includes the progressive opening of the mitral valve is assessed and remains within typical measurement error and variability. Simulations of ischemic mitral insufficiency are also performed. Pressure-volume loops, transmitral flow evolution, and mitral valve aperture area evolution follow reported measurements in shape, amplitude, and trends. Conclusions The resulting cardiovascular system model including mitral valve dynamics provides a foundation for clinical validation and the study of valvular dysfunction in vivo. The overall models and results could readily be generalised to other cardiac valves. PMID:21942971
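The "open on pressure, close on flow" law described in the abstract can be sketched as a two-branch switching rule: a closed valve opens when the transvalvular pressure gradient becomes positive, and an open valve closes when forward flow falls to zero. This is a generic illustration of the law, not the authors' Heaviside formulation; all values are arbitrary.

```python
# Hedged sketch of the "open on pressure, close on flow" valve law.
# The state transition depends on whether the valve is currently open.

def valve_open(pressure_gradient, flow, was_open):
    if not was_open:
        return pressure_gradient > 0.0   # closed valve: open on pressure
    return flow > 0.0                    # open valve: close when forward flow stops

s1 = valve_open(pressure_gradient=5.0, flow=0.0, was_open=False)   # opens on positive gradient
s2 = valve_open(pressure_gradient=-1.0, flow=2.0, was_open=s1)     # stays open while flow is forward
s3 = valve_open(pressure_gradient=-1.0, flow=0.0, was_open=s2)     # closes when forward flow ceases
```

The paper's extension replaces this all-or-nothing state with a progressively opening aperture area, which this binary sketch deliberately omits.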
NASA Astrophysics Data System (ADS)
Yu, Sen; Lu, Hongwei
2018-04-01
Under the effects of global change, water crisis ranks as the top global risk of the coming decade, and water conflict in transboundary river basins, together with the geostrategic competition it drives, is of particular concern. This study presents an innovative integrated PPMGWO model for optimal allocation of water resources in a transboundary river basin, built by coupling the projection pursuit model (PPM) with the Grey wolf optimization (GWO) method. Using the Songhua River basin and its 25 control units as an example, the PPMGWO model is applied to allocate water quantity. Taking the 2015 water consumption of all control units in the Songhua River basin as the reference, and comparing against optimization allocation results from the firefly algorithm (FA) and particle swarm optimization (PSO) as well as the PPMGWO model, the average differences between the corresponding allocation results and the reference values are 0.195, 0.151, and 0.085 billion m3, respectively. The average difference of the PPMGWO model is clearly the lowest and its allocation results are closest to reality, which confirms the reasonableness, feasibility, and accuracy of the model. The PPMGWO model is then used to simulate the allocation of available water quantity in the Songhua River basin in 2018, 2020, and 2030. The simulation results show that the water quantity that could be allocated to all control units demonstrates an overall increasing trend under reasonable and equitable exploitation and utilization of water resources in the basin in the future. In addition, this study offers a useful reference for comprehensive management and water resources allocation in other transboundary river basins.
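Grey wolf optimization, one half of the PPMGWO coupling, can be sketched in a few lines: candidate solutions ("wolves") move toward the three best solutions found so far (alpha, beta, delta) with a step coefficient that decays from exploration to exploitation. The objective below is a toy sphere function standing in for the real allocation objective; population size, iteration count, and bounds are illustrative assumptions:

```python
import numpy as np

def sphere(x):
    """Toy objective standing in for the real water-allocation objective."""
    return float(np.sum((x - 1.0) ** 2))

def gwo(f, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal grey wolf optimizer (GWO) minimizing f over a box."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        order = np.argsort([f(x) for x in X])
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / iters)          # decays 2 -> 0 over the run
        new_X = []
        for x in X:
            pulls = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                pulls.append(leader - A * np.abs(C * leader - x))
            new_X.append(np.clip(np.mean(pulls, axis=0), lo, hi))
        X = np.asarray(new_X)
    return min(X, key=f)

best = gwo(sphere, dim=3)
print(best)  # should lie close to the optimum at (1, 1, 1)
```

In PPMGWO, the projection pursuit model would supply the objective being minimized; here that role is played by the toy function.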
NASA Astrophysics Data System (ADS)
Kumar, Awkash; Patil, Rashmi S.; Dikshit, Anil Kumar; Kumar, Rakesh; Brandt, Jørgen; Hertel, Ole
2016-10-01
The accuracy of air quality model results is, in most cases, governed by the quality of the emission and meteorological input data. In the present study, two air quality models were applied in an inverse modelling exercise to determine the particulate matter emission strengths of urban and regional sources in and around Mumbai, India. The study takes its outset in an existing emission inventory for Total Suspended Particulate Matter (TSPM). Since the available TSPM inventory is known to be uncertain and incomplete, the study aims to qualify this inventory through inverse modelling. On-site meteorological data for the air quality models were generated using the Weather Research and Forecasting (WRF) model. The regional background concentrations of particulate matter, transported into the study domain from sources outside it, were obtained from calculations with the Danish Eulerian Hemispheric Model (DEHM). These background concentrations were then used as boundary concentrations in AERMOD calculations of the contribution from local urban sources. The AERMOD results were subsequently compared with observed concentrations, and emission correction factors were obtained by best fit of the model results to the observations. The study showed that emissions had to be up-scaled by between 14 and 55% to fit the observed concentrations, assuming, of course, that the DEHM model describes a background concentration level of the right magnitude.
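The inverse-modelling step can be illustrated as a one-parameter least-squares fit: observed concentrations are matched by the regional background plus a scaled local contribution, and the scale factor is the emission correction. All numbers below are invented for illustration:

```python
import numpy as np

# Invented TSPM concentrations (ug/m3): observations, DEHM-style regional
# background, and AERMOD-style local urban contribution at four receptors.
obs        = np.array([92.0, 110.0, 85.0, 130.0])
background = np.array([30.0,  28.0, 32.0,  35.0])
local_mod  = np.array([40.0,  55.0, 35.0,  65.0])

# One-parameter least squares: obs ~ background + factor * local_mod,
# where `factor` is the emission correction (up-scaling) factor.
factor = np.sum((obs - background) * local_mod) / np.sum(local_mod**2)
print(round(float(factor), 2))  # ~1.49, i.e. emissions up-scaled by ~49%
```

A factor above 1 indicates the inventory underestimates emissions, which is the direction the study found (14-55% up-scaling).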
NASA Technical Reports Server (NTRS)
1973-01-01
A condensed summary of the traffic analyses and systems requirements for the new traffic model is presented. The results of each study activity are explained, key analyses are described, and important results are highlighted.
Numerical study of a separating and reattaching flow by using Reynolds-stress turbulence closure
NASA Technical Reports Server (NTRS)
Amano, R. S.; Goel, P.
1983-01-01
The numerical study of the Reynolds-stress turbulence closure for separating, reattaching, recirculating and redeveloping flow is summarized. The calculations were made for two different closure models of the pressure-strain correlation, and the results were compared with experimental data. Furthermore, these results were compared with computations made using previously developed one-layer and three-layer treatments of the k-epsilon turbulence model. Generally, the computations by the Reynolds-stress model show better results than those by the k-epsilon model; in particular, some improvement was noticed in the redeveloping region of the separating and reattaching flow in a pipe with sudden expansion.
Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.
Pink, Joshua; Parker, Ben; Petrou, Stavros
2016-09-01
A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters. 
Studies should consistently report not only all simplifying assumptions made but also the estimated impact of these assumptions on the cost-effectiveness results.
Gottfredson, Nisha C.; Bauer, Daniel J.; Baldwin, Scott A.; Okiishi, John C.
2014-01-01
Objective: This study demonstrates how to use a shared parameter mixture model (SPMM) in longitudinal psychotherapy studies to accommodate missing data that are due to a correlation between rate of improvement and termination of therapy. Traditional growth models assume that such a relationship does not exist (i.e., that data are missing at random) and will produce biased results if this assumption is incorrect. Method: We use longitudinal data from 4,676 patients enrolled in a naturalistic study of psychotherapy to compare results from a latent growth model and an SPMM. Results: In this dataset, estimates of the rate of improvement during therapy differ by 6.50-6.66% across the two models, indicating that participants with steeper trajectories left psychotherapy earliest, thereby potentially biasing inference for the slope in the latent growth model. Conclusion: We conclude that reported estimates of change during therapy may be underestimated in naturalistic studies of therapy in which participants and their therapists determine the end of treatment. Because non-randomly missing data can also occur in randomized controlled trials or in observational studies of development, the utility of the SPMM extends beyond naturalistic psychotherapy data. PMID:24274626
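The bias mechanism the SPMM addresses can be demonstrated with a small simulation (the SPMM itself is not implemented here): when faster improvers drop out earlier, a naive pooled growth estimate is attenuated toward zero. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t_max = 2000, 10
slopes = rng.normal(-1.0, 0.4, n)       # true per-session change (negative = improving)
intercepts = rng.normal(20.0, 3.0, n)   # initial symptom levels

# Non-random dropout: steeper (faster-improving) trajectories end earliest
n_obs = np.clip((t_max + 4 * (slopes - slopes.mean())).astype(int), 2, t_max)

num = den = 0.0
for i in range(n):
    t = np.arange(n_obs[i])
    y = intercepts[i] + slopes[i] * t + rng.normal(0.0, 1.0, len(t))
    t_c = t - t.mean()
    num += t_c @ y                      # pooled within-person OLS pieces
    den += t_c @ t_c
naive_slope = num / den                 # growth estimate ignoring dropout

print(naive_slope, slopes.mean())       # naive estimate is attenuated
```

Because subjects with longer follow-up carry more weight in the pooled estimate, and those are the slower improvers, the naive slope understates the average rate of improvement, which is the direction of bias the abstract reports.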
Dynamic response tests of inertial and optical wind-tunnel model attitude measurement devices
NASA Technical Reports Server (NTRS)
Buehrle, R. D.; Young, C. P., Jr.; Burner, A. W.; Tripp, J. S.; Tcheng, P.; Finley, T. D.; Popernack, T. G., Jr.
1995-01-01
Results are presented for an experimental study of the response of inertial and optical wind-tunnel model attitude measurement systems in a wind-off simulated dynamic environment. This study is part of an ongoing activity at the NASA Langley Research Center to develop high-accuracy, advanced model attitude measurement systems that can be used in a dynamic wind-tunnel environment. This activity was prompted by the inertial model attitude sensor response observed during high levels of model vibration, which results in a model attitude measurement bias error. Significant bias errors in model attitude measurement were found for the inertial device during wind-off dynamic testing of a model system. The amount of bias present during wind-tunnel tests will depend on the amplitudes of the model dynamic response and the modal characteristics of the model system. Correction models are presented that predict the vibration-induced bias errors to a high degree of accuracy for the vibration modes characterized in the simulated dynamic environment. The optical system results were uncorrupted by model vibration in the laboratory setup.
Assessment of the quality of reporting observational studies in the pediatric dental literature.
Butani, Yogita; Hartz, Arthur; Levy, Steven; Watkins, Catherine; Kanellis, Michael; Nowak, Arthur
2006-01-01
The purpose of this assessment was to evaluate reporting of observational studies in the pediatric dental literature. This assessment included the following steps: (1) developing a model for reporting information in clinical dentistry studies; (2) identifying treatment comparisons in pediatric dentistry that were evaluated by at least 5 observational studies; (3) abstracting from these studies any data indicated by applying the reporting model; and (4) comparing available data elements to the desired data elements in the reporting model. The reporting model included data elements related to: (1) patients; (2) providers; (3) treatment details; and (4) study design. Two treatment comparisons in pediatric dentistry were identified with 5 or more observational studies: (1) stainless steel crowns vs amalgams (10 studies); and (2) composite restorations vs amalgam (5 studies). Results from studies comparing the same treatments varied substantially. Data elements from the reporting model that could have explained some of the variation were often reported inadequately or not at all. Reporting of observational studies in the pediatric dental literature may be inadequate for an informed interpretation of the results. Models similar to that used in this study could be used for developing standards for the conduct and reporting of observational studies in pediatric dentistry.
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, H.; Ying, Q.; Chen, S.-H.; Vandenberghe, F.; Kleeman, M. J.
2014-08-01
For the first time, a decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution and daily time resolution has been conducted in California to provide air quality data for health effects studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, EC, OC, nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95%, 100%, 71%, 73%, and 92% of the simulated months, respectively. The base dataset provides an improvement for predicted population exposure to PM concentrations in California compared to exposures estimated by central site monitors operated one day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. 
The WRF model tends to over-predict wind speed during stagnation events, leading to under-predictions of high PM concentrations, usually in winter months. The WRF model also generally under-predicts relative humidity, resulting in less particulate nitrate formation especially during winter months. These issues will be improved in future studies. All model results included in the current manuscript can be downloaded free of charge at http://faculty.engineering.ucdavis.edu/kleeman/.
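The mean fractional bias used in the monthly performance criteria above is a standard evaluation metric; a minimal sketch with invented monthly PM2.5 values:

```python
import numpy as np

def mean_fractional_bias(model, obs):
    """MFB = (2/N) * sum((M - O) / (M + O)), the fractional-bias metric
    commonly used in air quality model performance evaluation."""
    model = np.asarray(model, float)
    obs = np.asarray(obs, float)
    return 2.0 * float(np.mean((model - obs) / (model + obs)))

# Invented monthly-mean PM2.5 (ug/m3) at four sites: model vs. observed
print(mean_fractional_bias([12.0, 9.0, 20.0, 15.0], [10.0, 10.0, 25.0, 15.0]))
```

Because the denominator is the average of model and observation rather than the observation alone, MFB is bounded (between -2 and +2) and treats over- and under-prediction symmetrically.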
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, H.; Ying, Q.; Chen, S.-H.; Vandenberghe, F.; Kleeman, M. J.
2015-03-01
For the first time, a ~ decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution over populated regions and daily time resolution has been conducted for California to provide air quality data for health effect studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, elemental carbon (EC), organic carbon (OC), nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95, 100, 71, 73, and 92% of the simulated months, respectively. The base data set provides an improvement for predicted population exposure to PM concentrations in California compared to exposures estimated by central site monitors operated 1 day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. 
The WRF model tends to overpredict wind speed during stagnation events, leading to underpredictions of high PM concentrations, usually in winter months. The WRF model also generally underpredicts relative humidity, resulting in less particulate nitrate formation, especially during winter months. These limitations must be recognized when using data in health studies. All model results included in the current manuscript can be downloaded free of charge at http://faculty.engineering.ucdavis.edu/kleeman/ .
Lamont, Andrea E.; Vermunt, Jeroen K.; Van Horn, M. Lee
2016-01-01
Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we test the effects of violating an implicit assumption often made in these models, i.e., that the independent variables in the model are not directly related to the latent classes. Results indicated that the major risk of failing to model the relationship between predictor and latent class was an increase in the probability of selecting additional latent classes and biased class proportions. Additionally, this study tests whether regression mixture models can detect a piecewise relationship between a predictor and outcome. Results suggest that these models are able to detect piecewise relations, but only when the relationship between the latent class and the predictor is included in model estimation. We illustrate the implications of making this assumption through a re-analysis of applied data examining heterogeneity in the effects of family resources on academic achievement. We compare previous results (which assumed no relation between independent variables and latent class) to the model where this assumption is lifted. Implications and analytic suggestions for conducting regression mixture models based on these findings are noted. PMID:26881956
Analysis of structural dynamic data from Skylab. Volume 1: Technical discussion
NASA Technical Reports Server (NTRS)
Demchak, L.; Harcrow, H.
1976-01-01
The results of a study to analyze data and document dynamic program highlights of the Skylab Program are presented. Included are structural model sources, illustration of the analytical models, utilization of models and the resultant derived data, data supplied to organization and subsequent utilization, and specifications of model cycles.
Lu, Dan; Ye, Ming; Curtis, Gary P.
2015-08-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. Our study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. Moreover, these reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling.
Finally, limitations of applying MLBMA to the synthetic study and future real-world modeling are discussed.
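The model-averaging step in MLBMA can be sketched with information-criterion weights built from each model's maximum likelihood. The log-likelihoods, parameter counts, and predictions below are invented, and BIC is used as one common weighting choice (the study's own criterion may differ, e.g. KIC):

```python
import numpy as np

# Invented maximum log-likelihoods, parameter counts, and predictions
# for three alternative reactive-transport models.
log_lik  = np.array([-120.3, -118.9, -125.0])
n_params = np.array([4, 6, 5])
n_obs    = 60
preds    = np.array([0.42, 0.55, 0.30])   # hypothetical predicted U(VI), mg/L

# BIC-based posterior model weights: penalize fit by model complexity,
# then normalize exp(-0.5 * delta) across models.
bic = -2.0 * log_lik + n_params * np.log(n_obs)
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()

averaged = w @ preds                      # model-averaged prediction
print(w, averaged)
```

The averaged prediction leans heavily toward the best-supported model while still drawing on the alternatives, which is the sense in which BMA is more robust than committing to a single model.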
Prediction equations of forced oscillation technique: the insidious role of collinearity.
Narchi, Hassib; AlBlooshi, Afaf
2018-03-27
Many studies have reported reference data for the forced oscillation technique (FOT) in healthy children. The prediction equations for FOT parameters were derived from multivariable regression models examining the effect of age, gender, weight and height on each parameter. As many of these variables are likely to be correlated, collinearity might have affected the accuracy of the models, potentially resulting in misleading, erroneous or difficult-to-interpret conclusions. The aims of this work were to review all FOT publications in children since 2005 and analyze whether collinearity was considered in the construction of the published prediction equations; to compare these prediction equations with our own study; and to analyse, in our study, how collinearity between the explanatory variables might affect the prediction equations if it was not considered in the model. The results showed that none of the ten reviewed studies stated whether collinearity was checked for. Half of the reports also included in their equations variables that are physiologically correlated, such as age, weight and height. The predicted resistance varied by up to 28% among these studies. In our study, multicollinearity was identified between the explanatory variables initially considered for the regression model (age, weight and height). Ignoring it would have resulted in inaccuracies in the coefficients of the equation, their signs (positive or negative), their 95% confidence intervals, their significance level and the model goodness of fit. In conclusion, with inaccurately constructed and improperly reported models, understanding the results and reproducing the models for future research might be compromised.
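Collinearity checks of the kind argued for above are commonly done with variance inflation factors (VIFs). A sketch using synthetic child age/height/weight data; the 5-10 flagging range is a conventional rule of thumb, not a value from the paper:

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column: 1/(1 - R^2) from
    regressing that column on all the others (with intercept)."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

# Synthetic child anthropometrics: age (months), height (cm), weight (kg);
# height and weight are deliberately age-driven, as in real growth data.
rng = np.random.default_rng(0)
age = rng.uniform(6, 60, 200)
height = 65.0 + 0.5 * age + rng.normal(0.0, 2.0, 200)
weight = 6.0 + 0.15 * age + rng.normal(0.0, 1.0, 200)

v = vif(np.column_stack([age, height, weight]))
print(v)  # values above the conventional 5-10 range flag collinearity
```

Running such a check before fitting a prediction equation reveals exactly the age/height/weight entanglement the review found unreported.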
ERIC Educational Resources Information Center
Lu, Yi
2016-01-01
To model students' math growth trajectory, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of conventional growth curve model show gender differences on math IRT scores. When holding socio-economic…
NASA Astrophysics Data System (ADS)
Jang, Yujin; Huh, Jinbum; Lee, Namhun; Lee, Seungsoo; Park, Youngmin
2018-04-01
The RANS equations are widely used to analyze complex flows over aircraft. The equations require a turbulence model for turbulent flow analyses, and a suitable turbulence model must be selected for accurate prediction of aircraft aerodynamic characteristics. In this study, numerical analyses of three-dimensional aircraft are performed to compare the results of various turbulence models for the prediction of aircraft aerodynamic characteristics. A 3-D RANS solver, MSAPv, is used for the aerodynamic analysis. The four turbulence models compared are the Spalart-Allmaras (SA) model, Coakley's q-ω model, Huang and Coakley's k-ɛ model, and Menter's k-ω SST model. Four aircraft are considered: the ARA-M100, the DLR-F6 wing-body, the DLR-F6 wing-body-nacelle-pylon from the second drag prediction workshop, and a high-wing aircraft with nacelles. The CFD results are compared with experimental data and other published computational results. The details of separation patterns, shock positions, and Cp distributions are discussed to characterize the turbulence models.
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.
2018-01-01
In many environmental studies, relationships between natural processes are assessed through regression analyses using time series data. Such data are often multi-scale and non-stationary, which degrades the accuracy of the resulting regression models and the reliability of their results. To deal with this issue, the present paper introduces the EMD-regression methodology, which applies the empirical mode decomposition (EMD) algorithm to the data series and then uses the resulting components in regression models. The proposed methodology has several advantages. First, it accounts for the non-stationarity of the data series. Second, it acts as a scan of the relationship between a response variable and the predictors at different time scales, providing new insight into this relationship. To illustrate the proposed methodology, it is applied to the relationship between weather and cardiovascular mortality in Montreal, Canada. The results shed new light on the studied relationship. For instance, they show that humidity can cause excess mortality at the monthly time scale, a scale not visible in classical models. A comparison is also conducted with state-of-the-art methods, namely generalized additive models and distributed lag models, both widely used in weather-related health studies. The comparison shows that EMD-regression achieves better prediction performance and provides more detail than classical models concerning the relationship.
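The scale-wise regression idea can be sketched as follows. For brevity, a moving-average split stands in for the EMD step (a real analysis would regress on EMD's intrinsic mode functions), and the data are synthetic: the response depends only on the slow scale of the predictor, which the scale-specific coefficients recover:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3 * 365
t = np.arange(n)

# Synthetic daily predictor with a weekly and an annual component
fast = np.sin(2 * np.pi * t / 7)
slow = np.sin(2 * np.pi * t / 365)
x = fast + slow + rng.normal(0.0, 0.2, n)

# Moving-average split standing in for EMD's intrinsic mode functions
def smooth(v, w):
    return np.convolve(v, np.ones(w) / w, mode="same")

slow_part = smooth(x, 91)
fast_part = x - slow_part

# Response driven only by the slow (seasonal) scale of the predictor
y = 2.0 * slow + rng.normal(0.0, 0.5, n)

A = np.column_stack([np.ones(n), fast_part, slow_part])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # intercept, fast-scale and slow-scale coefficients
```

A regression on the raw series would mix the two scales into one coefficient; the decomposition makes the slow-scale effect visible on its own, which is the point of EMD-regression.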
Gilmartin, Heather M; Sousa, Karen H; Battaglia, Catherine
2016-01-01
The central line (CL) bundle interventions are important for preventing central line-associated bloodstream infections (CLABSIs), but a modeling method for testing the CL bundle interventions within a health systems framework is lacking. Guided by the Quality Health Outcomes Model (QHOM), this study tested the CL bundle interventions in reflective and composite latent-variable measurement models to assess the impact of the modeling approaches on an investigation of the relationships between adherence to the CL bundle interventions, organizational context, and CLABSIs. A secondary data analysis was conducted using data from 614 U.S. hospitals that participated in the Prevention of Nosocomial Infection and Cost-Effectiveness Refined study. The sample was randomly split into exploration and validation subsets. The two CL bundle modeling approaches resulted in adequately fitting structural models (RMSEA = .04; CFI = .94) and supported similar relationships within the QHOM. Adherence to the CL bundle had a direct effect on organizational context (reflective = .23; composite = .20; p = .01) and CLABSIs (reflective = -.28; composite = -.25; p = .01). The relationship between context and CLABSIs was not significant. Both modeling methods resulted in partial support of the QHOM. There were small statistical, but large conceptual, differences between the reflective and composite modeling approaches. The empirical impact of the modeling approaches was inconclusive, as both models resulted in a good fit to the data. Lessons learned are presented. Comparison of modeling approaches is recommended when initially modeling variables that have never been modeled, or when directional ambiguity exists, to increase transparency and bring confidence to study findings.
NASA Technical Reports Server (NTRS)
Anderson, G. S.; Hayden, R. E.; Thompson, A. R.; Madden, R.
1985-01-01
The feasibility of acoustical scale modeling techniques for modeling wind effects on long-range, low-frequency outdoor sound propagation was evaluated. Upwind and downwind propagation was studied at 1/100 scale for flat ground and simple hills, with both rigid and finite ground impedance, over a full-scale frequency range from 20 to 500 Hz. Results are presented as 1/3-octave frequency spectra of differences in propagation loss between the case studied and a free-field condition. Selected sets of these results were compared with validated analytical models for propagation loss when such models were available; when they were not, results were compared with predictions from approximate models developed for this study. Comparisons were encouraging in many cases, considering the approximations involved in both the physical modeling and analysis methods. Of particular importance was the favorable comparison between theory and experiment for propagation over soft ground.
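In acoustical scale modeling, shrinking lengths by the scale factor requires raising frequencies by its inverse, so that wavelength-to-geometry ratios are preserved; that is why the 20-500 Hz full-scale band becomes an ultrasonic band in the 1/100-scale model:

```python
# Length scales shrink by 1/100, so acoustic frequencies must rise by 100x
# to keep wavelength-to-geometry ratios the same as at full scale.
scale_factor = 100.0                  # 1/100-scale model, as in the study
full_scale_band_hz = (20.0, 500.0)    # full-scale frequency range studied

model_band_hz = tuple(f * scale_factor for f in full_scale_band_hz)
print(model_band_hz)  # (2000.0, 50000.0): the model is tested at 2-50 kHz
```

This frequency shift is what makes atmospheric absorption and source/receiver transducer behavior at ultrasonic frequencies a practical constraint in such experiments.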
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marketin, Tomislav, E-mail: marketin@phy.hr; Petković, Jelena; Paar, Nils
Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on the nuclear structure models for input. Of all the properties, the beta-decay half-lives are one of the most important ones due to their direct impact on the resulting abundance distributions. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei. Aside from the astrophysical applications, the results of this calculation can also be employed in the modeling of the electron and antineutrino spectra from nuclear reactors.
Beta decay rates of neutron-rich nuclei
NASA Astrophysics Data System (ADS)
Marketin, Tomislav; Huther, Lutz; Petković, Jelena; Paar, Nils; Martínez-Pinedo, Gabriel
2016-06-01
Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on the nuclear structure models for input. Of all the properties, the beta-decay half-lives are one of the most important ones due to their direct impact on the resulting abundance distributions. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, where both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei. Aside from the astrophysical applications, the results of this calculation can also be employed in the modeling of the electron and antineutrino spectra from nuclear reactors.
Preliminary Hydrodynamic Model Tests of Several LVA Planing Hull Concepts
1975-10-01
Table of contents: Abstract; Introduction; Models and Apparatus; Test Procedure; Test Results (Smooth Water Results; Rough Water Results) … Project NR 062-510. Technical monitoring was provided by the LVA office at NSRDC. The test models were 1/12 … which have not been considered in this study. Tables 1, 2 and 3 and Figures 4, 5 and 6 represent results for the inverted vee-bottom (model P-1)
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, the true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once a treatment effect estimate is defined from it. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
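The normalising step at the heart of the proposal can be sketched outside the full Bayesian framework. A minimal illustration, assuming hypothetical skewed effect estimates; the model's Bayesian estimation and prediction-interval machinery are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical positive, right-skewed treatment-effect estimates
effects = rng.gamma(shape=2.0, scale=1.0, size=200)

# Box-Cox requires strictly positive values; scipy estimates the
# transformation parameter lambda by maximum likelihood
transformed, lam = stats.boxcox(effects)

# Skewness should move toward zero after the transformation
print(round(stats.skew(effects), 2), round(stats.skew(transformed), 2))
```

A summary on the transformed scale (e.g. a median and interquartile range back-transformed to the original scale) then avoids imposing symmetry on a skewed distribution.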
NASA Technical Reports Server (NTRS)
Brooks, George W.
1985-01-01
The options for the design, construction, and testing of a dynamic model of the space station were evaluated. Since the definition of the space station structure was still evolving, the Initial Operating Capacity (IOC) reference configuration was used as the general guideline. The results of the studies address: general considerations of the need for and use of a dynamic model; factors bearing on the model design and construction; and a proposed system for supporting the dynamic model in the planned Large Spacecraft Laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thornton, Peter E; Wang, Weile; Law, Beverly E.
2009-01-01
The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate the carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrating and analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
Geospace environment modeling 2008--2009 challenge: Dst index
Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wiltberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.
2013-01-01
This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate the spectral variability of model outputs in comparison to the index. We find that model rankings vary widely with the skill score used, and none of the models consistently performs best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included, and stand-alone ring current models with properly defined boundary conditions, perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
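Comparisons of averaged model output against an index like Dst are typically condensed into skill scores; a sketch with generic RMSE and prediction-efficiency metrics on toy storm values (the numbers and metric choices are illustrative, not the challenge's official definitions):

```python
import numpy as np

def rmse(model, obs):
    """Root-mean-square error between model and observed series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def prediction_efficiency(model, obs):
    """1 - MSE/variance of observations; 1 is perfect, <= 0 means no skill."""
    obs = np.asarray(obs, float)
    return 1.0 - np.mean((np.asarray(model, float) - obs) ** 2) / np.var(obs)

# Toy hourly Dst values (nT) for a synthetic storm
obs = np.array([-10.0, -40.0, -80.0, -60.0, -30.0, -15.0])
model = np.array([-5.0, -35.0, -70.0, -65.0, -35.0, -10.0])
print(round(rmse(model, obs), 2), round(prediction_efficiency(model, obs), 3))
```

Because different scores weight errors differently (absolute error versus error relative to observed variability), model rankings can change with the score, as the study reports.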
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution, and they often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives) and, to some extent, of type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results than conventional linear models, particularly in reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
Hostetler, S.W.; Giorgi, F.
1993-01-01
In this paper we investigate the feasibility of coupling regional climate models (RCMs) with landscape-scale hydrologic models (LSHMs) for studies of the effects of climate on hydrologic systems. The RCM used is the National Center for Atmospheric Research/Pennsylvania State University mesoscale model (MM4). Output from two year-round simulations (1983 and 1988) over the western United States is used to drive a lake model for Pyramid Lake in Nevada and a streamflow model for Steamboat Creek in Oregon. Comparisons with observed data indicate that MM4 is able to produce meteorologic data sets that can be used to drive hydrologic models. Results from the lake model simulations indicate that the use of MM4 output produces reasonably good predictions of surface temperature and evaporation. Results from the streamflow simulations indicate that the use of MM4 output results in good simulations of the seasonal cycle of streamflow, but deficiencies in simulated wintertime precipitation resulted in underestimates of streamflow and soil moisture. Further work with climate (multiyear) simulations is necessary to achieve a complete analysis, but the results from this study indicate that coupling of LSHMs and RCMs may be a useful approach for evaluating the effects of climate change on hydrologic systems.
Szekeres Swiss-cheese model and supernova observations
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof; Célérier, Marie-Noëlle
2010-11-01
We use different particular classes of axially symmetric Szekeres Swiss-cheese models to study the apparent dimming of type Ia supernovae. We compare the results with those obtained in the corresponding Lemaître-Tolman Swiss-cheese models. Although the quantitative picture is different, the qualitative results are comparable; i.e., one cannot fully explain the dimming of the supernovae using small-scale (~50 Mpc) inhomogeneities. To fit the data successfully we need structures of the order of 500 Mpc or larger. However, this result might be an artifact due to the use of axial light rays in axially symmetric models. In any case, this work is a first step in trying to use Szekeres Swiss-cheese models in cosmology, and it will be followed by the study of more physical models with still less symmetry.
Thermo-hydroforming of a fiber-reinforced thermoplastic composites considering fiber orientations
NASA Astrophysics Data System (ADS)
Ahn, Hyunchul; Kuuttila, Nicholas Eric; Pourboghrat, Farhang
2018-05-01
Thermoplastic woven composites were formed using a thermo-hydroforming process that utilizes heated and pressurized fluid, similar to sheet metal forming. This study focuses on forming with a modified 300-ton press and on predicting the forming behavior. Spectra Shield SR-3136 is used in this study, and its material properties were measured experimentally. The behavior of fiber-reinforced thermoplastic polymer composites (FRTP) was modeled using the Preferred Fiber Orientation (PFO) model and validated by comparing numerical analysis with experimental results. The thermo-hydroforming process has shown good results in the ability to form deep-drawn parts with reduced wrinkling. Numerical analysis was performed using the PFO model implemented in the commercial finite element software ABAQUS/Explicit, with a user subroutine (VUMAT) providing the material properties of the thermoplastic composite layers. This model is suitable for working with multiple layers of composite laminates. Model parameters were updated to work with a cohesive zone model that calculates the interfacial properties between the composite layers. The numerical modeling results showed a good correlation with the forming experiments in terms of the formed shape. Numerical results were also compared with experimental punch force-displacement curves for the deformed geometry and forming process of the composite layers. Overall, the shape of the deformed FRTP, including the distribution of wrinkles, was accurately predicted.
NASA Astrophysics Data System (ADS)
Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia
2018-06-01
Hydrological models are important and effective tools for representing complex hydrological processes. Different models have different strengths in capturing the various aspects of these processes, and relying on a single model usually leads to simulation uncertainties. Ensemble approaches based on multi-model hydrological simulations can improve performance over single models. In this study, the upper Yalongjiang River Basin was selected as a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were run independently with the same inputs and initial values, and the BP neural network method was then employed to combine the results from the three models. The results show that the accuracy of the BP ensemble simulation is better than that of the single models.
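The combination step can be sketched with a small back-propagation network trained on the outputs of the individual models; the synthetic runoff series and model biases below are assumptions for illustration, not the study's data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 500
# Synthetic "observed" runoff with a seasonal signal
observed = 10 + 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 0.5, n)

# Hypothetical simulations from three models, each with its own bias and noise
swat = 0.9 * observed + rng.normal(0, 1.0, n)
vic = 1.1 * observed + rng.normal(0, 1.0, n)
btopmc = observed + rng.normal(0, 1.5, n)

X = np.column_stack([swat, vic, btopmc])

# Train the back-propagation combiner on the first 400 steps, hold out 100
net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(X[:400], observed[:400])

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

print(rmse(net.predict(X[400:]), observed[400:]),
      rmse(swat[400:], observed[400:]))
```

On held-out data the learned combination corrects the individual models' biases and averages down their noise, which is the mechanism behind the ensemble's improved accuracy.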
Transition mixing study empirical model report
NASA Technical Reports Server (NTRS)
Srinivasan, R.; White, C.
1988-01-01
The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The structure of the jets from the inner wall does not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.
Capelli, Claudio; Biglino, Giovanni; Petrini, Lorenza; Migliavacca, Francesco; Cosentino, Daria; Bonhoeffer, Philipp; Taylor, Andrew M; Schievano, Silvia
2012-12-01
Finite element (FE) modelling can be a very resourceful tool in the field of cardiovascular devices. To ensure result reliability, FE models must be validated experimentally against physical data. Their clinical application (e.g., patients' suitability, morphological evaluation) also requires fast simulation process and access to results, while engineering applications need highly accurate results. This study shows how FE models with different mesh discretisations can suit clinical and engineering requirements for studying a novel device designed for percutaneous valve implantation. Following sensitivity analysis and experimental characterisation of the materials, the stent-graft was first studied in a simplified geometry (i.e., compliant cylinder) and validated against in vitro data, and then in a patient-specific implantation site (i.e., distensible right ventricular outflow tract). Different meshing strategies using solid, beam and shell elements were tested. Results showed excellent agreement between computational and experimental data in the simplified implantation site. Beam elements were found to be convenient for clinical applications, providing reliable results in less than one hour in a patient-specific anatomical model. Solid elements remain the FE choice for engineering applications, albeit more computationally expensive (>100 times). This work also showed how information on device mechanical behaviour differs when acquired in a simplified model as opposed to a patient-specific model.
Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia
2016-01-01
Objectives: To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting: 2 university hospitals in south-eastern Spain, from April to October 2013. Design: A correlational descriptive study. Participants: A convenience sample of 406 women participated in the study, 204 from the biomedical model and 202 from the humanised model. Results: The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (out of 100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per item, statistical differences were found in 8 of the 9 subscales, with the highest scores reached in the humanised model of maternity care. Conclusions: The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during labour, birth and the immediate postnatal period than does the biomedical model. PMID:27566632
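Categorical comparisons like those reported above are commonly assessed with a chi-squared test of independence; a sketch using the onset-of-labour counts from the abstract (the choice of test is an assumption, since the abstract does not name its exact procedure):

```python
from scipy import stats

# 2x2 table of onset of labour by model of care
# rows: biomedical, humanised; columns: spontaneous, augmentation
table = [[66, 70],
         [137, 1]]

chi2, p, dof, expected = stats.chi2_contingency(table)
print(round(chi2, 1), p, dof)
```

The extremely small p-value is consistent with the p=0.0005 threshold reported for this comparison.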
Numerical study of single and two interacting turbulent plumes in atmospheric cross flow
NASA Astrophysics Data System (ADS)
Mokhtarzadeh-Dehghan, M. R.; König, C. S.; Robins, A. G.
The paper presents a numerical study of two interacting full-scale dry plumes issued into a neutral boundary-layer cross flow. The study simulates plumes from a mechanical draught cooling tower, placed either in tandem or side by side. Results are first presented for plumes with a density ratio of 0.74 and a plume-to-crosswind speed ratio of 2.33, for which data from a small-scale wind tunnel experiment were available and were used to assess the accuracy of the numerical results. Further results are then presented for the more physically realistic density ratio of 0.95, maintaining the same speed ratio. The sensitivity of the results to three turbulence models, namely the standard k-ε model, the RNG k-ε model and the Differential Flux Model (DFM), is presented. Comparisons are also made between the predicted rise height and the values obtained from existing integral models. The formation of two counter-rotating vortices is well predicted. The rise heights predicted by the different turbulence models are in good agreement with one another, while the DFM predicts temperature profiles more accurately. However, discrepancies between the present results for the rise height of single and multiple plumes and the values obtained from known analytical relations are apparent, and possible reasons for these are discussed.
Environmental Concern and Sociodemographic Variables: A Study of Statistical Models
ERIC Educational Resources Information Center
Xiao, Chenyang; McCright, Aaron M.
2007-01-01
Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…
Hens, Niel; Habteab Ghebretinsae, Aklilu; Hardt, Karin; Van Damme, Pierre; Van Herck, Koen
2014-03-14
In this paper, we review the results of existing statistical models of the long-term persistence of hepatitis A vaccine-induced antibodies in light of recently available immunogenicity data from 2 clinical trials (up to 17 years of follow-up). Healthy adult volunteers monitored annually for 17 years after the administration of the first vaccine dose in 2 double-blind, randomized clinical trials were included in this analysis. Vaccination in these studies was administered according to a 2-dose vaccination schedule: 0, 12 months in study A and 0, 6 months in study B (NCT00289757/NCT00291876). Antibodies were measured using an in-house ELISA during the first 11 years of follow-up; a commercially available ELISA was then used up to Year 17 of follow-up. Long-term antibody persistence from studies A and B was estimated using statistical models for longitudinal data. Data from studies A and B were modeled separately. A total of 173 participants in study A and 108 participants in study B were included in the analysis. A linear mixed model with 2 changepoints allowed all available results to be accounted for. Predictions based on this model indicated that 98% (95%CI: 94-100%) of participants in study A and 97% (95%CI: 94-100%) of participants in study B will remain seropositive 25 years after receiving the first vaccine dose. Other models using part of the data provided consistent results: ≥95% of the participants were projected to remain seropositive for ≥25 years. This analysis, using previously used and newly selected model structures, was consistent with former estimates of seropositivity rates ≥95% for at least 25 years. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Visser, Philip W.; Kooi, Henk; Stuyfzand, Pieter J.
2015-05-01
Results are presented of a comprehensive thermal impact study on an aquifer thermal energy storage (ATES) system in Bilthoven, the Netherlands. The study involved monitoring of the thermal impact and modeling of the three-dimensional temperature evolution of the storage aquifer and over- and underlying units. Special attention was paid to non-uniformity of the background temperature, which varies laterally and vertically in the aquifer. Two models were applied with different levels of detail regarding initial conditions and heterogeneity of hydraulic and thermal properties: a fine-scale heterogeneity model, which captured the lateral and vertical temperature distribution more realistically, and a simplified model, which represented the aquifer system with only a limited number of homogeneous layers. Fine-scale heterogeneity was shown to be important to accurately model the ATES-impacted vertical temperature distribution, the maximum and minimum temperatures in the storage aquifer, and the spatial extent of the thermal plumes. The fine-scale heterogeneity model resulted in larger thermally impacted areas and larger temperature anomalies than the simplified model. The models showed that scattered and scarce monitoring data of ATES-induced temperatures can be interpreted in a useful way by groundwater and heat transport modeling, resulting in a realistic assessment of the thermal impact.
NASA Technical Reports Server (NTRS)
OBrien, T. Kevin (Technical Monitor); Krueger, Ronald; Minguet, Pierre J.
2004-01-01
The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to tension and three-point bending was studied. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherends. In addition, the application of the submodeling technique for the simulation of skin/stringer debond was also studied. Global models made of shell elements and solid elements were studied. Solid elements were used for local submodels, which extended between three and six specimen thicknesses on either side of the delamination front to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from the simulations using the submodeling technique were not in agreement with results obtained from full solid models.
Compston, Juliet E.; Chapurlat, Roland D.; Pfeilschifter, Johannes; Cooper, Cyrus; Hosmer, David W.; Adachi, Jonathan D.; Anderson, Frederick A.; Díez-Pérez, Adolfo; Greenspan, Susan L.; Netelenbos, J. Coen; Nieves, Jeri W.; Rossini, Maurizio; Watts, Nelson B.; Hooven, Frederick H.; LaCroix, Andrea Z.; March, Lyn; Roux, Christian; Saag, Kenneth G.; Siris, Ethel S.; Silverman, Stuart; Gehlbach, Stephen H.
2014-01-01
Context: Several fracture prediction models that combine fractures at different sites into a composite outcome are in current use. However, to the extent individual fracture sites have differing risk factor profiles, model discrimination is impaired. Objective: The objective of the study was to improve model discrimination by developing a 5-year composite fracture prediction model for fracture sites that display similar risk profiles. Design: This was a prospective, observational cohort study. Setting: The study was conducted at primary care practices in 10 countries. Patients: Women aged 55 years or older participated in the study. Intervention: Self-administered questionnaires collected data on patient characteristics, fracture risk factors, and previous fractures. Main Outcome Measure: The main outcome is time to first clinical fracture of hip, pelvis, upper leg, clavicle, or spine, each of which exhibits a strong association with advanced age. Results: Of four composite fracture models considered, model discrimination (c index) is highest for an age-related fracture model (c index of 0.75, 47 066 women), and lowest for Fracture Risk Assessment Tool (FRAX) major fracture and a 10-site model (c indices of 0.67 and 0.65). The unadjusted increase in fracture risk for an additional 10 years of age ranges from 80% to 180% for the individual bones in the age-associated model. Five other fracture sites not considered for the age-associated model (upper arm/shoulder, rib, wrist, lower leg, and ankle) have age associations for an additional 10 years of age from a 10% decrease to a 60% increase. Conclusions: After examining results for 10 different bone fracture sites, advanced age appeared the single best possibility for uniting several different sites, resulting in an empirically based composite fracture risk model. PMID:24423345
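Model discrimination of the kind reported above (the c index) can be computed with Harrell's concordance index; a self-contained sketch on toy fracture data (the values and the simple pairwise implementation are illustrative, not the study's):

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable when subject i has an observed event
    before subject j's follow-up time; the pair is concordant when the
    earlier-event subject also has the higher risk score."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue
        for j in range(n):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data: a perfectly discriminating model ranks earlier fractures higher
time = [2.0, 5.0, 3.0, 8.0]   # years to fracture or censoring
event = [1, 0, 1, 1]          # 1 = fracture observed, 0 = censored
risk = [0.9, 0.2, 0.7, 0.1]   # model risk scores
print(c_index(time, event, risk))
```

A c index of 0.5 corresponds to no discrimination and 1.0 to perfect ranking, which frames the study's reported range of 0.65 to 0.75.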
Highly Variable Cycle Exhaust Model Test (HVC10)
NASA Technical Reports Server (NTRS)
Henderson, Brenda; Wernet, Mark; Podboy, Gary; Bozak, Rick
2010-01-01
Results from acoustic and flow-field studies using the Highly Variable Cycle Exhaust (HVC) model were presented. The model consisted of a lobed mixer on the core stream, an elliptic nozzle on the fan stream, and an ejector. For baseline comparisons, the fan nozzle was replaced with a round nozzle and the ejector doors were removed from the model. Acoustic studies showed far-field noise levels were higher for the HVC model with the ejector than for the baseline configuration. Results from Particle Image Velocimetry (PIV) studies indicated that large flow separation regions occurred along the ejector doors, thus restricting flow through the ejector. Phased array measurements showed noise sources located near the ejector doors for operating conditions where tones were present in the acoustic spectra.
Recent transonic unsteady pressure measurements at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Sandford, M. C.; Ricketts, R. H.; Hess, R. W.
1985-01-01
Four semispan wing model configurations were studied in the Transonic Dynamics Tunnel (TDT). The first model had a clipped delta planform with a circular arc airfoil, the second model had a high aspect ratio planform with a supercritical airfoil, the third model had a rectangular planform with a supercritical airfoil and the fourth model had a high aspect ratio planform with a supercritical airfoil. To generate unsteady flow, the first and third models were equipped with pitch oscillation mechanisms and the first, second and fourth models were equipped with control surface oscillation mechanisms. The fourth model was similar in planform and airfoil shape to the second model, but it is the only one of the four models that has an elastic wing structure. The unsteady pressure studies of the four models are described and some typical results for each model are presented. Comparisons of selected experimental data with analytical results are also included.
Residential Saudi load forecasting using analytical model and Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Al-Harbi, Ahmad Abdulaziz
In recent years, load forecasting has become one of the main fields of study and research. Short Term Load Forecasting (STLF) is an important part of electrical power system operation and planning. This work investigates the applicability of two different approaches, Artificial Neural Networks (ANNs) and hybrid analytical models, to forecasting residential load in the Kingdom of Saudi Arabia (KSA). Both techniques are based on a formulation of human behavior modes, which represent the impact of social, religious, and official occasions and of environmental parameters. The analysis is carried out on residential areas in three regions in two countries exposed to distinct human activities and weather conditions. The collected data are for Al-Khubar and Yanbu industrial city in KSA, and for Seattle, USA, included to show the validity of the proposed models for residential load. For each region, two models are proposed: the first forecasts next-hour load, while the second forecasts next-day load. Both models are analyzed using the two techniques. The ANN next-hour models yield very accurate results for all areas, while the hybrid analytical model achieves relatively reasonable results. For next-day load forecasting, the two approaches yield satisfactory results. Comparative studies were conducted to prove the effectiveness of the proposed models.
Space-for-Time Substitution Works in Everglades Ecological Forecasting Models
Banet, Amanda I.; Trexler, Joel C.
2013-01-01
Space-for-time substitution is often used in predictive models because long-term time-series data are not available. Critics of this method suggest factors other than the target driver may affect ecosystem response and could vary spatially, producing misleading results. Monitoring data from the Florida Everglades were used to test whether spatial data can be substituted for temporal data in forecasting models. Spatial models that predicted bluefin killifish (Lucania goodei) population response to a drying event performed comparably and sometimes better than temporal models. Models worked best when results were not extrapolated beyond the range of variation encompassed by the original dataset. These results were compared to other studies to determine whether ecosystem features influence whether space-for-time substitution is feasible. Taken in the context of other studies, these results suggest space-for-time substitution may work best in ecosystems with low beta-diversity, high connectivity between sites, and small lag in organismal response to the driver variable. PMID:24278368
Urban Expansion Modeling Approach Based on Multi-Agent System and Cellular Automata
NASA Astrophysics Data System (ADS)
Zeng, Y. N.; Yu, M. M.; Li, S. N.
2018-04-01
Urban expansion is a land-use change process that transforms non-urban land into urban land. This process results in the loss of natural vegetation and an increase in impervious surfaces. Urban expansion also alters the hydrologic cycling, atmospheric circulation, and nutrient cycling processes and generates enormous environmental and social impacts. Urban expansion monitoring and modeling are crucial to understanding the urban expansion process and mechanism and its environmental impacts, and to predicting urban expansion in future scenarios. Therefore, it is important to study urban expansion monitoring and modeling approaches. We proposed to simulate urban expansion by combining CA and MAS models. The proposed urban expansion model based on MAS and CA was applied to a case study area of the Changsha-Zhuzhou-Xiangtan urban agglomeration, China. The results show that this model can capture urban expansion with good adaptability. The Kappa coefficient of the simulation results is 0.75, which indicates that the combination of MAS and CA offers a better simulation result.
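The Kappa coefficient used to evaluate the simulation can be computed directly from an observed and a simulated land-use map; a sketch with toy binary urban/non-urban grids (the maps are illustrative, not the study's data):

```python
import numpy as np

def kappa(observed, simulated):
    """Cohen's kappa for two categorical maps of equal size:
    chance-corrected agreement (p_o - p_e) / (1 - p_e)."""
    observed = np.asarray(observed).ravel()
    simulated = np.asarray(simulated).ravel()
    categories = np.union1d(observed, simulated)
    p_o = np.mean(observed == simulated)               # observed agreement
    p_e = sum(np.mean(observed == c) * np.mean(simulated == c)
              for c in categories)                     # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy maps: 1 = urban, 0 = non-urban
obs = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]])
sim = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1]])
print(round(kappa(obs, sim), 2))
```

Unlike raw cell-by-cell agreement, kappa discounts the agreement expected by chance, which is why it is a standard accuracy measure for simulated land-use maps.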
NASA Astrophysics Data System (ADS)
Miyazaki, Yusuke; Tachiya, Hiroshi; Anata, Kenji; Hojo, Akihiro
This study discusses the head injury mechanism of a human head subjected to impact, based on the results of impact experiments using a physical model of a human head with high shape fidelity. The physical model was constructed using rapid prototyping technology from three-dimensional CAD data obtained from CT/MRI images of a subject's head. In the experiments, positive pressure responses occurred at the impacted site, whereas negative pressure responses occurred opposite the impacted site. Moreover, the absolute maximum pressure occurring at the frontal region of the intracranial space of the head model was the same as or higher than that at the occipital site, whether the impact force was imposed on the frontal or the occipital region. This result has not been shown in other studies using simple-shape physical models, and it corresponds with the clinical evidence that brain contusion mainly occurs at the frontal part for either impact direction. Thus, a physical model with an accurate skull shape is needed to clarify the mechanism of brain contusion.
A new model in achieving Green Accounting at hotels in Bali
NASA Astrophysics Data System (ADS)
Astawa, I. P.; Ardina, C.; Yasa, I. M. S.; Parnata, I. K.
2018-01-01
The concept of green accounting remains debated in terms of its implementation in a company. Previous studies indicate that there is no standard model for implementing it to support performance. This research aims to create a green accounting model that differs from other models by using local cultural elements as the variables that build it. The research was conducted in two steps. The first step was designing the model based on theoretical studies, considering the main and supporting elements in building the concept of green accounting. The second step was testing the model at 60 five-star hotels, starting with data collection through a questionnaire followed by data processing using descriptive statistics. The results indicate that the hotels' owners have implemented green accounting attributes, which supports previous studies. Another result, a new finding, shows that local culture, government regulation, and the awareness of hotels' owners play an important role in the development of the green accounting concept. The results contribute to accounting science in terms of green reporting. Hotel management should adopt local culture in building the character of the accountants hired in the accounting department.
When the test of mediation is more powerful than the test of the total effect.
O'Rourke, Holly P; MacKinnon, David P
2015-06-01
Although previous research has studied power in mediation models, the extent to which the inclusion of a mediator will increase power has not been investigated. To address this deficit, in a first study we compared the analytical power values of the mediated effect and the total effect in a single-mediator model, to identify the situations in which the inclusion of one mediator increased statistical power. The results from this first study indicated that including a mediator increased statistical power in small samples with large coefficients and in large samples with small coefficients, and when coefficients were nonzero and equal across models. Next, we identified conditions under which power was greater for the test of the total mediated effect than for the test of the total effect in the parallel two-mediator model. These results indicated that including two mediators increased power in small samples with large coefficients and in large samples with small coefficients, the same pattern of results that had been found in the first study. Finally, we assessed the analytical power for a sequential (three-path) two-mediator model and compared the power to detect the three-path mediated effect to the power to detect both the test of the total effect and the test of the mediated effect for the single-mediator model. The results indicated that the three-path mediated effect had more power than the mediated effect from the single-mediator model and the test of the total effect. Practical implications of these results for researchers are then discussed.
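The single-mediator test discussed above is commonly carried out as the product of the two path coefficients with a first-order (Sobel) standard error. A minimal sketch; the coefficients and standard errors below are hypothetical, not values from the study:

```python
import math

def sobel_z(a, b, se_a, se_b):
    """z statistic for the mediated effect a*b using the first-order Sobel SE."""
    se_ab = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    return (a * b) / se_ab

# Hypothetical paths: a is X -> M, b is M -> Y controlling for X
a, se_a = 0.39, 0.10
b, se_b = 0.39, 0.10
z = sobel_z(a, b, se_a, se_b)
print(round(z, 2))  # compare |z| to 1.96 for a two-sided .05 test
```

With these invented values the mediated effect is detected (z ≈ 2.76) even though a total effect of similar size might not be, illustrating the power comparison the study formalizes.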
ANFIS modeling for the assessment of landslide susceptibility for the Cameron Highland (Malaysia)
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet; Sezer, Ebru; Gokceoglu, Candan; Buchroithner, Manfred F.
2010-05-01
Landslides are one of the recurrent natural hazard problems throughout most of Malaysia. The landslide literature includes several approaches, such as probabilistic, bivariate and multivariate statistical models, and fuzzy and artificial neural network models. However, a neuro-fuzzy application to landslide susceptibility assessment has not been encountered in the literature. For this reason, this study presents the results of an adaptive neuro-fuzzy inference system (ANFIS) using remote sensing data and GIS for landslide susceptibility analysis in a part of the Cameron Highland area in Malaysia. Landslide locations in the study area were identified by interpreting aerial photographs and satellite images, supported by extensive field surveys. Landsat TM satellite imagery was used to map the vegetation index. Maps of topography, lineaments, NDVI and land cover were constructed from the spatial datasets. Seven landslide conditioning factors (altitude, slope angle, curvature, distance from drainage, lithology, distance from faults and NDVI) were extracted from the spatial database. These factors were analyzed using an ANFIS to produce the landslide susceptibility maps. During model development, a total of five landslide susceptibility models were constructed. For verification, the results of the analyses were compared with field-verified landslide locations. Additionally, ROC curves for all landslide susceptibility models were drawn and the areas under the curves were calculated. Landslide locations were used to validate the results of the landslide susceptibility maps, and the verification showed 97% accuracy for model 5, which employed all of the parameters produced in the present study as landslide conditioning factors. The validation results showed sufficient agreement between the obtained susceptibility map and the existing data on landslide areas.
Qualitatively, the model yields reasonable results that can be used for preliminary land-use planning. In conclusion, the results revealed that ANFIS modeling is a very useful and powerful tool for regional landslide susceptibility assessments. However, results obtained from ANFIS modeling should be assessed carefully, because overlearning may cause misleading results. To prevent overlearning, the number of membership functions of the inputs and the number of training epochs should be selected optimally and carefully.
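The area-under-the-ROC-curve validation used above has a convenient rank interpretation: it is the probability that a randomly chosen landslide location receives a higher susceptibility score than a randomly chosen stable location. A small sketch with invented scores (not the study's data):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC area via the Mann-Whitney form: the probability that a positive
    (landslide) case outscores a negative (stable) case, ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical susceptibility scores at landslide vs. stable locations
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.4, 0.3, 0.2]
print(roc_auc(pos, neg))  # → 0.9375
```

An AUC of 1.0 means perfect ranking; 0.5 means the map is no better than chance.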
Impacts of variability in geomechanical properties on hydrate bearing sediment responses
NASA Astrophysics Data System (ADS)
Lin, J. S.; Uchida, S.; Choi, J. H.; Seol, Y.
2017-12-01
Hydrate bearing sediments (HBS) may become unstable during gas production operations or from natural processes such as changes in landform or temperature. Geomechanical modeling is a rational way to assess HBS stability regardless of the process involved. At present, such modeling is laced with uncertainties. The uncertainties come from many sources, including the adequacy of a modeling framework to accurately project the response of HBS, gaps in the available field information, and the variability in laboratory test results from limited samples. For a reasonable stability assessment, the impacts of these various uncertainties have to be addressed. This study looks into one particular aspect of the uncertainty, namely, the uncertainty caused by the scatter in laboratory tests and the ability of a constitutive model to adequately represent it. Specifically, this study focuses on the scatter in the results of laboratory tests on high-quality pressured core samples from a marine site, and uses a critical state constitutive model to represent them. The study investigates how the HBS responses shift when the parameters of the constitutive model are varied to reflect different aspects of the experimental results. Also investigated are the impacts on the responses of altering certain formulations of the constitutive model to suit particular sets of results.
This dataset supports the modeling study of Seltzer et al. (2016) published in Atmospheric Environment. In this study, techniques typically used for future air quality projections are applied to a historical 11-year period to assess the performance of the modeling system when the driving meteorological conditions are obtained using dynamical downscaling of coarse-scale fields without correcting toward higher resolution observations. The Weather Research and Forecasting model and the Community Multiscale Air Quality model are used to simulate regional climate and air quality over the contiguous United States for 2000-2010. The air quality simulations for that historical period are then compared to observations from four national networks. Comparisons are drawn between defined performance metrics and other published modeling results for predicted ozone, fine particulate matter, and speciated fine particulate matter. The results indicate that the historical air quality simulations driven by dynamically downscaled meteorology are typically within defined modeling performance benchmarks and are consistent with results from other published modeling studies using finer-resolution meteorology. This indicates that the regional climate and air quality modeling framework utilized here does not introduce substantial bias, which provides confidence in the method's use for future air quality projections. This dataset is associated with the following publication: Seltzer, K., C
[Fast-track treatment--second revolution of colorectal surgery].
Kellokumpu, Ilmo
2012-01-01
The fast-track treatment model can be regarded as the second revolution of colorectal surgery after the introduction of laparoscopic surgery. In the gastro-surgical unit of the Central Hospital of Central Finland, results equivalent to international studies in colorectal surgery have been achieved by using the fast-track model. In a study setting, this treatment model resulted in a significant decrease in total treatment costs and sped up patients' discharge from the hospital. The fast-track treatment model requires both a motivated, trained medical team and a motivated patient.
2014-09-01
very short time period and in this research, we model and study the effects of this rainfall on Taiwan's coastal oceans as a result of river discharge. We do this through the use of a river discharge... Effects of Footprint Shape on the Bulk Mixing Model; Effects of the Horizontal Extent of the Bulk Mixing Model
Nucleation and growth in one dimension. I. The generalized Kolmogorov-Johnson-Mehl-Avrami model
NASA Astrophysics Data System (ADS)
Jun, Suckjoon; Zhang, Haiyang; Bechhoefer, John
2005-01-01
Motivated by a recent application of the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model to the study of DNA replication, we consider the one-dimensional (1D) version of this model. We generalize previous work to the case where the nucleation rate is an arbitrary function I(t) and obtain analytical results for the time-dependent distributions of various quantities (such as the island distribution). We also present improved computer simulation algorithms to study the 1D KJMA model. The analytical results and simulations are in excellent agreement.
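For the special case of a constant nucleation rate I and growth speed v, the classic 1D KJMA result gives a transformed fraction f(t) = 1 - exp(-I v t^2), since islands grow in both directions. The following Monte Carlo sketch checks that special case only, not the generalized arbitrary-I(t) analysis of the paper; all numerical parameters are illustrative:

```python
import math, random

def poisson(rng, lam):
    """Poisson draw via Knuth's method; adequate for moderate lam."""
    p, k, target = 1.0, 0, math.exp(-lam)
    while p > target:
        k += 1
        p *= rng.random()
    return k - 1

def kjma_fraction(I, v, t, rng, L=200.0, n_points=2000):
    """One Monte Carlo realization of the 1D transformed fraction: seeds
    appear uniformly in space-time at rate I and grow at speed v."""
    n_seeds = poisson(rng, I * L * t)
    seeds = [(rng.uniform(0, L), rng.uniform(0, t)) for _ in range(n_seeds)]
    covered = 0
    for _ in range(n_points):
        x = rng.uniform(v * t, L - v * t)  # stay clear of the domain edges
        if any(abs(x - x0) < v * (t - t0) for x0, t0 in seeds):
            covered += 1
    return covered / n_points

rng = random.Random(7)
I, v, t = 1.0, 1.0, 1.0
est = sum(kjma_fraction(I, v, t, rng) for _ in range(10)) / 10
exact = 1 - math.exp(-I * v * t ** 2)  # closed-form KJMA fraction, ~0.632
print(round(est, 2), round(exact, 2))
```

The simulated coverage converges on the closed-form Avrami-type expression as the domain and sample sizes grow.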
Neuscamman, Stephanie J.; Yu, Kristen L.
2016-05-01
The results of the National Atmospheric Release Advisory Center (NARAC) model simulations are compared to measured data from the Full-Scale Radiological Dispersal Device (FSRDD) field trials. The series of explosive radiological dispersal device (RDD) experiments was conducted in 2012 by Defence Research and Development Canada (DRDC) and collaborating organizations. During the trials, a wealth of data was collected, including a variety of deposition and air concentration measurements. The experiments were conducted with one of the stated goals being to provide measurements to atmospheric dispersion modelers. These measurements can be used to facilitate important model validation studies. For this study, meteorological observations recorded during the tests are input to the diagnostic meteorological model, ADAPT, which provides 3-D, time-varying mean wind and turbulence fields to the LODI dispersion model. LODI concentration and deposition results are compared to the measured data, and the sensitivity of the model results to changes in input conditions (such as the particle activity size distribution of the source) and model physics (such as the rise of the buoyant cloud of explosive products) is explored. The NARAC simulations predicted the experimentally measured deposition results reasonably well considering the complexity of the release. Lastly, changes to the activity size distribution of the modeled particles can improve the agreement of the model results to measurement.
Shoreline Change and Storm-Induced Beach Erosion Modeling: A Collection of Seven Papers
1990-03-01
reducing, and analyzing the data in a systematic manner. Most physical data needed for evaluating and interpreting shoreline and beach evolution processes... proposed development concepts using both physical and numerical models. b. Analyzed and interpreted model results. c. Provided technical documentation of... interpret study results in the context required for "Confirmation" hearings. The Corps of Engineers, Los Angeles District (SPL), has also begun studies
NASA Technical Reports Server (NTRS)
1973-01-01
The traffic analyses and system requirements data generated in the study resulted in the development of two traffic models: the baseline traffic model and the new traffic model. The baseline traffic model provides traceability between the numbers and types of geosynchronous missions considered in the study and the entire spectrum of missions foreseen in the total national space program. The information presented pertaining to the baseline traffic model includes: (1) definition of the baseline traffic model, including identification of specific geosynchronous missions and their payload delivery schedules through 1990; (2) satellite location criteria, including the resulting distribution of the satellite population; (3) geosynchronous orbit saturation analyses, including the effects of satellite physical proximity and potential electromagnetic interference; and (4) platform system requirements analyses, including satellite and mission equipment descriptions, the options and limitations in grouping satellites, and on-orbit servicing criteria (both remotely controlled and man-attended).
Perception of competence in middle school physical education: instrument development and validation.
Scrabis-Fletcher, Kristin; Silverman, Stephen
2010-03-01
Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and its scores validated to measure POC in middle school PE. A multiphase design was used, consisting of an intensive theoretical review, an elicitation study, a prepilot study, a pilot study, a content validation study, and a final validation study (N = 1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores for POC measurement in middle school PE.
The Effects of Recycling and Response Sensitivity on the Acquisition of Social Studies Concepts.
ERIC Educational Resources Information Center
Ford, Mary Jane; McKinney, C. Warren
1986-01-01
Two studies are reported which investigate the concept learning of 116 sixth graders (study 1) and 107 second graders (study 2) depending on the model of concept presentation. Results showed no difference between the structured Merrill and Tennyson model and adaptations of the model which were responsive to student's questions or recycled missed…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolejko, Krzysztof; Celerier, Marie-Noëlle; Laboratoire Univers et Theories
We use different particular classes of axially symmetric Szekeres Swiss-cheese models for the study of the apparent dimming of type Ia supernovae. We compare the results with those obtained in the corresponding Lemaitre-Tolman Swiss-cheese models. Although the quantitative picture is different, the qualitative results are comparable: one cannot fully explain the dimming of the supernovae using small-scale (~50 Mpc) inhomogeneities. To fit the data successfully we need structures of order 500 Mpc in size or larger. However, this result might be an artifact due to the use of axial light rays in axially symmetric models. In any case, this work is a first step in trying to use Szekeres Swiss-cheese models in cosmology, and it will be followed by the study of more physical models with still less symmetry.
NASA Astrophysics Data System (ADS)
Liu, C. M.
2017-12-01
Wave properties predicted by the rigid-lid and free-surface Boussinesq equations for a two-fluid system are theoretically calculated and compared in this study. Boussinesq models are generally applied to numerically simulate surface waves in coastal regions, providing credible information for disaster prevention and breakwater design. For internal waves, Liu et al. (2008) and Liu (2016) respectively derived a free-surface model and a rigid-lid Boussinesq model for a two-fluid system. The former and the latter models respectively contain four and three key variables, which may lead to different results and efficiency in simulations. Therefore, the present study compares the results of these two models theoretically to provide more detailed observations and useful information on the motion of internal waves.
How sensitive are estimates of carbon fixation in agricultural models to input data?
2012-01-01
Background: Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess the effect of input dataset quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results: For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) model and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both models show a congruent pattern in response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when major input datasets are changed. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion: This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison. PMID:22296931
Diagnosing Model Errors in Simulations of Solar Radiation on Inclined Surfaces: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-01
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined PV panels. Following numerous studies comparing the performance of transposition models, this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty. Our results suggest that an isotropic transposition model developed by Badescu substantially underestimates diffuse plane-of-array (POA) irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for future development of physics-based transposition models.
Diagnosing Model Errors in Simulation of Solar Radiation on Inclined Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-11-21
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined PV panels. Following numerous studies comparing the performance of transposition models, this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two highly used isotropic transposition models, with one substantially underestimating the diffuse plane-of-array (POA) irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for future development of physics-based transposition models.
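For context, an isotropic-sky transposition of the general kind examined in these two studies combines a beam term, a diffuse term scaled by the sky view factor, and a ground-reflected term. The sketch below is a generic illustration with simplified incidence-angle geometry and hypothetical irradiance values, not the specific Badescu or empirical formulations the papers evaluate:

```python
import math

def poa_irradiance(dni, dhi, ghi, sun_zenith_deg, tilt_deg, albedo=0.2):
    """Plane-of-array irradiance with an isotropic-sky diffuse term.

    beam:    DNI projected by the sun-to-panel incidence angle
    diffuse: DHI scaled by the sky view factor (1 + cos(tilt)) / 2
    ground:  GHI reflected with the albedo and factor (1 - cos(tilt)) / 2
    """
    tilt = math.radians(tilt_deg)
    # Simplifying assumption: panel faces the sun's azimuth, so the
    # incidence angle reduces to |zenith - tilt|.
    aoi = math.radians(abs(sun_zenith_deg - tilt_deg))
    beam = dni * max(math.cos(aoi), 0.0)
    diffuse = dhi * (1 + math.cos(tilt)) / 2
    ground = ghi * albedo * (1 - math.cos(tilt)) / 2
    return beam + diffuse + ground

# Hypothetical clear-sky component values (W/m^2)
print(round(poa_irradiance(dni=800, dhi=100, ghi=600,
                           sun_zenith_deg=30, tilt_deg=30)))  # → 901
```

The uncertainty the papers discuss enters through the diffuse term (the sky is rarely isotropic) and through the assumed surface albedo in the ground term.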
Effect of suspension kinematic on 14 DOF vehicle model
NASA Astrophysics Data System (ADS)
Wongpattananukul, T.; Chantharasenawong, C.
2017-12-01
Computer simulations play a major role in modern science and engineering, reducing the time and resources consumed by new studies and designs. Vehicle simulations have been studied extensively to obtain a vehicle model for minimum-lap-time solutions. The accuracy of simulation results depends on the ability of these models to represent the real phenomenon. Vehicle models with 7 degrees of freedom (DOF), 10 DOF and 14 DOF are normally used in optimal control to solve for minimum lap time; however, suspension kinematics is always neglected in these models. Suspension kinematics is defined as wheel movement with respect to the vehicle body. Tire forces are expressed as a function of wheel slip and wheel position. Therefore, the suspension kinematic relation is appended to the 14 DOF vehicle model to investigate its effect on the accuracy of the simulated trajectory. The classical 14 DOF vehicle model is chosen as the baseline model. Experimental data collected from test runs of a Formula Student style car serve as the baseline for simulation and for comparison between the baseline model and the model with suspension kinematics. Results show that in a single long turn there is an accumulated trajectory error in the baseline model compared to the model with suspension kinematics, while in short alternating turns the trajectory error is much smaller. These results show that suspension kinematics affects the trajectory simulation of the vehicle, so an optimal control scheme based on the baseline model will be inaccurate.
Fracture-Based Mesh Size Requirements for Matrix Cracks in Continuum Damage Mechanics Models
NASA Technical Reports Server (NTRS)
Leone, Frank A.; Davila, Carlos G.; Mabson, Gerald E.; Ramnath, Madhavadas; Hyder, Imran
2017-01-01
This paper evaluates the ability of progressive damage analysis (PDA) finite element (FE) models to predict transverse matrix cracks in unidirectional composites. The results of the analyses are compared to closed-form linear elastic fracture mechanics (LEFM) solutions. Matrix cracks in fiber-reinforced composite materials subjected to mode I and mode II loading are studied using continuum damage mechanics and zero-thickness cohesive zone modeling approaches. The FE models used in this study are built parametrically so as to investigate several model input variables and the limits associated with matching the upper-bound LEFM solutions. Specifically, the sensitivity of the PDA FE model results to changes in strength and element size are investigated.
GEWEX Cloud System Study (GCSS) Working Group on Cirrus Cloud Systems (WG2)
NASA Technical Reports Server (NTRS)
Starr, David
2002-01-01
Status, progress, and plans will be given for current GCSS (GEWEX Cloud System Study) WG2 (Working Group on Cirrus Cloud Systems) projects, including: (a) the Idealized Cirrus Model Comparison Project, (b) the Cirrus Parcel Model Comparison Project (Phase 2), and (c) the developing Hurricane Nora extended outflow model case study project. Past results will be summarized and plans for the upcoming year described. Issues and strategies will be discussed. Prospects for developing improved cloud parameterizations derived from the results of GCSS WG2 projects will be assessed. Plans for NASA's CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and Layers - Florida Area Cirrus Experiment) and potential opportunities to use those data for WG2 model simulations (future projects) will be briefly described.
Zarrinabadi, Zarrin; Isfandyari-Moghaddam, Alireza; Erfani, Nasrolah; Tahour Soltani, Mohsen Ahmadi
2018-01-01
INTRODUCTION: Given the research mission of the library and information sciences field, students need the ability to connect information users with information constructively; this is even more important in medical librarianship and information sciences because clinicians need quick access to information. Considering the role of spiritual intelligence in the capability to establish effective and balanced communication, it is important to study this variable in library and information science students. One of the main factors that can affect the results of any research is the conceptual model used to measure variables. Accordingly, the purpose of this study was the codification of a spiritual intelligence measurement model. METHODS: This correlational study was conducted through structural equation modeling; 270 students were selected from library and medical information science students of nationwide medical universities by simple random sampling and responded to the King spiritual intelligence questionnaire (2008). Initially, based on the data, the model parameters were estimated using the maximum likelihood method; then, the spiritual intelligence measurement model was tested by fit indices. Data analysis was performed with Smart-Partial Least Squares software. RESULTS: Preliminary results showed that, given the positive indicators of predictive association and the t-test results for the spiritual intelligence parameters, the King measurement model has acceptable fit, and the internal correlation of the questionnaire items was significant. The composite reliability and Cronbach's alpha of the parameters indicated high reliability of the spiritual intelligence model. CONCLUSIONS: The spiritual intelligence measurement model was evaluated, and the results showed that the model has a good fit, so it is recommended that domestic researchers use this questionnaire to assess spiritual intelligence. PMID:29922688
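Cronbach's alpha, used above as a reliability index, compares the sum of the individual item variances to the variance of the total score. A minimal sketch with invented questionnaire responses (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item,
    one entry per respondent)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var = sum(var(col) for col in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Hypothetical 5-point responses: 3 items, 4 respondents
items = [[4, 5, 3, 4],
         [4, 4, 3, 5],
         [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # → 0.82
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the sense in which the abstract reports "high reliability".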
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isbelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2004-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane-strain elements as well as three different generalized plane strain type approaches were performed. The computed skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with delamination length. For more accurate predictions, however, a three-dimensional analysis is required.
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands using the line-by-line model, the quasi-random band model, the exponential sum fit method, and empirical correlations, and these are compared with available experimental results. The atmospheric transmittance and upwelling radiance were calculated using the line-by-line and quasi-random band models and compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results, and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results, and its use is recommended for the evaluation of atmospheric radiation.
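The difference between line-by-line and band models comes down to how Beer-Lambert attenuation is averaged over many absorption lines: the band-averaged transmittance is the mean of the monochromatic exponentials, not the exponential of the mean absorption. A toy sketch (the absorption coefficients are invented, not from any spectral database):

```python
import math

def mean_transmittance(absorption_coeffs, path_amount):
    """Spectrally averaged transmittance over a band: the mean of the
    monochromatic Beer-Lambert factors exp(-k * u)."""
    taus = [math.exp(-k * path_amount) for k in absorption_coeffs]
    return sum(taus) / len(taus)

# Invented absorption coefficients sampled across a band (cm^2/g)
# and an absorber amount u (g/cm^2)
k_samples = [0.1, 0.5, 1.0, 2.0, 5.0]
u = 1.0
print(round(mean_transmittance(k_samples, u), 3))  # → 0.404
```

Because exp is convex, this average (about 0.404 here) exceeds exp(-mean(k) * u) = exp(-1.72) ≈ 0.18; capturing that gap without resolving every line is precisely what band models such as the quasi-random model are built for.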
Regression models for analyzing costs and their determinants in health care: an introductory review.
Gregori, Dario; Petrinco, Michele; Bo, Simona; Desideri, Alessandro; Merletti, Franco; Pagano, Eva
2011-06-01
This article aims to describe the various approaches to multivariable modelling of healthcare cost data and to synthesize the respective criticisms proposed in the literature. We present regression methods suitable for the analysis of healthcare costs and then apply them to an experimental setting in cardiovascular treatment (the COSTAMI study) and an observational setting in diabetes hospital care. We show how the methods can produce different results depending on the degree of matching between the underlying assumptions of each method and the specific characteristics of the healthcare problem. Matching healthcare cost models to the analytic objectives and the characteristics of the data available to a study requires caution: the study results and their interpretation can depend heavily on the choice of model, with a real risk of spurious results and conclusions.
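One classic model-choice pitfall with skewed cost data is retransformation bias: fitting a model on log costs and exponentiating the predicted log mean underestimates the mean cost, which Duan's smearing estimator corrects. A stdlib-only sketch on simulated, hypothetical cost data (not from the COSTAMI or diabetes datasets):

```python
import math, random, statistics

rng = random.Random(0)
# Hypothetical skewed costs: exponentiated normal draws (lognormal-like)
log_costs = [rng.gauss(7.0, 1.0) for _ in range(10000)]
costs = [math.exp(x) for x in log_costs]

mu = statistics.mean(log_costs)
naive = math.exp(mu)  # back-transformed log-scale mean: biased low
smear = statistics.mean([math.exp(x - mu) for x in log_costs])  # Duan's factor
corrected = naive * smear  # recovers the arithmetic mean cost

actual = statistics.mean(costs)
print(round(naive), round(corrected), round(actual))
```

The naive back-transform lands well below the true mean cost, while the smeared estimate matches it; this is one concrete way the "choice of model" sensitivity described above can produce materially different cost estimates.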
DOT National Transportation Integrated Search
2006-01-01
A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...
Comparative Study Of Four Models Of Turbulence
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1996-01-01
Report presents comparative study of four popular eddy-viscosity models of turbulence. Computations reported for three different adverse pressure-gradient flowfields. Detailed comparison of numerical results and experimental data given. Following models tested: Baldwin-Lomax, Johnson-King, Baldwin-Barth, and Wilcox.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, D.S.; Kienzle, M.A.; Ferris, D.C.
1996-12-31
The objective of this study is to identify potential long-range sources of mercury within the southeast region of the United States. Preliminary results of a climatological study using the Short-range Layered Atmospheric Model (SLAM) transport model from a select source in the southeast U.S. are presented. The potential for long-range transport from Oak Ridge, Tennessee to Florida is discussed. The transport and transformation of mercury during periods of favorable transport to south Florida is modeled using the Organic Chemistry Integrated Dispersion (ORCHID) model, which contains the transport model used in the climatology study. SLAM/ORCHID results indicate the potential for mercury reaching southeast Florida from the source and the atmospheric oxidation of mercury during transport.
On the coalescence-dispersion modeling of turbulent molecular mixing
NASA Technical Reports Server (NTRS)
Givi, Peyman; Kosaly, George
1987-01-01
The general coalescence-dispersion (C/D) closure provides phenomenological modeling of turbulent molecular mixing. The models of Curl and of Dopazo and O'Brien appear as two limiting C/D models that bracket the range of results one can obtain with various models. This finding is used to investigate the sensitivity of the results to the choice of the model. Inert scalar mixing is found to be less model-sensitive than mixing accompanied by chemical reaction. The infinitely fast chemistry approximation is used to relate the C/D approach to Toor's earlier results. Pure mixing and infinite-rate chemistry calculations are compared to study further a recent result of Hsieh and O'Brien, who found that higher concentration moments are not sensitive to chemistry.
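Curl's limiting C/D model has a particularly simple particle interpretation: at each mixing event a randomly chosen particle pair "coalesces" and both particles take the pair mean, so the ensemble mean is conserved while the scalar variance decays. A minimal Monte Carlo sketch (particle counts and event counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Curl's coalescence-dispersion model for an inert scalar: pick a random
# particle pair, replace both values with the pair mean. The mean is
# conserved; the variance (degree of unmixedness) decays.
n_particles, n_events = 1000, 5000
phi = rng.choice([0.0, 1.0], size=n_particles)  # initially unmixed scalar

mean0, var0 = phi.mean(), phi.var()
for _ in range(n_events):
    i, j = rng.integers(0, n_particles, 2)
    phi[i] = phi[j] = 0.5 * (phi[i] + phi[j])

print(f"mean: {mean0:.3f} -> {phi.mean():.3f}")
print(f"variance: {var0:.3f} -> {phi.var():.3f}")
```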
An Application of Satir's Model to Family Counseling.
ERIC Educational Resources Information Center
Seligman, Linda
1981-01-01
Describes the application of Virginia Satir's model to family counseling, emphasizing prevention, personal growth, self-esteem, and communication in improving the functioning of the family system. Presents a case study using the model. Results indicate the family became more nurturing as a result of counseling. (JAC)
NASA Astrophysics Data System (ADS)
Nilsson, H.
2012-11-01
This work presents an OpenFOAM case-study, based on the experimental studies of the swirling flow in the abrupt expansion by Dellenback et al. [1]. The case yields similar flow conditions to those of a helical vortex rope in a hydro turbine draft tube working at part-load. The case-study is set up similarly to the ERCOFTAC Conical Diffuser and Centrifugal Pump OpenFOAM case-studies [2,3], making all the files available and the results fully reproducible using open-source software. The mesh generation is done using m4 scripting and the OpenFOAM built-in blockMesh mesh generator. The swirling inlet boundary condition is specified as an axi-symmetric profile. The outlet boundary condition uses the zeroGradient condition for all variables except for the pressure, which uses the fixed mean value boundary condition. The wall static pressure is probed at a number of locations during the simulations, and post-processing of the time-averaged solution is done using the OpenFOAM sample utility. Gnuplot scripts are provided for plotting the results. The computational results are compared to one of the operating conditions studied by Dellenback, and measurements for all the experimentally studied operating conditions are available in the case-study. Results from five cases are presented here, based on the kEpsilon model, the kOmegaSST model, and a filtered version of the same kOmegaSST model, named kOmegaSSTF [4,5]. Two different inlet boundary conditions are evaluated. It is shown that kEpsilon and kOmegaSST give steady solutions, while kOmegaSSTF gives a highly unsteady solution. The time-averaged solution of the kOmegaSSTF model is much more accurate than those of the other models. The kEpsilon and kOmegaSST models are thus unable to accurately model the effect of the large-scale unsteadiness, while kOmegaSSTF resolves those scales and models only the smaller scales.
The use of two different boundary conditions shows that the boundary conditions are more important than the choice between kEpsilon and kOmegaSST, for the results just after the abrupt expansion.
Numerical study on turbulence modulation in gas-particle flows
NASA Astrophysics Data System (ADS)
Yan, F.; Lightstone, M. F.; Wood, P. E.
2007-01-01
A mathematical model is proposed based on the Eulerian/Lagrangian approach to account for both the particle crossing trajectory effect and the extra turbulence production due to particle wake effects. The resulting model, together with existing models from the literature, is applied to two different particle-laden flow configurations, namely a vertical pipe flow and axisymmetric downward jet flow. The results show that the proposed model is able to provide improved predictions of the experimental results.
A Study of the Optimal Model of the Flotation Kinetics of Copper Slag from Copper Mine BOR
NASA Astrophysics Data System (ADS)
Stanojlović, Rodoljub D.; Sokolović, Jovica M.
2014-10-01
In this study, the effect of mixtures of copper slag and flotation tailings from the copper mine Bor, Serbia, on the flotation results of copper recovery and on flotation kinetics parameters in a batch flotation cell has been investigated. By simultaneously adding old flotation tailings in the ball mill at a rate of 9%, it is possible to increase copper recovery by about 20%. These results are compared with the copper recovery obtained from pure copper slag. The results of the batch flotation test were fitted using MATLAB software to model first-order flotation kinetics, in order to determine kinetics parameters and define an optimal model of the flotation kinetics. Six kinetic models were tested on the batch flotation copper recovery against flotation time. All models showed good correlation; however, the modified Kelsall model provided the best fit.
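The classical first-order flotation kinetics model referenced above is R(t) = R_inf * (1 - exp(-k t)); fitting it to recovery-versus-time data is a small nonlinear least-squares problem. A sketch with synthetic data (not the Bor plant measurements) using SciPy in place of the MATLAB workflow:

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order flotation kinetics: R(t) = R_inf * (1 - exp(-k t)), where
# R_inf is the ultimate recovery (%) and k the rate constant (1/min).
# The modified Kelsall model instead splits recovery into fast- and
# slow-floating fractions; the same fitting approach applies.
def first_order(t, r_inf, k):
    return r_inf * (1.0 - np.exp(-k * t))

t = np.array([0.5, 1, 2, 4, 8, 16], dtype=float)         # flotation time, min
recovery = first_order(t, 85.0, 0.6) \
    + np.array([0.8, -0.5, 0.3, -0.4, 0.2, -0.1])        # synthetic data

params, _ = curve_fit(first_order, t, recovery, p0=[80.0, 0.5])
r_inf_hat, k_hat = params
print(f"R_inf = {r_inf_hat:.1f} %, k = {k_hat:.2f} 1/min")
```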
2014-07-01
Unified Theory of Acceptance and Use of Technology, Structuration Model of Technology, Adaptive Structuration Theory, Model of Mutual Adaptation, Model of Technology Appropriation, Diffusion/Implementation Model, and Tri-core Model, among others [11]... simulation gaming, essay/scenario writing, genius forecasting, role play/acting, backcasting, SWOT, brainstorming, relevance tree/logic chart, scenario workshop
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-19
... results of speciation data analyses, air quality modeling studies, chemical tracer studies, emission... Demonstration 1. Pollutants Addressed 2. Emission Inventory Requirements 3. Modeling 4. Reasonably Available... modeling (40 CFR 51.1007) that is performed in accordance with EPA modeling guidance (EPA-454/B-07-002...
The Community Multiscale Air Quality (CMAQ) / Plume-in-Grid (PinG) model was applied on a domain encompassing the greater Nashville, Tennessee region. Model simulations were performed for selected days in July 1995 during the Southern Oxidant Study (SOS) field study program wh...
Hesford, Andrew J; Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C
2014-08-01
Accurate and efficient modeling of ultrasound propagation through realistic tissue models is important to many aspects of clinical ultrasound imaging. Simplified problems with known solutions are often used to study and validate numerical methods. Greater confidence in a time-domain k-space method and a frequency-domain fast multipole method is established in this paper by analyzing results for realistic models of the human breast. Models of breast tissue were produced by segmenting magnetic resonance images of ex vivo specimens into seven distinct tissue types. After confirming with histologic analysis by pathologists that the model structures mimicked in vivo breast tissue, the tissue types were mapped to variations in sound speed and acoustic absorption. Calculations of acoustic scattering by the resulting model were performed on massively parallel supercomputer clusters using parallel implementations of the k-space method and the fast multipole method. The efficient use of these resources was confirmed by parallel efficiency and scalability studies using large-scale, realistic tissue models. Comparisons between the temporal and spectral results were performed in representative planes by Fourier transforming the temporal results. An RMS field error of less than 3% throughout the model volume confirms the accuracy of the methods for modeling ultrasound propagation through human breast.
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described by several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
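For reference, the logit benchmark mentioned above assigns choice probabilities as a softmax over option utilities. A minimal sketch (utility values are illustrative, e.g. weighted sums of product attributes):

```python
import numpy as np

# Multinomial logit: P(choose i) = exp(u_i) / sum_j exp(u_j).
# Utilities depend only on each option's own attributes, which is why
# simple random utility models cannot produce the context effects that
# sequential sampling models such as MDFT are designed to capture.
def logit_probs(utilities, scale=1.0):
    u = np.asarray(utilities, dtype=float) * scale
    u -= u.max()                      # subtract max for numerical stability
    e = np.exp(u)
    return e / e.sum()

p = logit_probs([1.2, 0.8, 0.5])
print(p)  # highest-utility option receives the largest probability
```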
Assessing groundwater policy with coupled economic-groundwater hydrologic modeling
NASA Astrophysics Data System (ADS)
Mulligan, Kevin B.; Brown, Casey; Yang, Yi-Chen E.; Ahlfeld, David P.
2014-03-01
This study explores groundwater management policies and the effect of modeling assumptions on the projected performance of those policies. The study compares an optimal economic allocation for groundwater use subject to streamflow constraints, achieved by a central planner with perfect foresight, with a uniform tax on groundwater use and a uniform quota on groundwater use. The policies are compared with two modeling approaches, the Optimal Control Model (OCM) and the Multi-Agent System Simulation (MASS). The economic decision models are coupled with a physically based representation of the aquifer using a calibrated MODFLOW groundwater model. The results indicate that uniformly applied policies perform poorly when simulated with more realistic, heterogeneous, myopic, and self-interested agents. In particular, the effects of the physical heterogeneity of the basin and the agents undercut the perceived benefits of policy instruments assessed with simple, single-cell groundwater modeling. This study demonstrates the results of coupling realistic hydrogeology and human behavior models to assess groundwater management policies. The Republican River Basin, which overlies a portion of the Ogallala aquifer in the High Plains of the United States, is used as a case study for this analysis.
Modeling nutrient in-stream processes at the watershed scale using Nutrient Spiralling metrics
NASA Astrophysics Data System (ADS)
Marcé, R.; Armengol, J.
2009-01-01
One of the fundamental problems of using large-scale biogeochemical models is the uncertainty involved in aggregating the components of fine-scale deterministic models in watershed applications, and in extrapolating the results of field-scale measurements to larger spatial scales. Although spatial or temporal lumping may reduce the problem, information obtained during fine-scale research may not apply to lumped categories. Thus, the use of knowledge gained through fine-scale studies to predict coarse-scale phenomena is not straightforward. In this study, we used the nutrient uptake metrics defined in the Nutrient Spiralling concept to formulate the equations governing total phosphorus in-stream fate in a watershed-scale biogeochemical model. The rationale of this approach relies on the fact that the working unit for the nutrient in-stream processes of most watershed-scale models is the reach, the same unit used in field research based on the Nutrient Spiralling concept. Automatic calibration of the model using data from the study watershed confirmed that the Nutrient Spiralling formulation is a convenient simplification of the biogeochemical transformations involved in total phosphorus in-stream fate. Following calibration, the model was used as a heuristic tool in two ways. First, we compared the Nutrient Spiralling metrics obtained during calibration with results obtained during field-based research in the study watershed. The simulated and measured metrics were similar, suggesting that information collected at the reach scale during research based on the Nutrient Spiralling concept can be directly incorporated into models, without the problems associated with upscaling results from fine-scale studies. Second, we used results from our model to examine some patterns observed in several reports on Nutrient Spiralling metrics measured in impaired streams. 
Although these two exercises involve circular reasoning and, consequently, cannot validate any hypothesis, this is a powerful example of how models can work as heuristic tools to compare hypotheses and stimulate research in ecology.
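A core Nutrient Spiralling metric used in such reach-scale work is the uptake length Sw: with first-order uptake, dilution-corrected concentration declines downstream as C(x) = C0 * exp(-x / Sw), so Sw can be estimated from a log-linear regression of concentration on distance. A sketch with synthetic data (values are illustrative, not from the study watershed):

```python
import numpy as np

# First-order longitudinal nutrient uptake: C(x) = C0 * exp(-x / Sw).
# Regressing log C on distance x gives slope = -1/Sw.
x = np.array([0.0, 50.0, 100.0, 200.0, 400.0])   # distance downstream, m
conc = 10.0 * np.exp(-x / 250.0)                 # ug/L; true Sw = 250 m

slope, intercept = np.polyfit(x, np.log(conc), 1)
sw = -1.0 / slope
print(f"uptake length Sw = {sw:.0f} m")
```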
Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing.
Bourgain, M; Hybois, S; Thoreux, P; Rouillon, O; Rouch, P; Sauret, C
2018-06-25
The golf swing is a complex full body movement during which the spine and shoulders are highly involved. In order to determine shoulder kinematics during this movement, multibody kinematics optimization (MKO) can be recommended to limit the effect of the soft tissue artifact and to avoid joint dislocations or bone penetration in reconstructed kinematics. Classically, in golf biomechanics research, the shoulder is represented by a 3 degrees-of-freedom model representing the glenohumeral joint. More complex and physiological models are already provided in the scientific literature. Particularly, the model used in this study was a full body model and also described motions of clavicles and scapulae. This study aimed at quantifying the effect of utilizing a more complex and physiological shoulder model when studying the golf swing. Results obtained on 20 golfers showed that a more complex and physiologically-accurate model can more efficiently track experimental markers, which resulted in differences in joint kinematics. Hence, the model with 3 degrees-of-freedom between the humerus and the thorax may be inadequate when combined with MKO and a more physiological model would be beneficial. Finally, results would also be improved through a subject-specific approach for the determination of the segment lengths. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bin Hassan, M. F.; Bonello, P.
2017-05-01
Recently-proposed techniques for the simultaneous solution of foil-air bearing (FAB) rotor dynamic problems have been limited to a simple bump foil model in which the individual bumps were modelled as independent spring-damper (ISD) subsystems. The present paper addresses this limitation by introducing a modal model of the bump foil structure into the simultaneous solution scheme. The dynamics of the corrugated bump foil structure are first studied using the finite element (FE) technique. This study is experimentally validated using a purpose-made corrugated foil structure. Based on the findings of this study, it is proposed that the dynamics of the full foil structure, including bump interaction and foil inertia, can be represented by a modal model comprising a limited number of modes. This full foil structure modal model (FFSMM) is then adapted into the rotordynamic FAB problem solution scheme, instead of the ISD model. Preliminary results using the FFSMM under static and unbalance excitation conditions are proven to be reliable by comparison against the corresponding ISD foil model results and by cross-correlating different methods for computing the deflection of the full foil structure. The rotor-bearing model is also validated against experimental and theoretical results in the literature.
Boerebach, Benjamin C. M.; Lombarts, Kiki M. J. M. H.; Scherpbier, Albert J. J.; Arah, Onyebuchi A.
2013-01-01
Background In fledgling areas of research, evidence supporting causal assumptions is often scarce due to the small number of empirical studies conducted. In many studies it remains unclear what impact explicit and implicit causal assumptions have on the research findings; only the primary assumptions of the researchers are often presented. This is particularly true for research on the effect of faculty’s teaching performance on their role modeling. Therefore, there is a need for robust frameworks and methods for transparent formal presentation of the underlying causal assumptions used in assessing the causal effects of teaching performance on role modeling. This study explores the effects of different (plausible) causal assumptions on research outcomes. Methods This study revisits a previously published study about the influence of faculty’s teaching performance on their role modeling (as teacher-supervisor, physician and person). We drew eight directed acyclic graphs (DAGs) to visually represent different plausible causal relationships between the variables under study. These DAGs were subsequently translated into corresponding statistical models, and regression analyses were performed to estimate the associations between teaching performance and role modeling. Results The different causal models were compatible with major differences in the magnitude of the relationship between faculty’s teaching performance and their role modeling. Odds ratios for the associations between teaching performance and the three role model types ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role. Conclusions Different sets of assumptions about causal relationships in role modeling research can be visually depicted using DAGs, which are then used to guide both statistical analysis and interpretation of results. 
Since study conclusions can be sensitive to different causal assumptions, results should be interpreted in the light of causal assumptions made in each study. PMID:23936020
SEASONAL NH3 EMISSIONS FOR THE CONTINENTAL UNITED STATES: INVERSE MODEL ESTIMATION AND EVALUATION
An inverse modeling study has been conducted here to evaluate a prior estimate of seasonal ammonia (NH3) emissions. The prior estimates were based on a previous inverse modeling study and two other bottom-up inventory studies. The results suggest that the prior estim...
Study Circles at the Pharmacy--A New Model for Diabetes Education in Groups.
ERIC Educational Resources Information Center
Sarkadi, Anna; Rosenqvist, Urban
1999-01-01
Tests the feasibility of a one-year group education model for patients with type 2 diabetes in Sweden. Within study circles led by pharmacists, participants learned to self-monitor glucose, to interpret the results, and to act upon them. Results show that study circles held at pharmacies are a feasible way of educating persons with type 2 diabetes.…
Evaluation of internal noise methods for Hotelling observers
NASA Astrophysics Data System (ADS)
Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.
2005-04-01
Including internal noise in computer model observers to degrade model observer performance to human levels is a common method to allow for quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) independent non-uniform channel noise, b) independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to the decision variable's standard deviation due to the external noise, b) internal noise standard deviation proportional to the decision variable's variance caused by the external noise. We tested the square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The studied task was detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the studied model observers. The CHO model best predicts human observer performance with the channel internal noise. The HO and LGHO best predict human observer performance with the decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in the choice of method to include internal noise into their Hotelling models.
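The decision-variable variant can be sketched in a few lines: Gaussian internal noise with standard deviation proportional to the external-noise-induced standard deviation of the decision variable is added, which lowers the observer's detectability index d'. All distributions and the proportionality constant below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Decision-variable internal noise: sigma_int = alpha * sigma_ext, added to
# the decision variable on every trial, degrading d' toward human levels.
n_trials = 100_000
dv_signal = rng.normal(1.0, 1.0, n_trials)   # decision variable, signal present
dv_noise = rng.normal(0.0, 1.0, n_trials)    # decision variable, signal absent

def d_prime(a, b):
    pooled = np.sqrt(0.5 * (a.var() + b.var()))
    return (a.mean() - b.mean()) / pooled

alpha = 1.5                                  # internal-to-external noise ratio
sigma_int = alpha * dv_noise.std()
dv_signal_int = dv_signal + rng.normal(0.0, sigma_int, n_trials)
dv_noise_int = dv_noise + rng.normal(0.0, sigma_int, n_trials)

print(f"d' without internal noise: {d_prime(dv_signal, dv_noise):.2f}")
print(f"d' with internal noise:    {d_prime(dv_signal_int, dv_noise_int):.2f}")
```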
Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.
Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N
2016-01-01
A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results with high accuracy compared to the other models, as does the I-V characteristic curve. The results of this study can be considered valuable in terms of the installation of a grid-connected PV system in fluctuating climatic conditions. PMID:27035575
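The three-parameter model named in the abstract is the ideal single-diode equation, I(V) = IL - Io * (exp(V / (n Vt)) - 1), with series and shunt resistances neglected. A sketch of the I-V and P-V curves for a hypothetical 36-cell module (all parameter values are illustrative, not from the paper's test bed):

```python
import numpy as np

# Ideal single-diode PV model with photocurrent IL, reverse saturation
# current Io, and diode ideality factor n; Ns cells in series.
q, k, T = 1.602e-19, 1.381e-23, 298.0
Vt = k * T / q                                  # thermal voltage, ~25.7 mV

IL, Io, n, Ns = 8.0, 1e-6, 1.3, 36              # illustrative parameters

def pv_current(v):
    return IL - Io * (np.exp(v / (n * Ns * Vt)) - 1.0)

v = np.linspace(0.0, 20.0, 200)
i = pv_current(v)
p = v * i

voc = n * Ns * Vt * np.log(IL / Io + 1.0)       # open-circuit voltage
print(f"Isc = {pv_current(0.0):.2f} A, Voc = {voc:.2f} V, Pmax = {p.max():.1f} W")
```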
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. C. Griffith
In this project we provide an example of how to develop multi-tiered models that cross levels of biological organization to provide a framework for relating results of studies of low doses of ionizing radiation. This framework allows us to better understand how to extrapolate laboratory results to policy decisions, and to identify future studies that will increase confidence in policy decisions. In our application of the conceptual model we were able to move across multiple levels of biological assessment for rodents, going from the molecular to the organism level for in vitro and in vivo endpoints, and to relate these to human in vivo organism-level effects. We used the rich literature on the effects of ionizing radiation on the developing brain in our models. The focus of this report is on disrupted neuronal migration due to radiation exposure and the structural and functional implications of these early biological effects. The cellular mechanisms resulting in pathogenesis are most likely due to a combination of the three mechanisms mentioned. For the purposes of a computational model, quantitative studies of low dose radiation effects on migration of neuronal progenitor cells in the cerebral mantle of experimental animals were used. In this project we were able to show how results from studies of low doses of radiation can be used in a multidimensional framework to construct linked models of neurodevelopment using molecular, cellular, tissue, and organ level studies conducted both in vitro and in vivo in rodents. These models could also be linked to behavioral endpoints in rodents, which can be compared to available results in humans. The available data supported modeling down to 10 cGy, with limited data available at 5 cGy. We observed gradual but non-linear changes as the doses decreased. For neurodevelopment it appears that the slope of the dose response decreases from 25 cGy to 10 cGy.
Future studies of neurodevelopment should be able to better define the dose response in this range.
Present research results and communicate the modeling results to the science community
Background/Objectives. As a result of subsurface heterogeneity, many field and laboratory studies indicate that the advection-dispersion equation (ADE) model fails to describe the frequently observed long tails of contaminant concentration versus time in a breakthrough curve. T...
2014-09-01
within a very short time period, and in this research we model and study the effects of this rainfall on Taiwan's coastal oceans as a result of river discharge. We do this through...
A study of zodiacal light models
NASA Technical Reports Server (NTRS)
Gary, G. A.; Craven, P. D.
1973-01-01
A review is presented of the basic equations used in the analysis of photometric observations of zodiacal light. A survey of the methods used to model the zodiacal light in and out of the ecliptic is given. Results and comparison of various models are presented, as well as recent results by the authors.
Numerical study on anaerobic digestion of fruit and vegetable waste: Biogas generation
NASA Astrophysics Data System (ADS)
Wardhani, Puteri Kusuma; Watanabe, Masaji
2016-02-01
The study provides experimental and numerical results concerning anaerobic digestion of fruit and vegetable waste. Experiments were carried out using a batch floating-drum digester without mixing or temperature control. The retention time was 30 days. A numerical model of Monod type incorporating the influence of temperature is introduced. Initial value problems were analyzed numerically, while kinetic parameters were determined by trial-and-error methods. The numerical results for the first five days seem appropriate in comparison with the experimental outcomes. However, the numerical results show that the model is inappropriate for 30 days of fermentation. This leads to the conclusion that a Monod-type model is not suitable for describing the degradation of the mixture of fruit and vegetable waste and horse dung.
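A Monod-type batch digestion model of the kind described can be integrated with a simple explicit Euler scheme: biomass grows at mu = mu_max * S / (Ks + S), substrate is consumed through a yield coefficient, and biogas is taken proportional to substrate removed. All parameter values, the temperature correction, and the gas yield below are illustrative assumptions, not the paper's calibrated values:

```python
# Monod-type batch digestion sketch (explicit Euler integration).
mu_max20, Ks, Y, theta = 0.3, 2.0, 0.2, 1.07   # 1/day, g/L, g/g, temp. coeff.
T = 30.0                                        # digester temperature, deg C
mu_max = mu_max20 * theta ** (T - 20.0)         # assumed temperature correction

S, X, dt, days = 20.0, 0.5, 0.01, 30.0          # substrate, biomass, step, horizon
for _ in range(int(days / dt)):
    mu = mu_max * S / (Ks + S)                  # Monod specific growth rate
    dX = mu * X * dt
    X += dX
    S = max(S - dX / Y, 0.0)                    # substrate consumed via yield Y

biogas = (20.0 - S) * 0.35                      # assumed L gas per g substrate removed
print(f"after {days:.0f} d: S = {S:.2f} g/L, X = {X:.2f} g/L, gas = {biogas:.1f} L/L")
```

The constant-parameter Monod form saturates once substrate is exhausted, which is consistent with the paper's finding that it cannot track the full 30-day fermentation of a heterogeneous waste mixture.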
Postbuckling and Growth of Delaminations in Composite Plates Subjected to Axial Compression
NASA Technical Reports Server (NTRS)
Reeder, James R.; Chunchu, Prasad B.; Song, Kyongchan; Ambur, Damodar R.
2002-01-01
The postbuckling response and growth of circular delaminations in flat and curved plates are investigated as part of a study to identify the criticality of delamination locations through the laminate thickness. The experimental results from tests on delaminated plates are compared with finite element analysis results generated using shell models. The analytical prediction of delamination growth is obtained by assessing the strain energy release rate results from the finite element model and comparing them to a mixed-mode fracture toughness failure criterion. The analytical results for onset of delamination growth compare well with experimental results generated using a 3-dimensional displacement visualization system. The record of delamination progression measured in this study has resulted in a fully 3-dimensional test case with which progressive failure models can be validated.
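Comparing computed strain energy release rates with a mixed-mode fracture toughness criterion can be sketched as below. The abstract does not name the criterion used, so the common Benzeggagh-Kenane (B-K) form is assumed here, and all material property values are illustrative:

```python
# Mixed-mode delamination growth check using the Benzeggagh-Kenane (B-K)
# criterion (an assumption; the study's actual criterion is not specified):
#   Gc = GIc + (GIIc - GIc) * (GII / GT)^eta, growth onset when GT >= Gc.
g_ic, g_iic, eta = 0.21, 0.77, 2.1      # kJ/m^2, kJ/m^2, B-K exponent (illustrative)

def growth_predicted(g_i, g_ii):
    g_t = g_i + g_ii                                     # total energy release rate
    g_c = g_ic + (g_iic - g_ic) * (g_ii / g_t) ** eta    # mixed-mode toughness
    return g_t >= g_c

print(growth_predicted(0.05, 0.05))   # low G: no growth predicted
print(growth_predicted(0.30, 0.30))   # high G: growth predicted
```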
NASA Astrophysics Data System (ADS)
Casella, Elisa; Rovere, Alessio; Pedroncini, Andrea; Mucerino, Luigi; Casella, Marco; Cusati, Luis Alberto; Vacchi, Matteo; Ferrari, Marco; Firpo, Marco
2014-08-01
Monitoring the impact of sea storms on coastal areas is fundamental to study beach evolution and the vulnerability of low-lying coasts to erosion and flooding. Modelling wave runup on a beach is possible, but it requires accurate topographic data and model tuning, which can be done by comparing observed and modeled runup. In this study we collected aerial photos using an Unmanned Aerial Vehicle after two different swells on the same study area. We merged the point cloud obtained with photogrammetry with multibeam data, in order to obtain a complete beach topography. Then, on each set of rectified and georeferenced UAV orthophotos, we identified the maximum wave runup for both events by recognizing the wet area left by the waves. We then used our topography and numerical models to simulate the wave runup and compared the model results to the values observed during the two events. Our results highlight the potential of the presented methodology, which integrates UAV platforms, photogrammetry and Geographic Information Systems to provide faster and cheaper information on beach topography and geomorphology compared with traditional techniques, without loss of accuracy. We used the results obtained from this technique as a topographic base for a model that calculates runup for the two swells. The observed and modeled runups are consistent, and open new directions for future research.
NASA Astrophysics Data System (ADS)
Munyaneza, O.; Mukubwa, A.; Maskey, S.; Uhlenbrook, S.; Wenninger, J.
2014-12-01
In the present study, we developed a catchment hydrological model which can be used to inform water resources planning and decision making for better management of the Migina Catchment (257.4 km2). The semi-distributed hydrological model HEC-HMS (Hydrologic Engineering Center - the Hydrologic Modelling System, version 3.5) was used with its soil moisture accounting, unit hydrograph, linear reservoir (for baseflow) and Muskingum-Cunge (river routing) methods. We used rainfall data from 12 stations and streamflow data from 5 stations, which were collected as part of this study over a period of 2 years (May 2009 to June 2011). The catchment was divided into five sub-catchments. The model parameters were calibrated separately for each sub-catchment using the observed streamflow data. Calibration results were found to be acceptable at four stations, with a Nash-Sutcliffe model efficiency index (NS) of 0.65 on daily runoff at the catchment outlet. Due to the lack of sufficient and reliable data for longer periods, a model validation was not undertaken. However, we used results from tracer-based hydrograph separation from a previous study to compare our model results in terms of the runoff components. The model performed reasonably well in simulating the total flow volume, peak flow and timing, as well as the portions of direct runoff and baseflow. We observed considerable disparities in the parameters (e.g. groundwater storage) and runoff components across the five sub-catchments, which provided insights into the different hydrological processes on a sub-catchment scale. We conclude that such disparities justify the need to consider catchment subdivisions if such parameters and components of the water cycle are to form the base for decision making in water resources planning in the catchment.
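The Nash-Sutcliffe efficiency used to judge the calibration is a one-line statistic: NS = 1 - SSE / SST, where SSE is the sum of squared simulation errors and SST the variance of the observations about their mean. A sketch with illustrative flow values (not the Migina data):

```python
import numpy as np

# Nash-Sutcliffe efficiency: 1 for a perfect fit, 0 when the model is no
# better than predicting the observed mean, negative when it is worse.
def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

q_obs = np.array([1.2, 1.5, 3.8, 2.6, 1.9, 1.4])   # observed daily flow, m^3/s
q_sim = np.array([1.1, 1.6, 3.2, 2.9, 2.0, 1.3])   # simulated daily flow, m^3/s

print(f"NS = {nash_sutcliffe(q_obs, q_sim):.2f}")
```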
Rabideau, Dustin J; Pei, Pamela P; Walensky, Rochelle P; Zheng, Amy; Parker, Robert A
2018-02-01
The expected value of sample information (EVSI) can help prioritize research but its application is hampered by computational infeasibility, especially for complex models. We investigated an approach by Strong and colleagues to estimate EVSI by applying generalized additive models (GAM) to results generated from a probabilistic sensitivity analysis (PSA). For 3 potential HIV prevention and treatment strategies, we estimated life expectancy and lifetime costs using the Cost-effectiveness of Preventing AIDS Complications (CEPAC) model, a complex patient-level microsimulation model of HIV progression. We fitted a GAM (a flexible regression model that estimates the functional form as part of the model fitting process) to the incremental net monetary benefits obtained from the CEPAC PSA. For each case study, we calculated the expected value of partial perfect information (EVPPI) using both the conventional nested Monte Carlo approach and the GAM approach. EVSI was calculated using the GAM approach. For all 3 case studies, the GAM approach consistently gave similar estimates of EVPPI compared with the conventional approach. The EVSI behaved as expected: it increased and converged to EVPPI for larger sample sizes. For each case study, generating the PSA results for the GAM approach required 3 to 4 days on a shared cluster, after which EVPPI and EVSI across a range of sample sizes were evaluated in minutes. The conventional approach required approximately 5 weeks for the EVPPI calculation alone. Estimating EVSI using the GAM approach with results from a PSA dramatically reduced the time required to conduct a computationally intense project, which would otherwise have been impractical. Using the GAM approach, we can efficiently provide policy makers with EVSI estimates, even for complex patient-level microsimulation models.
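The regression-based EVPPI idea can be sketched as follows, with a cubic polynomial least-squares fit standing in for the GAM and entirely synthetic PSA samples (one uncertain parameter, two strategies; not CEPAC output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PSA: one uncertain parameter theta and the incremental net
# monetary benefit (INB) of strategy B vs strategy A across 5000 PSA draws.
n = 5000
theta = rng.normal(0.0, 1.0, n)
inb = 1000.0 * theta + rng.normal(0.0, 2000.0, n)  # noisy INB samples

# GAM stand-in: a cubic polynomial fit of INB on theta estimates the
# conditional expectation E[INB | theta] from the PSA output alone.
coefs = np.polyfit(theta, inb, deg=3)
fitted = np.polyval(coefs, theta)

# EVPPI = E[max(0, E[INB | theta])] - max(0, E[INB]):
# the value of choosing a strategy after learning theta, minus the value
# of the best strategy under current uncertainty.
evppi = float(np.mean(np.maximum(fitted, 0.0)) - max(float(np.mean(inb)), 0.0))
print(round(evppi, 1))
```

With these synthetic numbers the true EVPPI is about 1000/sqrt(2*pi), roughly 399; the point is that only one regression over existing PSA draws is needed, rather than a nested Monte Carlo loop.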
ERIC Educational Resources Information Center
Gurl, Theresa
2010-01-01
In response to the recent calls for a residency model for field internships in education, a possible model based on an adaptation of Japanese lesson study is described. Lesson study consists of collaboratively planning, implementing, and discussing lessons after the lesson is taught. Results of a study in which student teachers and cooperating…
Habilomatis, George; Chaloulakou, Archontoula
2013-10-01
Recently, a branch of particulate matter research has focused on ultrafine particles found in the urban environment, which originate, to a significant extent, from traffic sources. In urban street canyons, dispersion of ultrafine particles affects pedestrians' short-term exposure as well as residents' long-term exposure. The aim of the present work is the development and evaluation of a composite lattice Boltzmann model to study the dispersion of ultrafine particles in the urban street canyon microenvironment. The proposed model has the potential to penetrate into the physics of this complex system. In order to evaluate the model performance against suitable experimental data, ultrafine particle levels were monitored on an hourly basis for a period of 35 days in a street canyon in the Athens area. The results of the comparative analysis are quite satisfactory. Furthermore, our modeled results are in good agreement with the results of other computational and experimental studies. This work is a first attempt to study the dispersion of an air pollutant by application of the lattice Boltzmann method. Copyright © 2013 Elsevier B.V. All rights reserved.
Calibrating cellular automaton models for pedestrians walking through corners
NASA Astrophysics Data System (ADS)
Dias, Charitha; Lovreglio, Ruggiero
2018-05-01
Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.
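The two SFF representations can be illustrated on a small grid; this is a sketch of the general idea (exit-distance fields guiding CA pedestrians), not the paper's calibrated models:

```python
from collections import deque
import math

# 'Discrete' static floor field: minimum number of cell-to-cell moves to
# the exit, computed by breadth-first search over 4-connected cells.
def discrete_sff(width, height, exit_cell, walls=frozenset()):
    dist = {exit_cell: 0}
    queue = deque([exit_cell])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in walls and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

# 'Continuous' static floor field: straight-line Euclidean distance.
def continuous_sff(width, height, exit_cell):
    ex, ey = exit_cell
    return {(x, y): math.hypot(x - ex, y - ey)
            for x in range(width) for y in range(height)}

d = discrete_sff(4, 4, (0, 0))
c = continuous_sff(4, 4, (0, 0))
print(d[(3, 3)], round(c[(3, 3)], 3))  # -> 6 4.243
```

Around a corner, the two fields rank neighbouring cells differently, which is exactly what the calibration in the study discriminates between.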
Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J
2016-09-06
The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 Nm with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque rotation response, disc pressures, and facet forces. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hendriks, Rob F. A.; van den Akker, Jan J. A.
2017-04-01
Effectiveness of submerged drains in reducing subsidence of peat soils in agricultural use, and their effects on water management and nutrient loading of surface water: modelling of a case study in the western peat soil area of The Netherlands
In the Netherlands, about 8% of the area is covered by peat soils. Most of these soils are in use for dairy farming and, consequently, are drained. Drainage causes decomposition of peat by oxidation and accordingly leads to surface subsidence and greenhouse gas emission. Submerged drains that enhance infiltration of water from ditches during the dry and warm summer half-year were, and still are, studied in The Netherlands as a promising tool for reducing peat decomposition by raising groundwater levels. For this purpose, several pilot field studies in the western part of the Dutch peat area were conducted. Besides the effectiveness of submerged drains in reducing peat decomposition and subsidence by raising groundwater tables, some other relevant or expected effects of these drains were studied. The most important of these are water management and loading of surface water with the nutrients nitrogen, phosphorus and sulphate. Because most of these parameters are not easy to assess, and all of them depend strongly on the meteorological conditions during the field studies, some of these studies were modelled. The SWAP model was used for evaluating the hydrological results on groundwater table and water discharge and recharge. Effects of submerged drains were assessed by comparing the results of fields with and without drains. An empirical relation between deepest groundwater table and subsidence was used to convert effects on groundwater table to effects on subsidence. With the SWAP-ANIMO model, nutrient loading of surface water was modelled on the basis of field results on nutrient concentrations.
Calibrated models were used to assess effects in the present situation, as thirty-year averages, under extreme weather conditions and for two extreme climate scenarios of the Royal Netherlands Meteorological Institute. In this study, the model results of one of the pilot studies are presented. The case study 'de Krimpenerwaard' is situated in the peat area in the "Green Heart" between the major cities of Amsterdam, The Hague, Rotterdam and Utrecht. Model results show a halving of soil subsidence, a strong increase of water recharge but a smaller increase of water discharge, and generally small to moderate effects on nutrient loading, all depending (strongly) on meteorological conditions.
Evaluation of the ERP dispersion model using Darlington tracer-study data. Report No. 90-200-K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S.C.
1990-01-01
In this study, site-boundary atmospheric dilution factors calculated by the atmospheric dispersion model used in the ERP (Emergency Response Planning) computer code were compared to data collected during the Darlington tracer study. The purpose of this comparison was to obtain estimates of model uncertainty under a variety of conditions. This report provides background on ERP, the ERP dispersion model and the Darlington tracer study. Model evaluation techniques are discussed briefly, and the results of the comparison of model calculations with the field data are presented and reviewed.
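The report's summary does not reproduce the ERP model's equations; as a rough illustration of what a site-boundary dilution factor is, here is a textbook ground-level, centreline Gaussian-plume sketch with hypothetical dispersion parameters (an assumption, not the ERP code's actual formulation):

```python
import math

# Ground-level, centreline Gaussian-plume dilution factor chi/Q (s/m^3)
# for an elevated release of effective height H in wind speed u, with
# horizontal and vertical plume spreads sigma_y and sigma_z (metres).
def dilution_factor(sigma_y, sigma_z, wind_speed, release_height):
    return (math.exp(-release_height ** 2 / (2.0 * sigma_z ** 2))
            / (math.pi * sigma_y * sigma_z * wind_speed))

# Hypothetical values: sigma_y = 30 m, sigma_z = 15 m, u = 3 m/s, H = 20 m
chi_over_q = dilution_factor(30.0, 15.0, 3.0, 20.0)
print(f"{chi_over_q:.2e}")  # -> 9.69e-05
```

Tracer studies like the Darlington experiment compare measured concentrations divided by release rate against this kind of calculated chi/Q to estimate model uncertainty.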
Electromagnetic Heating in a Model of Frozen Red Blood Cells
1988-10-18
Evaluation of radio frequency energy deposition in a model of a standard blood bag was made using thermometric and thermographic dosimetry. The results... images corroborate the thermometric results. RECOMMENDATIONS: The results of this study show the ability of an RF coil irradiating... thermometric and thermographic dosimetry of RF-induced heating of the model. MATERIALS AND METHODS: A standard, 800-ml (12 cm x 21 cm...
ERIC Educational Resources Information Center
Sugiharto
2015-01-01
The aims of this research were to determine the effect of cooperative learning model and learning styles on learning result. This quasi-experimental study employed a 2x2 treatment by level, involved independent variables, i.e. cooperative learning model and learning styles, and learning result as the dependent variable. Findings signify that: (1)…
Numerical study of the direct pressure effect of acoustic waves in planar premixed flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, H.; Jimenez, C.
Recently, the unsteady response of 1-D premixed flames to acoustic pressure waves, for the range of frequencies below and above the inverse of the flame transit time, was investigated experimentally using OH chemiluminescence by Wangher (2008). They compared the frequency dependence of the measured response to the prediction of an analytical model proposed by Clavin et al. (1990), derived from the standard flame model (one-step Arrhenius kinetics), and to a similar model proposed by McIntosh (1991). Discrepancies between the experimental results and the model led to the conclusion that the standard model does not provide an adequate description of the unsteady response of real flames and that it is necessary to investigate more realistic chemical models. Here we follow exactly this suggestion and perform numerical studies of the response of lean methane flames using different reaction mechanisms. We find that the global flame response obtained with both detailed chemistry (GRI3.0) and a reduced multi-step model by Peters (1996) lies slightly above the predictions of the analytical model, but is close to experimental results. We additionally used an irreversible one-step Arrhenius reaction model and show the effect of the pressure dependence of the global reaction rate on the flame response. Our results suggest first that the current models have to be extended to capture the amplitude and phase results of the detailed mechanisms, and second that the correlation between the heat release and the measured OH* chemiluminescence should be studied more deeply.
Hosseinpour, Mehdi; Sahebi, Sina; Zamzuri, Zamira Hasanah; Yahaya, Ahmad Shukri; Ismail, Noriszura
2018-06-01
According to crash configuration and pre-crash conditions, traffic crashes are classified into different collision types. Based on the literature, multi-vehicle crashes, such as head-on, rear-end, and angle crashes, are more frequent than single-vehicle crashes, and most often result in serious consequences. From a methodological point of view, the majority of prior studies of multi-vehicle collisions have employed univariate count models to estimate crash counts separately by collision type. However, univariate models fail to account for correlations which may exist between different collision types. Among others, the multivariate Poisson lognormal (MVPLN) model with spatial correlation is a promising multivariate specification because it not only allows for unobserved heterogeneity (extra-Poisson variation) and dependencies between collision types, but also spatial correlation between adjacent sites. However, the MVPLN spatial model has rarely been applied in previous research for simultaneously modelling crash counts by collision type. Therefore, this study aims at utilizing a MVPLN spatial model to estimate crash counts for four different multi-vehicle collision types, including head-on, rear-end, angle, and sideswipe collisions. To investigate the performance of the MVPLN spatial model, a two-stage model and a univariate Poisson lognormal (UNPLN) spatial model were also developed in this study. Detailed information on roadway characteristics, traffic volume, and crash history were collected on 407 homogeneous segments from Malaysian federal roads. The results indicate that the MVPLN spatial model outperforms the other comparing models in terms of goodness-of-fit measures. The results also show that the inclusion of spatial heterogeneity in the multivariate model significantly improves the model fit, as indicated by the Deviance Information Criterion (DIC).
The correlation between crash types is high and positive, implying that the occurrence of a specific collision type is highly associated with the occurrence of other crash types on the same road segment. These results support the utilization of the MVPLN spatial model when predicting crash counts by collision manner. In terms of contributing factors, the results show that distinct crash types are attributed to different subsets of explanatory variables. Copyright © 2018 Elsevier Ltd. All rights reserved.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used in modeling a statistical distribution, while the Bayesian method is a statistical method used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties that provide remarkable results. In addition, the Bayesian method also shows a consistency characteristic, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia. The results showed that there is a negative effect between rubber price and stock market price for all selected countries.
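The BIC-based choice of the number of mixture components can be sketched on synthetic one-dimensional data with a minimal EM fit; this is an illustration of the criterion only, not the study's financial series or its Bayesian estimation:

```python
import math
import random

# BIC = p*ln(n) - 2*ln(L); the candidate model with the lowest BIC wins.
random.seed(42)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(6.0, 1.0) for _ in range(200)])  # clearly bimodal
n = len(data)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def loglik_one_component(xs):
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return sum(math.log(normal_pdf(x, mu, sigma)) for x in xs)

def loglik_two_component(xs, iters=50):
    # Minimal EM for a two-component Gaussian mixture.
    mu1, mu2, s1, s2, w = min(xs), max(xs), 1.0, 1.0, 0.5
    for _ in range(iters):
        resp = []  # responsibility of component 1 for each point
        for x in xs:
            p1 = w * normal_pdf(x, mu1, s1)
            p2 = (1 - w) * normal_pdf(x, mu2, s2)
            resp.append(p1 / (p1 + p2))
        n1 = sum(resp)
        n2 = len(xs) - n1
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / n2
        s1 = math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, xs)) / n1)
        s2 = math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, xs)) / n2)
        w = n1 / len(xs)
    return sum(math.log(w * normal_pdf(x, mu1, s1)
                        + (1 - w) * normal_pdf(x, mu2, s2)) for x in xs)

bic1 = 2 * math.log(n) - 2 * loglik_one_component(data)   # p = 2 (mu, sigma)
bic2 = 5 * math.log(n) - 2 * loglik_two_component(data)   # p = 5
print("k=2 preferred:", bic2 < bic1)
```

On bimodal data like this, the two-component model wins despite its three extra parameters, which is the trade-off BIC formalizes.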
Inter-sectoral comparison of model uncertainty of climate change impacts in Africa
NASA Astrophysics Data System (ADS)
van Griensven, Ann; Vetter, Tobias; Piontek, Franzisca; Gosling, Simon N.; Kamali, Bahareh; Reinhardt, Julia; Dinkneh, Aklilu; Yang, Hong; Alemayehu, Tadesse
2016-04-01
We present the model results and their uncertainties of an inter-sectoral impact model inter-comparison initiative (ISI-MIP) for climate change impacts in Africa. The study includes results on hydrological, crop and health aspects. The impact models used ensemble inputs consisting of 20 time series of daily rainfall and temperature data obtained from 5 Global Circulation Models (GCMs) and 4 Representative Concentration Pathways (RCPs). In this study, we analysed model uncertainty for the regional hydrological models, global hydrological models, malaria models and crop models. For the regional hydrological models, we used 2 African test cases: the Blue Nile in Eastern Africa and the Niger in Western Africa. For both basins, the main sources of uncertainty originate from the GCMs and RCPs, while the uncertainty of the regional hydrological models is relatively low. The hydrological model uncertainty becomes more important when predicting changes in low flows compared to mean or high flows. For the other sectors, the impact models have the largest share of uncertainty compared to GCM and RCP, especially for malaria and crop modelling. The overall conclusion of the ISI-MIP is that it is strongly advised to use an ensemble modelling approach for climate change impact studies throughout the whole modelling chain.
Numerical simulations of atmospheric dispersion of iodine-131 by different models.
Leelőssy, Ádám; Mészáros, Róbert; Kovács, Attila; Lagzi, István; Kovács, Tibor
2017-01-01
Nowadays, several dispersion models are available to simulate the transport processes of air pollutants and toxic substances including radionuclides in the atmosphere. Reliability of atmospheric transport models has been demonstrated in several recent cases from local to global scale; however, very few actual emission data are available to evaluate model results in real-life cases. In this study, the atmospheric dispersion of 131I emitted to the atmosphere during an industrial process was simulated with different models, namely the WRF-Chem Eulerian online coupled model and the HYSPLIT and the RAPTOR Lagrangian models. Although only limited data of 131I detections has been available, the accuracy of modeled plume direction could be evaluated in complex late autumn weather situations. For the studied cases, the general reliability of models has been demonstrated. However, serious uncertainties arise related to low level inversions, above all in case of an emission event on 4 November 2011, when an important wind shear caused a significant difference between simulated and real transport directions. Results underline the importance of prudent interpretation of dispersion model results and the identification of weather conditions with a potential to cause large model errors.
Predictive modeling of nanomaterial exposure effects in biological systems
Liu, Xiong; Tang, Kaizhi; Harper, Stacey; Harper, Bryan; Steevens, Jeffery A; Xu, Roger
2013-01-01
Background: Predictive modeling of the biological effects of nanomaterials is critical for industry and policymakers to assess the potential hazards resulting from the application of engineered nanomaterials. Methods: We generated an experimental dataset on the toxic effects experienced by embryonic zebrafish due to exposure to nanomaterials. Several nanomaterials were studied, such as metal nanoparticles, dendrimer, metal oxide, and polymeric materials. The embryonic zebrafish metric (EZ Metric) was used as a screening-level measurement representative of adverse effects. Using the dataset, we developed a data mining approach to model the toxic endpoints and the overall biological impact of nanomaterials. Data mining techniques, such as numerical prediction, can assist analysts in developing risk assessment models for nanomaterials. Results: We found several important attributes that contribute to the 24 hours post-fertilization (hpf) mortality, such as dosage concentration, shell composition, and surface charge. These findings concur with previous studies on nanomaterial toxicity using embryonic zebrafish. We conducted case studies on modeling the overall effect/impact of nanomaterials and the specific toxic endpoints such as mortality, delayed development, and morphological malformations. The results show that we can achieve high prediction accuracy for certain biological effects, such as 24 hpf mortality, 120 hpf mortality, and 120 hpf heart malformation. The results also show that the weighting scheme for individual biological effects has a significant influence on modeling the overall impact of nanomaterials. Sample prediction models can be found at http://neiminer.i-a-i.com/nei_models. Conclusion: The EZ Metric-based data mining approach has been shown to have predictive power. The results provide valuable insights into the modeling and understanding of nanomaterial exposure effects. PMID:24098077
Kawai, Kosuke; Preaud, Emmanuelle; Baron-Papillon, Florence; Largeron, Nathalie; Acosta, Camilo J
2014-03-26
The objective of this study was to systematically review cost-effectiveness studies of vaccination against herpes zoster (HZ) and postherpetic neuralgia (PHN). We searched MEDLINE and EMBASE databases for eligible studies published prior to November 2013. We extracted information regarding model structure, model input parameters, and study results. We compared the results across studies by projecting the health and economic impacts of vaccinating one million adults over their lifetimes. We identified 15 cost-effectiveness studies performed in North America and Europe. Results ranged from approximately US$10,000 to more than US$100,000 per quality-adjusted life year (QALY) gained. Most studies in Europe concluded that zoster vaccination is likely to be cost-effective. Differences in results among studies are largely due to differing assumptions regarding duration of vaccine protection and a loss in quality of life associated with HZ and, to a larger extent, PHN. Moreover, vaccine efficacy against PHN, age at vaccination, and vaccine cost strongly influenced the results in sensitivity analyses. Most studies included in this review show that vaccination against HZ is likely to be cost-effective. Future research addressing key model parameters and cost-effectiveness studies in other parts of the world are needed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Utama, D. N.; Ani, N.; Iqbal, M. M.
2018-03-01
Optimization is a process for finding the parameter (or parameters) able to deliver an optimal value for an objective function. Seeking an optimal generic model for optimization is a computer science problem that numerous researchers have been studying. A generic model is a model that can be operated to solve any variety of optimization problems. Using an object-oriented method, the generic model for optimizing was constructed. Moreover, two types of optimization method, simulated annealing and hill climbing, were used in constructing the model and then compared to find the more optimal one. The results showed that both methods gave the same value of the objective function, and that the hill-climbing-based model consumed the shorter running time.
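The two methods can be sketched on a simple one-dimensional objective; this illustrates the general algorithms, not the paper's object-oriented generic model:

```python
import math
import random

def objective(x):
    return -(x - 2.0) ** 2 + 5.0   # maximum value 5.0 at x = 2.0

# Hill climbing: accept a random neighbouring candidate only if it improves.
def hill_climbing(start, step=0.1, iters=2000, seed=1):
    rng = random.Random(seed)
    x, best = start, objective(start)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        val = objective(cand)
        if val > best:
            x, best = cand, val
    return best

# Simulated annealing: also accept worse candidates with probability
# exp((val - cur) / temp), where the temperature cools over time.
def simulated_annealing(start, step=0.1, iters=2000, temp=1.0, cool=0.995, seed=1):
    rng = random.Random(seed)
    x, cur = start, objective(start)
    best = cur
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        val = objective(cand)
        if val > cur or rng.random() < math.exp((val - cur) / temp):
            x, cur = cand, val
            best = max(best, val)
        temp *= cool
    return best

print(round(hill_climbing(-5.0), 3), round(simulated_annealing(-5.0), 3))
```

On this single-peak objective both converge to the same optimum, mirroring the paper's finding; hill climbing does less work per step, which is consistent with its shorter running time. On multi-modal objectives, annealing's tolerance for worse moves is what lets it escape local optima.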
Performance Analysis of Transposition Models Simulating Solar Radiation on Inclined Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-02
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined photovoltaic panels. Following numerous studies comparing the performance of transposition models, this work aims to understand the quantitative uncertainty in state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two highly used isotropic transposition models, with one substantially underestimating the diffuse plane-of-array irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of the empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for the future development of physics-based transposition models and evaluations of system performance.
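For reference, the classic isotropic-sky transposition (a textbook formulation; the abstract does not specify which isotropic variants the study tested) decomposes plane-of-array irradiance into beam, sky-diffuse, and ground-reflected terms:

```python
import math

# Isotropic-sky plane-of-array (POA) irradiance, W/m^2:
#   beam:        DNI * cos(angle of incidence)
#   sky diffuse: DHI * (1 + cos(tilt)) / 2
#   ground:      GHI * albedo * (1 - cos(tilt)) / 2
def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    tilt = math.radians(tilt_deg)
    aoi = math.radians(aoi_deg)
    beam = dni * max(math.cos(aoi), 0.0)
    sky_diffuse = dhi * (1.0 + math.cos(tilt)) / 2.0
    ground = ghi * albedo * (1.0 - math.cos(tilt)) / 2.0
    return beam + sky_diffuse + ground

# Hypothetical clear-sky inputs: DNI=800, DHI=100, GHI=600 W/m^2,
# 30-degree tilt, 20-degree angle of incidence, grass-like albedo 0.2.
print(round(poa_isotropic(800.0, 100.0, 600.0, 30.0, 20.0), 1))  # -> 853.1
```

The albedo term shows directly how the land-surface albedo choice flagged in the study propagates into the POA output.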
A model study of bridge hydraulics
DOT National Transportation Integrated Search
2010-08-01
Most flood studies in the United States use the Army Corps of Engineers HEC-RAS (Hydrologic Engineering Center's River Analysis System) computer program. This study was carried out to compare results of HEC-RAS bridge modeling with laboratory e...
A model describing diffusion in prostate cancer.
Gilani, Nima; Malcolm, Paul; Johnson, Glyn
2017-07-01
Quantitative diffusion MRI has frequently been studied as a means of grading prostate cancer. Interpretation of results is complicated by the nature of prostate tissue, which consists of four distinct compartments: vascular, ductal lumen, epithelium, and stroma. Current diffusion measurements are an ill-defined weighted average of these compartments. In this study, prostate diffusion is analyzed in terms of a model that takes explicit account of tissue compartmentalization, exchange effects, and the non-Gaussian behavior of tissue diffusion. The model assumes that exchange between the cellular (i.e., stromal plus epithelial) and the vascular and ductal compartments is slow. Ductal and cellular diffusion characteristics are estimated by Monte Carlo simulation and a two-compartment exchange model, respectively. Vascular pseudodiffusion is represented by an additional signal at b = 0. Most model parameters are obtained either from published data or by comparing model predictions with the published results from 41 studies. Model prediction error is estimated using 10-fold cross-validation. Agreement between model predictions and published results is good. The model satisfactorily explains the variability of ADC estimates found in the literature. A reliable model that predicts the diffusion behavior of benign and cancerous prostate tissue of different Gleason scores has been developed. Magn Reson Med 78:316-326, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
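The slow-exchange, multi-compartment signal idea can be sketched as a weighted sum of compartment signals plus a vascular term at b = 0; all volume fractions and diffusivities below are illustrative assumptions, not the paper's fitted values:

```python
import math

# Simplified slow-exchange diffusion signal (normalised so S(0) = 1):
# vascular pseudodiffusion contributes only at b = 0, while ductal and
# cellular compartments decay mono-exponentially with their diffusivities.
def diffusion_signal(b, f_vasc=0.05, f_duct=0.20, f_cell=0.75,
                     d_duct=2.5e-3, d_cell=0.8e-3):
    # b in s/mm^2, diffusivities in mm^2/s (hypothetical values)
    vascular = f_vasc if b == 0 else 0.0
    return (vascular
            + f_duct * math.exp(-b * d_duct)
            + f_cell * math.exp(-b * d_cell))

# The ADC fitted from two b-values is a weighted average of compartments,
# illustrating why single-ADC measurements are "ill-defined" averages.
s0 = diffusion_signal(0)
s800 = diffusion_signal(800)
adc = math.log(s0 / s800) / 800.0
print(round(adc * 1e3, 3), "x10^-3 mm^2/s")
```

Changing the epithelial/stromal balance (here folded into `f_cell`) shifts the apparent ADC without any change in the compartment diffusivities themselves, which is the mechanism the model uses to explain ADC variability across Gleason scores.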
Predicting language diversity with complex networks.
Raducha, Tomasz; Gubiec, Tomasz
2018-01-01
We analyze a model of social interactions with coevolution of the topology and the states of the nodes. This model can be interpreted as a model of language change. We propose different rewiring mechanisms and perform numerical simulations for each. The results obtained are compared with empirical data gathered from two online databases and an anthropological study of the Solomon Islands. We study the behavior of the number of languages for different system sizes and find that only local rewiring, i.e. triadic closure, is capable of reproducing the empirical results in a qualitative manner. Furthermore, we resolve the contradiction between previous models and the Solomon Islands case. Our results demonstrate the importance of the topology of the network, and of the rewiring mechanism, in the process of language change.
A study comparison of two system model performance in estimated lifted index over Indonesia.
NASA Astrophysics Data System (ADS)
lestari, Juliana tri; Wandala, Agie
2018-05-01
The lifted index (LI) is one of the atmospheric stability indices used for thunderstorm forecasting. Numerical weather prediction (NWP) models are essential for accurate weather forecasts these days. This study compares two NWP models, the Weather Research and Forecasting (WRF) model and the Global Forecast System (GFS) model, in estimating LI at 20 locations over Indonesia, and verifies the results against observations. A Taylor diagram was used to compare the models' skill, showing the standard deviation, correlation coefficient and root mean square error (RMSE). This study uses data at 00.00 UTC and 12.00 UTC from mid-March to mid-April 2017. From the sample of LI distributions, both models have a tendency to overestimate the LI value in almost all regions of Indonesia, while the WRF model has a better ability than the GFS model to capture the observed LI distribution pattern. The verification results show that both the WRF and GFS models have a weak relationship with observations, except at Eltari meteorological station, where the correlation coefficient reaches almost 0.6 with a low RMSE value. Meanwhile, the WRF model has better performance than the GFS model. This study suggests that LI estimated by the WRF model can provide good guidance for thunderstorm forecasting over Indonesia in the future. However, the weak relationship between model output and observations at certain locations needs further investigation.
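The three statistics summarized on a Taylor diagram can be computed directly; the arrays below are hypothetical LI values, not the WRF/GFS data:

```python
import math

# Standard deviations, Pearson correlation, and RMSE: the quantities a
# Taylor diagram displays for a model/observation pair.
def taylor_stats(model, obs):
    n = len(model)
    mm, mo = sum(model) / n, sum(obs) / n
    sd_m = math.sqrt(sum((m - mm) ** 2 for m in model) / n)
    sd_o = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    corr = (sum((m - mm) * (o - mo) for m, o in zip(model, obs))
            / (n * sd_m * sd_o))
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    return sd_m, sd_o, corr, rmse

model = [-2.0, -4.5, -1.0, -3.0, -5.5]   # hypothetical lifted index values
obs = [-1.5, -4.0, -2.0, -2.5, -6.0]
sd_m, sd_o, corr, rmse = taylor_stats(model, obs)
print(round(corr, 3), round(rmse, 3))  # -> 0.925 0.632
```

A point on the diagram close to the observation's standard deviation with correlation near 1 and small centred RMSE indicates good skill, which is the comparison made between the WRF and GFS estimates at each station.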
Studying the effect of weather conditions on daily crash counts using a discrete time-series model.
Brijs, Tom; Karlis, Dimitris; Wets, Geert
2008-05-01
In previous research, significant effects of weather conditions on car crashes have been found. However, most studies use monthly or yearly data and only few studies are available analyzing the impact of weather conditions on daily car crash counts. Furthermore, the studies that are available on a daily level do not explicitly model the data in a time-series context, thereby ignoring the temporal serial correlation that may be present in the data. In this paper, we introduce an integer autoregressive model for modelling count data with time interdependencies. The model is applied to daily car crash data, meteorological data and traffic exposure data from the Netherlands, aiming at examining the risk impact of weather conditions on the observed counts. The results show that several assumptions related to the effect of weather conditions on crash counts are found to be significant in the data and that if serial temporal correlation is not accounted for in the model, this may produce biased results.
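An INAR(1) process with binomial thinning, the kind of integer autoregressive model the paper introduces, can be simulated as follows (illustrative parameters, not the fitted Dutch estimates):

```python
import math
import random

# INAR(1): X_t = alpha o X_{t-1} + eps_t, where "alpha o X" is binomial
# thinning (each of yesterday's X crashes recurs today with probability
# alpha) and eps_t is a Poisson innovation. This keeps counts integer-
# valued while inducing serial correlation alpha between days.
def simulate_inar1(alpha, lam, n, seed=7):
    rng = random.Random(seed)

    def poisson(mean):
        # Knuth's multiplication method for Poisson sampling
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    x = poisson(lam / (1.0 - alpha))   # start near the stationary mean
    series = [x]
    for _ in range(n - 1):
        survivors = sum(1 for _ in range(x) if rng.random() < alpha)
        x = survivors + poisson(lam)
        series.append(x)
    return series

counts = simulate_inar1(alpha=0.4, lam=3.0, n=365)
print(round(sum(counts) / len(counts), 2))  # stationary mean is lam/(1-alpha) = 5
```

Fitting an ordinary Poisson regression to such a series ignores the thinning term, which is exactly the serial-correlation bias the paper warns about; in practice covariates such as weather enter through the innovation rate `lam`.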
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strons, Philip; Bailey, James L.; Davis, John
2016-03-01
In this work, we apply CFD to model airflow and particulate transport. The modeling results are then compared with field validation studies to both inform and validate the modeling assumptions. Based on the results of the field tests, the modeling assumptions and boundary conditions are refined, and the process is repeated until the results are found to be reliable with a high level of confidence.
New earth system model for optical performance evaluation of space instruments.
Ryu, Dongok; Kim, Sug-Whan; Breault, Robert P
2017-03-06
In this study, a new global earth system model is introduced for evaluating the optical performance of space instruments. Simultaneous imaging and spectroscopic results are provided using this global earth system model with fully resolved spatial, spectral, and temporal coverage of sub-models of the Earth. The sun sub-model is a Lambertian scattering sphere with a 6-h scale and 295 lines of solar spectral irradiance. The atmospheric sub-model has a 15-layer three-dimensional (3D) ellipsoid structure. The land sub-model uses spectral bidirectional reflectance distribution functions (BRDF) defined by a semi-empirical parametric kernel model. The ocean is modeled with the ocean spectral albedo after subtracting the total integrated scattering of the sun-glint scatter model. A hypothetical two-mirror Cassegrain telescope with a 300-mm-diameter aperture and a 21.504 mm × 21.504 mm focal plane imaging instrument is designed. The simulated image results are compared with observational data from HRI-VIS measurements during the EPOXI mission for approximately 24 h from UTC Mar. 18, 2008. Next, the defocus mapping and edge spread function (ESF) measurement results show that the distance between the primary and secondary mirrors increases by 55.498 μm from the diffraction-limited condition. The shift of the focal plane is determined to be 5.813 mm shorter than that of the defocused focal plane, and this result is confirmed through the estimation of point spread function (PSF) measurements. This study shows that an earth system model combined with an instrument model is a powerful tool that can greatly help the development phase of instrument missions.
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill in areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model achieves a CV R² of 0.81, reflecting higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
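As a rough illustration of the idea (not the paper's Bayesian hierarchical implementation), the posterior mean of a Gaussian process over 1-D "monitor" locations takes only a few lines of linear algebra. The coordinates, concentrations, and kernel parameters below are all hypothetical:

```python
import numpy as np

def rbf_kernel(a, b, ell=1.0, sigma2=5.0):
    """Squared-exponential covariance between 1-D coordinate arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sigma2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(x_train, y_train, x_new, noise=0.1, ell=1.0):
    """Posterior mean of a GP (constant-mean prior) under iid Gaussian noise."""
    mu = y_train.mean()                       # centre the data first
    K = rbf_kernel(x_train, x_train, ell) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_new, x_train, ell)
    alpha = np.linalg.solve(K, y_train - mu)  # K^{-1}(y - mu) via a solve
    return mu + K_star @ alpha

# Hypothetical monitor coordinates and PM2.5 concentrations (ug/m3)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 12.0, 15.0, 13.0, 11.0])
x_grid = np.linspace(0.0, 4.0, 9)
print(gp_posterior_mean(x, y, x_grid).round(2))
```

The covariance function plays the role the paper assigns to the spatial random effect: nearby locations borrow strength from each other, while the noise term keeps the surface from interpolating measurement error exactly.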
Comparison of Survival Models for Analyzing Prognostic Factors in Gastric Cancer Patients
Habibi, Danial; Rafiei, Mohammad; Chehrei, Ali; Shayan, Zahra; Tafaqodi, Soheil
2018-03-27
Objective: There are a number of models for determining risk factors for survival of patients with gastric cancer. This study was conducted to select the model showing the best fit with the available data. Methods: Cox regression and parametric models (Exponential, Weibull, Gompertz, Log normal, Log logistic and Generalized Gamma) were utilized in unadjusted and adjusted forms to detect factors influencing mortality of patients. Comparisons were made with the Akaike Information Criterion (AIC) using STATA 13 and R 3.1.3 software. Results: The results of this study indicated that all parametric models outperform the Cox regression model. The Log normal, Log logistic and Generalized Gamma models provided the best performance in terms of AIC values (179.2, 179.4 and 181.1, respectively). On unadjusted analysis, the results of the Cox regression and parametric models indicated stage, grade, largest diameter of metastatic nest, largest diameter of LM, number of involved lymph nodes and the largest ratio of metastatic nests to lymph nodes to be variables influencing the survival of patients with gastric cancer. On adjusted analysis, according to the best model (log normal), grade was found to be the significant variable. Conclusion: The results suggested that all parametric models outperform the Cox model. The log normal model provides the best fit and is a good substitute for Cox regression.
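Model selection by AIC can be sketched as follows. This example fits a few parametric distributions to hypothetical, uncensored survival times with scipy and compares AIC = 2k − 2·lnL; real survival analysis (including this study's) would also handle censoring, which this sketch omits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical uncensored survival times (months), lognormal by construction
times = rng.lognormal(mean=3.0, sigma=0.6, size=120)

def aic(dist, data, **fit_kwargs):
    """AIC = 2k - 2*logL for a scipy.stats distribution fit by MLE."""
    params = dist.fit(data, **fit_kwargs)
    loglik = dist.logpdf(data, *params).sum()
    k = len(params) - len(fit_kwargs)   # parameters actually estimated
    return 2 * k - 2 * loglik

candidates = {
    "lognormal":   aic(stats.lognorm, times, floc=0),
    "weibull":     aic(stats.weibull_min, times, floc=0),
    "exponential": aic(stats.expon, times, floc=0),
}
best = min(candidates, key=candidates.get)
print(candidates, "-> best:", best)
```

With data generated from a lognormal, the lognormal family should generally attain the lowest AIC, mirroring the study's finding that the log normal model fit best.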
The Glasgow-Maastricht foot model, evaluation of a 26 segment kinematic model of the foot.
Oosterwaal, Michiel; Carbes, Sylvain; Telfer, Scott; Woodburn, James; Tørholm, Søren; Al-Munajjed, Amir A; van Rhijn, Lodewijk; Meijer, Kenneth
2016-01-01
Accurately measuring intrinsic foot kinematics using skin-mounted markers is difficult, limited in part by the physical dimensions of the foot. Existing kinematic foot models solve this problem by combining multiple bones into idealized rigid segments. This study presents a novel foot model that allows the motion of the 26 bones to be individually estimated via a combination of partial joint constraints and coupling of the motion of separate joints using kinematic rhythms. Segmented CT data from one healthy subject were used to create a template Glasgow-Maastricht foot model (GM-model). Following this, the template was scaled to produce subject-specific models for five additional healthy participants using a surface scan of the foot and ankle. Forty-three skin-mounted markers, mainly positioned around the foot and ankle, were used to capture the stance phase of the right foot of the six healthy participants during walking. The GM-model was then applied to calculate the intrinsic foot kinematics. Distinct motion patterns were found for all joints. The variability in outcome depended on the location of the joint, with reasonable results for sagittal plane motions and poor results for transverse plane motions. The results of the GM-model were comparable with the existing literature, including bone pin studies, with respect to the range of motion, motion pattern and timing of the motion in the studied joints. This novel model is the most complete kinematic model to date. Further evaluation of the model is warranted.
Construction and validation of a three-dimensional finite element model of degenerative scoliosis.
Zheng, Jie; Yang, Yonghong; Lou, Shuliang; Zhang, Dongsheng; Liao, Shenghui
2015-12-24
With the aging of the population, the incidence rate of degenerative scoliosis (DS) is increasing. In recent years, increasing research on this topic has been carried out, yet biomechanical research on the subject remains rare, and an in vitro biomechanical model of DS is hardly available. The objective of this study was to develop and validate a complete three-dimensional finite element model of DS in order to build a digital platform for further biomechanical study. A 55-year-old female DS patient (Suer Pan, ID number P141986) was selected for this study. This study was performed in accordance with the ethical standards of the Declaration of Helsinki and its amendments and was approved by the local ethics committee (117 Hospital of PLA ethics committee). Spiral computed tomography (CT) scanning was conducted on the patient's lumbar spine from T12 to S1. CT images were then imported into a finite element modeling system. A three-dimensional solid model was then formed from segmentation of the CT scan. The three-dimensional model of each vertebra was then meshed, and material properties were assigned to each element according to the pathological characteristics of DS. Loads and boundary conditions were then applied in such a manner as to simulate in vitro biomechanical experiments conducted on lumbar segments. The results of the model were then compared with experimental results in order to validate the model. An integral three-dimensional finite element model of DS was built successfully, consisting of 113,682 solid elements, 686 cable elements, 33,329 shell elements, 4968 target elements, and 4968 contact elements, totaling 157,635 elements and 197,374 nodes. The model accurately described the physical features of DS and was geometrically similar to the object of study. The results of analysis with the finite element model agreed closely with the in vitro experiments, validating the accuracy of the model.
The three-dimensional finite element model of DS built in this study is clear, reliable, and effective for further biomechanical simulation study of DS.
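The mesh, material-assignment, boundary-condition, and solve steps above generalize across finite element studies. A minimal 1-D sketch (an axially loaded elastic bar with illustrative properties, not spinal tissue) shows the same workflow end to end:

```python
import numpy as np

# Minimal 1-D finite element sketch: an axially loaded elastic bar.
# Values are illustrative material/geometry choices, not spinal properties.
E, A, L = 200e3, 10.0, 100.0   # Young's modulus (MPa), area (mm^2), length (mm)
n_el = 10
n_nodes = n_el + 1
le = L / n_el

# "Meshing" + "material assignment": identical 2-node bar elements
K = np.zeros((n_nodes, n_nodes))
ke = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
for e in range(n_el):
    K[e:e + 2, e:e + 2] += ke          # assemble global stiffness matrix

# "Loads and boundary conditions": 1 kN at the free end, node 0 clamped
F = np.zeros(n_nodes)
F[-1] = 1000.0

u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])   # solve K u = F on free nodes

# "Validation": analytic tip displacement of a uniform bar is F*L/(E*A)
print(u[-1], 1000.0 * L / (E * A))
```

For this load case linear elements are nodally exact, so the FEM tip displacement matches the analytic value; validating against a known solution is the 1-D analogue of the in vitro comparison described above.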
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Himanshu; Palmintier, Bryan S; Krad, Ibrahim
This paper presents the results of a distributed solar PV impact assessment study that was performed using a synthetic integrated transmission (T) and distribution (D) model. The primary objective of the study was to present a new approach for distributed solar PV impact assessment in which, along with detailed models of transmission and distribution networks, consumer loads were modeled using the physics of end-use equipment, and distributed solar PV was geographically dispersed and connected to the secondary distribution networks. The highlights of the study results were (i) an increase in the Area Control Error (ACE) at high penetration levels of distributed solar PV; and (ii) differences in distribution voltage profiles and voltage regulator operations between integrated T&D and distribution-only simulations.
The development of a simulation model of the treatment of coronary heart disease.
Cooper, Keith; Davies, Ruth; Roderick, Paul; Chase, Debbie; Raftery, James
2002-11-01
A discrete event simulation models the progress of patients who have had a coronary event through their treatment pathways and subsequent coronary events. The main risk factors in the model are age, sex, history of previous events and the extent of the coronary vessel disease. The model parameters are based on data collected from epidemiological studies of incidence and prognosis, efficacy studies, national surveys and treatment audits. The simulation results were validated against different sources of data. The initial results show that increasing revascularisation has considerable implications for resource use but has little impact on patient mortality.
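A patient-pathway model of this kind is, at its core, an event queue. The toy sketch below (with an assumed event rate, not the study's epidemiological parameters) shows the discrete event mechanics using Python's heapq:

```python
import heapq
import random

random.seed(1)

def simulate(n_patients, horizon_years, annual_event_rate):
    """Toy discrete event simulation: each patient's next coronary event is
    drawn from an exponential distribution; events inside the horizon are
    counted and a follow-up event is scheduled. The rate is illustrative."""
    events = []                                # (time, patient_id) min-heap
    for pid in range(n_patients):
        t = random.expovariate(annual_event_rate)
        heapq.heappush(events, (t, pid))
    count = 0
    while events and events[0][0] < horizon_years:
        t, pid = heapq.heappop(events)
        count += 1
        # schedule this patient's next event after the current one
        heapq.heappush(events, (t + random.expovariate(annual_event_rate), pid))
    return count

print(simulate(n_patients=1000, horizon_years=5, annual_event_rate=0.1))
```

A full model like the paper's would make the inter-event rate depend on age, sex, event history, and treatment received, and would attach costs and mortality outcomes to each event.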
The Effects of Science Models on Students' Understanding of Scientific Processes
NASA Astrophysics Data System (ADS)
Berglin, Riki Susan
This action research study investigated how the use of science models affected fifth-grade students' ability to transfer their science curriculum to a deeper understanding of scientific processes. This study implemented a variety of science models into a chemistry unit throughout a 6-week study. The research question addressed was: In what ways do using models to learn and teach science help students transfer classroom knowledge to a deeper understanding of the scientific processes? Qualitative and quantitative data were collected through pre- and post-science interest inventories, observation field notes, student work samples, focus group interviews, and chemistry unit tests. These data collection tools assessed students' attitudes, engagement, and content knowledge throughout their chemistry unit. The data indicate that the model-based instruction program helped with students' engagement in the lessons and understanding of chemistry content. The results also showed that students displayed positive attitudes toward using science models.
Ferrets as Models for Influenza Virus Transmission Studies and Pandemic Risk Assessments
Barclay, Wendy; Barr, Ian; Fouchier, Ron A.M.; Matsuyama, Ryota; Nishiura, Hiroshi; Peiris, Malik; Russell, Charles J.; Subbarao, Kanta; Zhu, Huachen
2018-01-01
The ferret transmission model is extensively used to assess the pandemic potential of emerging influenza viruses, yet experimental conditions and reported results vary among laboratories. Such variation can be a critical consideration when contextualizing results from independent risk-assessment studies of novel and emerging influenza viruses. To streamline interpretation of data generated in different laboratories, we provide a consensus on experimental parameters that define risk-assessment experiments of influenza virus transmissibility, including disclosure of variables known or suspected to contribute to experimental variability in this model, and advocate adoption of more standardized practices. We also discuss current limitations of the ferret transmission model and highlight continued refinements and advances to this model ongoing in laboratories. Understanding, disclosing, and standardizing the critical parameters of ferret transmission studies will improve the comparability and reproducibility of pandemic influenza risk assessment and increase the statistical power and, perhaps, accuracy of this model. PMID:29774862
Modeling the CAPTEX Vertical Tracer Concentration Profiles.
NASA Astrophysics Data System (ADS)
Draxler, Roland R.; Stunder, Barbara J. B.
1988-05-01
Perfluorocarbon tracer concentration profiles measured by aircraft 600-900 km downwind of the release locations during CAPTEX are discussed and compared with some model results. In general, the concentrations decreased with height in the upper half of the boundary layer where the aircraft measurements were made. The results of a model sensitivity study suggested that the shape of the profile was primarily due to winds increasing with height and relative position of the sampling with respect to the upwind and downwind edge of the plume. Further modeling studies showed that relatively simple vertical mixing parameterizations could account for the complex vertical plume structure when the model had sufficient vertical resolution. In general, the model performed better with slower winds and corresponding longer transport times.
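A "relatively simple vertical mixing parameterization" of the kind described can be sketched as explicit vertical diffusion of a tracer profile; the grid, eddy diffusivity, and initial plume below are illustrative, not CAPTEX values:

```python
import numpy as np

def diffuse_profile(c, kz, dz, dt, steps):
    """Explicit vertical diffusion of a tracer profile with constant eddy
    diffusivity kz and zero-flux boundaries; stable for kz*dt/dz**2 <= 0.5."""
    r = kz * dt / dz**2
    assert r <= 0.5, "explicit scheme would be unstable"
    c = c.astype(float).copy()
    for _ in range(steps):
        new = c.copy()
        new[1:-1] = c[1:-1] + r * (c[2:] - 2 * c[1:-1] + c[:-2])
        new[0] = c[0] + r * (c[1] - c[0])       # zero-flux bottom boundary
        new[-1] = c[-1] + r * (c[-2] - c[-1])   # zero-flux top boundary
        c = new
    return c

z = np.linspace(0.0, 1000.0, 51)           # height (m), 20 m spacing
c0 = np.exp(-((z - 100.0) / 80.0) ** 2)    # plume released near the surface
c1 = diffuse_profile(c0, kz=10.0, dz=20.0, dt=10.0, steps=3600)  # ~10 h
print(c1.max() < c0.max())                 # mixing smooths the profile
```

The zero-flux boundaries conserve total tracer mass, so the scheme redistributes concentration vertically without creating or destroying it; sufficient vertical resolution is what lets such a simple closure reproduce structured profiles.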
NASA Astrophysics Data System (ADS)
Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin
2018-04-01
This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three-phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies and from our own experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be used with confidence to predict the water contents at different soil depths and temperatures.
Logical reasoning versus information processing in the dual-strategy model of reasoning.
Markovits, Henry; Brisson, Janie; de Chantal, Pier-Luc
2017-01-01
One of the major debates concerning the nature of inferential reasoning is between counterexample-based strategies, such as mental model theory, and statistical strategies underlying probabilistic models. The dual-strategy model proposed by Verschueren, Schaeken, & d'Ydewalle (2005a, 2005b), which suggests that people might have access to both kinds of strategy, has been supported by several recent studies. These have shown that statistical reasoners make inferences by using information about premises to generate a likelihood estimate of conclusion probability. However, while results concerning counterexample reasoners are consistent with a counterexample detection model, these results could equally be interpreted as indicating a greater sensitivity to logical form. In order to distinguish these two interpretations, in Studies 1 and 2 we presented reasoners with Modus ponens (MP) inferences with statistical information about premise strength, and in Studies 3 and 4, naturalistic MP inferences with premises having many disabling conditions. Statistical reasoners accepted the MP inference more often than counterexample reasoners in Studies 1 and 2, while the opposite pattern was observed in Studies 3 and 4. The results show that these strategies must be defined in terms of information processing, with no clear relations to "logical" reasoning. These results have additional implications for the underlying debate about the nature of human reasoning.
NASA Astrophysics Data System (ADS)
Vukovic, Ana; Vujadinovic, Mirjam; Djurdjevic, Vladimir; Cvetkovic, Bojan; Djordjevic, Marija; Ruml, Mirjana; Rankovic-Vasic, Zorica; Przic, Zoran; Stojicic, Djurdja; Krzic, Aleksandra; Rajkovic, Borivoj
2015-04-01
Serbia is a country with relatively small-scale terrain features, with an economy mostly based on local landowners' agricultural production. Climate change analysis must be downscaled accordingly to recognize the climatological features of the farmlands. Climate model simulations and impact studies contribute significantly to future strategic planning in economic development, and therefore impact analysis must be approached with a high level of confidence. This paper presents research on climate change and its impacts in Serbia resulting from the cooperative work of the modeling and user communities. Dynamical downscaling of climate projections for the 21st century with a multi-model approach and statistical bias correction are performed in order to prepare model results for impact studies. The presented results are from simulations performed using the regional EBU-POM model, forced with the A1B and A2 SRES/IPCC (2007) scenarios, with comparative analysis against other regional models, and from the latest high-resolution NMMB simulations forced with the RCP8.5 IPCC scenario (2012). Bias correction of the model results is necessary when the calculated indices are not linearly dependent on the model results and a delta approach to presenting results with respect to present-climate simulations is insufficient. This is most important during summer over the northern part of the country, where model bias produces much higher temperatures and less precipitation; this is known as the "summer drying problem" and is common in regional model simulations over the Pannonian valley. Some of the changes, which are already observed in the present climate, such as higher temperatures and disturbances in the precipitation pattern, lead to present and future advancement of the start of the vegetation period toward earlier dates, associated with an increased risk of late spring frost, an extended vegetation period, disturbed preparation for the rest period, and increased duration and frequency of drought periods.
Based on the projected climate changes an application is proposed of the ensemble seasonal forecasts for early preparation in case of upcoming unfavorable weather conditions. This paper was realized as a part of the projects "Studying climate change and its influence on the environment: impacts, adaptation and mitigation" (43007) and "Assessment of climate change impacts on water resources in Serbia" (37005) financed by the Ministry of Education and Science of the Republic of Serbia within the framework of integrated and interdisciplinary research for the period 2011-2015.
[Verification of the double dissociation model of shyness using the implicit association test].
Fujii, Tsutomu; Aikawa, Atsushi
2013-12-01
The "double dissociation model" of shyness proposed by Asendorpf, Banse, and Mücke (2002) was demonstrated in Japan by Aikawa and Fujii (2011). However, the generalizability of the double dissociation model of shyness was uncertain. The present study examined whether the results reported in Aikawa and Fujii (2011) would be replicated. In Study 1, college students (n = 91) completed explicit self-ratings of shyness and other personality scales. In Study 2, forty-eight participants completed the IAT (Implicit Association Test) for shyness, and their friends (n = 141) rated those participants on various personality scales. The results revealed that only the explicit self-concept ratings predicted other-rated low praise-seeking behavior, sociable behavior and high rejection-avoidance behavior (controlled shy behavior). Only the implicit self-concept measured by the shyness IAT predicted other-rated high interpersonal tension (spontaneous shy behavior). The results of this study are similar to the findings of the previous research, which supports the generalizability of the double dissociation model of shyness.
[The effect of self-reflection on depression mediated by hardiness].
Nakajima, Miho; Hattori, Yosuke; Tanno, Yoshihiko
2015-10-01
Previous studies have shown that two types of private self-consciousness result in opposing effects on depression: self-rumination, which leads to a maladaptive effect, and self-reflection, which leads to an adaptive effect. Although a number of studies have examined the mechanism of the maladaptive effect of self-rumination, only a few studies have examined the mechanism of the adaptive effect of self-reflection. The present study examined the process by which self-reflection affects depression adaptively. Based on previous findings, we proposed a hypothetical model assuming that hardiness acts as a mediator of self-reflection. To test the validity of the model, structural equation modeling analysis was performed with cross-sectional data from 155 undergraduate students. The results suggest that the hypothetical model is valid. According to the present results and previous findings, it is suggested that self-reflection is associated with low levels of depression, mediated by "rich commitment", one component of hardiness.
Three-dimensional dynamical and chemical modelling of the upper atmosphere
NASA Technical Reports Server (NTRS)
Prinn, R. G.; Alyea, F. N.; Cunnold, D. M.
1976-01-01
Progress in coding a 3-D upper atmospheric model and in modeling the ozone perturbation resulting from the shuttle booster exhaust is reported. A time-dependent version of a 2-D model was studied and the sulfur cycle in the stratosphere was investigated. The role of meteorology in influencing stratospheric composition measurements was also studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghil, M.; Kravtsov, S.; Robertson, A. W.
2008-10-14
This project was a continuation of previous work under DOE CCPP funding, in which we had developed a twin approach of probabilistic network (PN) models (sometimes called dynamic Bayesian networks) and intermediate-complexity coupled ocean-atmosphere models (ICMs) to identify the predictable modes of climate variability and to investigate their impacts on the regional scale. We had developed a family of PNs (similar to Hidden Markov Models) to simulate historical records of daily rainfall, and used them to downscale GCM seasonal predictions. Using an idealized atmospheric model, we had established a novel mechanism through which ocean-induced sea-surface temperature (SST) anomalies might influence large-scale atmospheric circulation patterns on interannual and longer time scales; we had found similar patterns in a hybrid coupled ocean-atmosphere-sea-ice model. The goal of this continuation project was to build on these ICM results and PN model development to address prediction of rainfall and temperature statistics at the local scale, associated with global climate variability and change, and to investigate the impact of the latter on coupled ocean-atmosphere modes. Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling, together with the development of associated software; new intermediate coupled models; a new methodology of inverse modeling for linking ICMs with observations and GCM results; and observational studies of decadal and multi-decadal natural climate variability, informed by ICM results.
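The rainfall PNs described are similar to hidden Markov models, whose generative mechanics can be sketched in a few lines: a hidden dry/wet regime evolves as a Markov chain, and daily rainfall is emitted conditionally on the regime. All parameters below are illustrative, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Transition matrix of the hidden weather state:
# P[i, j] = probability of moving from state i to state j
P = np.array([[0.8, 0.2],     # state 0 = dry regime
              [0.4, 0.6]])    # state 1 = wet regime

def simulate_rainfall(n_days):
    """Generate daily rainfall from a 2-state hidden Markov chain with
    state-dependent occurrence probabilities and gamma-distributed amounts."""
    states = np.empty(n_days, dtype=int)
    rain = np.empty(n_days)
    s = 0
    for t in range(n_days):
        s = rng.choice(2, p=P[s])            # hidden state transition
        states[t] = s
        if s == 0:   # dry regime: mostly zero, occasional light rain
            rain[t] = 0.0 if rng.random() < 0.9 else rng.gamma(1.0, 2.0)
        else:        # wet regime: frequent, heavier rain
            rain[t] = 0.0 if rng.random() < 0.3 else rng.gamma(2.0, 5.0)
    return states, rain

states, rain = simulate_rainfall(3650)
print("wet-regime frequency:", round(states.mean(), 2))
```

For downscaling, the hidden-state transition probabilities are typically conditioned on GCM predictors, so the large-scale forecast modulates the local rainfall statistics.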
NASA Astrophysics Data System (ADS)
ElSaadani, M.; Quintero, F.; Goska, R.; Krajewski, W. F.; Lahmers, T.; Small, S.; Gochis, D. J.
2015-12-01
This study examines the performance of different hydrologic models in estimating peak flows over the state of Iowa. In this study I will compare the output of the Iowa Flood Center (IFC) hydrologic model and WRF-Hydro (NFIE configuration) to the observed flows at the USGS stream gauges. During the National Flood Interoperability Experiment I explored the performance of WRF-Hydro over the state of Iowa using different rainfall products, and the resulting hydrographs showed a "flashy" behavior of the model output due to lack of calibration and poor initial flows caused by a short model spin-up period. I would like to expand this study by including a second well-established hydrologic model and more direct rain gauge versus radar rainfall comparisons. The IFC model is expected to outperform WRF-Hydro's out-of-the-box results; however, I will test different calibration options for both the Noah-MP land surface model and RAPID, the routing component of the NFIE-Hydro configuration, to see if this will improve the model results. This study will explore the statistical structure of model output uncertainties across scales (as a function of drainage areas and/or stream orders). I will also evaluate the performance of different radar-based Quantitative Precipitation Estimation (QPE) products (e.g. Stage IV, MRMS and IFC's NEXRAD-based radar rainfall product). Different basins will be evaluated in this study, selected based on size, amount of rainfall received over the basin area, and location. Basin location will be an important factor in this study due to our prior knowledge of the performance of the different NEXRAD radars that cover the region; this will help observe the effect of rainfall biases on stream flows. Another possible addition to this study is to apply controlled spatial error fields to rainfall inputs and observe the propagation of these errors through the stream network.
Comprehensive Analysis Modeling of Small-Scale UAS Rotors
NASA Technical Reports Server (NTRS)
Russell, Carl R.; Sekula, Martin K.
2017-01-01
Multicopter unmanned aircraft systems (UAS), or drones, have continued their explosive growth in recent years. With this growth comes demand for increased performance as the limits of existing technologies are reached. In order to better design multicopter UAS aircraft, better performance prediction tools are needed. This paper presents the results of a study aimed at using the rotorcraft comprehensive analysis code CAMRAD II to model a multicopter UAS rotor in hover. Parametric studies were performed to determine the level of fidelity needed in the analysis code inputs to achieve results that match test data. Overall, the results show that CAMRAD II is well suited to model small-scale UAS rotors in hover. This paper presents the results of the parametric studies as well as recommendations for the application of comprehensive analysis codes to multicopter UAS rotors.
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The object of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study consisted of SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software), which contains ten models. For a first run, modeling the cumulative number of failures versus execution time gave fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made, and the model results were compared with the historical data on the same graph. If a model agrees with actual historical behavior for a set of data, then there is confidence in future predictions for that data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures, and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases; it is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
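Reliability-growth fitting of this kind can be sketched with one member of the NHPP family, the Goel-Okumoto model (an assumed choice here; the report does not state which of the ten SMERFS models produced which result). Hypothetical weekly cumulative failure counts are fit with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t under the Goel-Okumoto NHPP:
    a = total expected faults, b = per-fault detection rate."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts per calendar week of testing
weeks = np.arange(1, 13, dtype=float)
fails = np.array([12, 21, 29, 35, 40, 44, 47, 50, 52, 53, 54, 55], dtype=float)

popt, _ = curve_fit(goel_okumoto, weeks, fails, p0=(60.0, 0.2))
a_hat, b_hat = popt
remaining = a_hat - fails[-1]   # estimated faults not yet found
print(f"a={a_hat:.1f}, b={b_hat:.3f}, est. remaining faults={remaining:.1f}")
```

The fitted asymptote a minus the failures observed so far gives exactly the kind of remaining-error estimate the report describes, and inverting the mean-value function gives the additional testing time needed to reach a target failure count.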
The guinea pig as an animal model for developmental and reproductive toxicology studies.
Rocca, Meredith S; Wehner, Nancy G
2009-04-01
Regulatory guidelines for developmental and reproductive toxicology (DART) studies require selection of "relevant" animal models as determined by kinetic, pharmacological, and toxicological data. Traditionally, rats, mice, and rabbits are the preferred animal models for these studies. However, for test articles that are pharmacologically inactive in the traditional animal models, the guinea pig may be a viable option. This choice should not be made lightly, as guinea pigs have many disadvantages compared to the traditional species, including limited historical control data, variability in pregnancy rates, small and variable litter size, long gestation, relative maturity at birth, and difficulty in dosing and breeding. This report describes methods for using guinea pigs in DART studies and provides results of positive and negative controls. Standard study designs and animal husbandry methods were modified to allow mating on the postpartum estrus in fertility studies and were used for producing cohorts of pregnant females for developmental studies. A positive control study with the pregnancy-disrupting agent mifepristone resulted in the anticipated failure of embryo implantation and supported the use of the guinea pig model. Control data for reproductive endpoints collected from 5 studies are presented. In cases where the traditional animal models are not relevant, the guinea pig can be used successfully for DART studies. (c) 2009 Wiley-Liss, Inc.
PDF turbulence modeling and DNS
NASA Technical Reports Server (NTRS)
Hsu, A. T.
1992-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in the context of probability density function (pdf) methods. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply, since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive. Inertial waves are observed in the simulation results.
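Curl-type C/D mixing can be sketched directly. In the classic model, each selected particle pair jumps fully to its common mean, which is the time discontinuity the paper addresses; a partial-mixing fraction omega < 1 softens the jump. The particle ensemble and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def curl_mixing_step(phi, n_pairs, omega=1.0):
    """One Monte Carlo step of a Curl-type coalescence/dispersion model:
    randomly chosen particle pairs move toward their common mean by a
    fraction omega (omega=1 recovers classic Curl's full coalescence)."""
    n = len(phi)
    for _ in range(n_pairs):
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue                      # self-pairing mixes nothing
        m = 0.5 * (phi[i] + phi[j])
        phi[i] += omega * (m - phi[i])
        phi[j] += omega * (m - phi[j])
    return phi

# Scalar pdf relaxing in homogeneous turbulence: start from a two-delta pdf
phi = np.where(rng.random(10000) < 0.5, 0.0, 1.0).astype(float)
mean0, var0 = phi.mean(), phi.var()
for _ in range(20):
    phi = curl_mixing_step(phi, n_pairs=2000, omega=1.0)
print(mean0, phi.mean())    # the scalar mean is conserved by pairwise mixing
print(var0, phi.var())      # the scalar variance decays toward zero
```

Pairwise mixing conserves the scalar mean while destroying variance, but with omega=1 each particle's value changes discontinuously at mixing events; a time-continuous variant of the kind the paper proposes removes those jumps so reaction can be integrated simultaneously with mixing.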
Host Model Uncertainty in Aerosol Radiative Effects: the AeroCom Prescribed Experiment and Beyond
NASA Astrophysics Data System (ADS)
Stier, Philip; Schutgens, Nick; Bian, Huisheng; Boucher, Olivier; Chin, Mian; Ghan, Steven; Huneeus, Nicolas; Kinne, Stefan; Lin, Guangxing; Myhre, Gunnar; Penner, Joyce; Randles, Cynthia; Samset, Bjorn; Schulz, Michael; Yu, Hongbin; Zhou, Cheng; Bellouin, Nicolas; Ma, Xiaoyan; Yu, Fangqun; Takemura, Toshihiko
2013-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. Multi-model "diversity" in estimates of the aerosol radiative effect is often perceived as a measure of the uncertainty in modelling aerosol itself. However, current aerosol models vary considerably in model components relevant for the calculation of aerosol radiative forcings and feedbacks, and the associated "host-model uncertainties" are generally convoluted with the actual uncertainty in aerosol modelling. In the AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties in aerosol forcing experiments through prescription of identical aerosol radiative properties in eleven participating models. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks, or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainty is an important component of aerosol forcing uncertainty that requires further attention. However, uncertainties in aerosol radiative effects also include short-term and long-term feedback processes that will be systematically explored in future intercomparison studies. Here we will present an overview of the proposals for discussion and results from early scoping studies.
ERIC Educational Resources Information Center
Bock, Geoffrey; And Others
This segment of the national evaluation study of the Follow Through Planned Variation Model describes each of the 17 models represented in the study and reports the results of analyses of 4 years of student performance data for each model. First a purely descriptive synthesis of findings is presented for each model, with interpretation of the data…
A model study of aggregates composed of spherical soot monomers with an acentric carbon shell
NASA Astrophysics Data System (ADS)
Luo, Jie; Zhang, Yongming; Zhang, Qixing
2018-01-01
Influences of morphology on the optical properties of soot particles have gained increasing attention. However, studies on how the way primary particles are coated affects the optical properties are few. To understand this effect, coated soot particles were simulated using the acentric core-shell monomer (ACM) model, which was generated by randomly displacing the cores of the concentric core-shell monomer (CCM) model. Single scattering properties of the CCM model with identical fractal parameters were first calculated 50 times to evaluate the optical diversity among different realizations of fractal aggregates with identical parameters. The results show that this diversity cannot be eliminated by averaging over ten random realizations. To preserve the fractal characteristics, 10 realizations of each model were generated from the same 10 parent fractal aggregates, and the results were then averaged over each set of 10 realizations. The single scattering properties of all models were calculated using the numerically exact multiple-sphere T-matrix (MSTM) method. It is found that the single scattering properties of randomly coated soot particles calculated using the ACM model are extremely close to those of the CCM model and of the homogeneous aggregate (HA) model based on Maxwell-Garnett effective medium theory. Our results differ from previous studies; the differences reported there may have been caused by fractal characteristics rather than by the coating models. Our findings indicate that how the individual primary particles are coated has little effect on the single scattering properties of soot particles with acentric core-shell monomers. This work provides guidance for scattering model simplification and model selection.
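The fractal aggregates discussed above obey the scaling law N = k_f (R_g/a)^{D_f}, relating monomer number N, radius of gyration R_g, monomer radius a, and fractal dimension D_f. The sketch below (illustrative only; the paper's aggregates are built with cluster-cluster algorithms not reproduced here) recovers D_f from monomer coordinates by a log-log fit; straight chains of touching monomers should give D_f close to 1.

```python
import math

def radius_of_gyration(centers):
    """Radius of gyration of monomer centres (point-mass approximation)."""
    n = len(centers)
    cx = [sum(c[k] for c in centers) / n for k in range(3)]
    return math.sqrt(sum(sum((c[k] - cx[k]) ** 2 for k in range(3))
                         for c in centers) / n)

def fractal_dimension(aggregates, a=1.0):
    """Least-squares slope of ln N versus ln(Rg/a): the exponent Df in
    the scaling law N = kf * (Rg/a)**Df."""
    xs = [math.log(radius_of_gyration(agg) / a) for agg in aggregates]
    ys = [math.log(len(agg)) for agg in aggregates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```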
Vibration analysis of paper machine's asymmetric tube roll supported by spherical roller bearings
NASA Astrophysics Data System (ADS)
Heikkinen, Janne E.; Ghalamchi, Behnam; Viitala, Raine; Sopanen, Jussi; Juhanko, Jari; Mikkola, Aki; Kuosmanen, Petri
2018-05-01
This paper presents a simulation method that is used to study subcritical vibrations of a tube roll in a paper machine. This study employs asymmetric 3D beam elements based on the Timoshenko beam theory. An asymmetric beam model accounts for varying stiffness and mass distributions. Additionally, a detailed rolling element bearing model defines the excitations arising from the set of spherical roller bearings at both ends of the rotor. The results obtained from the simulation model are compared against the results from the measurements. The results indicate that the waviness of the bearing rolling surfaces contributes significantly to the subcritical vibrations while the asymmetric properties of the tube roll have only a fractional effect on the studied vibrations.
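The bearing-induced excitations mentioned above occur at well-known kinematic defect frequencies of a rolling-element bearing. The standard textbook formulas for a stationary outer ring are sketched below; the geometry values used in testing are hypothetical and not those of the paper-machine bearings.

```python
import math

def bearing_frequencies(n_rollers, f_shaft_hz, d_roller, d_pitch,
                        contact_deg=0.0):
    """Characteristic defect frequencies of a rolling-element bearing
    with a stationary outer ring (classical kinematic formulas)."""
    r = (d_roller / d_pitch) * math.cos(math.radians(contact_deg))
    bpfo = 0.5 * n_rollers * f_shaft_hz * (1.0 - r)  # outer-race pass
    bpfi = 0.5 * n_rollers * f_shaft_hz * (1.0 + r)  # inner-race pass
    ftf = 0.5 * f_shaft_hz * (1.0 - r)               # cage (train) frequency
    bsf = (d_pitch / (2.0 * d_roller)) * f_shaft_hz * (1.0 - r * r)  # roller spin
    return {"BPFO": bpfo, "BPFI": bpfi, "FTF": ftf, "BSF": bsf}
```

A useful sanity check on any implementation: the outer- and inner-race pass frequencies always sum to the number of rollers times the shaft frequency, and the cage frequency equals the outer-race pass frequency divided by the roller count.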
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soulami, Ayoub; Lavender, Curt A.; Paxton, Dean M.
2015-06-15
Pacific Northwest National Laboratory (PNNL) has been investigating manufacturing processes for the uranium-10% molybdenum alloy plate-type fuel for high-performance research reactors in the United States. This work supports the U.S. Department of Energy National Nuclear Security Administration’s Office of Material Management and Minimization Reactor Conversion Program. This report documents modeling results of PNNL’s efforts to perform finite-element simulations to predict roll-separating forces for various rolling mill geometries for PNNL, Babcock & Wilcox Co., Y-12 National Security Complex, Los Alamos National Laboratory, and Idaho National Laboratory. The model developed and presented in a previous report has been subjected to a further validation study using new sets of experimental data generated from a rolling mill at PNNL. Simulation results of both hot rolling and cold rolling of uranium-10% molybdenum coupons have been compared with experimental results. The model was used to predict roll-separating forces at different temperatures and reductions for five rolling mills within the National Nuclear Security Administration Fuel Fabrication Capability project. This report also presents initial results of a finite-element model microstructure-based approach to study the surface roughness at the interface between zirconium and uranium-10% molybdenum.
Wright, Kevin B; King, Shawn; Rosenberg, Jenny
2014-01-01
This study investigated the influence of social support and self-verification on loneliness, depression, and stress among 477 college students. The authors propose and test a theoretical model using structural equation modeling. The results indicated empirical support for the model, with self-verification mediating the relation between social support and health outcomes. The results have implications for social support and self-verification research, which are discussed along with directions for future research and limitations of the study.
NASA Technical Reports Server (NTRS)
Picasso, G. O.; Basili, V. R.
1982-01-01
It is noted that previous investigations into the applicability of the Rayleigh curve model to medium-scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: the uniqueness of the environment studied, the influence of holidays, varying management techniques, and differences in the data studied.
Electron kinematics in a plasma focus
NASA Technical Reports Server (NTRS)
Hohl, F.; Gary, S. P.
1977-01-01
The results of numerical integrations of the three-dimensional relativistic equations of motion of electrons subject to given electric and magnetic fields are presented. Fields due to two different models are studied: (1) a circular distribution of current filaments, and (2) a uniform current distribution; both the collapse and the current reduction phases are studied in each model. Decreasing current in the uniform current model yields 100 keV electrons accelerated toward the anode and, as for earlier ion computations, provides general agreement with experimental results.
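The kind of integration described above, the three-dimensional relativistic equations of motion for an electron in prescribed fields, can be sketched in normalised units (m = c = |q| = 1) with a fourth-order Runge-Kutta step. The field configuration and step sizes below are illustrative and not those of the plasma-focus current models in the paper.

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def deriv(state, E, B, q=-1.0):
    """Relativistic equations of motion, normalised units (m = c = 1):
    dx/dt = p/gamma, dp/dt = q (E + v x B)."""
    p = state[3:]
    gamma = math.sqrt(1.0 + sum(pi * pi for pi in p))
    v = tuple(pi / gamma for pi in p)
    f = cross(v, B)
    return v + tuple(q * (E[k] + f[k]) for k in range(3))

def rk4_step(state, dt, E, B, q=-1.0):
    """One classical fourth-order Runge-Kutta step for (x, p)."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state, E, B, q)
    k2 = deriv(add(state, k1, dt / 2), E, B, q)
    k3 = deriv(add(state, k2, dt / 2), E, B, q)
    k4 = deriv(add(state, k3, dt), E, B, q)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))
```

In a pure magnetic field the momentum magnitude (and hence the particle energy) should stay constant, which gives a simple correctness check on the integrator.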
NASA Technical Reports Server (NTRS)
Levison, W. H.; Baron, S.
1984-01-01
Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.
Research on the decision-making model of land-use spatial optimization
NASA Astrophysics Data System (ADS)
He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu
2009-10-01
Using the optimized landscape pattern and land-use structure as constraints on the cellular automata (CA) simulation, a decision-making model of land-use spatial optimization is established that couples the landscape pattern model with cellular automata to realize quantitative and spatial land-use optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.
Numerical Study of Mixing Thermal Conductivity Models for Nanofluid Heat Transfer Enhancement
NASA Astrophysics Data System (ADS)
Pramuanjaroenkij, A.; Tongkratoke, A.; Kakaç, S.
2018-01-01
Researchers have paid attention to nanofluid applications, since nanofluids have revealed their potential as working fluids in many thermal systems. Numerical studies of convective heat transfer in nanofluids can be based on considering them as single- or two-phase fluids. This work is focused on improving the single-phase nanofluid model performance, since this model requires less calculation time and is less complicated; it utilizes the mixing thermal conductivity model, which combines static and dynamic parts applied alternately in the simulation domain. The in-house numerical program has been developed to analyze the effects of the grid nodes, effective viscosity model, boundary-layer thickness, and of the mixing thermal conductivity model on the nanofluid heat transfer enhancement. CuO-water, Al2O3-water, and Cu-water nanofluids are chosen, and their laminar fully developed flows through a rectangular channel are considered. The influence of the effective viscosity model on the nanofluid heat transfer enhancement is estimated through the average differences between the numerical and experimental results for the nanofluids mentioned. The nanofluid heat transfer enhancement results show that the mixing thermal conductivity model consisting of the Maxwell model as the static part and the Yu and Choi model as the dynamic part, applied to all three nanofluids, brings the numerical results closer to the experimental ones. The average differences between those results for CuO-water, Al2O3-water, and Cu-water nanofluid flows are 3.25, 2.74, and 3.02%, respectively. The mixing thermal conductivity model has been proved to increase the accuracy of the single-phase nanofluid simulation and to reveal its potential in single-phase nanofluid numerical studies.
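The static part named above, the Maxwell model, has a well-known closed form for the effective conductivity of a dilute suspension of spheres. A minimal sketch of that static part only (the dynamic Yu-Choi contribution and the alternating mixing rule are not reproduced here):

```python
def maxwell_k_eff(k_fluid, k_particle, phi):
    """Maxwell effective thermal conductivity of a dilute suspension of
    spheres with particle volume fraction phi (static part only)."""
    if not 0.0 <= phi < 1.0:
        raise ValueError("volume fraction must lie in [0, 1)")
    num = k_particle + 2.0 * k_fluid + 2.0 * phi * (k_particle - k_fluid)
    den = k_particle + 2.0 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den
```

Two limits make good checks: at zero loading the formula returns the base-fluid conductivity, and when particle and fluid conductivities are equal it returns that common value for any loading.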
Watershed erosion modeling using the probability of sediment connectivity in a gently rolling system
NASA Astrophysics Data System (ADS)
Mahoney, David Tyler; Fox, James Forrest; Al Aamery, Nabil
2018-06-01
Sediment connectivity has been shown in recent years to explain how the watershed configuration controls sediment transport. However, we find no studies that develop a watershed erosion modeling framework based on sediment connectivity, and few, if any, studies have quantified sediment connectivity for gently rolling systems. We develop a new predictive sediment connectivity model that relies on the intersecting probabilities for sediment supply, detachment, transport, and buffers to sediment transport, which is integrated in a watershed erosion model framework. The model predicts sediment flux temporally and spatially across a watershed using field reconnaissance results, a high-resolution digital elevation model, a hydrologic model, and shear-based erosion formulae. Model results validate the capability of the model to predict erosion pathways causing sediment connectivity. More notably, disconnectivity dominates the gently rolling watershed across all morphologic levels of the uplands, including microtopography from low-energy undulating surfaces across the landscape, swales and gullies active only in the highest events, karst sinkholes that disconnect drainage areas, and floodplains that de-couple the hillslopes from the stream corridor. Results show that sediment connectivity is predicted on about 2% or more of the watershed's area on 37 days of the year, with the remaining days showing very little or no connectivity. Only 12.8 ± 0.7% of the gently rolling watershed shows sediment connectivity on the wettest day of the study year. Results also highlight the importance of urban/suburban sediment pathways in gently rolling watersheds; dynamic and longitudinal distributions of sediment connectivity might be further investigated in future work. We suggest the method herein provides the modeler with an added tool to account for sediment transport criteria and has the potential to reduce computational costs in watershed erosion modeling.
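The intersecting-probabilities idea above can be sketched as a per-cell calculation. The version below treats the four component probabilities as independent and the buffer as a blocking event; this is an illustrative assumption, and the paper's actual joint formulation may differ.

```python
def connectivity_probability(p_supply, p_detach, p_transport, p_buffer):
    """Probability that a cell is connected on a given day, assuming the
    supply, detachment, and transport events are independent and a
    buffer blocks transport (illustrative simplification)."""
    for p in (p_supply, p_detach, p_transport, p_buffer):
        if not 0.0 <= p <= 1.0:
            raise ValueError("probabilities must lie in [0, 1]")
    return p_supply * p_detach * p_transport * (1.0 - p_buffer)
```

Applied over every cell of a gridded watershed for each day of a hydrologic simulation, such a product yields the kind of spatio-temporal connectivity maps the abstract describes.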
Panagiotopoulou, O.; Wilshin, S. D.; Rayfield, E. J.; Shefelbine, S. J.; Hutchinson, J. R.
2012-01-01
Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form–function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810
NASA Astrophysics Data System (ADS)
Ghotbi, Saba; Sotoudeheian, Saeed; Arhami, Mohammad
2016-09-01
Satellite remote sensing products of AOD from MODIS, along with appropriate meteorological parameters, were used to develop statistical models and estimate ground-level PM10. Most previous studies obtained meteorological data from synoptic weather stations, with rather sparse spatial distribution, and used it along with the 10 km AOD product to develop statistical models applicable to PM variations at regional scale (resolution of ≥10 km). In the current study, meteorological parameters were simulated at 3 km resolution using the WRF model and used along with the rather new 3 km AOD product (launched in 2014). The resulting PM statistical models were assessed for a polluted and largely variable urban area, Tehran, Iran. Despite the critical particulate pollution problem, very few PM studies have been conducted in this area. Direct PM-AOD associations were rather poor, owing to factors such as variations in particle optical properties, in addition to the bright-background problem for satellite data, as the studied area is located in the semi-arid areas of the Middle East. The statistical approach of linear mixed effects (LME) was used, and three types of statistical models, including a single-variable LME model (using AOD as the independent variable) and multiple-variable LME models using meteorological data from two sources, the WRF model and synoptic stations, were examined. Meteorological simulations were performed using a multiscale approach and an appropriate physics configuration for the studied region, and the results showed rather good agreement with recordings of the synoptic stations. The single-variable LME model was able to explain about 61%-73% of daily PM10 variations, reflecting a rather acceptable performance. Statistical model performance improved when using multivariable LME and incorporating meteorological data as auxiliary variables, particularly with fine-resolution outputs from WRF (R2 = 0.73-0.81).
In addition, PM estimates were mapped at rather fine resolution for the studied city, and the resulting concentration maps were consistent with PM recordings at the existing stations.
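The fixed-effect backbone of the single-variable model above is a linear regression of PM10 on AOD; the LME additionally fits day-specific random intercepts, which are deliberately omitted in this simplified pure-Python sketch. The numbers in the check are synthetic, not Tehran data.

```python
def ols_fit(x, y):
    """Ordinary least squares fit y = a + b*x, returning (a, b, R^2).
    A simplified stand-in for the single-variable LME model: the
    day-specific random intercepts of an LME are ignored here."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

In practice a true mixed-effects fit (e.g. via a dedicated statistics library) is needed to reproduce the day-to-day slope and intercept variation that makes the AOD-PM relationship usable across a full year.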
Impact of Learning Model Based on Cognitive Conflict toward Student’s Conceptual Understanding
NASA Astrophysics Data System (ADS)
Mufit, F.; Festiyed, F.; Fauzan, A.; Lufri, L.
2018-04-01
The problems that often occur in the learning of physics are misconceptions and low understanding of concepts. Misconceptions happen not only to school students, but also to college students and teachers. Existing learning models have not had much impact on improving conceptual understanding or on remedial efforts against student misconceptions. This study aims to assess the impact of a cognitive conflict-based learning model in improving conceptual understanding and remediating student misconceptions. The research method used is design/development research. The product developed is a cognitive conflict-based learning model along with its components. This article reports on product design results, validity tests, and practicality tests. The study resulted in the design of a cognitive conflict-based learning model with 4 learning syntaxes, namely (1) preconception activation, (2) presentation of cognitive conflict, (3) discovery of concepts & equations, and (4) reflection. The results of validity tests by several experts on aspects of content, didactics, appearance, and language indicate very valid criteria. Product trial results also show that the product is very practical to use. Based on pretest and posttest results, the cognitive conflict-based learning model has a good impact on improving conceptual understanding and remediating misconceptions, especially in high-ability students.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Minguet, Pierre J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The influence of two-dimensional finite element modeling assumptions on the debonding prediction for skin-stiffener specimens was investigated. Geometrically nonlinear finite element analyses using two-dimensional plane-stress and plane strain elements as well as three different generalized plane strain type approaches were performed. The computed deflections, skin and flange strains, transverse tensile stresses and energy release rates were compared to results obtained from three-dimensional simulations. The study showed that for strains and energy release rate computations the generalized plane strain assumptions yielded results closest to the full three-dimensional analysis. For computed transverse tensile stresses the plane stress assumption gave the best agreement. Based on this study it is recommended that results from plane stress and plane strain models be used as upper and lower bounds. The results from generalized plane strain models fall between the results obtained from plane stress and plane strain models. Two-dimensional models may also be used to qualitatively evaluate the stress distribution in a ply and the variation of energy release rates and mixed mode ratios with lamination length. For more accurate predictions, however, a three-dimensional analysis is required.
An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.
Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen
2018-06-01
This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in the object-oriented programming language Matlab Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationships. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derived based on literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of the baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Allen, John M.; Elbasiouny, Sherif M.
2018-06-01
Objective. Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. Approach. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Main results. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Significance. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. 
The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.
Manoharan, Prabu; Chennoju, Kiranmai; Ghoshal, Nanda
2015-07-01
BACE1 is an attractive target in Alzheimer's disease (AD) treatment. A rational drug design effort for the inhibition of BACE1 is actively pursued by researchers in both academia and the pharmaceutical industry. This continued effort has led to the steady accumulation of BACE1 crystal structures co-complexed with different classes of inhibitors. This wealth of information is used in this study to develop target-specific proteochemometric models, and these models are exploited for predicting prospective BACE1 inhibitors. The models developed in this study have performed excellently in predicting the computationally generated poses, separately obtained from single and ensemble docking approaches. The simple protein-ligand contact (SPLC) model outperforms the other, more sophisticated high-end models developed during this study in virtual screening performance. In an attempt to account for BACE1 active site flexibility in the predictive models, we included the change in the area of the solvent accessible surface and the change in the volume of the solvent accessible surface in our models. The ensemble and single receptor docking results obtained from this study indicate that structural water-mediated interactions improve the virtual screening results. Also, these waters are essential for recapitulating the bioactive conformation during docking. The proteochemometric models developed in this study can be used for the prediction of BACE1 inhibitors during the early stage of AD drug discovery.
Evolutionary-based approaches for determining the deviatoric stress of calcareous sands
NASA Astrophysics Data System (ADS)
Shahnazari, Habib; Tutunchian, Mohammad A.; Rezvani, Reza; Valizadeh, Fatemeh
2013-01-01
Many hydrocarbon reservoirs are located near oceans which are covered by calcareous deposits. These sediments consist mainly of the remains of marine plants or animals, so calcareous soils can have a wide variety of engineering properties. Due to their local expansion and considerable differences from terrigenous soils, the evaluation of engineering behaviors of calcareous sediments has been a major concern for geotechnical engineers in recent years. Deviatoric stress is one of the most important parameters directly affecting important shearing characteristics of soils. In this study, a dataset of experimental triaxial tests was gathered from two sources. First, the data of previous experimental studies from the literature were gathered. Then, a series of triaxial tests was performed on calcareous sands of the Persian Gulf to develop the dataset. This work resulted in a large database of experimental results on the maximum deviatoric stress of different calcareous sands. To demonstrate the capabilities of evolutionary-based approaches in modeling the deviatoric stress of calcareous sands, two promising variants of genetic programming (GP), multigene genetic programming (MGP) and gene expression programming (GEP), were applied to propose new predictive models. The models' input parameters were the physical and in-situ condition properties of soil and the output was the maximum deviatoric stress (i.e., the axial-deviator stress). The results of statistical analyses indicated the robustness of these models, and a parametric study was also conducted for further verification of the models, in which the resulting trends were consistent with the results of the experimental study. Finally, the proposed models were further simplified by applying a practical geotechnical correlation.
Work-family enrichment and job performance: a constructive replication of affective events theory.
Carlson, Dawn; Kacmar, K Michele; Zivnuska, Suzanne; Ferguson, Merideth; Whitten, Dwayne
2011-07-01
Based on affective events theory (AET), we hypothesize a four-step model of the mediating mechanisms of positive mood and job satisfaction in the relationship between work-family enrichment and job performance. We test this model for both directions of enrichment (work-to-family and family-to-work). We used two samples to test the model using structural equation modeling. Results from Study 1, which included 240 full-time employees, were replicated in Study 2, which included 189 matched subordinate-supervisor dyads. For the work-to-family direction, results from both samples support our conceptual model and indicate mediation of the enrichment-performance relationship for the work-to-family direction of enrichment. For the family-to-work direction, results from the first sample support our conceptual model but results from the second sample do not. Our findings help elucidate mixed findings in the enrichment and job performance literatures and contribute to an understanding of the mechanisms linking these concepts. We conclude with a discussion of the practical and theoretical implications of our findings.
Statistical methods for incomplete data: Some results on model misspecification.
McIsaac, Michael; Cook, R J
2017-02-01
Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
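The inverse probability weighting discussed above can be illustrated with a Horvitz-Thompson-style mean estimator: under missingness at random with known (here, correctly specified) observation probabilities, weighting each observed response by 1/P(observed) removes the bias of the complete-case mean. The simulation in the check is entirely synthetic and unrelated to the breast cancer trial data analysed in the paper.

```python
def complete_case_mean(ys, observed):
    """Naive mean over the observed responses only."""
    obs = [y for y, r in zip(ys, observed) if r]
    return sum(obs) / len(obs)

def ipw_mean(ys, observed, probs):
    """Horvitz-Thompson / inverse-probability-weighted mean: each
    observed response is weighted by 1 / P(observed), which is unbiased
    when the observation probabilities are correctly specified."""
    n = len(ys)
    return sum(y / p for y, r, p in zip(ys, observed, probs) if r) / n
```

When the observation probabilities come from a misspecified auxiliary model, the weights no longer cancel the selection mechanism, which is exactly the limiting-bias setting the paper studies.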
Determination of Kinetic Parameters for the Thermal Decomposition of Parthenium hysterophorus
NASA Astrophysics Data System (ADS)
Dhaundiyal, Alok; Singh, Suraj B.; Hanon, Muammel M.; Rawat, Rekha
2018-02-01
A kinetic study of the pyrolysis process of Parthenium hysterophorus is carried out using thermogravimetric analysis (TGA) equipment. The present study investigates the thermal degradation and the determination of kinetic parameters such as the activation energy E and the frequency factor A using the model-free methods of Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Kissinger, and the model-fitting method of Coats-Redfern. The results derived from the thermal decomposition process divide the decomposition of Parthenium hysterophorus into three main stages: dehydration, active pyrolysis, and passive pyrolysis. The DTG thermograms show that an increase in the heating rate causes the temperature peaks at the maximum weight-loss rate to shift towards a higher temperature regime. The results are compared with the Coats-Redfern (integral) method: the values of kinetic parameters obtained from the model-free methods are in good agreement with each other, whereas the results obtained through the Coats-Redfern model at different heating rates are not promising; the diffusion models, however, provided a good fit to the experimental data.
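The KAS method named above rests on the linearisation ln(β/T_p²) = const − E/(R·T_p), so the activation energy follows from the slope of ln(β/T_p²) against 1/T_p over several heating rates β. The sketch below recovers E from synthetic peak temperatures generated with a known (hypothetical) activation energy; the numbers are illustrative, not the Parthenium results.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def kas_activation_energy(betas, peak_temps):
    """Kissinger-Akahira-Sunose estimate: the slope of ln(beta/T^2)
    versus 1/T equals -E/R, so E = -slope * R (E in J/mol)."""
    xs = [1.0 / t for t in peak_temps]
    ys = [math.log(b / t ** 2) for b, t in zip(betas, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R
```

The FWO method works the same way but linearises ln(β) against 1/T with a slightly different slope factor, which is why the two model-free estimates are usually compared side by side.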
NASA Technical Reports Server (NTRS)
Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.
1976-01-01
Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT-based methods for estimating the land-cover-based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT-based approach is highly cost effective for planning model studies. The conventional approach to defining inputs, based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000; the LANDSAT-based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT-based models gave similar results for discharges and for the annual damages estimated under the no-flood-control, channelization, and detention storage alternatives.
On the accuracy of models for predicting sound propagation in fitted rooms.
Hodgson, M
1990-08-01
The objective of this article is to make a contribution to the evaluation of the accuracy and applicability of models for predicting the sound propagation in fitted rooms such as factories, classrooms, and offices. The models studied are 1:50 scale models; the method-of-images models of Jovicic, Lindqvist, Hodgson, Kurze, and of Lemire and Nicolas; the empirical formula of Friberg; and Ondet and Barbry's ray-tracing model. Sound propagation predictions by the analytic models are compared with the results of sound propagation measurements in a 1:50 scale model and in a warehouse, both containing various densities of approximately isotropically distributed, rectangular-parallelepipedic fittings. The results indicate that the models of Friberg and of Lemire and Nicolas are fundamentally incorrect. While more generally applicable versions exist, the versions of the models of Jovicic and Kurze studied here are found to be of limited applicability since they ignore vertical-wall reflections. The Hodgson and Lindqvist models appear to be accurate in certain limited cases. This preliminary study found the ray-tracing model of Ondet and Barbry to be the most accurate in all the cases studied. Furthermore, it has the necessary flexibility with respect to room geometry, surface-absorption distribution, and fitting distribution. It appears to be the model with the greatest applicability to fitted-room sound propagation prediction.
Systematic Review of Model-Based Economic Evaluations of Treatments for Alzheimer's Disease.
Hernandez, Luis; Ozen, Asli; DosSantos, Rodrigo; Getsios, Denis
2016-07-01
Numerous economic evaluations using decision-analytic models have assessed the cost effectiveness of treatments for Alzheimer's disease (AD) in the last two decades. It is important to understand the methods used in the existing models of AD and how they could impact results, as they could inform new model-based economic evaluations of treatments for AD. The aim of this systematic review was to provide a detailed description of the relevant aspects and components of existing decision-analytic models of AD, identifying areas for improvement and future development, and to conduct a quality assessment of the included studies. We performed a systematic and comprehensive review of cost-effectiveness studies of pharmacological treatments for AD published in the last decade (January 2005 to February 2015) that used decision-analytic models, also including studies considering patients with mild cognitive impairment (MCI). The background information of the included studies and specific information on the decision-analytic models, including their approach and components, assumptions, data sources, analyses, and results, were obtained from each study. A description of how the modeling approaches and assumptions differ across studies, identifying areas for improvement and future development, is provided. Finally, we present our own view of the potential future directions of decision-analytic models of AD and the challenges they might face. The included studies present a variety of different approaches, assumptions, and scopes of decision-analytic models used in the economic evaluation of pharmacological treatments of AD.
The major areas for improvement in future models of AD are to include domains of cognition, function, and behavior, rather than cognition alone; include a detailed description of how data used to model the natural course of disease progression were derived; state and justify the economic model selected and its structural assumptions and limitations; provide a detailed (rather than high-level) description of the cost components included in the model; and report on the face-, internal-, and cross-validity of the model to strengthen the credibility and confidence in model results. The quality scores of most studies were rated as fair to good (average 87.5, range 69.5-100, on a scale of 0-100). Despite the advancements in decision-analytic models of AD, several areas for improvement remain that are necessary to more appropriately and realistically capture the broad nature of AD and the potential benefits of treatments in future models of AD.
The Effect of Integrated Learning Model and Critical Thinking Skill of Science Learning Outcomes
NASA Astrophysics Data System (ADS)
Fazriyah, N.; Supriyati, Y.; Rahayu, W.
2017-02-01
This study aimed to determine the effect of an integrated learning model and critical thinking skill on science learning outcomes. The study was conducted at SDN Kemiri Muka 1 Depok in the fifth grade during the 2014/2015 school year; cluster random sampling was used to select 80 students. Data were obtained through tests and analyzed with two-way analysis of variance (ANOVA) in a 2x2 treatment-by-level design. The results showed that: (1) the science learning outcomes of students given the thematic integrated learning model were higher than those of students given the fragmented learning model; (2) there was an interaction effect between critical thinking skills and the integrated learning model; (3) for students with high critical thinking skills, science learning outcomes were higher with the thematic integrated learning model than with the fragmented learning model; and (4) for students with low critical thinking skills, science learning outcomes were higher with the fragmented learning model. These results indicate that the thematic learning model combined with critical thinking skills can improve students' science learning outcomes.
A one-dimensional model for gas-solid heat transfer in pneumatic conveying
NASA Astrophysics Data System (ADS)
Smajstrla, Kody Wayne
A one-dimensional ODE model, reduced from a higher-dimensional two-fluid model, is developed to study dilute, two-phase (air and solid particles) flows with heat transfer in a horizontal pneumatic conveying pipe. Instead of using constant air properties (e.g., density, viscosity, thermal conductivity) evaluated at the initial flow temperature and pressure, this model uses an iterative approach to couple the air properties with the flow pressure and temperature. Multiple studies comparing constant and variable air density, viscosity, and thermal conductivity are conducted to assess the impact of the changing properties on system performance. The results show that the fully constant property calculation overestimates the results of the fully variable calculation by 11.4%, while the constant density with variable viscosity and thermal conductivity calculation results in an 8.7% overestimation, the constant viscosity with variable density and thermal conductivity in a 2.7% overestimation, and the constant thermal conductivity with variable density and viscosity in a 1.2% underestimation. These results demonstrate that gas properties varying with gas temperature can have a significant impact on a conveying system and that the varying density accounts for the majority of that impact. The accuracy of the model is also validated by comparing the simulation results to experimental values found in the literature.
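A toy version of such property-coupled iteration, with illustrative correlations and numbers that are not the thesis model: the outlet temperature of air heated in a pipe, where viscosity and conductivity are re-evaluated at the updated mean bulk temperature on each pass instead of being frozen at the inlet state.

```python
import math

def air_viscosity(T):
    """Sutherland's law for air, Pa*s."""
    return 1.458e-6 * T**1.5 / (T + 110.4)

def air_conductivity(T):
    """Rough power-law fit for air conductivity, W/(m*K) (assumption)."""
    return 0.0263 * (T / 300.0) ** 0.8

def pipe_outlet_temp(T_in, T_wall, m_dot, D, L, cp=1005.0, Pr=0.71):
    """Fixed-point iteration: evaluate properties at the mean bulk
    temperature, get h from the Dittus-Boelter correlation, update the
    outlet temperature, and repeat until converged."""
    T_out = T_in
    for _ in range(50):
        T_mean = 0.5 * (T_in + T_out)
        mu, k = air_viscosity(T_mean), air_conductivity(T_mean)
        Re = 4.0 * m_dot / (math.pi * D * mu)
        h = 0.023 * Re**0.8 * Pr**0.4 * k / D        # Dittus-Boelter
        NTU = h * math.pi * D * L / (m_dot * cp)
        T_new = T_wall - (T_wall - T_in) * math.exp(-NTU)
        if abs(T_new - T_out) < 1e-6:
            return T_new
        T_out = T_new
    return T_out

# Illustrative case: 300 K air, 400 K wall, 0.05 kg/s, 5 cm pipe, 10 m long
T_out = pipe_outlet_temp(T_in=300.0, T_wall=400.0, m_dot=0.05, D=0.05, L=10.0)
print(T_out)
```

Evaluating properties at the inlet state alone would correspond to running a single pass of the loop, which is the kind of constant-property shortcut the study quantifies.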
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique for calibrating numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind the technique is updating the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work develops the numerical model in MATLAB, used as a computational tool, from the mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In the updating process, a response parameter of the structure has to be chosen that correlates the numerical model with the experimental results. The updating variables can be material properties, geometric properties of the model, or both. To verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies; both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close correspondence can be achieved between the experimental and numerical models.
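A minimal sketch of the firefly algorithm applied to a toy updating problem (this is not the paper's MATLAB code, and all numbers are illustrative): dimmer fireflies move toward brighter, i.e. lower-objective, ones, with attractiveness decaying with distance, and a shrinking random walk maintains exploration.

```python
import numpy as np

def firefly_minimize(obj, bounds, n=20, iters=100, beta0=1.0,
                     gamma=1.0, alpha=0.2, seed=0):
    """Minimal firefly algorithm. The search runs in [0,1]^d and is
    mapped to the physical bounds; the brightest firefly stays put."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    to_phys = lambda u: lo + u * (hi - lo)
    u = rng.random((n, len(lo)))
    f = np.array([obj(to_phys(p)) for p in u])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # j is brighter
                    beta = beta0 * np.exp(-gamma * np.sum((u[i] - u[j]) ** 2))
                    u[i] = np.clip(u[i] + beta * (u[j] - u[i])
                                   + alpha * (rng.random(len(lo)) - 0.5), 0, 1)
                    f[i] = obj(to_phys(u[i]))
        alpha *= 0.97                                # shrink the random walk
    best = int(np.argmin(f))
    return to_phys(u[best]), f[best]

# Toy updating problem: recover bending stiffness EI from a "measured"
# cantilever tip deflection delta = P*L^3/(3*EI) (illustrative values)
P, L, delta_meas = 1000.0, 2.0, 0.004
obj = lambda p: abs(P * L**3 / (3.0 * p[0]) - delta_meas)
ei, err = firefly_minimize(obj, bounds=[(1e5, 1e7)])
print(ei[0], err)
```

Here the "response parameter" is the tip deflection and the single updating variable is EI; a real application would replace the closed-form deflection with a finite element solve inside the objective.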
Application of zero-inflated Poisson mixed models in prognostic factors of hepatitis C.
Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed
2013-01-01
In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the approaches that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of the two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
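The paper's mixed-effects zero-inflated Poisson model is not available off the shelf in statsmodels, but the fixed-effects version illustrates the structure: a logit part for the probability of a structural zero and a Poisson part for the counts. The sketch below fits it to simulated data with assumed parameter values.

```python
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
lam = np.exp(0.5 + 0.8 * x)                  # Poisson mean for the count part
structural_zero = rng.random(n) < 0.3        # 30% excess zeros
y = np.where(structural_zero, 0, rng.poisson(lam))

X = np.column_stack([np.ones(n), x])         # count-model design (const, x)
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(maxiter=500,
                                                               disp=0)
print(fit.params)  # [inflate_const, const, x]
```

The inflation intercept should land near logit(0.3) ≈ −0.85 and the count coefficients near the generating values of 0.5 and 0.8; adding random effects for repeated measures would require a dedicated mixed-model implementation.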
NASA Astrophysics Data System (ADS)
Huang, Pengnian; Li, Zhijia; Chen, Ji; Li, Qiaoling; Yao, Cheng
2016-11-01
Properly simulating hydrological processes in semi-arid areas remains challenging. This study assesses the impact of different modeling strategies on simulating flood processes in semi-arid catchments. Four classic hydrological models, TOPMODEL, XINANJIANG (XAJ), SAC-SMA and TANK, were selected and applied to three semi-arid catchments in North China. Based on analysis and comparison of the simulation results of these classic models, four new flexible models were constructed and used to further investigate the suitability of various modeling strategies for semi-arid environments. Numerical experiments were also designed to examine the performances of the models. The results show that in semi-arid catchments a suitable model needs to include at least one nonlinear component to simulate the main process of surface runoff generation. If there are more than two nonlinear components in the hydrological model, they should be arranged in parallel rather than in series, and the parallel nonlinear components should be combined by multiplication rather than addition. Moreover, this study reveals that the key hydrological process over semi-arid catchments is infiltration-excess surface runoff, a nonlinear component.
NASA Astrophysics Data System (ADS)
Nazari, B.; Seo, D.; Cannon, A.
2013-12-01
With many diverse features such as channels, pipes, culverts, buildings, etc., hydraulic modeling in urban areas for inundation mapping poses significant challenges. Identifying the practical extent of the details to be modeled in order to obtain sufficiently accurate results in a timely manner for effective emergency management is one of them. In this study we assess the tradeoffs between model complexity vs. information content for decision making in applying high-resolution hydrologic and hydraulic models for real-time flash flood forecasting and inundation mapping in urban areas. In a large urban area such as the Dallas-Fort Worth Metroplex (DFW), there exists very large spatial variability in imperviousness depending on the area of interest. As such, one may expect significant sensitivity of hydraulic model results to the resolution and accuracy of hydrologic models. In this work, we present the initial results from coupling of high-resolution hydrologic and hydraulic models for two 'hot spots' within the City of Fort Worth for real-time inundation mapping.
NASA Astrophysics Data System (ADS)
Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.
2011-11-01
Oil spill modeling is considered to be an important part of a decision support system (DeSS) for combating oil spills and is useful for remedial action in case of accidents, as well as for designing the environmental monitoring systems that are frequently set up after major accidents. Many accidents take place in coastal areas, implying that low-resolution basin-scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill connected with the "Full City" accident on the Norwegian south coast and compare operational simulations from three different oil spill models for the area. The conclusion of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but by applying ocean forcing data of higher resolution (1.5 km), the model system produces results that compare well with observations. The study also shows that an ensemble of results from the three different models is useful when predicting and analyzing oil spills in coastal areas.
A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling
NASA Astrophysics Data System (ADS)
Chen, L.; Gong, Y.; Shen, Z.
2015-11-01
Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have examined how errors propagate from different input data sets into NPS modeling. In this paper, the effects of four inputs, rainfall, digital elevation models (DEMs), land use maps, and the amount of fertilizer, on NPS simulation were quantified and compared. A systematic input-induced uncertainty analysis was conducted using a watershed model for phosphorus load prediction. Based on the results, rain gauge density produced the largest model uncertainty, followed by DEMs, whereas land use and fertilizer amount had limited impacts. The mean coefficients of variation for errors associated with single rain gauge, multiple rain gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information were 0.390, 0.274, 0.186, 0.073, 0.033 and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted as a way to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.
Studies of biaxial mechanical properties and nonlinear finite element modeling of skin.
Shang, Xituan; Yen, Michael R T; Gaber, M Waleed
2010-06-01
The objective of this research is to study the mechanical properties of skin from two individual but potentially connected aspects: determining the mechanical properties of skin experimentally by biaxial tests, and modeling the skin properties with the finite element method. Dynamic biaxial tests were performed on 16 abdominal skin specimens from rats. Typical biaxial stress-strain responses show that skin exhibits anisotropy, nonlinearity and hysteresis. To describe the stress-strain relationship in the form of a strain energy function, the material constants of each specimen were obtained, and the results show a high correlation between theory and experiment. Based on the experimental results, a finite element model of skin was built to capture the skin's special properties, including anisotropy and nonlinearity. This model was based on Arruda and Boyce's eight-chain model and Bischoff et al.'s finite element model of skin. The simulation results show that the isotropic, nonlinear eight-chain model can predict the skin's anisotropic and nonlinear responses to biaxial loading through the presence of an anisotropic prestress state.
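For orientation, the eight-chain (Arruda-Boyce) strain energy is commonly written as a five-term series in the first strain invariant I1. The sketch below evaluates it for an equibiaxial stretch; the shear modulus and locking stretch are illustrative assumptions, not the paper's fitted skin constants.

```python
# Standard five-term series coefficients of the Arruda-Boyce model
_C = [1 / 2, 1 / 20, 11 / 1050, 19 / 7000, 519 / 673750]

def arruda_boyce_energy(I1, mu, lam_L):
    """Strain energy density W = mu * sum_i C_i (I1^i - 3^i) / lam_L^(2i-2),
    with mu the shear modulus and lam_L the locking stretch."""
    return mu * sum(c * (I1 ** (i + 1) - 3 ** (i + 1)) / lam_L ** (2 * i)
                    for i, c in enumerate(_C))

# Equibiaxial stretch lam: principal stretches (lam, lam, 1/lam^2)
lam = 1.2
I1 = 2 * lam**2 + lam**-4
W = arruda_boyce_energy(I1, mu=0.1e6, lam_L=1.8)  # Pa, illustrative constants
print(W)
```

At the undeformed state (I1 = 3) the energy vanishes, and it grows steeply as stretches approach the locking stretch, which is what gives the model its nonlinear stiffening.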
Building energy modeling for green architecture and intelligent dashboard applications
NASA Astrophysics Data System (ADS)
DeBlois, Justin
Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied the roof solar chimney, a passive architectural technique for reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it reliably for building simulation. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high- and low-efficiency constructions and three ventilation strategies. The results showed that in four US climates, the roof solar chimney yields significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling: accounting for uncertainty in, and improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Algorithms were then developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario.
A stochastic study examined the impact of the representation of unpredictable occupancy patterns on model results. Combined, these studies inform modelers and researchers on frameworks for simulating holistically designed architecture and improving the interaction between models and building occupants, in residential and commercial settings.
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).
[Numerical simulation of the effect of virtual stent release pose on the expansion results].
Li, Jing; Peng, Kun; Cui, Xinyang; Fu, Wenyu; Qiao, Aike
2018-04-01
Current finite element analyses of vascular stent expansion do not take into account the effect of the stent release pose on the expansion results. In this study, stent and vessel models were established in Pro/E. Five finite element assembly models were constructed in ABAQUS: a 0 degree model without eccentricity, a 3 degree model without eccentricity, a 5 degree model without eccentricity, a 0 degree model with axial eccentricity, and a 0 degree model with radial eccentricity. These models were divided into two groups of numerical experiments, with respect to angle and to eccentricity. Mechanical parameters such as the foreshortening rate, radial recoil rate and dog-boning rate were calculated, and the influence of angle and eccentricity on the numerical simulation was obtained by comparative analysis. The calculations showed residual stenosis rates of 38.3%, 38.4%, 38.4%, 35.7% and 38.2%, respectively, for the five models. The results indicate that the release pose has little effect on the numerical simulation results, so it can be neglected when high accuracy is not required, and the basic model (0 degrees, no eccentricity) is adequate for numerical simulation.
Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method
NASA Astrophysics Data System (ADS)
Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.
2018-03-01
The failure point of results from fatigue tests of asphalt mixtures performed in controlled-stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number Method and to determine the best Flow Number model for the asphalt mixtures tested. To achieve these goals, the best of three asphalt mixtures was first selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen asphalt mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and a frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimen. The chosen asphalt mixture is an EVA (Ethyl Vinyl Acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). The failure points of the EVA-modified asphalt mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These results show that the larger the load, the smaller the number of cycles to failure. The best FN results are given by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.
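As one concrete instance of these methods, the Francken model describes accumulated permanent strain as a power law plus an exponential tertiary term, and the flow number can be read off as the cycle at which the fitted strain rate is minimal. The sketch below fits the model to synthetic data with assumed parameters (not the paper's test results).

```python
import numpy as np
from scipy.optimize import curve_fit

def francken(N, A, B, C, D):
    """Francken permanent-strain model: power-law primary/secondary
    creep plus an exponential tertiary term."""
    return A * N**B + C * (np.exp(D * N) - 1.0)

# Synthetic repeated-load data (illustrative parameter values, 1% noise)
N = np.arange(1.0, 3001.0)
true = francken(N, 100.0, 0.4, 0.5, 0.002)
rng = np.random.default_rng(2)
eps = true * (1 + 0.01 * rng.standard_normal(N.size))

popt, _ = curve_fit(francken, N, eps, p0=[50.0, 0.5, 1.0, 0.001],
                    bounds=([0.0, 0.0, 0.0, 1e-4], [1e3, 1.0, 10.0, 0.01]))
rate = np.gradient(francken(N, *popt), N)
flow_number = N[np.argmin(rate)]     # strain-rate minimum marks failure onset
print(popt, flow_number)
```

Taking the minimum of the fitted (smooth) strain rate rather than of the raw differences avoids the noise sensitivity of numerically differentiating the measured curve.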
Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi
2018-04-15
Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models over standalone analyses should be pooled in 2-stage meta-analyses. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
A Bifactor Approach to Model Multifaceted Constructs in Statistical Mediation Analysis.
Gonzalez, Oscar; MacKinnon, David P
Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Identifying specific mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to an outcome. However, current methods do not allow researchers to study the relationships between general and specific aspects of a construct and an outcome simultaneously. This study proposes a bifactor measurement model for the mediating construct as a way to parse variance and represent the general aspect and specific facets of a construct simultaneously. Monte Carlo simulation results are presented to help determine the properties of mediated-effect estimation when the mediator has a bifactor structure and a specific facet of the construct is the true mediator. This study also investigates the conditions under which researchers can detect the mediated effect when the multidimensionality of the mediator is ignored and the mediator is treated as unidimensional. Simulation results indicated that the mediation model with a bifactor mediator measurement model yielded unbiased estimates and adequate power to detect the mediated effect with a sample size greater than 500 and medium a- and b-paths. Results also indicate that parameter bias and detection of the mediated effect in both the data-generating model and the misspecified model vary as a function of the amount of facet variance represented in the mediation model. This study contributes to the largely unexplored area of measurement issues in statistical mediation analysis.
Calculation methods study on hot spot stress of new girder structure detail
NASA Astrophysics Data System (ADS)
Liao, Ping; Zhao, Renda; Jia, Yi; Wei, Xing
2017-10-01
To study modeling and calculation methods for the hot spot stress of a new girder structure detail, several finite element models of this welded detail were established in ANSYS, based on the surface extrapolation variant of the hot spot stress method. The influence of element type, mesh density, local modeling method at the weld toe, and extrapolation method on the calculated hot spot stress at the weld toe was analyzed. The results show that the difference in normal stress between the thickness direction and the surface direction among the different models grows as the distance from the weld toe decreases. When the distance from the toe is greater than 0.5t, the normal stress of solid models, shell models with welds and shell models without welds tends to be consistent along the surface direction; it is therefore recommended that the extrapolation points be selected beyond 0.5t for this new girder welded detail. According to the calculation and analysis results, shell models have good mesh stability, and the extrapolated hot spot stress of solid models is smaller than that of shell models, so it is suggested that formula 2 and solid45 elements be used in the hot spot stress extrapolation calculation for this welded detail. For each finite element model, under the different shell modeling methods the results calculated by formula 2 are smaller than those of the other two methods, and the results of shell models with welds are the largest. Under the same local mesh density, the extrapolated hot spot stress decreases gradually as the number of element layers through the thickness of the main plate increases, with a variation range within 7.5%.
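For reference, surface extrapolation of the hot spot stress is simply a linear extrapolation of stress readings at fixed distances from the weld toe back to the toe itself. The sketch below uses IIW-style read-out points at 0.4t and 1.0t with illustrative stresses; the specific "formula 2" referred to in the abstract is not specified here.

```python
def hot_spot_stress(sigma_a, sigma_b, x_a, x_b):
    """Linear surface extrapolation of two stress readings at distances
    x_a < x_b from the weld toe back to the toe (x = 0)."""
    slope = (sigma_b - sigma_a) / (x_b - x_a)
    return sigma_a - slope * x_a

# Read-out points at 0.4t and 1.0t for plate thickness t = 20 mm (MPa values
# are illustrative)
t = 20.0
sigma_hs = hot_spot_stress(120.0, 100.0, 0.4 * t, 1.0 * t)
print(sigma_hs)
```

With these two read-out distances the formula reduces to the familiar 1.67·σ(0.4t) − 0.67·σ(1.0t); choosing points beyond 0.5t, as the abstract recommends for this detail, keeps the extrapolation outside the zone where the modeled stress fields diverge.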
NASA Astrophysics Data System (ADS)
Klimenko, M. V.; Klimenko, V. V.; Bessarab, F. S.; Korenkov, Yu N.; Liu, Hanli; Goncharenko, L. P.; Tolstikov, M. V.
2015-09-01
This paper presents a study of the influence of the mesosphere and lower thermosphere on ionospheric disturbances during the 2009 major sudden stratospheric warming (SSW) event, a period characterized by extremely low solar and geomagnetic activity. The study was performed using two first-principles models: the thermosphere-ionosphere-mesosphere electrodynamics general circulation model (TIME-GCM) and the global self-consistent model of the thermosphere, ionosphere, and protonosphere (GSM TIP). The stratospheric anomalies during the SSW event were modeled by specifying the temperature and density perturbations at the lower boundary of the TIME-GCM (30 km altitude) according to data from the European Centre for Medium-Range Weather Forecasts. The TIME-GCM output at 80 km was then used as the lower boundary condition for driving GSM TIP model runs. We compare the model results with ground-based low-latitude ionospheric data obtained by GPS receivers in the American longitudinal sector. The GSM TIP simulation predicts the occurrence of a quasi-wave vertical structure in neutral temperature disturbances at 80-200 km altitude, and positive and negative disturbances in total electron content at low latitude during the 2009 SSW event. According to our model results, the formation mechanisms of the low-latitude ionospheric response are the disturbances in the n(O)/n(N2) ratio and in the thermospheric wind. The change in the zonal electric field is a key mechanism driving the ionospheric response at low latitudes, but our model results do not completely reproduce the variability in zonal electric fields (vertical plasma drift) at low latitudes.
Impacts of weighting climate models for hydro-meteorological climate change studies
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe; Caya, Daniel
2017-06-01
Weighting climate models is controversial in climate change impact studies that use an ensemble of climate simulations from different climate models. In climate science, there is a general consensus that all climate models should be considered as having equal performance, or in other words that all projections are equiprobable. On the other hand, in the impacts and adaptation community, many believe that climate models should be weighted based on their ability to better represent various metrics over a reference period. The debate appears to be partly philosophical in nature, as few studies have investigated the impact of using weights in projecting future climate changes. The present study focuses on the impact of assigning weights to climate models for hydrological climate change studies. Five methods are used to determine weights for an ensemble of 28 global climate models (GCMs) taken from the Coupled Model Intercomparison Project Phase 5 (CMIP5) database. Using a hydrological model, streamflows are computed over a reference (1961-1990) and a future (2061-2090) period, with and without post-processing of climate model outputs. The impacts of using different weighting schemes for GCM simulations are then analyzed in terms of ensemble mean and uncertainty. The results show that weighting GCMs has a limited impact both on the projected future climate, in terms of precipitation and temperature changes, and on hydrology, in terms of nine different streamflow criteria. These results apply to both raw and post-processed GCM outputs, thus supporting the view that climate models should be considered equiprobable.
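The mechanics of the comparison reduce to computing weighted versus equal-weight ensemble statistics of the projected changes. A minimal sketch, with hypothetical temperature changes and skill scores (none of these numbers come from the study):

```python
import numpy as np

def ensemble_stats(changes, weights=None):
    """Weighted ensemble mean and spread of projected changes;
    weights=None treats every GCM as equiprobable."""
    changes = np.asarray(changes, float)
    if weights is None:
        weights = np.ones_like(changes)
    w = np.asarray(weights, float) / np.sum(weights)
    mean = np.sum(w * changes)
    spread = np.sqrt(np.sum(w * (changes - mean) ** 2))
    return mean, spread

# Hypothetical projected warming (degC) and skill scores for a small ensemble
dT = [2.1, 2.8, 3.4, 2.5, 3.0]
skill = [0.9, 0.6, 0.8, 0.7, 0.5]
m_eq, s_eq = ensemble_stats(dT)
m_w, s_w = ensemble_stats(dT, skill)
print(m_eq, m_w)
```

The study's finding is essentially that, for its 28-GCM ensemble and streamflow criteria, the weighted and equal-weight versions of such statistics differ little.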
A Micro-Mechanism-Based Continuum Corrosion Fatigue Damage Model for Steels
NASA Astrophysics Data System (ADS)
Sun, Bin; Li, Zhaoxia
2018-05-01
A micro-mechanism-based corrosion fatigue damage model is developed for studying the high-cycle corrosion fatigue of steel from a multi-scale viewpoint. The developed physical corrosion fatigue damage model establishes micro-macro relationships between macroscopic continuum damage evolution and the collective evolution behavior of microscopic pits and cracks, which can be used to describe the multi-scale corrosion fatigue process of steel. As a case study, the model is used to predict the continuum damage evolution and the number density of corrosion pits and short cracks for a steel component in 5% NaCl water under constant stress amplitude at 20 kHz, and the numerical results are compared with experimental results. The comparison shows that the model is effective and can be used to evaluate continuum macroscopic corrosion fatigue damage and to study the microscopic corrosion fatigue mechanisms of steel.
A Roy model study of adapting to being HIV positive.
Perrett, Stephanie E; Biley, Francis C
2013-10-01
Roy's adaptation model outlines a generic process of adaptation useful to nurses in any situation where a patient is facing change. To advance nursing practice, nursing theories and frameworks must be constantly tested and developed through research. This article describes how the results of a qualitative grounded theory study have been used to test components of the Roy adaptation model. A framework for "negotiating uncertainty" was the result of a grounded theory study exploring adaptation to HIV. This framework has been compared to the Roy adaptation model, strengthening concepts such as focal and contextual stimuli, Roy's definition of adaptation and her description of adaptive modes, while suggesting areas for further development including the role of perception. The comparison described in this article demonstrates the usefulness of qualitative research in developing nursing models, specifically highlighting opportunities to continue refining Roy's work.
NASA Astrophysics Data System (ADS)
Bond-Lamberty, B. P.; Jones, A. D.; Shi, X.; Calvin, K. V.
2016-12-01
The C4MIP and CMIP5 model intercomparison projects (MIPs) highlighted uncertainties in climate projections, driven to a large extent by interactions between the terrestrial carbon cycle and climate feedbacks. In addition, the importance of feedbacks between human (energy and economic) systems and natural (carbon and climate) systems is poorly understood and was not considered in the previous MIP protocols. The experiments conducted under the previous Integrated Earth System Model (iESM) project, which coupled an earth system model with an integrated assessment model (GCAM), found that the inclusion of climate feedbacks on the terrestrial system in an RCP4.5 scenario increased ecosystem productivity, resulting in declines in cropland extent and increases in bioenergy production and forest cover. As a follow-up to these studies, and to further understand climate-carbon cycle interactions and feedbacks, we examined the robustness of these results by running a suite of GCAM-only experiments using changes in ecosystem productivity derived from both the CMIP5 archive and the Agricultural Model Intercomparison Project (AgMIP). In our results, the effects of climate on yield in an RCP8.5 scenario tended to be more positive than those of AgMIP, but more negative than those of the other CMIP models. We discuss these results and the implications of model-to-model variability for integrated coupling studies of the future earth system.
Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S
2013-01-01
Aim: To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subjected to bending forces. Methodology: An analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, a nonlinear deformation analysis based on the analytical model was carried out, and numerical results were obtained with a nonlinear finite element analysis. Results: According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. The finite element analysis likewise placed the maximum von Mises stress near the instrument tip. The proposed analytical model can therefore be used to predict the position of maximum curvature in the instrument, where fracture may occur. The analytical and numerical results were compatible. Conclusion: The proposed analytical model was validated by numerical results in analysing the bending deformation of NiTi instruments. It is useful in the design and analysis of instruments and in studying their flexibility; compared with the finite element method, it deals conveniently and effectively with the bending behaviour of rotary NiTi endodontic instruments. PMID:23173762
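The finding that curvature peaks near the tip follows from Euler–Bernoulli theory applied to a tapered beam: the bending moment vanishes at the loaded tip, but the second moment of area shrinks toward the tip even faster. A linearised sketch of this effect, with an assumed geometry, load and modulus rather than the actual RaCe or NRT specifications:

```python
import numpy as np

# Euler-Bernoulli curvature kappa = M / (E * I) along a tapered cantilever
# loaded at the tip; all parameter values are illustrative assumptions.
E = 40e9                        # Pa, NiTi-like modulus (assumed)
L = 0.016                       # m, working length (assumed)
P = 0.5                         # N, transverse load at the tip (assumed)
taper = 0.06                    # diameter increase per unit length (assumed taper)
d_tip = 0.0002                  # m, tip diameter (assumed)

s = np.linspace(0.0, L, 400)    # distance from the tip
d = d_tip + taper * s           # local diameter of the tapered shaft
I = np.pi * d**4 / 64           # second moment of area of a circular section
kappa = P * s / (E * I)         # bending moment M = P*s over flexural rigidity

s_max = s[np.argmax(kappa)]     # analytically, s_max = d_tip / (3 * taper)
print(f"maximum curvature {s_max * 1000:.1f} mm from the tip")
```

With these numbers the curvature maximum sits about a millimetre from the tip, consistent with the abstract's qualitative conclusion.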
A progress report on seismic model studies
Healy, J.H.; Mangan, G.B.
1963-01-01
The value of seismic-model studies as an aid to understanding wave propagation in the Earth's crust was recognized by early investigators (Tatel and Tuve, 1955). Preliminary model results were very promising, but progress in model seismology has been restricted by two problems: (1) difficulties in the development of models with continuously variable velocity-depth functions, and (2) difficulties in the construction of models of adequate size to provide a meaningful wave-length to layer-thickness ratio. The problem of a continuously variable velocity-depth function has been partly solved by a technique using two-dimensional plate models constructed by laminating plastic to aluminum, so that the ratio of plastic to aluminum controls the velocity-depth function (Healy and Press, 1960). These techniques provide a continuously variable velocity-depth function, but it is not possible to construct such models large enough to study short-period wave propagation in the crust. This report describes improvements in our ability to machine large models. Two types of models are being used: one is a cylindrical aluminum tube machined on a lathe, and the other is a large plate machined on a precision planer. Both of these modeling techniques give promising results and are a significant improvement over earlier efforts.
Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.
2016-01-01
Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in the resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1 km x 1 km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across the different scenarios. Exposure models with low out-of-sample R2 yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R2 yielded upward biases of up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in the chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
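The mechanism behind such biases can be illustrated with a toy simulation: a health model fitted to an imperfect exposure prediction recovers an attenuated slope. All numbers below are synthetic, not the satellite-derived surface or exposure models of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
beta_true = 0.5                 # "true" health effect per unit exposure (assumed)

x_true = rng.normal(10, 2, size=n)            # true exposure at each subject
x_pred = 0.8 * x_true + rng.normal(0, 1, n)   # imperfect spatial prediction (R2 < 1)
y = beta_true * x_true + rng.normal(0, 1, n)  # linear health outcome

# OLS slope when the predicted exposure is used as a surrogate for the truth
beta_hat = np.cov(x_pred, y)[0, 1] / np.var(x_pred, ddof=1)
bias_pct = 100 * (beta_hat - beta_true) / beta_true
print(f"estimated effect {beta_hat:.3f} ({bias_pct:+.0f}% bias)")
```

Here the surrogate exposure biases the effect estimate downward by roughly ten percent, a miniature version of the downward biases the paper reports for low-R2 exposure models.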
NASA Astrophysics Data System (ADS)
Morath, D.; Sedlmayr, N.; Sirker, J.; Eggert, S.
2016-09-01
We study electron and spin transport in interacting quantum wires contacted by noninteracting leads. We theoretically model the wire and junctions as an inhomogeneous chain where the parameters at the junction change on the scale of the lattice spacing. We study such systems analytically in the appropriate limits based on Luttinger liquid theory and compare the results to quantum Monte Carlo calculations of the conductances and local densities near the junction. We first consider an inhomogeneous spinless fermion model with a nearest-neighbor interaction and then generalize our results to a spinful model with an on-site Hubbard interaction.
Sensitivity of a numerical wave model on wind re-analysis datasets
NASA Astrophysics Data System (ADS)
Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel
2017-03-01
Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcasted and forecasted, providing reliable characterisations. This study reports the sensitivity of wind inputs on a numerical wave model for the Scottish region. Two re-analysis wind datasets with different spatio-temporal characteristics are used: the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis datasets. Different wind products alter the results, affecting the accuracy obtained. The scope of this study is to assess the available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a higher scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance, and that, depending on location and study needs, different wind inputs should be considered.
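The comparison metrics mentioned (bias, scatter index, correlation) are standard hindcast-validation statistics and can be computed as in the sketch below; the wave-height series are invented, not the Scottish hindcast or buoy data:

```python
import numpy as np

def validation_stats(model, obs):
    """Common wave-hindcast error metrics: bias, RMSE, scatter index, correlation."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    si = rmse / np.mean(obs)            # scatter index, normalised by mean observation
    r = np.corrcoef(model, obs)[0, 1]
    return {"bias": bias, "rmse": rmse, "scatter_index": si, "r": r}

# Illustrative significant-wave-height series (m), not real buoy measurements
obs = np.array([1.2, 1.8, 2.5, 3.1, 2.2, 1.5])
hindcast = np.array([1.1, 1.9, 2.3, 3.4, 2.0, 1.6])
print(validation_stats(hindcast, obs))
```

Running the same function on hindcasts driven by two different wind datasets is exactly the kind of side-by-side comparison the abstract describes.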
Exploring predictive performance: A reanalysis of the geospace model transition challenge
NASA Astrophysics Data System (ADS)
Welling, D. T.; Anderson, B. J.; Crowley, G.; Pulkkinen, A. A.; Rastätter, L.
2017-01-01
The Pulkkinen et al. (2013) study evaluated the ability of five different geospace models to predict surface dB/dt as a function of upstream solar drivers. This was an important step in the assessment of research models for predicting and ultimately preventing the damaging effects of geomagnetically induced currents. Many questions remain concerning the capabilities of these models. This study presents a reanalysis of the Pulkkinen et al. (2013) results in an attempt to better understand the models' performance. The range of validity of the models is determined by examining the conditions corresponding to the empirical input data. It is found that the empirical conductance models on which global magnetohydrodynamic models rely are frequently used outside the limits of their input data. The prediction error for the models is sorted as a function of solar driving and geomagnetic activity. It is found that all models show a bias toward underprediction, especially during active times. These results have implications for future research aimed at improving operational forecast models.
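dB/dt forecast skill of this kind is typically scored on threshold-crossing events via a contingency table. A minimal sketch of such event verification follows; the series, threshold and underprediction pattern are illustrative, not the original challenge data:

```python
import numpy as np

def event_skill(pred, obs, threshold):
    """Binary-event verification of dB/dt threshold crossings (illustrative)."""
    p = np.asarray(pred) >= threshold
    o = np.asarray(obs) >= threshold
    hits = int(np.sum(p & o))
    misses = int(np.sum(~p & o))
    false_alarms = int(np.sum(p & ~o))
    correct_neg = int(np.sum(~p & ~o))
    pod = hits / (hits + misses) if hits + misses else float("nan")
    pofd = false_alarms / (false_alarms + correct_neg) if false_alarms + correct_neg else float("nan")
    return {"POD": pod, "POFD": pofd, "hits": hits, "misses": misses}

# Hypothetical 1-minute dB/dt maxima (nT/s); the model underpredicts active times
obs = np.array([0.1, 0.5, 1.2, 2.0, 0.3, 1.6, 0.2, 0.9])
pred = np.array([0.2, 0.4, 0.8, 1.7, 0.2, 1.1, 0.3, 0.5])
print(event_skill(pred, obs, threshold=1.0))
```

A probability of detection below one together with few false alarms is the signature of the underprediction bias the reanalysis reports.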
Automatic 3d Building Model Generations with Airborne LiDAR Data
NASA Astrophysics Data System (ADS)
Yastikli, N.; Cetin, Z.
2017-11-01
LiDAR systems have become increasingly popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies that include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is the aim. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification applies hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The results verified that automatic 3D building models can be generated successfully from raw LiDAR point cloud data.
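A minimal sketch of the hierarchical-rule idea for point-based classification: rules are evaluated in order, from the easiest class to separate down to the hardest. The attribute names and thresholds are illustrative assumptions, not the rules tuned in the study:

```python
# Hierarchical, rule-based classification of LiDAR points (illustrative sketch).
def classify_point(height_above_ground, num_returns, planarity):
    if height_above_ground < 0.3:
        return "ground"
    if num_returns > 1:                  # vegetation tends to give multiple returns
        return "vegetation"
    if planarity > 0.8 and height_above_ground > 2.5:
        return "building_roof"           # high, planar, single-return surfaces
    return "unclassified"

points = [
    {"height_above_ground": 0.1, "num_returns": 1, "planarity": 0.9},
    {"height_above_ground": 6.0, "num_returns": 3, "planarity": 0.2},
    {"height_above_ground": 8.2, "num_returns": 1, "planarity": 0.95},
]
labels = [classify_point(**p) for p in points]
print(labels)  # ['ground', 'vegetation', 'building_roof']
```

In a real pipeline each rule's thresholds would be calibrated on test areas, which is the parameter analysis the abstract describes.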
NASA Technical Reports Server (NTRS)
Seltzer, S. M.; Patel, J. S.; Justice, D. W.; Schweitzer, G. E.
1972-01-01
The results are presented of a study of the dynamics of a spinning Skylab space station. The stability of motion of several simplified models with flexible appendages was investigated. A digital simulation model that more accurately portrays the complex Skylab vehicle is described, and simulation results are compared with analytically derived results.
IRT Equating of the MCAT. MCAT Monograph.
ERIC Educational Resources Information Center
Hendrickson, Amy B.; Kolen, Michael J.
This study compared various equating models and procedures for a sample of data from the Medical College Admission Test(MCAT), considering how item response theory (IRT) equating results compare with classical equipercentile results and how the results based on use of various IRT models, observed score versus true score, direct versus linked…
Dispersion modeling tools have traditionally provided critical information for air quality management decisions, but have been used recently to provide exposure estimates to support health studies. However, these models can be challenging to implement, particularly in near-road s...
Effects of Correctional-Based Programs for Female Inmates: A Systematic Review
ERIC Educational Resources Information Center
Tripodi, Stephen J.; Bledsoe, Sarah E.; Kim, Johnny S.; Bender, Kimberly
2011-01-01
Objective: To examine the effectiveness of interventions for incarcerated women. Method: The researchers use a two-model system: the risk-reduction model for studies analyzing interventions to reduce recidivism rates, and the enhancement model for studies that target psychological and physical well-being. Results: Incarcerated women who…
A Descriptive Study of Differing School Health Delivery Models
ERIC Educational Resources Information Center
Becker, Sherri I.; Maughan, Erin
2017-01-01
The purpose of this exploratory qualitative study was to identify and describe emerging models of school health services. Participants (N = 11) provided information regarding their models in semistructured phone interviews. Results identified a variety of funding sources as well as different staffing configurations and supervision. Strengths of…
Acoustical modeling study of the open test section of the NASA Langley V/STOL wind tunnel
NASA Technical Reports Server (NTRS)
Ver, I. L.; Andersen, D. W.; Bliss, D. B.
1975-01-01
An acoustic model study was carried out to identify effective sound-absorbing treatment of strategically located surfaces in an open wind tunnel test section. An aerodynamic study, conducted concurrently, sought measures to control the low frequency jet pulsations which occur when the tunnel is operated in its open test section configuration. The acoustical modeling study indicated that lining the raised ceiling and the test section floor immediately below it results in a substantial improvement. The aerodynamic model study indicated that: (1) the low frequency jet pulsations are most likely caused or maintained by coupling of aerodynamic and aeroacoustic phenomena in the closed tunnel circuit, and (2) replacing the hard collector cowl with a geometrically identical but porous fiber metal surface of 100 mks rayls flow resistance does not result in any noticeable reduction of the test section noise caused by the impingement of the turbulent flow on the cowl.
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
How Can Students Generalize the Chain Rule? The Roles of Abduction in Mathematical Modeling
ERIC Educational Resources Information Center
Park, Jin Hyeong; Lee, Kyeong-Hwa
2016-01-01
The purpose of this study is to design a modeling task to facilitate students' inquiries into the chain rule in calculus and to analyze the results after implementation of the task. In this study, we take a modeling approach to the teaching and learning of the chain rule by facilitating the generalization of students' models and modeling…
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
Sample Invariance of the Structural Equation Model and the Item Response Model: A Case Study.
ERIC Educational Resources Information Center
Breithaupt, Krista; Zumbo, Bruno D.
2002-01-01
Evaluated the sample invariance of item discrimination statistics in a case study using real data, responses of 10 random samples of 500 people to a depression scale. Results lend some support to the hypothesized superiority of a two-parameter item response model over the common form of structural equation modeling, at least when responses are…
A New Symptom Model for Autism Cross-Validated in an Independent Sample
ERIC Educational Resources Information Center
Boomsma, A.; Van Lang, N. D. J.; De Jonge, M. V.; De Bildt, A. A.; Van Engeland, H.; Minderaa, R. B.
2008-01-01
Background: Results from several studies indicated that a symptom model other than the DSM triad might better describe symptom domains of autism. The present study focused on a) investigating the stability of a new symptom model for autism by cross-validating it in an independent sample and b) examining the invariance of the model regarding three…
NASA Astrophysics Data System (ADS)
Filizola, Marta; Carteni-Farina, Maria; Perez, Juan J.
1999-07-01
3D models of the opioid receptors μ, δ and κ were constructed using BUNDLE, an in-house program to build de novo models of G-protein coupled receptors at the atomic level. Once the three opioid receptors were constructed, and before any energy refinement, the models were assessed for their compatibility with the results available from point-site mutations carried out on these receptors. In a subsequent step, three selective antagonists of the three receptors (naltrindole, naltrexone and nor-binaltorphimine) were docked onto each of the three receptors and subsequently energy minimized. The nine resulting complexes were checked for their ability to explain known results of structure-activity studies. Once the models were validated, analyses of the distances between different residues of the receptors and the ligands were computed. This analysis permitted us to identify key residues tentatively involved in direct interaction with the ligand.
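The final distance analysis can be sketched as a simple contact screen: a residue is flagged when any of its atoms lies within a cutoff of any ligand atom. The coordinates, residue names and 4 Å cutoff below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

# Hypothetical ligand-atom and residue-atom coordinates (Angstrom)
ligand = np.array([[0.0, 0.0, 0.0], [1.2, 0.3, -0.5]])
residues = {
    "RES_A": np.array([[2.5, 0.1, 0.4]]),   # hypothetical residue near the ligand
    "RES_B": np.array([[8.0, 5.0, 3.0]]),   # hypothetical distant residue
}
cutoff = 4.0  # Angstrom, a commonly used contact criterion (assumed)

# All pairwise atom-atom distances via broadcasting, then a minimum per residue
contacts = [
    name for name, atoms in residues.items()
    if np.min(np.linalg.norm(atoms[:, None, :] - ligand[None, :, :], axis=-1)) < cutoff
]
print(contacts)
```

The residues surviving this screen are the candidates for direct ligand interaction that such an analysis identifies.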
NASA Astrophysics Data System (ADS)
Elwina; Yunardi; Bindar, Yazid
2018-04-01
This paper presents results obtained from the application of the computational fluid dynamics (CFD) code Fluent 6.3 to the modelling of temperature in propane flames with and without air preheat. The study investigates the effect of air preheat temperature on the temperature of the flame. A standard k-ε model and the Eddy Dissipation model are utilized to represent the flow field and the combustion of the flame being investigated, respectively. The results of the calculations are compared with experimental data of propane flames taken from the literature. The results of the study show that a combination of the standard k-ε turbulence model and the eddy dissipation model is capable of producing reasonable predictions of temperature, particularly in the axial profiles of all three flames. Both the experimental work and the numerical simulations showed that increasing the temperature of the combustion air significantly increases the flame temperature.
NASA Astrophysics Data System (ADS)
Lai, J.-S.; Tsai, F.; Chiang, S.-H.
2016-06-01
This study implements a data mining-based algorithm, the random forests classifier, with geo-spatial data to construct a regional, rainfall-induced landslide susceptibility model. The developed model also takes account of landslide regions (source, non-occurrence and run-out signatures) from the original landslide inventory in order to increase the reliability of the susceptibility modelling. A total of ten causative factors were collected and used in this study, including aspect, curvature, elevation, slope, faults, geology, NDVI (Normalized Difference Vegetation Index), rivers, roads and soil data. Consequently, this study transforms the landslide inventory and the vector-based causative factors into a pixel-based format in order to overlay them with other raster data for constructing the random forests based model. This study also uses original and edited topographic data in the analysis to understand their impacts on the susceptibility modelling. Experimental results demonstrate that after identifying the run-out signatures, the overall accuracy and Kappa coefficient reached more than 85% and 0.8, respectively. In addition, correcting unreasonable topographic features of the digital terrain model also produces more reliable modelling results.
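The modelling core can be sketched with scikit-learn's random forest on synthetic pixel data. The four synthetic factors, the label rule and the resulting scores are illustrative stand-ins for the study's ten causative factors and inventory, not a reproduction of them:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(7)
n = 2000

# Synthetic per-pixel causative factors (stand-ins for slope, NDVI, distance to rivers, ...)
X = rng.uniform(0, 1, size=(n, 4))
# Synthetic label: landslides more likely on "steep", "sparsely vegetated" pixels
p = 1 / (1 + np.exp(-(6 * X[:, 0] - 4 * X[:, 1] - 1)))
y = rng.uniform(size=n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

acc = accuracy_score(y_te, pred)
kappa = cohen_kappa_score(y_te, pred)   # the agreement metric reported in the study
print(f"accuracy: {acc:.2f}, kappa: {kappa:.2f}")
```

Reporting overall accuracy together with the Kappa coefficient, as here, mirrors the evaluation the abstract describes.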
Ong, Robert H.; King, Andrew J. C.; Mullins, Benjamin J.; Cooper, Timothy F.; Caley, M. Julian
2012-01-01
We present Computational Fluid Dynamics (CFD) models of the coupled dynamics of water flow, heat transfer and irradiance in and around corals to predict the temperatures experienced by corals. These models were validated against controlled laboratory experiments, under constant and transient irradiance, for hemispherical and branching corals. Our CFD models agree very well with the experimental studies. A linear relationship between irradiance and coral surface warming was evident in both the simulation and the experimental results, agreeing with heat transfer theory. However, the CFD models for the steady state simulation produced a better fit to the linear relationship than the experimental data, likely due to experimental error in the empirical measurements. The consistency of our modelling results with experimental observations demonstrates the applicability of CFD simulations, such as the models developed here, to coral bleaching studies. A study of the influence of coral skeletal porosity and skeletal bulk density on surface warming was also undertaken, demonstrating boundary layer behaviour, and interstitial flow magnitude and temperature profiles in coral cross sections. Our models complement recent studies showing systematic changes in these parameters in some coral colonies and have utility in the prediction of coral bleaching. PMID:22701582
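The reported linear irradiance-warming relationship is easy to check on paired measurements with a least-squares fit. The numbers below are invented, chosen only to be consistent with such a linear trend, and are not the study's data:

```python
import numpy as np

# Illustrative irradiance (W m^-2) vs coral surface warming (degC) pairs
irradiance = np.array([0, 200, 400, 600, 800, 1000])
warming = np.array([0.00, 0.11, 0.19, 0.32, 0.41, 0.52])

# Least-squares linear fit and the strength of the linear relationship
slope, intercept = np.polyfit(irradiance, warming, 1)
r = np.corrcoef(irradiance, warming)[0, 1]
print(f"warming ~ {slope * 1000:.2f} degC per 1000 W m^-2 (r = {r:.3f})")
```

A correlation near one, as here, is the quantitative signature of the linear relationship both the simulations and the experiments exhibited.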
Sidewalk undermining studies : phase III, field and model studies.
DOT National Transportation Integrated Search
1979-01-01
The results of the early studies of the undermining problems are summarized in the initial portion of this report. Additionally, the design and use of a model sidewalk for testing procedures for preventing undermining are described. Based upon tests ...
2014-10-01
Research Program (CCCRP). Provided in this Year 2 Annual Report are the results of our Phase I studies focused on characterizing the neuropathologic...effects of a single concussive impact to repeated concussive impacts using the PCI model. Phase I studies have been completed and these results set...the foundation for Phase II studies designed to evaluate the effects of repeated concussions that occur prior to and after the resolution of the
Numerical simulation of terrain-induced mesoscale circulation in the Chiang Mai area, Thailand
NASA Astrophysics Data System (ADS)
Sathitkunarat, Surachai; Wongwises, Prungchan; Pan-Aram, Rudklao; Zhang, Meigen
2008-11-01
The regional atmospheric modeling system (RAMS) was applied to Chiang Mai province, a mountainous area in Thailand, to study terrain-induced mesoscale circulations. Eight cases in the wet and dry seasons under different weather conditions were analyzed to show the thermal and dynamic impacts on local circulations. This is the first application of RAMS in Thailand, and in particular the first to investigate the effect of mountainous terrain on simulated meteorological data. Analysis of the model results indicates that the model can reproduce the major features of the local circulation and the diurnal variations in temperature. To evaluate the model performance, model results were compared with observed wind speed, wind direction, and temperature monitored at a meteorological tower. The comparison shows that the modeled values are generally in good agreement with observations and that the model captured many of the observed features.
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background: When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest lies in predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods: Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. The external validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results: The model developed with random effect analysis showed better discrimination than the standard approach if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure used assumed the same data structure as the model development method: standard calibration measures showed good calibration for the model developed in the standard way, while calibration measures adapted to the clustered data structure showed good calibration for the prediction model with random intercept.
Conclusion: The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
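The discrimination comparison can be sketched on simulated clustered data. Here cluster dummy variables stand in for a true random intercept (a deliberate simplification of the paper's random effect model), and the c-index is computed as the ROC AUC:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_clusters, per_cluster = 19, 100
cluster = np.repeat(np.arange(n_clusters), per_cluster)
u = rng.normal(0, 1.0, n_clusters)            # cluster (e.g. anesthesiologist) effects
x = rng.normal(size=cluster.size)             # patient-level predictor
logit = 0.8 * x + u[cluster] - 0.5
y = rng.uniform(size=cluster.size) < 1 / (1 + np.exp(-logit))

# Standard model: patient-level predictor only, clusters ignored
m0 = LogisticRegression().fit(x.reshape(-1, 1), y)
auc0 = roc_auc_score(y, m0.predict_proba(x.reshape(-1, 1))[:, 1])

# Cluster-aware model: cluster dummies stand in for a random intercept
X1 = np.column_stack([x, np.eye(n_clusters)[cluster]])
m1 = LogisticRegression(max_iter=1000).fit(X1, y)
auc1 = roc_auc_score(y, m1.predict_proba(X1)[:, 1])
print(f"c-index ignoring clusters: {auc0:.2f}, using cluster effects: {auc1:.2f}")
```

As in the paper, discrimination improves only when the cluster effects are actually used for prediction; predicting new clusters would remove that advantage.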
Presenting the Students’ Academic Achievement Causal Model based on Goal Orientation
NASIRI, EBRAHIM; POUR-SAFAR, ALI; TAHERI, MAHDOKHT; SEDIGHI PASHAKY, ABDULLAH; ASADI LOUYEH, ATAOLLAH
2017-01-01
Introduction: Several factors play a role in academic achievement, that is, in an individual's excellence and capability to perform the actions and tasks that the learner is responsible for in learning areas. The main goal of this study was to present an academic achievement causal model based on the dimensions of goal orientation and learning approaches among students of Medicine and Dentistry at Guilan University of Medical Sciences in 2013. Methods: This study is based on a cross-sectional design. The participants included 175 first- and second-year students of the Medical and Dentistry schools at Guilan University of Medical Sciences, selected by random cluster sampling [121 (69.1%) Medical Basic Science students and 54 (30.9%) Dentistry students]. The measurement tools included the Goal Orientation Scale of Bouffard, the Study Process Questionnaire of Biggs, and the students' Grade Point Average. The study data were analyzed using the Pearson correlation coefficient and structural equation modeling, with SPSS 14 and Amos. Results: The results indicated a significant relationship between goal orientation and learning strategies (P<0.05). In addition, they revealed significant relationships between learning strategies [Deep Learning (r=0.37, P<0.05), Surface Learning (r=-0.21, P<0.05)] and academic achievement. The suggested research model fits the data. Conclusion: The results showed that the students' academic achievement model fits the empirical data, so it can be used in learning principles which lead to students' achievement in learning. PMID:28979914
Retrieving Ice Basal Motion Using the Hydrologically Coupled JPL/UCI Ice Sheet System Model (ISSM)
NASA Astrophysics Data System (ADS)
Khakbaz, B.; Morlighem, M.; Seroussi, H. L.; Larour, E. Y.
2011-12-01
The study of basal sliding in ice sheets requires coupling ice-flow models with subglacial water flow. In fact, subglacial hydrology models can be used to model basal water pressure explicitly and to generate basal sliding velocities. This study addresses the addition of a thin-film-based subglacial hydrologic module to the Ice Sheet System Model (ISSM) developed by JPL in collaboration with the University of California, Irvine (UCI). The subglacial hydrology model follows the study of J. Johnson (2002), who assumed a non-arborescent distributed drainage system in the form of a thin film beneath ice sheets. The differential equation that arises from conservation of mass in the water system is solved numerically with the finite element method in order to obtain the spatial distribution of basal water over the study domain. The resulting sheet water thickness is then used to model the basal water pressure and subsequently the basal sliding velocity. In this study, an introduction and preliminary results of the subglacial water flow and basal sliding velocity will be presented for Pine Island Glacier, West Antarctica. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Modeling, Analysis and Prediction (MAP) Program.
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aiming to improve the accuracy of traditional models' predictions. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques against traditional models, stream-power-based and shear-stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also revealed that the k-fold test is a practical, though computationally costly, technique for completely scanning the applied data and avoiding over-fitting.
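The k-fold test credited above with avoiding over-fitting partitions the data so that every record serves exactly once as validation. A minimal sketch follows; the fold count, seed, and 20-sample size are illustrative, not the study's sediment records.

```python
import random

def k_fold_splits(n_samples, k=5, seed=42):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        test = folds[i]
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, test

# Every sample appears in exactly one test fold.
seen = sorted(j for _, test in k_fold_splits(20, k=5) for j in test)
assert seen == list(range(20))
```

Shuffling before folding matters when the data are ordered (e.g., a sediment time series), otherwise each fold would sample only one regime.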
NASA Astrophysics Data System (ADS)
Al-Abadi, Alaa M.
2017-05-01
In recent years, delineation of groundwater productivity zones has played an increasingly important role in the sustainable management of groundwater resources throughout the world. In this study, a groundwater productivity index (GWPI) of northeastern Wasit Governorate was delineated using probabilistic frequency ratio (FR) and Shannon's entropy models in a GIS framework. Eight factors believed to influence groundwater occurrence in the study area were selected and used as input data: elevation (m), slope angle (degrees), geology, soil, aquifer transmissivity (m2/d), storativity (dimensionless), distance to river (m), and distance to faults (m). In the first step, a borehole location inventory map consisting of 68 boreholes with relatively high yield (>8 l/s) was prepared; 47 boreholes (70%) were used as training data and the remaining 21 (30%) for validation. The predictive capability of each model was determined using the relative operating characteristic technique. The results of the analysis indicate that the FR model, with a success rate of 87.4% and a prediction rate of 86.9%, performed slightly better than Shannon's entropy model, with a success rate of 84.4% and a prediction rate of 82.4%. The resultant groundwater productivity index was classified into five classes using the natural break classification scheme: very low, low, moderate, high, and very high. The high and very high classes for the FR and Shannon's entropy models occupied 30% (217 km2) and 31% (220 km2) of the area, respectively, indicating low productivity conditions of the aquifer system. Both models were capable of prospecting the GWPI with very good results, but FR was better in terms of success and prediction rates. The results of this study could be helpful for better management of groundwater resources in the study area and give planners and decision makers an opportunity to prepare appropriate groundwater investment plans.
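The frequency ratio statistic underlying the FR model is the share of boreholes falling in a factor class divided by that class's share of the study area; FR > 1 marks a class favourable for high-yield boreholes. A hedged sketch with invented slope-angle classes (not the paper's actual counts):

```python
def frequency_ratio(borehole_counts, area_counts):
    """Frequency ratio per class:
    (fraction of boreholes in class) / (fraction of study area in class)."""
    total_b = sum(borehole_counts.values())
    total_a = sum(area_counts.values())
    return {c: (borehole_counts.get(c, 0) / total_b) / (area_counts[c] / total_a)
            for c in area_counts}

# Hypothetical slope-angle classes: 47 training boreholes over 1000 map cells.
fr = frequency_ratio({"gentle": 30, "moderate": 15, "steep": 2},
                     {"gentle": 400, "moderate": 400, "steep": 200})
assert fr["gentle"] > 1 > fr["steep"]  # gentle slopes favour productive boreholes
```

Summing the per-class FR values of all eight factors at each map cell would give the kind of productivity index the abstract describes.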
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbante, Paolo; Frezzotti, Aldo; Gibelli, Livio
The unsteady evaporation of a thin planar liquid film is studied by molecular dynamics simulations of a Lennard-Jones fluid. The obtained results are compared with the predictions of a diffuse interface model in which capillary Korteweg contributions are added to the hydrodynamic equations, in order to obtain a unified description of the liquid bulk, the liquid-vapor interface, and the vapor region. Particular care has been taken in constructing a diffuse interface model matching the thermodynamic and transport properties of the Lennard-Jones fluid. The comparison of diffuse interface model and molecular dynamics results shows that, although good agreement is obtained in equilibrium conditions, remarkable deviations of diffuse interface model predictions from the reference molecular dynamics results are observed in the simulation of liquid film evaporation. It is also observed that the molecular dynamics results are in good agreement with preliminary results obtained from a composite model which describes the liquid film by a standard hydrodynamic model and the vapor by the Boltzmann equation. The two mathematical models are connected by kinetic boundary conditions assuming a unit evaporation coefficient.
Zhang, Jiafeng; Zhang, Pei; Fraser, Katharine H.; Griffith, Bartley P.; Wu, Zhongjun J.
2012-01-01
With the recent advances in computer technology, computational fluid dynamics (CFD) has become an important tool for designing and improving blood-contacting artificial organs and for studying device-induced blood damage. Commercial CFD software packages are readily available, and multiple CFD models are provided by CFD software developers. However, the best approach for using CFD effectively to characterize fluid flow and to predict blood damage in these medical devices remains debatable. This study aimed to compare these CFD models and provide useful information on the accuracy of each model in modeling blood flow in circulatory assist devices. The laminar model and five turbulence models (Spalart-Allmaras, k-ε (k-epsilon), k-ω (k-omega), SST (Menter's Shear Stress Transport), and Reynolds Stress) were implemented to predict blood flow in a clinically used circulatory assist device, the CentriMag® centrifugal blood pump (Thoratec, MA). In parallel, a transparent replica of the CentriMag® pump was constructed and selected views of the flow fields were measured with digital particle image velocimetry (DPIV). CFD results were compared with the DPIV experimental results. Compared with the experiment, all the selected CFD models predicted the flow pattern fairly well except in the area of the outlet. Quantitatively, however, the laminar model results deviated most from the experimental data, while the k-ε RNG and Reynolds Stress models were the most accurate. In conclusion, for circulatory assist devices, turbulence models provide more accurate results than the laminar model. Among the selected turbulence models, the k-ε and Reynolds Stress models are recommended. PMID:23441681
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
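The double-seasonal idea of exploiting daily and weekly autocorrelations can be sketched, in heavily simplified form, as a regression on the lag-24 and lag-168 hourly values. The synthetic demand series below stands in for the case-study data, and the actual study uses a full double-seasonal ARIMA with adaptive parameters rather than this two-lag fit.

```python
import numpy as np

def fit_double_seasonal_ar(x, s1=24, s2=168):
    """Least-squares fit of x_t ~ c + a*x_{t-s1} + b*x_{t-s2}: a minimal
    stand-in for a double-seasonal (daily/weekly) time series model."""
    t = np.arange(s2, len(x))
    X = np.column_stack([np.ones(len(t)), x[t - s1], x[t - s2]])
    coef, *_ = np.linalg.lstsq(X, x[t], rcond=None)
    return coef  # [c, a, b]

def forecast_next(x, coef, s1=24, s2=168):
    c, a, b = coef
    return c + a * x[-s1] + b * x[-s2]

# Synthetic hourly demand with exact daily and weekly cycles (illustrative only).
h = np.arange(24 * 7 * 8)
x = 100 + 10 * np.sin(2 * np.pi * h / 24) + 5 * np.sin(2 * np.pi * h / 168)
coef = fit_double_seasonal_ar(x)
next_hour = forecast_next(x, coef)  # ≈ 100.0: both cycles cross zero at t = len(x)
```

Because both seasonal periods divide 168, the fit recovers essentially b ≈ 1 here; real demand data would also need the noise and adaptive-update machinery the abstract describes.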
Engineering Risk Assessment of Space Thruster Challenge Problem
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Mattenberger, Christopher J.; Go, Susie
2014-01-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center utilizes dynamic models with linked physics-of-failure analyses to produce quantitative risk assessments of space exploration missions. This paper applies the ERA approach to the baseline and extended versions of the PSAM Space Thruster Challenge Problem, which investigates mission risk for a deep space ion propulsion system with time-varying thruster requirements and operations schedules. The dynamic mission is modeled using a combination of discrete and continuous-time reliability elements within the commercially available GoldSim software. Loss-of-mission (LOM) probability results are generated via Monte Carlo sampling performed by the integrated model. Model convergence studies are presented to illustrate the sensitivity of integrated LOM results to the number of Monte Carlo trials. A deterministic risk model was also built for the three baseline and extended missions using the Ames Reliability Tool (ART), and results are compared to the simulation results to evaluate the relative importance of mission dynamics. The ART model did a reasonable job of matching the simulation models for the baseline case, while a hybrid approach using offline dynamic models was required for the extended missions. This study highlighted that state-of-the-art techniques can adequately adapt to a range of dynamic problems.
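The Monte Carlo convergence question studied above can be illustrated with a toy mission of independent thruster burns; the per-burn failure probability and burn count are invented for illustration and are unrelated to the GoldSim or ART models.

```python
import random

def estimate_lom(p_fail_per_burn, n_burns, n_trials, seed=1):
    """Monte Carlo estimate of loss-of-mission (LOM) probability for a
    mission of independent thruster burns (a toy stand-in for the
    simulation models discussed in the abstract)."""
    rng = random.Random(seed)
    losses = sum(
        any(rng.random() < p_fail_per_burn for _ in range(n_burns))
        for _ in range(n_trials)
    )
    return losses / n_trials

# For independent burns the analytic LOM is 1 - (1 - p)^n, so the
# Monte Carlo estimate can be checked directly.
p, n = 0.001, 50
exact = 1 - (1 - p) ** n   # ~0.0488
est = estimate_lom(p, n, n_trials=20_000)
assert abs(est - exact) < 0.01  # shrinks as the trial count grows
```

Repeating the estimate at increasing `n_trials` reproduces the kind of convergence study the paper uses to size its Monte Carlo runs.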
Ontogenetic loss of phenotypic plasticity of age at metamorphosis in tadpoles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, F.R.
1993-12-01
Amphibian larvae exhibit phenotypic plasticity in size at metamorphosis and duration of the larval period. I used Pseudacris crucifer tadpoles to test two models for predicting tadpole age and size at metamorphosis under changing environmental conditions. The Wilbur-Collins model states that metamorphosis is initiated as a function of a tadpole's size and relative growth rate, and predicts that changes in growth rate throughout the larval period affect age and size at metamorphosis. An alternative model, the fixed-rate model, states that age at metamorphosis is fixed early in larval life, and subsequent changes in growth rate will have no effect on the length of the larval period. My results confirm that food supplies affect both age and size at metamorphosis, but developmental rates became fixed at approximately Gosner (1960) stages 35-37. Neither model completely predicted these results. I suggest that the generally accepted Wilbur-Collins model is improved by incorporating a point of fixed developmental timing. Growth trajectories predicted from this modified model fit the results of this study better than trajectories based on either of the original models. The results of this study suggest a constraint that limits the simultaneous optimization of age and size at metamorphosis. 32 refs., 5 figs., 1 tab.
Zarrinabadi, Zarrin; Isfandyari-Moghaddam, Alireza; Erfani, Nasrolah; Tahour Soltani, Mohsen Ahmadi
2018-01-01
According to the research mission of the librarianship and information sciences field, students need the ability to connect information users with information constructively, and this is even more important in medical librarianship and information sciences because clinicians require quick access to information. The role of spiritual intelligence in the capability to establish effective and balanced communication makes it important to study this variable in librarianship and information students. One of the main factors that can affect the results of any research is the conceptual model used to measure its variables. Accordingly, the purpose of this study was the codification of a spiritual intelligence measurement model. This correlational study was conducted through structural equation modeling, and 270 students were selected from the library and medical information students of nationwide medical universities by simple random sampling; they responded to King's spiritual intelligence questionnaire (2008). Initially, based on the data, the model parameters were estimated using the maximum likelihood method; then, the spiritual intelligence measurement model was tested by fit indices. Data analysis was performed with Smart-Partial Least Squares software. Preliminary results showed that, given the positive indicators of predictive association and the t-test results for the spiritual intelligence parameters, the King measurement model has acceptable fit, and the internal correlation of the questionnaire items was significant. The composite reliability and Cronbach's alpha of the parameters indicated high reliability of the spiritual intelligence model. The spiritual intelligence measurement model was evaluated, and the results showed that the model has a good fit, so it is recommended that domestic researchers use this questionnaire to assess spiritual intelligence.
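Cronbach's alpha, used above to gauge the questionnaire's reliability, is computable directly from item-score columns as k/(k-1) · (1 − Σ item variances / variance of total scores). The respondent scores below are hypothetical, not the study's data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item,
    one entry per respondent)."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical 4-item questionnaire, 5 respondents, 5-point scale.
items = [[3, 4, 5, 2, 4],
         [3, 5, 5, 2, 3],
         [4, 4, 5, 1, 4],
         [2, 4, 4, 2, 3]]
alpha = cronbach_alpha(items)  # ~0.94: items move together across respondents
assert 0 < alpha <= 1
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the sense in which the abstract reports "high reliability".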
Hansen, Maj; Armour, Cherie; Elklit, Ask
2012-01-01
Background Since the introduction of Acute Stress Disorder (ASD) into the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), research has focused on the ability of ASD to predict PTSD rather than on addressing ASD's underlying latent structure. The few existing confirmatory factor analytic (CFA) studies of ASD have failed to reach a clear consensus regarding ASD's underlying dimensionality. Although the discrepancy in the results may be due to varying ASD prevalence rates, it remains possible that the model capturing the latent structure of ASD has not yet been put forward. One such model may be a replication of a new five-factor model of PTSD, which separates the arousal symptom cluster into Dysphoric and Anxious Arousal. Given the pending DSM-5, uncovering ASD's latent structure is more pertinent than ever. Objective Using CFA, four different models of the latent structure of ASD were specified and tested: the proposed DSM-5 model, the DSM-IV model, a three-factor model, and a five-factor model separating the arousal symptom cluster. Method The analyses were based on a combined sample of rape and bank robbery victims, who all met the diagnostic criteria for ASD (N = 404) using the Acute Stress Disorder Scale. Results The results showed that the five-factor model provided the best fit to the data. Conclusions The results of the present study suggest that the dimensionality of ASD may be best characterized by a five-factor structure which separates dysphoric and anxious arousal items into two separate factors, akin to recent research on PTSD's latent structure. Thus, the current study adds to the debate about how ASD should be conceptualized in the pending DSM-5. PMID:22893845
Climatic impact of Amazon deforestation - a mechanistic model study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning Zeng; Dickinson, R.E.; Xubin Zeng
1996-04-01
Recent general circulation model (GCM) experiments suggest a drastic change in the regional climate, especially the hydrological cycle, after hypothesized Amazon basinwide deforestation. To facilitate the theoretical understanding of such a change, we develop an intermediate-level model for tropical climatology, including atmosphere-land-ocean interaction. The model consists of linearized steady-state primitive equations with simplified thermodynamics. A simple hydrological cycle is also included. Special attention has been paid to land-surface processes. The model generally simulates tropical climatology and the ENSO anomaly better than many previous simple models. The climatic impact of Amazon deforestation is studied in the context of this model. Model results show a much weakened Atlantic Walker-Hadley circulation as a result of a strong positive feedback loop between the atmospheric circulation system and the hydrological cycle. The regional climate is highly sensitive to albedo change and sensitive to evapotranspiration change. The pure dynamical effect of surface roughness length on convergence is small, but the surface flow anomaly displays intriguing features. Analysis of the thermodynamic equation reveals that the balance between convective heating, adiabatic cooling, and radiation largely determines the deforestation response. Studies of the consequences of hypothetical continuous deforestation suggest that the replacement of forest by desert may be able to sustain a dry climate. Scaling analysis motivated by our modeling efforts also helps to interpret the common results of many GCM simulations. When a simple mixed-layer ocean model is coupled with the atmospheric model, the results suggest a 1°C decrease in SST gradient across the equatorial Atlantic Ocean in response to Amazon deforestation. The magnitude depends on the coupling strength. 66 refs., 16 figs., 4 tabs.
Is Nursing a Viable Career for Blacks? (A Study of Black and White Freshman Nursing Students).
ERIC Educational Resources Information Center
Miller, Michael H.
It has been suggested that the underrepresentation of blacks in professional nursing results from insufficient black nurse role models. This study of 331 black and white freshman nursing students in three two-year associate degree programs argues that blacks are not professional nurses for reasons other than a lack of role models. The results show…
Predicting language diversity with complex networks
Gubiec, Tomasz
2018-01-01
We analyze a model of social interactions with coevolution of the topology and the states of the nodes. This model can be interpreted as a model of language change. We propose different rewiring mechanisms and perform numerical simulations for each. The obtained results are compared with empirical data gathered from two online databases and an anthropological study of the Solomon Islands. We study the behavior of the number of languages for different system sizes and find that only local rewiring, i.e. triadic closure, is capable of reproducing the empirical data in a qualitative manner. Furthermore, we resolve the contradiction between previous models and the Solomon Islands case. Our results demonstrate the importance of the topology of the network and the rewiring mechanism in the process of language change. PMID:29702699
Impact of Measurement Uncertainties on Receptor Modeling of Speciated Atmospheric Mercury.
Cheng, I; Zhang, L; Xu, X
2016-02-09
Gaseous oxidized mercury (GOM) and particle-bound mercury (PBM) measurement uncertainties could potentially affect the analysis and modeling of atmospheric mercury. This study investigated the impact of GOM measurement uncertainties on Principal Components Analysis (PCA), Absolute Principal Component Scores (APCS), and Concentration-Weighted Trajectory (CWT) receptor modeling results. The atmospheric mercury data input into these receptor models were modified by combining GOM and PBM into a single reactive mercury (RM) parameter and excluding low GOM measurements to improve the data quality. PCA and APCS results derived from RM or excluding low GOM measurements were similar to those in previous studies, except for a non-unique component and an additional component extracted from the RM dataset. The percent variance explained by the major components from a previous study differed slightly compared to RM and excluding low GOM measurements. CWT results were more sensitive to the input of RM than GOM excluding low measurements. Larger discrepancies were found between RM and GOM source regions than between RM and PBM. Depending on the season, CWT source regions of RM differed by 40-61% compared to GOM from a previous study. No improvement in correlations between CWT results and anthropogenic mercury emissions was found.
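The CWT receptor model assigns each grid cell the trajectory-residence-time-weighted mean of the concentrations measured when air arrived via that cell. A minimal sketch, with cells, residence times, and concentrations invented for illustration:

```python
from collections import defaultdict

def cwt(trajectories, concentrations):
    """Concentration-weighted trajectory field.
    `trajectories`: one {grid_cell: hours_of_residence} dict per sample;
    `concentrations`: the concentration measured for each sample."""
    num = defaultdict(float)
    den = defaultdict(float)
    for residence, c in zip(trajectories, concentrations):
        for cell, tau in residence.items():
            num[cell] += c * tau   # concentration weighted by residence time
            den[cell] += tau
    return {cell: num[cell] / den[cell] for cell in den}

# Two hypothetical back-trajectories with GOM concentrations of 10 and 30 pg/m3.
field = cwt([{(0, 0): 2.0, (0, 1): 1.0}, {(0, 0): 2.0}], [10.0, 30.0])
assert field[(0, 0)] == 20.0 and field[(0, 1)] == 10.0
```

Cells repeatedly crossed by trajectories tied to high measurements acquire high CWT values, which is why the field is sensitive to whether GOM, PBM, or combined RM concentrations are supplied, as the abstract reports.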
Monte Carlo modeling of a conventional X-ray computed tomography scanner for gel dosimetry purposes.
Hayati, Homa; Mesbahi, Asghar; Nazarpoor, Mahmood
2016-01-01
Our purpose in the current study was to model an X-ray CT scanner with the Monte Carlo (MC) method for gel dosimetry. In this study, a conventional CT scanner with one array detector was modeled with use of the MCNPX MC code. The MC calculated photon fluence in detector arrays was used for image reconstruction of a simple water phantom as well as polyacrylamide polymer gel (PAG) used for radiation therapy. Image reconstruction was performed with the filtered back-projection method with a Hann filter and the Spline interpolation method. Using MC results, we obtained the dose-response curve for images of irradiated gel at different absorbed doses. A spatial resolution of about 2 mm was found for our simulated MC model. The MC-based CT images of the PAG gel showed a reliable increase in the CT number with increasing absorbed dose for the studied gel. Also, our results showed that the current MC model of a CT scanner can be used for further studies on the parameters that influence the usability and reliability of results, such as the photon energy spectra and exposure techniques in X-ray CT gel dosimetry.
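The Hann-filtered step of the filtered back-projection reconstruction mentioned above can be sketched for a single detector-row projection; only the frequency-domain filtering is shown, with the back-projection sum over angles omitted.

```python
import numpy as np

def hann_ramp_filter(projection):
    """Hann-windowed ramp filter applied to one projection, the filtering
    step of filtered back-projection (a simplified sketch; interpolation
    and back-projection are not shown)."""
    n = len(projection)
    f = np.fft.fftfreq(n)  # digital frequency in cycles/sample, |f| <= 0.5
    # Ramp |f| tapered by a Hann window that falls to zero at Nyquist.
    window = np.abs(f) * 0.5 * (1 + np.cos(2 * np.pi * f))
    return np.real(np.fft.ifft(np.fft.fft(projection) * window))

# The ramp zeroes the DC term: a flat projection filters to (numerically) zero.
flat = hann_ramp_filter(np.ones(64))
assert np.allclose(flat, 0.0)
```

The Hann taper trades a little spatial resolution for noise suppression relative to a pure ramp filter, which matters for the low-contrast CT-number changes gel dosimetry relies on.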
NASA Astrophysics Data System (ADS)
Fischer, Bennet; Hopf, Barbara; Lindner, Markus; Koch, Alexander W.; Roths, Johannes
2017-04-01
A 3D FEM model of an FBG in a PANDA fiber with an extended fiber length of 25.4 mm is presented. Simulating long fiber lengths with limited computer power is achieved by using an iterative solver and by optimizing the FEM mesh. For verification purposes, the model is adapted to a configuration with transversal loads on the fiber. The 3D FEM model results correspond with experimental data and with the results of an additional 2D FEM plain strain model. In further studies, this 3D model shall be applied to more sophisticated situations, for example to study the temperature dependence of surface-glued or embedded FBGs in PANDA fibers that are used for strain-temperature decoupling.
Reusable Rocket Engine Operability Modeling and Analysis
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Komar, D. R.
1998-01-01
This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed given an engine concept, and the predicted operations and maintenance processes were incorporated into simulation models. Historical operations data at a level of detail suitable to the model objectives were collected, analyzed, and formatted for use with the models; the simulations were run; and the results were collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. Results reflect the importance not only of reliable hardware but also of operations and corrective-maintenance process improvements.
Modeling the Quiet Time Outflow Solution in the Polar Cap
NASA Technical Reports Server (NTRS)
Glocer, Alex
2011-01-01
We use the Polar Wind Outflow Model (PWOM) to study geomagnetically quiet conditions in the polar cap during solar maximum. The PWOM solves the gyrotropic transport equations for O(+), H(+), and He(+) along several magnetic field lines in the polar region in order to reconstruct the full 3D solution. We directly compare our simulation results to the data-based empirical model of electron density of Kitamura et al. [2011], which is based on 63 months of Akebono satellite observations. The modeled ion and electron temperatures are also compared with a statistical compilation of quiet time data obtained by the EISCAT Svalbard Radar (ESR) and Intercosmos satellites (Kitamura et al. [2011]). The data and model agree reasonably well. This study shows that photoelectrons play an important role in explaining the differences between sunlit and dark results, the ion composition, and the ion and electron temperatures of the quiet time polar wind solution. Moreover, these results provide validation of the PWOM's ability to model the quiet time "background" solution.
Experimental and numerical investigations of sedimentation of porous wastewater sludge flocs.
Hriberšek, M; Zajdela, B; Hribernik, A; Zadravec, M
2011-02-01
The paper studies the properties and sedimentation characteristics of sludge flocs as they appear in biological wastewater treatment (BWT) plants. The flocs are described as porous and permeable bodies, with their properties defined on the basis of the conducted experimental study. The derivation is based on established geometrical properties, high-speed camera data on settling velocities, and a non-linear numerical model linking settling velocity with the physical properties of porous flocs. The numerical model for the derivation is based on a generalized Stokes model, with the permeability of the floc described by the Brinkman model. As a result, a correlation for floc porosity is obtained as a function of floc diameter. These data are used in establishing a CFD numerical model of the sedimentation of flocs under test conditions, as recorded during the experimental investigation. The CFD model is based on an Euler-Lagrange formulation, where the Lagrange formulation is chosen for the computation of floc trajectories during sedimentation. The results of the numerical simulations are compared with experimental results and very good agreement is observed. © 2010 Elsevier Ltd. All rights reserved.
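The role of porosity in the settling model can be sketched by reducing the floc's effective density in the standard Stokes terminal-velocity formula. This ignores the Brinkman permeability correction the paper includes, and the material properties below (solid density, water viscosity, floc diameter) are illustrative assumptions, not the paper's measured values.

```python
def stokes_settling_velocity(d, porosity, rho_solid=1450.0, rho_fluid=998.0,
                             mu=1.0e-3, g=9.81):
    """Stokes terminal settling velocity (m/s) of a porous floc of diameter
    d (m), with effective floc density reduced by porosity. Simplified:
    the Brinkman permeability drag reduction is not modeled here."""
    rho_eff = rho_fluid + (1 - porosity) * (rho_solid - rho_fluid)
    return g * d**2 * (rho_eff - rho_fluid) / (18 * mu)

# Same 200-micron floc: the more porous floc carries less excess mass.
v_dense = stokes_settling_velocity(d=200e-6, porosity=0.80)  # ~2 mm/s
v_loose = stokes_settling_velocity(d=200e-6, porosity=0.95)
assert v_loose < v_dense
```

Inverting this relation, measured settling velocities from the high-speed camera yield porosity as a function of floc diameter, which is the correlation the abstract describes.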
NASA Astrophysics Data System (ADS)
Amalia, R.; Sari, I. M.; Sinaga, P.
2017-02-01
This research was motivated by previous studies that only identified students' misconceptions without examining the mechanism behind them. That mechanism can be studied more deeply through mental models. The purpose of this study was to find students' mental models of heat convection and their relation to students' conceptions of heat and temperature. The method used was an exploratory mixed-method design, implemented in one of the high schools in Bandung. The results showed that of the seven mental models of heat convection in Chiou's (2013) study, only the first model (diffusion-based convection), the third model (evenly distributed convection), and the fifth model (warmness-topped convection II) were found, along with a hybrid convection model as a new mental model. In addition, no specific relationship was found between mental models and categories of students' conceptions of heat and temperature.
Modeling of two-hot-arm horizontal thermal actuator
NASA Astrophysics Data System (ADS)
Yan, Dong; Khajepour, Amir; Mansour, Raafat
2003-03-01
Electrothermal actuators have a very promising future in MEMS applications since they can generate large deflection and force with low actuating voltages and small device areas. In this study, a lumped model of a two-hot-arm horizontal thermal actuator is presented. In order to prove the accuracy of the lumped model, finite element analysis (FEA) and experimental results are provided. The two-hot-arm thermal actuator has been fabricated using the MUMPs process. Both the experimental and FEA results are in good agreement with the results of lumped modeling.
Hansen, Trine Lund; Christensen, Thomas Højlund; Schmidt, Sonia
2006-04-01
Modelling of environmental impacts from the application of treated organic municipal solid waste (MSW) in agriculture differs widely between different models for environmental assessment of waste systems. In this comparative study five models were examined concerning quantification and impact assessment of environmental effects from land application of treated organic MSW: DST (Decision Support Tool, USA), IWM (Integrated Waste Management, U.K.), THE IFEU PROJECT (Germany), ORWARE (ORganic WAste REsearch, Sweden) and EASEWASTE (Environmental Assessment of Solid Waste Systems and Technologies, Denmark). DST and IWM are life cycle inventory (LCI) models, thus not performing actual impact assessment. The DST model includes only one water emission (biological oxygen demand) from compost leaching in the results and IWM considers only air emissions from avoided production of commercial fertilizers. THE IFEU PROJECT, ORWARE and EASEWASTE are life cycle assessment (LCA) models containing more detailed land application modules. A case study estimating the environmental impacts from land application of 1 ton of composted source sorted organic household waste was performed to compare the results from the different models and investigate the origin of any difference in type or magnitude of the results. The contributions from the LCI models were limited and did not depend on waste composition or local agricultural conditions. The three LCA models use the same overall approach for quantifying the impacts of the system. However, due to slightly different assumptions, quantification methods and environmental impact assessment, the obtained results varied clearly between the models. Furthermore, local conditions (e.g. soil type, farm type, climate and legal regulation) and waste composition strongly influenced the results of the environmental assessment.
Evaluation studies of the Regional Acid Deposition Model (RADM) have revealed a high bias in the model's surface SO2 and O3 concentrations, especially during nighttime hours. Comparison of the RADM results with surface measurements of hourly ozone concentr...
The Influence of Atmosphere-Ocean Interaction on MJO Development and Propagation
2014-09-30
evaluate modeling results and process studies. The field phase of this project is associated with DYNAMO, which is the US contribution to the...influence on ocean temperature 4. Extended run for DYNAMO with high vertical resolution NCOM RESULTS Summary of project results The work funded...model experiments of the November 2011 MJO – the strongest MJO episode observed during DYNAMO. The previous conceptual model that was based on TOGA
Body image concerns in professional fashion models: are they really an at-risk group?
Swami, Viren; Szmigielska, Emilia
2013-05-15
Although professional models are thought to be a high-risk group for body image concerns, only a handful of studies have empirically investigated this possibility. The present study sought to overcome this dearth of information by comparing professional models and a matched sample on key indices of body image and appearance-related concerns. A group of 52 professional fashion models was compared with a matched sample of 51 non-models from London, England, on indices of weight discrepancy, body appreciation, social physique anxiety, body dissatisfaction, drive for thinness, internalization of sociocultural messages about appearance, and dysfunctional investment in appearance. Results indicated that professional models evidenced significantly higher scores than the control group only on drive for thinness and dysfunctional investment in appearance. Greater duration of engagement as a professional model was associated with more positive body appreciation but also greater drive for thinness. These results indicate that models, who are already underweight, have a strong desire to maintain their low body mass or become thinner. Taken together, the present results suggest that interventions aimed at promoting healthy body image among fashion models may require different strategies than those aimed at the general population. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Liu, Dong-jun; Li, Li
2015-01-01
For the issue of haze-fog, PM2.5 is the main influence factor of haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) model and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods based on weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China, was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviation of each single prediction method and had better applicability. It offers a new prediction method for the air quality forecasting field. PMID:26110332
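The entropy-weighting step described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes the weights are derived from each model's absolute forecast errors (one common formulation of the Entropy Weighting Method), and all function names are hypothetical.

```python
import math

def entropy_weights(errors):
    """Entropy Weighting Method: `errors` is a list of per-model lists of
    absolute forecast errors over the same periods. Each model's weight is
    proportional to its divergence d_j = 1 - e_j, where e_j is the
    normalized Shannon entropy of its error series."""
    n = len(errors[0])
    divergences = []
    for errs in errors:
        total = sum(errs)
        p = [e / total for e in errs]
        # Shannon entropy normalized to [0, 1] by dividing by ln(n)
        ent = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        divergences.append(1.0 - ent)
    s = sum(divergences)
    return [d / s for d in divergences]

def combine(forecasts, weights):
    """Weighted average of the individual model forecasts."""
    return [sum(w * f[i] for w, f in zip(weights, forecasts))
            for i in range(len(forecasts[0]))]
```

A model whose error series is nearly uniform has entropy close to 1 and therefore receives almost no weight in the combination.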
NASA Astrophysics Data System (ADS)
Kumar, Arvind; Walker, Mike J.; Sundarraj, Suresh; Dutta, Pradip
2011-08-01
In this article, a single-phase, one-domain macroscopic model is developed for studying binary alloy solidification with moving equiaxed solid phase, along with the associated transport phenomena. In this model, issues such as thermosolutal convection, motion of solid phase relative to liquid and viscosity variations of the solid-liquid mixture with solid fraction in the mobile zone are taken into account. Using the model, the associated transport phenomena during solidification of Al-Cu alloys in a rectangular cavity are predicted. The results for temperature variation, segregation patterns, and eutectic fraction distribution are compared with data from in-house experiments. The model predictions compare well with the experimental results. To highlight the influence of solid phase movement on convection and final macrosegregation, the results of the current model are also compared with those obtained from the conventional solidification model with stationary solid phase. By including the independent movement of the solid phase into the fluid transport model, better predictions of macrosegregation, microstructure, and even shrinkage locations were obtained. Mechanical property prediction models based on microstructure will benefit from the improved accuracy of this model.
Modeling effect of cover condition and soil type on rotavirus transport in surface flow.
Bhattarai, Rabin; Davidson, Paul C; Kalita, Prasanta K; Kuhlenschmidt, Mark S
2017-08-01
Runoff from animal production facilities contains various microbial pathogens which pose a health hazard to both humans and animals. Rotavirus is a frequently detected pathogen in agricultural runoff and the leading cause of death among children around the world. Diarrheal infection caused by rotavirus results in more than two million hospitalizations and the deaths of more than 500,000 children every year. Very little information is available on the environmental factors governing rotavirus transport in surface runoff. The objective of this study is to model rotavirus transport in overland flow and to compare the model results with experimental observations. A physically based model, which incorporates the transport of infective rotavirus particles in both the liquid phase (suspension or free-floating) and the solid phase (adsorbed to soil particles), has been used in this study. Comparison of the model results with experimental results showed that the model could reproduce the recovery kinetics satisfactorily but under-predicted the virus recovery in a few cases when multiple peaks were observed during experiments. Similarly, the calibrated model had good agreement between observed and modeled total virus recovery. The model may prove to be a promising tool for developing effective management practices for controlling microbial pathogens in surface runoff.
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Shengzhi; Ming, Bo; Huang, Qiang
It is critically meaningful to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results through combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
Runoff forecasting using a Takagi-Sugeno neuro-fuzzy model with online learning
NASA Astrophysics Data System (ADS)
Talei, Amin; Chua, Lloyd Hock Chye; Quek, Chai; Jansson, Per-Erik
2013-04-01
A study using a local-learning Neuro-Fuzzy System (NFS) was undertaken for a rainfall-runoff modeling application. The local learning model was first tested on three different catchments: an outdoor experimental catchment measuring 25 m2 (Catchment 1), a small urban catchment 5.6 km2 in size (Catchment 2), and a large rural watershed with an area of 241.3 km2 (Catchment 3). The results obtained from the local learning model were comparable to or better than results obtained from physically-based models, i.e., the Kinematic Wave Model (KWM), the Storm Water Management Model (SWMM), and the Hydrologiska Byråns Vattenbalansavdelning (HBV) model. The local learning algorithm also required a shorter training time compared to a global-learning NFS model. The local learning model was next tested in real-time mode, where the model was continuously adapted when presented with current information in real time. The real-time implementation of the local learning model gave better results, without the need for retraining, when compared to a batch NFS model, where it was found that the batch model had to be retrained periodically in order to achieve similar results.
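The Takagi-Sugeno inference at the core of such a neuro-fuzzy system can be sketched as follows. This is a minimal single-input illustration with Gaussian memberships and linear consequents, not the paper's actual model; the rule parameters are hypothetical and the online-learning component is omitted.

```python
import math

def ts_fuzzy(x, rules):
    """First-order Takagi-Sugeno inference: each rule is
    ((center, sigma), (a, b)) with a Gaussian membership function on the
    input and a linear consequent y = a*x + b. The output is the
    firing-strength-weighted average of the rule consequents."""
    num = den = 0.0
    for (c, s), (a, b) in rules:
        w = math.exp(-0.5 * ((x - c) / s) ** 2)  # firing strength of the rule
        num += w * (a * x + b)
        den += w
    return num / den
```

With one rule centered at 0 outputting 0 and one centered at 10 outputting 1, the inference interpolates smoothly between the two consequents as the input moves across the domain.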
Yun, Jian; Shang, Song-Chao; Wei, Xiao-Dan; Liu, Shuang; Li, Zhi-Jie
2016-01-01
Language is characterized by both ecological properties and social properties, and competition is the basic form of language evolution. The rise and decline of one language is a result of competition between languages. Moreover, this rise and decline directly influences the diversity of human culture. Mathematical and computer modeling of language competition has been a popular topic in the fields of linguistics, mathematics, computer science, ecology, and other disciplines. Currently, there are several problems in research on language competition modeling. First, comprehensive mathematical analysis is absent from most studies of language competition models. Next, most language competition models are based on the assumption that one language in the model is stronger than the other; these studies tend to ignore cases where there is a balance of power in the competition. Competition between two well-matched languages is more practical, because it can facilitate the co-development of the two languages. A third issue with current studies is that many reach an evolution result in which the weaker language inevitably goes extinct. From the integrated point of view of ecology and sociology, this paper improves the Lotka-Volterra model and basic reaction-diffusion model to propose an "ecology-society" computational model for describing language competition. Furthermore, a strict and comprehensive mathematical analysis was made of the stability of the equilibria. Two languages in competition may be either well-matched or greatly different in strength, which was reflected in the experimental design. The results revealed that language coexistence, and even co-development, are likely to occur during language competition.
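The coexistence outcome for well-matched languages can be illustrated with a minimal two-language simulation. This sketch is a plain forward-Euler integration of the classical (non-spatial) Lotka-Volterra competition equations, not the paper's full "ecology-society" reaction-diffusion model; all parameter values are illustrative.

```python
def simulate_competition(u0, v0, r1=1.0, r2=1.0, k1=1.0, k2=1.0,
                         a12=0.5, a21=0.5, dt=0.01, steps=20000):
    """Forward-Euler integration of two-species Lotka-Volterra competition,
    with u and v read as the speaker fractions of two languages.
    When a12 * a21 < 1 (weak interspecific competition, i.e. well-matched
    languages), the interior coexistence equilibrium is stable."""
    u, v = u0, v0
    for _ in range(steps):
        du = r1 * u * (1.0 - (u + a12 * v) / k1)
        dv = r2 * v * (1.0 - (v + a21 * u) / k2)
        u += dt * du
        v += dt * dv
    return u, v
```

With the symmetric defaults above, both trajectories converge to the interior equilibrium u* = v* = 2/3: stable coexistence rather than extinction of the weaker language.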
NASA Astrophysics Data System (ADS)
Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.
2015-05-01
The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow for uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends, and we rigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
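The core idea, propagating end-member uncertainty into the source-fraction estimate, can be sketched for the simplest case of two sources and one tracer. This is an illustrative Monte Carlo sketch rather than the published model (a full Bayesian treatment with explicit priors and multiple tracers would be more involved), and all parameter values are hypothetical.

```python
import random

def mc_mixing(mix, src1, src2, n=20000, seed=0):
    """Two-end-member mixing with uncertain end-members. `src1` and `src2`
    are (mean, std) of each source's tracer value; `mix` is the measured
    mixture value. Each draw samples the end-members, solves the linear
    mixing equation for the source-1 fraction f, and keeps only physically
    admissible mixtures (0 <= f <= 1). Returns the mean and std of f."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(n):
        s1 = rng.gauss(*src1)
        s2 = rng.gauss(*src2)
        if s1 == s2:
            continue  # degenerate draw: end-members indistinguishable
        f = (mix - s2) / (s1 - s2)
        if 0.0 <= f <= 1.0:
            fractions.append(f)
    mean = sum(fractions) / len(fractions)
    var = sum((f - mean) ** 2 for f in fractions) / len(fractions)
    return mean, var ** 0.5
```

The returned standard deviation is exactly the quantity the abstract highlights: the error in the fraction estimate induced by variability in the end-member compositions.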
An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit.
Wong, Rowena Syn Yin; Ismail, Noor Azina
2016-01-01
There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe the application of a Bayesian approach to modeling in-ICU deaths in a Malaysian ICU. This was a prospective study in a mixed medical-surgical ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov Chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum likelihood method. The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under receiver operating characteristic curve (AUC) values of approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models except model M3. Model M1 was identified as the model with the best overall performance in this study. Four prediction models were proposed, and the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes.
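As a sketch of the Bayesian MCMC approach described above, the following implements a random-walk Metropolis sampler for a one-predictor logistic regression with flat priors. It is a toy stand-in for the multivariate APACHE-style models in the study; the data, step size, and function names are all illustrative.

```python
import math
import random

def metropolis_logistic(x, y, n_iter=5000, step=0.3, seed=1):
    """Random-walk Metropolis sampler for logistic regression
    P(y=1) = logistic(b0 + b1*x) with flat priors.
    Returns the chain of (b0, b1) posterior draws."""
    rng = random.Random(seed)

    def log_lik(b0, b1):
        ll = 0.0
        for xi, yi in zip(x, y):
            eta = b0 + b1 * xi
            # Bernoulli log-likelihood with a logit link
            ll += yi * eta - math.log(1.0 + math.exp(eta))
        return ll

    b0, b1 = 0.0, 0.0
    ll = log_lik(b0, b1)
    draws = []
    for _ in range(n_iter):
        c0 = b0 + rng.gauss(0.0, step)   # symmetric proposal
        c1 = b1 + rng.gauss(0.0, step)
        ll_new = log_lik(c0, c1)
        # Metropolis accept/reject on the likelihood ratio (flat prior)
        if rng.random() < math.exp(min(0.0, ll_new - ll)):
            b0, b1, ll = c0, c1, ll_new
        draws.append((b0, b1))
    return draws
```

Posterior summaries (means, credible intervals) are then computed from the second half of the chain, discarding the burn-in portion.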
Xie, Hualin; Liu, Zhifei; Wang, Peng; Liu, Guiying; Lu, Fucai
2013-01-01
Ecological land is one of the key resources and conditions for the survival of humans because it provides ecosystem services and is particularly important to public health and safety. Exploring the evolution mechanisms of ecological land is extremely valuable for effective ecological management. Based on spatial statistical analyses, we explored the spatial disparities and primary potential drivers of ecological land change in the Poyang Lake Eco-economic Zone of China. The results demonstrated that the global Moran’s I value was 0.1646 during the 1990 to 2005 time period, indicating significant positive spatial correlation (p < 0.05). The results also imply that the clustering trend of ecological land changes weakened in the study area. Some potential driving forces were identified by applying the spatial autoregressive model in this study. The results demonstrated that a higher economic development level and industrialization rate were the main drivers of faster change of ecological land in the study area. This study also tested the superiority of the spatial autoregressive model for studying the mechanisms of ecological land change by comparing it with the traditional linear regression model. PMID:24384778
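The global Moran's I statistic reported above can be computed as follows. This is a minimal sketch with a user-supplied spatial weights matrix; row standardization and permutation-based significance testing are omitted.

```python
def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation.
    `values` are the observations for n spatial units; `weights[i][j]` is
    the spatial weight between units i and j (with weights[i][i] == 0).
    I > 0 indicates clustering of similar values, I < 0 dispersion."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    return (n / w_sum) * (num / den)
```

On a chain of four units with binary adjacency weights, a smooth gradient of values yields a positive I (clustering), while an alternating pattern yields a strongly negative I (dispersion).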
NASA Technical Reports Server (NTRS)
Dawson, Kenneth S.; Fortin, Paul E.
1987-01-01
The results of an integrated study of structures, aerodynamics, and controls using the STARS program on two advanced airplane configurations are presented. Results for the X-29A include finite element modeling, free vibration analyses, unsteady aerodynamic calculations, flutter/divergence analyses, and an aeroservoelastic controls analysis. Good correlation is shown between STARS results and various other verified results. The tasks performed on the Oblique Wing Research Aircraft include finite element modeling and free vibration analyses.
Large-Signal Lyapunov-Based Stability Analysis of DC/AC Inverters and Inverter-Based Microgrids
NASA Astrophysics Data System (ADS)
Kabalan, Mahmoud
Microgrid stability studies have been largely based on small-signal linearization techniques. However, the validity and magnitude of the linearization domain is limited to small perturbations. Thus, there is a need to examine microgrids with large-signal nonlinear techniques to fully understand and examine their stability. Large-signal stability analysis can be accomplished by Lyapunov-based mathematical methods. These Lyapunov methods estimate the domain of asymptotic stability of the studied system. A survey of Lyapunov-based large-signal stability studies showed that few large-signal studies have been completed on either individual systems (dc/ac inverters, dc/dc rectifiers, etc.) or microgrids. The research presented in this thesis addresses the large-signal stability of droop-controlled dc/ac inverters and inverter-based microgrids. Dc/ac power electronic inverters are what make microgrids technically feasible. Thus, as a prelude to examining the stability of microgrids, the research presented in Chapter 3 analyzes the stability of inverters. First, the 13th-order large-signal nonlinear model of a droop-controlled dc/ac inverter connected to an infinite bus is presented. The singular perturbation method is used to decompose the nonlinear model into 11th-, 9th-, 7th-, 5th-, 3rd- and 1st-order models. Each model ignores certain control or structural components of the full-order model. The aim of the study is to understand the accuracy and validity of the reduced-order models in replicating the performance of the full-order nonlinear model. The performance of each model is studied in three different areas: time domain simulations, Lyapunov's indirect method and domain of attraction estimation. The work aims to present the best model to use in each of the three domains of study.
Results show that certain reduced-order models are capable of accurately reproducing the performance of the full-order model while others can be used to gain insights into those three areas of study. This will enable future studies to save computational effort and produce the most accurate results according to the needs of the study being performed. Moreover, the effect of grid (line) impedance on the accuracy of droop control is explored using the 5th-order model. Simulation results show that traditional droop control is valid up to an R/X line impedance ratio of 2. Furthermore, the 3rd-order nonlinear model improves the currently available inverter-infinite bus models by accounting for grid impedance, active power-frequency droop and reactive power-voltage droop. Results show the 3rd-order model's ability to account for voltage and reactive power changes during a transient event. Finally, the large-signal Lyapunov-based stability analysis is completed for a 3-bus microgrid system (made up of 2 inverters and 1 linear load). The thesis provides a systematic state-space large-signal nonlinear mathematical modeling method for inverter-based microgrids. The inverters include the dc-side dynamics associated with dc sources. The mathematical model is then used to estimate the domain of asymptotic stability of the 3-bus microgrid. The three-bus microgrid system was used as a case study to highlight the design and optimization capability of a large-signal-based approach. The study explores the effect of system component sizing, load transients and generation variations on the asymptotic stability of the microgrid. Essentially, this advancement gives microgrid designers and engineers the ability to manipulate the domain of asymptotic stability depending on performance requirements. Especially important, this research was able to couple the domain of asymptotic stability of the ac microgrid with that of the dc-side voltage source.
Time domain simulations were used to demonstrate the mathematical nonlinear analysis results.
NASA Astrophysics Data System (ADS)
Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran
2017-08-01
Physics is a subject related to students' daily experience. Therefore, before studying it formally in class, students already have a visualization and prior knowledge of natural phenomena, and they can extend these on their own. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models agree with and build on the right concepts. A previous study held in MAN 1 Muna indicates that in the learning process the teacher did not pay attention to students' mental models. As a consequence, the learning process has not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. This study uses a pre-experimental, one-group pretest-posttest design. It was conducted in XI IPA MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the concept of the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a clarification of students' MMA, which is categorized into 3 categories: High Mental Modelling Ability (H-MMA) for 7
Modeling the effect of terraces on land degradation in tropical upland agricultural area
NASA Astrophysics Data System (ADS)
Christanto, N.; Shrestha, D. P.; Jetten, V. G.; Setiawan, A.
2012-04-01
Java, the most populated island in Indonesia, has in the past few decades suffered land degradation due to extreme weather, population pressure, and land use/cover change. The study area, the Serayu sub-catchment, part of the Serayu catchment, is a representative example of an Indonesian region facing land use change and land degradation problems. The study attempted to simulate the effect of terraces on land degradation (soil erosion and landslide hazard) in the Serayu sub-catchment using deterministic modeling by means of PCRaster® simulation. The effect of terraces in tropical upland agricultural areas is little studied. This paper discusses the effect of terraces on land degradation assessment. A detailed DEM is extremely difficult to obtain in a developing country like Indonesia. Therefore, an artificial DEM that gives an impression of the terraces was built. Topographical maps, an Ikonos image and the average height distribution based on field measurements were used to build the artificial DEM. The result is used as an input to the STARWARS model. In combination with an erosion model and PROBSTAB, soil erosion and landslide hazard were quantified. The models were run in two different environments: 1) the normal DEM and 2) the artificial DEM (with terrace impression), and the results were compared. The results show that the models run on the artificial DEM give a significant increase in the probability of failure, by 20.5%. On the other hand, the erosion rate fell by 11.32% compared to the normal DEM. The hydrological sensitivity analysis shows that soil depth was the most sensitive parameter. For the slope stability modeling, the most sensitive parameter was slope, followed by friction angle and cohesion. For the erosion modeling, the model was sensitive to vegetation cover and soil erodibility, followed by bulk density (BD) and saturated hydraulic conductivity (KSat). Model validations were applied to assess the accuracy of the models. Overall, the results of dynamic modeling are well suited for land degradation assessment.
Dynamic modeling software such as PCRaster®, which is open source and free, is a reliable alternative to commercial software.
Students' use of atomic and molecular models in learning chemistry
NASA Astrophysics Data System (ADS)
O'Connor, Eileen Ann
1997-09-01
The objective of this study was to investigate the development of introductory college chemistry students' use of atomic and molecular models to explain physical and chemical phenomena. The study was conducted during the first semester of the course at a University and College II. Public institution (Carnegie Commission of Higher Education, 1973). Students' use of models was observed during one-on-one interviews conducted over the course of the semester. The approach to introductory chemistry emphasized models. Students were exposed to over two hundred and fifty atomic and molecular models during lectures, were assigned text readings that used over a thousand models, and worked interactively with dozens of models on the computer. These models illustrated various features of the spatial organization of valence electrons and nuclei in atoms and molecules. Despite extensive exposure to models in lectures, in the textbook, and in computer-based activities, the students in the study based their explanations in large part on a simple Bohr model (electrons arranged in concentric circles around the nuclei), a model that had not been introduced in the course. Students used visual information from their models to construct their explanations, while overlooking inter-atomic and intra-molecular forces, which are not represented explicitly in the models. In addition, students often explained phenomena by adding separate information about the topic without either integrating or logically relating this information into a cohesive explanation. The results of the study demonstrate that despite the extensive use of models in chemistry instruction, students do not necessarily apply them appropriately in explaining chemical and physical phenomena. The results of this study suggest that for the power of models as aids to learning to be more fully realized, chemistry professors must give more attention to the selection, use, integration, and limitations of models in their instruction.
NASA Astrophysics Data System (ADS)
Tommasino, F.
2016-03-01
This review will summarize results obtained in the recent years applying the Local Effect Model (LEM) approach to the study of basic radiobiological aspects, as for instance DNA damage induction and repair, and charged particle track structure. The promising results obtained using different experimental techniques and looking at different biological end points, support the relevance of the LEM approach for the description of radiation effects induced by both low- and high-LET radiation. Furthermore, they suggest that nowadays the appropriate combination of experimental and modelling tools can lead to advances in the understanding of several open issues in the field of radiation biology.
Modeling the spread and control of dengue with limited public health resources.
Abdelrazec, Ahmed; Bélair, Jacques; Shan, Chunhua; Zhu, Huaiping
2016-01-01
A deterministic model for the transmission dynamics of dengue fever is formulated to study, with a nonlinear recovery rate, the impact of available resources of the health system on the spread and control of the disease. Model results indicate the existence of multiple endemic equilibria, as well as coexistence of an endemic equilibrium with a periodic solution. Additionally, our model exhibits the phenomenon of backward bifurcation. The results of this study could be helpful for public health authorities in their planning of a proper resource allocation for the control of dengue transmission. Copyright © 2015 Elsevier Inc. All rights reserved.
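The effect of a resource-limited, nonlinear recovery rate can be illustrated with a toy SIR-type model. This is a hedged sketch of the general mechanism (a recovery rate that saturates as infections overwhelm health-system capacity), not the paper's dengue model; the functional form and all parameter values are illustrative assumptions.

```python
def simulate_sir(beta, s0, i0, gamma0=0.1, gamma1=0.4, b=0.05,
                 mu=0.02, dt=0.01, steps=200000):
    """SIR model (fractions s, i) with demography and a nonlinear
    recovery rate gamma(i) = gamma0 + (gamma1 - gamma0) * b / (b + i):
    recovery degrades from gamma1 toward gamma0 as infections saturate
    the limited health-system resources. Forward-Euler integration."""
    s, i = s0, i0
    for _ in range(steps):
        gamma = gamma0 + (gamma1 - gamma0) * b / (b + i)
        ds = mu * (1.0 - s) - beta * s * i
        di = beta * s * i - (gamma + mu) * i
        s += dt * ds
        i += dt * di
    return s, i
```

Running the model with a low transmission rate drives the infection out, while a high transmission rate settles on a positive endemic level; in intermediate regimes such saturating-recovery models are exactly where multiple endemic equilibria and backward bifurcation can appear.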
NASA Astrophysics Data System (ADS)
Henneberg, Olga; Ament, Felix; Grützun, Verena
2018-05-01
Soil moisture amount and distribution control evapotranspiration and thus impact the occurrence of convective precipitation. Many recent model studies demonstrate that changes in initial soil moisture content result in modified convective precipitation. However, to quantify the resulting precipitation changes, the chaotic behavior of the atmospheric system needs to be considered. Slight changes in the simulation setup, such as the chosen model domain, also result in modifications to the simulated precipitation field. This causes an uncertainty due to stochastic variability, which can be large compared to effects caused by soil moisture variations. By shifting the model domain, we estimate the uncertainty of the model results. Our novel uncertainty estimate includes 10 simulations with shifted model boundaries and is compared to the effects on precipitation caused by variations in soil moisture amount and local distribution. With this approach, the influence of soil moisture amount and distribution on convective precipitation is quantified. Deviations in simulated precipitation can only be attributed to soil moisture impacts if the systematic effects of soil moisture modifications are larger than the inherent simulation uncertainty at the convection-resolving scale. We performed seven experiments with modified soil moisture amount or distribution to address the effect of soil moisture on precipitation. Each of the experiments consists of 10 ensemble members using the deep convection-resolving COSMO model with a grid spacing of 2.8 km. Only in experiments with very strong modification in soil moisture do precipitation changes exceed the model spread in amplitude, location or structure. These changes are caused by a 50 % soil moisture increase in either the whole or part of the model domain or by drying the whole model domain. Increasing or decreasing soil moisture both predominantly results in reduced precipitation rates. 
Replacing the soil moisture with realistic fields from different days has an insignificant influence on precipitation. The findings of this study underline the need for uncertainty estimates in soil moisture studies based on convection-resolving models.
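The attribution criterion described above (a soil moisture effect counts only if it exceeds the spread of the domain-shifted control ensemble) can be sketched as follows. This is a minimal illustration that uses the ensemble standard deviation as the spread measure, which is an assumption; the paper compares amplitude, location and structure of the precipitation fields.

```python
def exceeds_model_spread(control, perturbed):
    """Attribute an effect to the soil moisture modification only when the
    shift in the ensemble-mean precipitation exceeds the spread (sample
    standard deviation) of the control ensemble of domain-shifted runs."""
    n = len(control)
    mean_ctrl = sum(control) / n
    mean_pert = sum(perturbed) / len(perturbed)
    # sample variance of the control ensemble (the inherent model noise)
    var = sum((x - mean_ctrl) ** 2 for x in control) / (n - 1)
    return abs(mean_pert - mean_ctrl) > var ** 0.5
```

With a control ensemble of 10 domain-shifted runs, only soil moisture experiments whose mean precipitation change clears this noise floor would be reported as significant.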
Rai, Amit; Aboumanei, Mohamed H.; Verma, Suraj P.; Kumar, Sachidanand; Raj, Vinit
2017-01-01
Introduction: Ebola Virus Disease (EVD) is caused by Ebola virus and is often accompanied by fatal hemorrhagic fever upon infection in humans; the virus has caused the majority of deaths among those infected. There are no proper vaccines or medications available for EVD, which has drawn the attention of scientists to developing a potent vaccine or novel leads to inhibit Ebola virus. Methods & Materials: In the present study, we developed 3D-QSAR and pharmacophore models from previously reported potent compounds against the Ebola virus. Results & Discussion: The pharmacophore model AAAP.116 was generated with better survival value and selectivity. Moreover, the 3D-QSAR model also showed the best r2 value, 0.99, using the PLS factor. We also found a high F value, which demonstrated the statistical significance of both models. Furthermore, homology modeling and a molecular docking study were performed to analyze the affinity of the potent lead, which showed the best binding energy and bond formation with the targeted protein. Conclusion: Finally, all the results of this study conclude that the 3D-QSAR and pharmacophore models may be helpful in searching for potent leads for EVD treatment in future. PMID:29387271
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
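A Markov process model of costing activities, as used in the study above, represents each observed activity as a state and the observed hand-offs between activities as transition probabilities. The sketch below is hypothetical — the states and probabilities are invented for illustration — but it shows the basic computation: the stationary distribution, i.e. the long-run fraction of time an expert spends in each activity.

```python
import numpy as np

# Toy Markov model over hypothetical costing activities; states and
# transition probabilities are invented, not taken from the JPL study.
states = ["decompose", "size_estimate", "effort_estimate", "review"]
P = np.array([
    [0.1, 0.5, 0.3, 0.1],
    [0.0, 0.2, 0.6, 0.2],
    [0.0, 0.1, 0.2, 0.7],
    [0.3, 0.1, 0.1, 0.5],
])  # rows sum to 1 (row-stochastic)

# Stationary distribution pi satisfies pi @ P = pi: take the left
# eigenvector of P associated with eigenvalue 1 and normalize it.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(dict(zip(states, np.round(pi, 3))))
```

Fitting such a model to protocol-analysis transcripts amounts to counting observed transitions between coded activities and normalizing each row.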
NASA Technical Reports Server (NTRS)
Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris
2000-01-01
The normal impedance of perforated plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes with the inclusion of end corrections to handle finite length effects. These models assumed incompressible and compressible flows, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was then fitted to the experimental database. The fit was performed using an optimization routine that found the optimal set of multiplication coefficients to the non-dimensional groups that minimized the least squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with the bias flow. This model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted model and the unfitted model performed equally well for the higher percent open areas (10% and 15%).
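The fitting step described above — finding multiplication coefficients for the model's non-dimensional groups that best match experiment — reduces to linear least squares when the prediction is a weighted sum of those groups. The sketch below is illustrative: the group values, the "true" coefficients, and the additive noise are all synthetic placeholders, not the paper's database.

```python
import numpy as np

# Synthetic stand-in for the fitting problem: each row holds the values of
# three non-dimensional groups for one test condition; "measured" plays the
# role of the experimental impedance data.
rng = np.random.default_rng(1)
n_samples, n_groups = 30, 3
groups = rng.uniform(0.5, 2.0, size=(n_samples, n_groups))
true_coeffs = np.array([1.8, 0.6, 1.1])          # unknown corrections to recover
measured = groups @ true_coeffs + rng.normal(0, 0.01, n_samples)

# Least-squares estimate of the multiplication coefficients: minimizes the
# sum of squared differences between model prediction and measurement.
coeffs, *_ = np.linalg.lstsq(groups, measured, rcond=None)
print(np.round(coeffs, 2))
```

The paper's actual routine minimized a slope error rather than a plain residual, which would replace the closed-form solve with an iterative optimizer, but the structure of the problem is the same.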
Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions
NASA Astrophysics Data System (ADS)
Robertson, Robert V.
Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight is passing through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirrors past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. 
Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex models has often been lost in the interest of convenience and efficiency. This dissertation presents a simple model which provides an accurate alternative to the full, high precision SOLAARS model with reduced complexity and computational cost. This simpler method is based on curve fitting to results of the full SOLAARS model and is called SOLAARS Curve Fit (SOLAARS-CF). Both the high precision SOLAARS model and the simpler SOLAARS-CF model are applied to the Gravity Recovery and Climate Experiment (GRACE) satellites. Modeling results are compared to the sub-nm/s2 precision GRACE accelerometer data and the results of a traditional penumbra SRP model. These comparisons illustrate the improved accuracy of the SOLAARS and SOLAARS-CF models. A sensitivity analysis for the GRACE orbit illustrates the influence of various input parameters and features of the SOLAARS model on the results. The SOLAARS-CF model is applied to a study of penumbra SRP and the Earth flyby anomaly. Beyond the value of its results to the scientific community, this study provides an application example where the computational efficiency of the simplified SOLAARS-CF model is necessary. The Earth flyby anomaly is an open question in orbit determination which has gone unsolved for over 20 years. This study quantifies the influence of penumbra SRP modeling errors on the observed anomalies from the Galileo, Cassini, and Rosetta Earth flybys. The results of this study show that penumbra SRP is neither an explanation for nor a significant contributor to the Earth flyby anomaly.
Objective biofidelity rating of a numerical human occupant model in frontal to lateral impact.
de Lange, Ronald; van Rooij, Lex; Mooi, Herman; Wismans, Jac
2005-11-01
Both hardware crash dummies and mathematical human models have been developed largely using the same biomechanical data. For both, biofidelity is a main requirement. Since numerical modeling is not bound to hardware crash dummy design constraints, it allows more detailed modeling of the human body and can offer biofidelity in multiple impact directions. In this study the multi-directional biofidelity of the MADYMO human occupant model is assessed, with the aim of protecting occupants under various impact conditions. To evaluate the model's biofidelity, generally accepted requirements were used for frontal and lateral impact: tests proposed by EEVC and NHTSA and tests specified by ISO TR9790, respectively. A subset of the specified experiments was simulated with the human model. For lateral impact, the results were objectively rated according to the ISO protocol. Since no rating protocol was available for frontal impact, the ISO rating scheme for lateral impact was applied to frontal impact as far as possible. As a result, two scores show the overall model biofidelity for frontal and lateral impact, while individual ratings provide insight into the quality at the body segment level. The results were compared with the results published for the THOR and WorldSID dummies, showing that the mathematical model exhibits a high level of multi-directional biofidelity. In addition, the performance of the human model in the NBDL 11G oblique test indicates valid behavior of the model in intermediate directions as well. A new aspect of this study is the objective assessment of the multi-directional biofidelity of the mathematical human model according to accepted requirements. Although hardware dummies may always be used in regulations, it is expected that virtual testing with human models will serve in extrapolating outside the hardware test environment. This study was a first step towards simulating a wider range of impact conditions, such as angled impact and rollover.
Meeuwisse, Marieke; Born, Marise Ph; Severiens, Sabine E
2014-07-01
The present study investigated possible differences in the family-study interface between ethnic minority and ethnic majority students as an explanation for the poorer study results of ethnic minority students compared with those of majority students. We used a model for family-study conflict and facilitation derived from family-work and work-study models. This model held true for the full sample and both non-Western ethnic minority students (N = 342) and ethnic majority students (N = 1314) separately at a major Dutch university. Multivariate analyses of variance revealed that ethnic minority students reported less study effort and earned lower grades compared with ethnic majority students. Regarding the family-study interface, ethnic minority students reported more family-study conflict than did ethnic majority students. No differences were found between the 2 groups in family-study facilitation. Ethnic minority students participated more in family activities and were more involved with their family than ethnic majority students. Levels of experienced family support were equal for both groups of students. Students who received more family social support reported less conflict and more facilitation. This latter finding held more strongly for majority students, resulting in more study effort and higher grades for this group. The results demonstrated the explanatory power of the family-study conflict and facilitation model for both groups.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
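The probabilistic multivariate idea above can be illustrated on a toy MDP: sample the uncertain parameters jointly, re-solve the MDP for each sample, and report the fraction of samples in which the base-case policy remains optimal. Everything below — the two-state "sick/healthy" MDP, the parameter ranges, and the rewards — is invented for illustration and is not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(2)

def solve_mdp(p_success, reward_treat, gamma=0.95, iters=500):
    """Value iteration for a toy 2-state (0 = sick, 1 = healthy), 2-action
    (wait, treat) MDP. Healthy is absorbing with reward 1 per step; treating
    while sick costs/pays reward_treat and cures with probability p_success."""
    V = np.zeros(2)
    for _ in range(iters):
        q_wait = np.array([0.0 + gamma * V[0], 1.0 + gamma * V[1]])
        q_treat = np.array(
            [reward_treat + gamma * (p_success * V[1] + (1 - p_success) * V[0]),
             1.0 + gamma * V[1]])
        V = np.maximum(q_wait, q_treat)
    # optimal action in the sick state (0 = wait, 1 = treat)
    q_wait0 = gamma * V[0]
    q_treat0 = reward_treat + gamma * (p_success * V[1] + (1 - p_success) * V[0])
    return int(q_treat0 > q_wait0)

# Joint parameter uncertainty: sample cure probability and treatment reward,
# and record how often the base-case policy ("treat when sick") stays optimal.
base_policy = 1
samples = [solve_mdp(rng.uniform(0.3, 0.9), rng.uniform(-0.5, 0.5))
           for _ in range(200)]
confidence = np.mean([s == base_policy for s in samples])
print(f"confidence in base-case policy: {confidence:.2f}")
```

Sweeping a threshold over such confidence values is, in spirit, how a policy acceptability curve is traced out.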
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael
The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. a flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. The design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.
Limitations to the use of two-dimensional thermal modeling of a nuclear waste repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, B.W.
1979-01-04
Thermal modeling of a nuclear waste repository is basic to most waste management predictive models. It is important that the modeling techniques accurately determine the time-dependent temperature distribution of the waste emplacement media. Recent modeling studies show that the time-dependent temperature distribution can be accurately modeled in the far-field using a 2-dimensional (2-D) planar numerical model; however, the near-field cannot be modeled accurately enough by either 2-D axisymmetric or 2-D planar numerical models for repositories in salt. The accuracy limits of 2-D modeling were defined by comparing results from 3-dimensional (3-D) TRUMP modeling with results from both 2-D axisymmetric and 2-D planar models. Both TRUMP and ADINAT were employed as modeling tools. Two-dimensional results from the finite element code ADINAT were compared with 2-D results from the finite difference code TRUMP; they showed almost perfect correspondence in the far-field. This result adds substantially to confidence in future use of ADINAT and its companion stress code ADINA for thermal stress analysis. ADINAT was found to be somewhat sensitive to time step and mesh aspect ratio. 13 figures, 4 tables.
Application of a Full Reynolds Stress Model to High Lift Flows
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Rumsey, C. L.; Eisfeld, B.
2016-01-01
A recently developed second-moment Reynolds stress model was applied to two challenging high-lift flows: (1) transonic flow over the ONERA M6 wing, and (2) subsonic flow over the DLR-F11 wing-body configuration from the second AIAA High Lift Prediction Workshop. In this study, the Reynolds stress model results were contrasted with those obtained from one- and two-equation turbulence models, and were found to be competitive in terms of the prediction of shock location and separation. For an ONERA M6 case, results from multiple codes, grids, and models were compared, with the Reynolds stress model tending to yield a slightly smaller shock-induced separation bubble near the wing tip than the simpler models, but all models were fairly close to the limited experimental surface pressure data. For a series of high-lift DLR-F11 cases, the range of results was more limited, but there was indication that the Reynolds stress model yielded less-separated results than the one-equation model near maximum lift. These less-separated results were similar to results from the one-equation model with a quadratic constitutive relation. Additional computations need to be performed before a more definitive assessment of the Reynolds stress model can be made.
Validating and improving a zero-dimensional stack voltage model of the Vanadium Redox Flow Battery
NASA Astrophysics Data System (ADS)
König, S.; Suriyah, M. R.; Leibfried, T.
2018-02-01
Simple, computationally efficient battery models can contribute significantly to the development of flow batteries. However, validation studies for these models on an industrial-scale stack level are rarely published. We first extensively present a simple stack voltage model for the Vanadium Redox Flow Battery. For modeling the concentration overpotential, we derive mass transfer coefficients from experimental results presented in the 1990s. The calculated mass transfer coefficient of the positive half-cell is 63% larger than that of the negative half-cell, which is not considered in models published to date. Further, we advance the concentration overpotential model by introducing an apparent electrochemically active electrode surface which differs from the geometric electrode area. We use the apparent surface as a fitting parameter for adapting the model to experimental results of a flow battery manufacturer. For adapting the model, we propose a method for quantitatively determining the agreement between model and experiment. To protect the manufacturer's intellectual property, we introduce a normalization method for presenting the results. For the studied stack, the apparent electrochemically active surface of the electrode is 41% larger than its geometrical area. Hence, the current density in the diffusion layer is 29% smaller than previously reported for a zero-dimensional model.
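The role of the mass transfer coefficient in the concentration overpotential can be illustrated with the textbook mass-transfer-limited expression (not necessarily the paper's exact formulation). All numerical values below — current density, coefficients, and concentration — are assumed for illustration only; the 1.63 factor mirrors the 63% difference reported above.

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.15   # C/mol, J/(mol K), K

def concentration_overpotential(i, k_m, c_bulk, n=1):
    """Textbook mass-transfer-limited concentration overpotential (V) for
    current density i (A/m^2), mass transfer coefficient k_m (m/s), and
    bulk concentration c_bulk (mol/m^3): eta = -(RT/nF) ln(1 - i/i_lim)."""
    i_lim = n * F * k_m * c_bulk          # limiting current density (A/m^2)
    return -(R * T / (n * F)) * np.log(1.0 - i / i_lim)

i = 1000.0          # A/m^2, assumed operating current density
k_neg = 1.0e-5      # m/s, assumed negative half-cell coefficient
k_pos = 1.63 * k_neg  # 63% larger, as reported for the positive half-cell
c = 1500.0          # mol/m^3, assumed vanadium concentration
print(concentration_overpotential(i, k_neg, c),
      concentration_overpotential(i, k_pos, c))
```

With these assumed numbers, the larger coefficient on the positive half-cell yields a noticeably smaller concentration overpotential at the same current, which is why treating the two half-cells identically overstates one loss and understates the other.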
Reconnection in the Martian Magnetotail: Hall-MHD With Embedded Particle-in-Cell Simulations
NASA Astrophysics Data System (ADS)
Ma, Yingjuan; Russell, Christopher T.; Toth, Gabor; Chen, Yuxi; Nagy, Andrew F.; Harada, Yuki; McFadden, James; Halekas, Jasper S.; Lillis, Rob; Connerney, John E. P.; Espley, Jared; DiBraccio, Gina A.; Markidis, Stefano; Peng, Ivy Bo; Fang, Xiaohua; Jakosky, Bruce M.
2018-05-01
Mars Atmosphere and Volatile EvolutioN (MAVEN) mission observations show clear evidence of the occurrence of the magnetic reconnection process in the Martian plasma tail. In this study, we use sophisticated numerical models to help us understand the effects of magnetic reconnection in the plasma tail. The numerical models used in this study are (a) a multispecies global Hall-magnetohydrodynamic (HMHD) model and (b) a global HMHD model two-way coupled to an embedded fully kinetic particle-in-cell code. Comparison with MAVEN observations clearly shows that the general interaction pattern is well reproduced by the global HMHD model. The coupled model takes advantage of both the efficiency of the MHD model and the ability to incorporate kinetic processes of the particle-in-cell model, making it feasible to conduct kinetic simulations for Mars under realistic solar wind conditions for the first time. Results from the coupled model show that the Martian magnetotail is highly dynamic due to magnetic reconnection, and the resulting Mars-ward plasma flow velocities are significantly higher for the lighter ion fluid, which are quantitatively consistent with MAVEN observations. The HMHD with Embedded Particle-in-Cell model predicts that the ion loss rates are more variable but with similar mean values as compared with HMHD model results.
NASA Astrophysics Data System (ADS)
Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.
2017-12-01
This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.
Yavuzkurt, S; Iyer, G R
2001-05-01
A review of past work on free stream turbulence (FST) as applied to gas turbine heat transfer, and its implications for future studies, is presented. Rather than discussing each study individually, it takes a comprehensive approach, synthesizing the results of many individual studies to derive the general conclusions that can be inferred from all of them. Three experimental and four modeling studies are reviewed. The first study was on prediction of heat transfer for film cooled gas turbine blades. An injection model was devised and used along with a 2-D low Reynolds number k-epsilon model of turbulence for the calculations. Reasonable predictions of heat transfer coefficients were obtained for turbulence intensity levels up to 7%. Following this modeling study, a series of experimental studies was undertaken. The objective of these studies was to gain a fundamental understanding of the mechanisms through which FST augments surface heat transfer. Experiments were carried out in the boundary layer and in the free stream downstream of a gas turbine combustor simulator, which produced initial FST levels of 25.7% and large length scales (about 5-10 cm for a boundary layer 4-5 cm thick). These experiments showed that one possible mechanism through which FST increases heat transfer is by increasing the number of ejection events. In a number of modeling studies, several well-known k-epsilon models were compared for their ability to predict heat transfer and skin friction coefficients under moderate and high FST. Two data sets, one with moderate levels of FST (about 7%) and one with high levels of FST (about 25%), were used for this purpose. Although the models performed well in their predictions of cases with no FST (baseline cases), they failed one by one as FST levels were increased. Under high FST (25.7% initial intensity), predictions of Stanton number were in error by 35-100% compared to the measured values.
Later, a new additional production term representing the interaction between the turbulent kinetic energy (TKE) and mean velocity gradients was introduced into the TKE equation. The predicted results for skin friction coefficient and Stanton number were excellent in both moderate and high FST cases. In fact, these models also gave good predictions of TKE profiles, whereas the earlier unmodified models did not predict the correct TKE profiles even under moderate turbulence intensities. Although this new production term seems to achieve its purpose, it is the authors' belief that it is the diffusion term of the TKE equation that needs to be modified in order to fit the physical events in high-FST boundary layer flows. The results of these studies are currently being used to develop a new diffusion model for the TKE equation.
A curious relationship between Potts glass models
NASA Astrophysics Data System (ADS)
Yamaguchi, Chiaki
2015-08-01
A Potts glass model proposed by Nishimori and Stephen [H. Nishimori, M.J. Stephen, Phys. Rev. B 27, 5644 (1983)] is analyzed by means of replica mean field theory. This model is discrete, has a gauge symmetry, and is called the Potts gauge glass model. By comparing the present results with those for the conventional Potts glass model, we identify similarities and differences between the models. One similarity is that the property of the Potts glass phase in this model coincides with that of the conventional model at the mean field level. One difference is that, unlike the conventional p-state Potts glass model, this system for large p does not become ferromagnetic at low temperature under a concentration of ferromagnetic interactions. The present results support the numerical investigation of the present model for the study of the Potts glass phase in finite dimensions.
Development and validation of a two-dimensional fast-response flood estimation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judi, David R; Mcpherson, Timothy N; Burian, Steven J
2009-01-01
A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimates of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
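The upwind differencing idea at the core of such a scheme can be shown on the simplest possible case: 1D linear advection u_t + a u_x = 0 with a > 0, where the spatial difference is taken from the side the flow comes from. This scalar sketch is illustrative only; the full model applies the same one-sided biasing to the coupled shallow water equations in two dimensions.

```python
import numpy as np

a, dx, dt = 1.0, 0.02, 0.01            # CFL = a*dt/dx = 0.5 (stable for upwind)
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200 * (x - 0.3) ** 2)       # initial hump centered at x = 0.3
mass_before = u.sum() * dx

for _ in range(30):                     # advance to t = 0.3
    # upwind: backward difference, since information travels left to right
    u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])

print(f"peak now near x = {x[np.argmax(u)]:.2f}")
```

The scheme is stable and monotone for CFL <= 1 but numerically diffusive, which is the usual trade-off accepted in fast-response tools that prioritize robustness and speed.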
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Batzel, J J; Tran, H T
2000-07-01
A number of mathematical models of the human respiratory control system have been developed since 1940 to study a wide range of features of this complex system. Among them, periodic breathing (including Cheyne-Stokes respiration and apneustic breathing) is a collection of regular but involuntary breathing patterns that have important medical implications. The hypothesis that periodic breathing is the result of delay in the feedback signals to the respiratory control system has been studied since the work of Grodins et al. in the early 1950s [12]. The purpose of this paper is to study the stability characteristics of a feedback control system of five differential equations with delays in both the state and control variables, presented by Khoo et al. [17] in 1991 for modeling human respiration. The paper is divided into two parts. Part I studies a simplified mathematical model of two nonlinear state equations modeling arterial partial pressures of O2 and CO2 and a peripheral controller. Analysis was done on this model to illuminate the effect of delay on stability. It shows that delay-dependent stability is affected by the controller gain, compartmental volumes, and the manner in which changes in the ventilation rate are produced (i.e., by deeper breathing or faster breathing). In addition, numerical simulations were performed to validate the analytical results. Part II extends the model in Part I to include both peripheral and central controllers. This, however, necessitates the introduction of a third state equation modeling CO2 levels in the brain. In addition to analytical studies on delay-dependent stability, it shows that the decreased cardiac output (and hence increased delay) resulting from the congestive heart condition can induce instability at certain control gain levels. These analytical results were also confirmed by numerical simulations.
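The core mechanism studied above — a feedback loop destabilized by delay and gain — can be demonstrated on the simplest delayed-feedback equation, x'(t) = -g x(t - tau). This scalar sketch is a minimal stand-in for the chemoreflex loop, not the paper's five-equation model; the known stability boundary for this equation is g*tau = pi/2.

```python
import numpy as np

def simulate_delay_feedback(gain, delay, dt=0.01, t_end=60.0):
    """Euler integration of x'(t) = -gain * x(t - delay) with constant
    history x = 1 for t <= 0. A toy model of delayed negative feedback."""
    n_delay = int(delay / dt)
    n = int(t_end / dt)
    x = np.ones(n + n_delay)                 # history segment then solution
    for i in range(n_delay, n_delay + n - 1):
        x[i + 1] = x[i] - dt * gain * x[i - n_delay]
    return x[n_delay:]

# Stability depends on the product gain * delay (boundary at pi/2 ~ 1.57):
low = simulate_delay_feedback(gain=0.5, delay=1.0)   # gain*delay < pi/2: decays
high = simulate_delay_feedback(gain=2.5, delay=1.0)  # gain*delay > pi/2: grows
print(abs(low[-1]), abs(high[-1]))
```

Increasing the delay at fixed gain has the same destabilizing effect as increasing the gain, which is the qualitative mechanism by which reduced cardiac output (a longer transport delay) can induce periodic breathing.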
Hazelden's model of treatment and its outcome.
Stinchfield, R; Owen, P
1998-01-01
Although the Minnesota Model of treatment for alcohol and drug addiction is a common treatment approach, there are few published reports of its effectiveness. This study describes the Minnesota Model treatment approach as practiced at Hazelden, a private residential alcohol and drug abuse treatment center located in Center City, Minnesota (a founding program of the Minnesota Model), and presents recent outcome results from this program. This study includes 1,083 male and female clients admitted to Hazelden for treatment of a psychoactive substance-use disorder between 1989 and 1991. The outcome study is a one-group pretest/posttest design. Data collection occurred at admission to treatment and at 1, 6, and 12 months posttreatment. At 1-year follow-up, 53% reported that they remained abstinent during the year following treatment and an additional 35% had reduced their alcohol and drug use. These results are similar to those reported by other private treatment programs. The Minnesota Model has consistently yielded satisfactory outcome results, and future research needs to focus on the therapeutic process of this common treatment approach.
Pneumatic tyres interacting with deformable terrains
NASA Astrophysics Data System (ADS)
Bekakos, C. A.; Papazafeiropoulos, G.; O'Boy, D. J.; Prins, J.
2016-09-01
In this study, a numerical model of a deformable tyre interacting with a deformable road has been developed using the finite element code ABAQUS (v. 6.13). Two tyre models with different widths, not necessarily identical to any real industry tyres, were created purely for research use. The behaviour of these tyres under various vertical loads and different inflation pressures is studied, initially in contact with a rigid surface and then with a deformable terrain. After ensuring that the tyre model gives realistic results for the interaction with a rigid surface, the rolling process of the tyre on a deformable road was studied. The effects of friction coefficient, inflation pressure, rebar orientation and vertical load on the overall performance are reported. Regarding the modelling procedure, a sequence of models was analysed using a coupled implicit-explicit method. The numerical results reveal not only that there is significant dependence of the final tyre response on the various initial driving parameters, but also that special conditions emerge where the desired tyre response results from a specific optimum combination of these parameters.
Consensus time and conformity in the adaptive voter model
NASA Astrophysics Data System (ADS)
Rogers, Tim; Gross, Thilo
2013-09-01
The adaptive voter model is a paradigmatic model in the study of opinion formation. Here we propose an extension for this model, in which conflicts are resolved by obtaining another opinion, and analytically study the time required for consensus to emerge. Our results shed light on the rich phenomenology of both the original and extended adaptive voter models, including a dynamical phase transition in the scaling behavior of the mean time to consensus.
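As a concrete illustration of how consensus emerges, the following sketch simulates the basic (non-adaptive) voter model on a complete graph. The adaptive variant studied in this paper additionally rewires conflicting links, which this toy version omits:

```python
import random

def voter_model_consensus_time(n_agents, seed=0):
    """Run the basic voter model on a complete graph until consensus.

    At each step a randomly chosen agent copies the opinion of another
    randomly chosen agent; returns the number of update steps until
    all opinions agree.
    """
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    steps = 0
    while len(set(opinions)) > 1:
        i, j = rng.sample(range(n_agents), 2)
        opinions[i] = opinions[j]  # agent i adopts agent j's opinion
        steps += 1
    return steps
```

On a complete graph the expected consensus time grows with the number of agents; for a handful of agents a single run typically reaches consensus within a few hundred updates.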
NASA Astrophysics Data System (ADS)
Liu, Xiaoyu; Mason, Mark A.; Guo, Zhishi; Krebs, Kenneth A.; Roache, Nancy F.
2015-12-01
This paper describes the measurement and model evaluation of formaldehyde source emissions from composite and solid wood furniture in a full-scale chamber at different ventilation rates for up to 4000 h using ASTM D 6670-01 (2007). Tests were performed on four types of furniture constructed of different materials and from different manufacturers. The data were used to evaluate two empirical emission models, i.e., a first-order decay model and a power-law decay model. The experimental results showed that some furniture tested in this study, made only of solid wood and with less surface area, had low formaldehyde source emissions. The effect of ventilation rate on formaldehyde emissions was also examined. Model simulation results indicated that the power-law decay model showed better agreement than the first-order decay model with the data collected from the tests, especially for long-term emissions. This research was limited to a laboratory study with only four types of furniture products tested; it was not intended to comprehensively test or compare the large number of furniture products available in the marketplace. Therefore, care should be taken when applying the test results to real-world scenarios. It was also beyond the scope of this study to link the emissions to human exposure and potential health risks.
NASA Astrophysics Data System (ADS)
Ismail, Edy; Samsudi, Widjanarko, Dwi; Joyce, Peter; Stearns, Roman
2018-03-01
This model integrates project-based learning by creating a product based on environmental needs. The Produktif Orientasi Lapangan 4 Tahap (POL4T) model combines technical skills and entrepreneurial elements in the learning process. This study implements the result of developing an environment-oriented technopreneurship learning model that combines technology and entrepreneurship components in a Machining Skill Program. The study applies a research and development design with an optimized experimental subject. Data were obtained from questionnaires, learning material validation, interpersonal and intrapersonal observation forms, skills assessments, products, teacher and student responses, and cognitive tasks. Expert validation and t-test calculations were applied to evaluate how effective the POL4T learning model is. The result of the study is a four-step learning model that enhances interpersonal and intrapersonal attitudes and develops practical products oriented toward society and appropriate technology, so that the products can have high selling value. The model is effective: the students' post-test results were better than their pre-test results, and the product obtained from the POL4T model proved better than that of conventional productive learning. The POL4T model is recommended for implementation with grade XI students, as it can develop environment-oriented entrepreneurial attitudes, responsiveness to community needs, and students' technical competencies.
A comparative study on GM (1,1) and FRMGM (1,1) model in forecasting FBM KLCI
NASA Astrophysics Data System (ADS)
Ying, Sah Pei; Zakaria, Syerrina; Mutalib, Sharifah Sakinah Syed Abd
2017-11-01
The FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBM KLCI) is a group of indexes combined in a standardized way, used to measure the overall Malaysian market over time. Although a composite index can give investors ideas about the stock market, it is hard to predict accurately because it is volatile, so it is necessary to identify the best model to forecast the FBM KLCI. The objective of this study is to determine the more accurate forecasting model between the GM (1,1) model and the Fourier Residual Modification GM (1,1) (FRMGM (1,1)) model for forecasting the FBM KLCI. In this study, actual daily closing data of the FBM KLCI were collected from January 1, 2016 to March 15, 2016. The GM (1,1) model and the FRMGM (1,1) model were used to build the grey model and to test the forecasting power of both models. Mean Absolute Percentage Error (MAPE) was used as the measure to determine the best model. Values forecasted by the FRMGM (1,1) model differ less from the actual values than those of the GM (1,1) model for both in-sample and out-of-sample data. The MAPE results also show that the FRMGM (1,1) model scores lower than the GM (1,1) model for in-sample and out-of-sample data. These results show that the FRMGM (1,1) model is better than the GM (1,1) model for forecasting the FBM KLCI.
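For readers unfamiliar with grey forecasting, a minimal GM (1,1) implementation is sketched below. This is the plain model only (the Fourier residual modification of FRMGM (1,1) is not included), and the function name is illustrative:

```python
from math import exp

def gm11_forecast(x0, n_ahead=0):
    """Fit a GM(1,1) grey model to a positive series.

    Returns fitted values for the sample plus n_ahead forecasts.
    """
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]               # accumulated (1-AGO) series
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    # Least-squares fit of x0(k) = -a*z1(k) + b via 2x2 normal equations
    m = n - 1
    sz, sy = sum(z1), sum(x0[1:])
    szz = sum(z * z for z in z1)
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)
    a, b = -slope, (sy - slope * sz) / m
    # Whitenization response, then de-accumulate to recover the series
    x1_hat = [(x0[0] - b / a) * exp(-a * k) + b / a for k in range(n + n_ahead)]
    return [x0[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n + n_ahead)]
```

For a smooth, near-exponential series the fitted values track the data closely; grey models are intended for exactly this kind of short series with limited data.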
The Dynamical Behaviors for a Class of Immunogenic Tumor Model with Delay
Muthoni, Mutei Damaris; Pang, Jianhua
2017-01-01
This paper aims at studying the model proposed by Kuznetsov and Taylor in 1994. Inspired by Mayer et al., time delay is introduced in the general model. The dynamic behaviors of this model are studied, which include the existence and stability of the equilibria and Hopf bifurcation of the model with discrete delays. The properties of the bifurcated periodic solutions are studied by using the normal form on the center manifold. Numerical examples and simulations are given to illustrate the bifurcation analysis and the obtained results. PMID:29312457
Pothrat, Claude; Authier, Guillaume; Viehweger, Elke; Berton, Eric; Rao, Guillaume
2015-06-01
Biomechanical models representing the foot as a single rigid segment are commonly used in clinical or sport evaluations. However, neglecting internal foot movements could lead to significant inaccuracies in ankle joint kinematics. The present study assessed 3D ankle kinematic outputs using two distinct biomechanical models and their application to the clinical flat foot case. Results of the Plug in Gait (one-segment foot model) and the Oxford Foot Model (multisegment foot model) were compared for normal children (9 participants) and children with flat feet (9 participants). Repeated-measures analysis of variance was performed to assess the foot model and group effects on ankle joint kinematics. Significant differences were observed between the two models for each group all along the gait cycle. In particular, for the flat feet group, opposite results between the Oxford Foot Model and the Plug in Gait were revealed at heelstrike, with the Plug in Gait showing 4.7° of ankle dorsal flexion and 2.7° of varus where the Oxford Foot Model showed 4.8° of ankle plantar flexion and 1.6° of valgus. Ankle joint kinematics of the flat feet group was more affected by foot modeling than that of the normal group. Foot modeling appeared to have a strong influence on the resulting ankle kinematics, and our findings showed that this influence could vary depending on the population. Studies involving ankle joint kinematic assessment should therefore treat foot modeling with caution. Copyright © 2015 Elsevier Ltd. All rights reserved.
An integrated geophysical study of the lithospheric structure beneath Libya
NASA Astrophysics Data System (ADS)
Brown, Wesley A.
This doctoral dissertation constitutes an integrated geophysical investigation of the lithospheric structure in the region of Libya. It is separated into three sections, each of which will be submitted to a different scientific journal for publication. In the first part of the study, I utilized a seamless mosaicking approach based on the commercial Environment for Visualizing Images (ENVI) software package to create mosaics of two geologically interesting portions of Libya, and I present a step-by-step method of mosaicking Landsat 4 satellite images. First, I performed histogram matching to give the images the same color scale; then I used a cutline feathering technique to blend suture areas; finally, I overlaid the images to form the two mosaics. The resulting mosaics were then combined with structural features and the seismicity map of the area. They proved useful in identifying recently active faults and show great potential for verification of other faults and for natural hazard assessment. For the second portion of my research, I made use of over 6,000 free-air-corrected gravity data points in conjunction with other geological and geophysical data to develop a 3D density model for northern Libya. I used a gravity modeling program (SURFGRAV) to develop the 3D density model by adjusting it to accurately predict the observed free-air anomaly over large areas. The residual gravity anomaly values were calculated by subtracting the predicted free-air anomaly from the observed free-air anomaly. The results were satisfactory for uplifted areas of Libya, while there were significant mismatches in basin areas. The density model was iterated and used as a starting model for the final portion of the study. In the last part of this research, I used the Nafe-Drake relationship along with other geological data to convert the 3D density model to a 3D velocity model (LIBYA3D) for the region.
Two earthquakes having source-receiver paths sampling much of the modeled area were used to perform 1D and 1.5D validation tests, and the results were compared to those from previous studies. The results showed that the new 3D velocity model is valid and superior to the global model. However, until sufficient earthquake data are acquired and 2D and 3D modeling can be performed, the true improvement of LIBYA3D over the other regional models cannot be fully assessed.
Delamination Behavior of L-Shaped Laminated Composites
NASA Astrophysics Data System (ADS)
Geleta, Tsinuel N.; Woo, Kyeongsik; Lee, Bongho
2018-05-01
We studied the delamination behavior of L-shaped laminated composites numerically and experimentally. In finite-element modeling, cohesive zone modeling was used to simulate the delamination of plies. Cohesive elements were inserted between bulk elements at each interlayer to represent the occurrence of multiple delaminations. The laminated composite models were subjected to several types of loading inducing opening and shearing types of delamination. Numerical results were compared to those in the literature and of experiments conducted in this study. The results were carefully examined to investigate diverse delamination initiation and propagation behaviors. The effect of varying presence and location of pre-crack was also studied.
Decision Tree Approach for Soil Liquefaction Assessment
Gandomi, Amir H.; Fridline, Mark M.; Roke, David A.
2013-01-01
In the current study, the performances of some decision tree (DT) techniques are evaluated for postearthquake soil liquefaction assessment. A database containing 620 records of seismic parameters and soil properties is used in this study. Three decision tree techniques are used here in two different ways, considering statistical and engineering points of view, to develop decision rules. The DT results are compared to the logistic regression (LR) model. The results of this study indicate that the DTs not only successfully predict liquefaction but they can also outperform the LR model. The best DT models are interpreted and evaluated based on an engineering point of view. PMID:24489498
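The decision rules extracted from such trees take the form of nested threshold tests on seismic and soil parameters. The sketch below shows only the general shape of such a rule set; the variables and thresholds are hypothetical placeholders, not the rules fitted in this study:

```python
def liquefaction_screen(pga_g, spt_n60, fines_percent):
    """Toy decision-tree-style rule set for liquefaction triggering.

    All thresholds here are hypothetical, chosen to illustrate the
    nested if/else structure of tree-derived decision rules; they are
    NOT the rules derived in the study.
    """
    if pga_g < 0.15:         # weak shaking: liquefaction unlikely
        return "no liquefaction"
    if spt_n60 >= 30:        # dense soil (high SPT blow count) resists liquefaction
        return "no liquefaction"
    if fines_percent > 35:   # high fines content reduces susceptibility
        return "no liquefaction"
    return "liquefaction"

# Example records: (peak ground acceleration in g, SPT blow count, fines %)
case_loose_sand = liquefaction_screen(0.35, 12, 5)   # loose clean sand, strong shaking
case_weak_shaking = liquefaction_screen(0.10, 12, 5)
```

Each root-to-leaf path of a fitted tree corresponds to one such conjunction of threshold tests, which is what makes tree models easy to interpret from an engineering point of view.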
Investigation of supersonic jet plumes using an improved two-equation turbulence model
NASA Technical Reports Server (NTRS)
Lakshmanan, B.; Abdol-Hamid, Khaled S.
1994-01-01
Supersonic jet plumes were studied using a two-equation turbulence model employing corrections for compressible dissipation and pressure-dilatation. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that two-equation models employing corrections for compressible dissipation and pressure-dilatation yield improved agreement with the experimental data. In addition, the numerical study demonstrates that the computed results are sensitive to the effect of grid refinement and insensitive to the type of velocity profiles used at the inflow boundary for the cases considered in the present study.
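For context, a commonly used form of the compressible dissipation correction (Sarkar's model; the exact form used in this paper may differ in detail) augments the solenoidal dissipation rate with a turbulent-Mach-number term:

\varepsilon = \varepsilon_s \left( 1 + \alpha_1 M_t^2 \right), \qquad M_t^2 = \frac{2k}{a^2},

where \varepsilon_s is the solenoidal (incompressible) dissipation, k is the turbulent kinetic energy, a is the local speed of sound, and \alpha_1 is a model constant of order one. Corrections of this type reduce the predicted mixing-layer growth rate at high convective Mach numbers, which is the effect exploited in jet-plume computations.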
Determining factors influencing survival of breast cancer by fuzzy logistic regression model.
Nikbakht, Roya; Bahrampour, Abbas
2017-01-01
A fuzzy logistic regression model can be used for determining influential factors of disease. This study explores the important factors actually predicting survival of breast cancer patients. We used breast cancer data collected by the cancer registry of Kerman University of Medical Sciences during the period 2000-2007. Variables such as morphology, grade, age, and treatments (surgery, radiotherapy, and chemotherapy) were applied in the fuzzy logistic regression model. Performance of the model was determined in terms of mean degree of membership (MDM). The study results showed that almost 41% of patients were in the neoplasm and malignant group and more than two-thirds of them were still alive after 5-year follow-up. Based on the fuzzy logistic model, the most important factors influencing survival were chemotherapy, morphology, and radiotherapy, respectively. Furthermore, the MDM criterion shows that the fuzzy logistic regression has a good fit on the data (MDM = 0.86). The fuzzy logistic regression model showed that chemotherapy is more important than radiotherapy for survival of patients with breast cancer. Another ability of this model is calculating possibilistic odds of survival in cancer patients. The results of this study can be applied in clinical research; since few studies have applied fuzzy logistic models, we recommend using this model in various research areas.
Vesicular stomatitis forecasting based on Google Trends
Lu, Yi; Zhou, GuangYa; Chen, Qin
2018-01-01
Background: Vesicular stomatitis (VS) is an important viral disease of livestock. The main feature of VS is irregular blisters that occur on the lips, tongue, oral mucosa, hoof crown and nipple. Humans can also be infected with vesicular stomatitis and develop meningitis. This study analyses the 2014 American VS outbreaks in order to accurately predict vesicular stomatitis outbreak trends. Methods: American VS outbreak data were collected from the OIE. The data for VS keywords were obtained by inputting 24 disease-related keywords into Google Trends. After calculating the Pearson and Spearman correlation coefficients, it was found that there was a relationship between outbreaks and keywords derived from Google Trends. Finally, the predictive model was constructed based on qualitative classification and quantitative regression. Results: For the regression model, the Pearson correlation coefficients between the predicted and actual outbreaks are 0.953 and 0.948, respectively. For the qualitative classification model, we constructed five classification predictive models and chose the best classification predictive model as the result. The SN (sensitivity), SP (specificity) and ACC (prediction accuracy) values of the best classification predictive model are 78.52%, 72.5% and 77.14%, respectively. Conclusion: This study applied Google search data to construct a qualitative classification model and a quantitative regression model. The results show that the method is effective and that these two models yield more accurate forecasts. PMID:29385198
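The reported SN, SP and ACC values come from a standard confusion-matrix calculation, sketched below. The example counts are hypothetical, not the counts from this study:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # SN: true positive rate
    specificity = tn / (tn + fp)                 # SP: true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # ACC: overall hit rate
    return sensitivity, specificity, accuracy

# Hypothetical example: 40 outbreak periods detected, 11 missed,
# 30 quiet periods correctly flagged, 8 false alarms.
sn, sp, acc = classification_metrics(tp=40, fp=8, tn=30, fn=11)
```

Comparing candidate classifiers on these three metrics, as done for the five classification models here, guards against a model that trades sensitivity for specificity (or vice versa) without improving overall accuracy.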
Petri Nets as Modeling Tool for Emergent Agents
NASA Technical Reports Server (NTRS)
Bergman, Marto
2004-01-01
Emergent agents, those agents whose local interactions can cause unexpected global results, require a method of modeling that is both dynamic and structured. Petri Nets, a modeling tool developed for dynamic discrete-event systems of mainly functional agents, provide this, and have the benefit of being an established tool. We present here the details of the modeling method and discuss how to implement its use for modeling agent-based systems. Petri Nets have been used extensively in the modeling of functional agents, those agents who have defined purposes and whose actions should result in a known outcome. However, emergent agents, those agents who have a defined structure but whose interaction causes outcomes that are unpredictable, have not yet found a modeling style that suits them. A problem with formally modeling emergent agents is that any formal modeling style usually expects to show the results of a problem, and the results of problems studied using emergent agents are not apparent from the initial construction. However, the study of emergent agents still requires a method to analyze the agents themselves and to have sensible conversation about the differences and similarities between types of emergent agents. We attempt to correct this problem by applying Petri Nets to the characterization of emergent agents. In doing so, the emergent properties of these agents can be highlighted, and conversation about the nature and compatibility of the differing methods of agent creation can begin.
NASA Astrophysics Data System (ADS)
Souleymane, S.
2015-12-01
West Africa has been highlighted as a hot spot of land surface-atmosphere interactions. This study analyses the outputs of the project Land-Use and Climate, IDentification of Robust Impacts (LUCID) over West Africa. LUCID used seven atmosphere-land models with a common experimental design to explore the impacts of Land Use induced Land Cover Change (LULCC) that are robust and consistent across the climate models. Focusing the analysis on the Sahel and Guinea, this study shows that, even though the seven climate models use the same atmospheric and land cover forcing, there are significant differences in West African Monsoon variability across the climate models. The magnitude of that variability differs significantly from model to model as a result of two major features: (1) the models' atmospheric dynamics; and (2) how land-surface functioning is parameterized in the land surface model, in particular regarding the evapotranspiration partitioning within the different land-cover types, the role of leaf area index (LAI) in the flux calculations, and how strongly the surface is coupled to the atmosphere. The major role that the models' sensitivity to land-cover perturbations plays in the resulting climate impacts of LULCC has been analysed in this study. The climate models show, however, significant differences in the magnitude and the seasonal partitioning of the temperature change. The LULCC-induced cooling is driven by decreases in net shortwave radiation that reduce the available energy (QA) (related to changes in land-cover properties other than albedo, such as LAI and surface roughness), which decreases during most of the year. The biophysical impacts of LULCC were compared to the impact of elevated greenhouse gases and the resulting changes in sea surface temperatures and sea ice extent (CO2SST).
The results show that the surface cooling (related to a decrease in QA) induced by the biophysical effects of LULCC is insignificant compared to the surface warming (related to an increase in QA) induced by the regionally significant effect of CO2SST, owing to the small LULCC imposed. In contrast, the decrease in surface water balance resulting from the LULCC effect has the same sign as that resulting from CO2SST, but the signal resulting from the biophysical effects of LULCC is stronger than the regional CO2SST impact.
Tseng, Zhijie Jack; Mcnitt-Gray, Jill L.; Flashner, Henryk; Wang, Xiaoming; Enciso, Reyes
2011-01-01
Finite Element Analysis (FEA) is a powerful tool gaining use in studies of biological form and function. This method is particularly conducive to studies of extinct and fossilized organisms, as models can be assigned properties that approximate living tissues. In disciplines where model validation is difficult or impossible, the choice of model parameters and their effects on the results become increasingly important, especially in comparing outputs to infer function. To evaluate the extent to which performance measures are affected by initial model input, we tested the sensitivity of bite force, strain energy, and stress to changes in seven parameters that are required in testing craniodental function with FEA. Simulations were performed on FE models of a Gray Wolf (Canis lupus) mandible. Results showed that unilateral bite force outputs are least affected by the relative ratios of the balancing and working muscles, but only ratios above 0.5 provided balancing-working side joint reaction force relationships that are consistent with experimental data. The constraints modeled at the bite point had the greatest effect on bite force output, but the most appropriate constraint may depend on the study question. Strain energy is least affected by variation in bite point constraint, but larger variations in strain energy values are observed in models with different numbers of tetrahedral elements, different masticatory muscle ratios and muscle subgroups present, and different numbers of material properties. These findings indicate that performance measures are differentially affected by variation in initial model parameters. In the absence of validated input values, FE models can nevertheless provide robust comparisons if these parameters are standardized within a given study to minimize variations that arise during the model-building process. 
Sensitivity tests incorporated into the study design not only aid in the interpretation of simulation results, but can also provide additional insights on form and function. PMID:21559475
Modelling the degree of porosity of the ceramic surface intended for implants.
Stach, Sebastian; Kędzia, Olga; Garczyk, Żaneta; Wróbel, Zygmunt
2018-05-18
The main goal of the study was to develop a model of the degree of surface porosity of a biomaterial intended for implants. The model was implemented using MATLAB. A computer simulation was carried out based on the developed model, which resulted in a two-dimensional image of the modelled surface. Then, an algorithm for computerised image analysis of the surface of the actual oxide bioceramic layer was developed, which enabled determining its degree of porosity. In order to obtain the confocal micrographs of a few areas of the biomaterial, measurements were performed using the LEXT OLS4000 confocal laser microscope. The image analysis was carried out using MountainsMap Premium and SPIP. The obtained results allowed determining the input parameters of the program, on the basis of which porous biomaterial surface images were generated. The last part of the study involved verification of the developed model. The modelling method was tested by comparing the obtained results with the experimental data obtained from the analysis of surface images of the test material.
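The degree-of-porosity computation at the heart of such an image analysis reduces to thresholding and pixel counting. The study used MATLAB, MountainsMap Premium and SPIP; below is a minimal, stdlib-only Python equivalent for illustration, with a toy image and threshold:

```python
def porosity_from_image(gray, threshold):
    """Fraction of pore pixels in a grayscale image (list of rows).

    Pixels darker than `threshold` are counted as pores; the degree of
    porosity is the pore-pixel count divided by the total pixel count.
    """
    pore = total = 0
    for row in gray:
        for value in row:
            total += 1
            if value < threshold:
                pore += 1
    return pore / total

# 4x4 toy "surface": low gray values represent pores
image = [
    [200, 210,  30, 220],
    [ 25, 215, 205,  20],
    [210,  40, 200, 215],
    [220, 225,  35, 210],
]
phi = porosity_from_image(image, threshold=100)
```

In practice the threshold is chosen from the image histogram (or a method such as Otsu's), and pore regions may additionally be filtered by size before counting.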
NASA Astrophysics Data System (ADS)
Chang, Hsin-Yi; Chang, Hsiang-Chi
2013-08-01
In this study, we developed online critiquing activities using an open-source computer learning environment. We investigated how well the activities scaffolded students to critique molecular models of chemical reactions made by scientists, peers, and a fictitious peer, and whether the activities enhanced the students' understanding of science models and chemical reactions. The activities were implemented in an eighth-grade class with 28 students in a public junior high school in southern Taiwan. The study employed mixed research methods. Data collected included pre- and post-instructional assessments, post-instructional interviews, and students' electronic written responses and oral discussions during the critiquing activities. The results indicated that these activities guided the students to produce overall quality critiques. Also, the students developed a more sophisticated understanding of chemical reactions and scientific models as a result of the intervention. Design considerations for effective model critiquing activities are discussed based on observational results, including the use of peer-generated artefacts for critiquing to promote motivation and collaboration, coupled with critiques of scientific models to enhance students' epistemological understanding of model purpose and communication.
Modeling coupled aerodynamics and vocal fold dynamics using immersed boundary methods.
Duncan, Comer; Zhai, Guangnian; Scherer, Ronald
2006-11-01
The penalty immersed boundary (PIB) method, originally introduced by Peskin (1972) to model the function of the mammalian heart, is tested as a fluid-structure interaction model of the closely coupled dynamics of the vocal folds and aerodynamics in phonation. Two-dimensional vocal folds are simulated with material properties chosen to result in self-oscillation and volume flows in physiological frequency ranges. Properties of the glottal flow field, including vorticity, are studied in conjunction with the dynamic vocal fold motion. The results of using the PIB method to model self-oscillating vocal folds for the case of 8 cm H2O as the transglottal pressure gradient are described. The volume flow at 8 cm H2O, the transglottal pressure, and vortex dynamics associated with the self-oscillating model are shown. Volume flow is also given for 2, 4, and 12 cm H2O, illustrating the robustness of the model to a range of transglottal pressures. The results indicate that the PIB method applied to modeling phonation has good potential for the study of the interdependence of aerodynamics and vocal fold motion.
Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia
2012-04-01
This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr) < 0.001) in bilateral HG and STG. Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830 and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG (p < 0.05). Subsequent model comparison between linear and non-linear models using BMS prefers the non-linear connection (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself. 
We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with the activity in STG influencing the STG-HG connectivity non-linearly.
Corpet, Denis E; Pierre, Fabrice
2003-05-01
The Apc(Min/+) mouse model and the azoxymethane (AOM) rat model are the main animal models used to study the effect of dietary agents on colorectal cancer. We recently reviewed the potency of chemopreventive agents in the AOM rat model (D. E. Corpet and S. Tache, Nutr. Cancer, 43: 1-21, 2002). Here we add the results of a systematic review of the effect of dietary and chemopreventive agents on the tumor yield in Min mice. The review is based on the results of 179 studies from 71 articles and is also displayed on the internet at http://corpet.net/min. We compared the efficacy of agents in the Min mouse model and the AOM rat model and found that they were correlated (r = 0.66; P < 0.001), although some agents that afford strong protection in the AOM rat and the Min mouse small bowel increase the tumor yield in the large bowel of mutant mice. These agents include piroxicam, sulindac, celecoxib, difluoromethylornithine, and polyethylene glycol. The reason for this discrepancy is not known. We also compare the results of rodent studies with those of clinical intervention studies of polyp recurrence. We found that the effect of most of the agents tested was consistent across the animal and clinical models. Our point is thus: rodent models can provide guidance in the selection of prevention approaches to human colon cancer; in particular, they suggest that polyethylene glycol, hesperidin, protease inhibitor, sphingomyelin, physical exercise, epidermal growth factor receptor kinase inhibitor, (+)-catechin, resveratrol, fish oil, curcumin, caffeate, and thiosulfonate are likely important preventive agents.
Generalisability in economic evaluation studies in healthcare: a review and case studies.
Sculpher, M J; Pang, F S; Manca, A; Drummond, M F; Golder, S; Urdahl, H; Davies, L M; Eastwood, A
2004-12-01
To review, and to develop further, the methods used to assess and to increase the generalisability of economic evaluation studies. Electronic databases were searched for methodological studies relating to economic evaluation in healthcare, including electronic searches of a range of databases (PREMEDLINE, MEDLINE, EMBASE and EconLit) and manual searches of key journals. The case study of a decision analytic model involved highlighting specific features of previously published economic studies related to generalisability and location-related variability. The case study involving the secondary analysis of cost-effectiveness analyses was based on the secondary analysis of three economic studies using data from randomised trials. The factor most frequently cited as generating variability in economic results between locations was the unit costs associated with particular resources. In the context of studies based on the analysis of patient-level data, regression analysis has been advocated as a means of looking at variability in economic results across locations. These methods have generally accepted that some components of resource use and outcomes are exchangeable across locations. Recent studies have also explored, in cost-effectiveness analysis, the use of tests of heterogeneity similar to those used in clinical evaluation in trials. The decision analytic model has been the main means by which cost-effectiveness has been adapted from trial to non-trial locations. Most models have focused on changes to the cost side of the analysis, but it is clear that the effectiveness side may also need to be adapted between locations. There have been weaknesses in some aspects of the reporting in applied cost-effectiveness studies. These may limit decision-makers' ability to judge the relevance of a study to their specific situations. The case study demonstrated the potential value of multilevel modelling (MLM). Where clustering exists by location (e.g.
centre or country), MLM can facilitate correct estimates of the uncertainty in cost-effectiveness results, and also a means of estimating location-specific cost-effectiveness. The review of applied economic studies based on decision analytic models showed that few studies were explicit about their target decision-maker(s)/jurisdictions. The studies in the review generally made more effort to ensure that their cost inputs were specific to their target jurisdiction than their effectiveness parameters. Standard sensitivity analysis was the main way of dealing with uncertainty in the models, although few studies looked explicitly at variability between locations. The modelling case study illustrated how effectiveness and cost data can be made location-specific. In particular, on the effectiveness side, the example showed the separation of location-specific baseline events and pooled estimates of relative treatment effect, where the latter are assumed exchangeable across locations. A large number of factors are mentioned in the literature that might be expected to generate variation in the cost-effectiveness of healthcare interventions across locations. Several papers have demonstrated differences in the volume and cost of resource use between locations, but few studies have looked at variability in outcomes. In applied trial-based cost-effectiveness studies, few studies provide sufficient evidence for decision-makers to establish the relevance or to adjust the results of the study to their location of interest. Very few studies utilised statistical methods formally to assess the variability in results between locations. In applied economic studies based on decision models, most studies either stated their target decision-maker/jurisdiction or provided sufficient information from which this could be inferred. There was a greater tendency to ensure that cost inputs were specific to the target jurisdiction than clinical parameters. 
Methods to assess generalisability and variability in economic evaluation studies have been discussed extensively in the literature relating to both trial-based and modelling studies. Regression-based methods are likely to offer a systematic approach to quantifying variability in patient-level data. In particular, MLM has the potential to facilitate estimates of cost-effectiveness, which both reflect the variation in costs and outcomes between locations and also enable the consistency of cost-effectiveness estimates between locations to be assessed directly. Decision analytic models will retain an important role in adapting the results of cost-effectiveness studies between locations. Recommendations for further research include: the development of methods of evidence synthesis which model the exchangeability of data across locations and allow for the additional uncertainty in this process; assessment of alternative approaches to specifying multilevel models to the analysis of cost-effectiveness data alongside multilocation randomised trials; identification of a range of appropriate covariates relating to locations (e.g. hospitals) in multilevel models; and further assessment of the role of econometric methods (e.g. selection models) for cost-effectiveness analysis alongside observational datasets, and to increase the generalisability of randomised trials.
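The location-specific estimation that MLM enables can be illustrated with a toy partial-pooling (shrinkage) calculation: centre-level mean costs are pulled toward the grand mean in proportion to how little data each centre contributes, reflecting the exchangeability assumption across locations. All numbers below (centre costs and variance components) are invented for illustration and do not come from the review:

```python
import numpy as np

# Hypothetical per-patient incremental costs observed at four centres
centre_costs = {
    "A": [120.0, 150.0, 130.0],
    "B": [300.0, 280.0],
    "C": [90.0, 110.0, 100.0, 95.0],
    "D": [200.0],
}

grand_mean = np.mean([c for costs in centre_costs.values() for c in costs])
sigma2_within = 400.0   # assumed within-centre cost variance
tau2_between = 2500.0   # assumed between-centre cost variance

# Partial pooling: centres with few patients are shrunk more strongly
# toward the grand mean, giving stabilised location-specific estimates.
shrunk_means = {}
for centre, costs in centre_costs.items():
    n = len(costs)
    weight = tau2_between / (tau2_between + sigma2_within / n)
    shrunk_means[centre] = weight * np.mean(costs) + (1 - weight) * grand_mean

print({c: round(v, 1) for c, v in shrunk_means.items()})
```

In a real analysis the variance components would themselves be estimated from the multi-location trial data (e.g. by restricted maximum likelihood) rather than assumed.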
Pilot-model analysis and simulation study of effect of control task desired control response
NASA Technical Reports Server (NTRS)
Adams, J. J.; Gera, J.; Jaudon, J. B.
1978-01-01
A pilot model analysis was performed that relates pilot control compensation, pilot aircraft system response, and aircraft response characteristics for longitudinal control. The results show that a higher aircraft short period frequency is required to achieve superior pilot aircraft system response in an altitude control task than is required in an attitude control task. These results were confirmed by a simulation study of target tracking. It was concluded that the pilot model analysis provides a theoretical basis for determining the effect of control task on pilot opinions.
Demonstration of reduced-order urban scale building energy models
Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...
2017-09-08
The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exteriors and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. The OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess the complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify the complexity of building energy models. Consequently, this study empowers building energy modelers to quantify their building energy models systematically in order to report model complexity alongside model accuracy. An exhaustive analysis of four university campuses suggests that urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this developed framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing unified data exchange. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.
Lung and stomach cancer associations with groundwater radon in North Carolina, USA
Messier, Kyle P; Serre, Marc L
2017-01-01
Background: The risk of indoor air radon for lung cancer is well studied, but the risks of groundwater radon for both lung and stomach cancer are much less studied, and with mixed results. Methods: Geomasked and geocoded stomach and lung cancer cases in North Carolina from 1999 to 2009 were obtained from the North Carolina Central Cancer Registry. Models for the association with groundwater radon and multiple confounders were implemented at two scales: (i) an ecological model estimating cancer incidence rates at the census tract level; and (ii) a case-only logistic model estimating the odds that individual cancer cases are members of local cancer clusters. Results: For the lung cancer incidence rate model, groundwater radon is associated with an incidence rate ratio of 1.03 [95% confidence interval (CI) = 1.01, 1.06] for every 100 Bq/l increase in census tract averaged concentration. For the cluster membership models, groundwater radon exposure results in an odds ratio for lung cancer of 1.13 (95% CI = 1.04, 1.23) and for stomach cancer of 1.24 (95% CI = 1.03, 1.49), which means groundwater radon, after controlling for multiple confounders and spatial auto-correlation, increases the odds that lung and stomach cancer cases are members of their respective cancer clusters. Conclusion: Our study provides epidemiological evidence of a positive association between groundwater radon exposure and lung cancer incidence rates. The cluster membership model results find groundwater radon increases the odds that both lung and stomach cancer cases occur within their respective cancer clusters. The results corroborate previous biokinetic and mortality studies that groundwater radon is associated with increased risk for lung and stomach cancer. PMID:27639278
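As an illustrative arithmetic check, an incidence rate ratio of 1.03 per 100 Bq/l corresponds, under the usual log-linear Poisson formulation, to a per-unit coefficient from which the expected rate scaling at other exposure increments follows directly (the function below is a generic sketch, not the study's fitted model):

```python
import math

irr_per_100 = 1.03                     # reported IRR per 100 Bq/l increase
beta = math.log(irr_per_100) / 100.0   # implied per-Bq/l log-rate coefficient

def rate_ratio(delta_bq_per_l):
    """Expected multiplicative change in the incidence rate for a given
    increase in groundwater radon concentration, assuming the same
    log-linear relationship holds across the exposure range."""
    return math.exp(beta * delta_bq_per_l)

print(round(rate_ratio(100), 2))  # recovers the reported 1.03
print(round(rate_ratio(300), 3))  # 1.03**3 for a 300 Bq/l increase
```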
The Social Cognitive Model of Job Satisfaction among Teachers: Testing and Validation
ERIC Educational Resources Information Center
Badri, Masood A.; Mohaidat, Jihad; Ferrandino, Vincent; El Mourad, Tarek
2013-01-01
The study empirically tests an integrative model of work satisfaction (0280, 0140, 0300 and 0255) in a sample of 5,022 teachers in Abu Dhabi in the United Arab Emirates. The study provided more support for the Lent and Brown (2006) model. Results revealed that this model was a strong fit for the data and accounted for 82% of the variance in work…
NASA Technical Reports Server (NTRS)
Syvertson, Clarence A; Gloria, Hermilo R; Sarabia, Michael F
1958-01-01
A study is made of aerodynamic performance and static stability and control at hypersonic speeds. In the first part of the study, the effect of interference lift is investigated by tests of asymmetric models having conical fuselages and arrow plan-form wings. The fuselage of the asymmetric model is located entirely beneath the wing and has a semicircular cross section. The fuselage of the symmetric model was centrally located and has a circular cross section. Results are obtained for Mach numbers from 3 to 12, in part by application of the hypersonic similarity rule. These results show a maximum effect of interference on lift-drag ratio occurring at a Mach number of 5, the Mach number at which the asymmetric model was designed to exploit favorable lift interference. At this Mach number, the asymmetric model is indicated to have a lift-drag ratio 11 percent higher than the symmetric model and 15 percent higher than the asymmetric model when inverted. These differences decrease to a few percent at a Mach number of 12. In the course of this part of the study, the accuracy of the hypersonic similarity rule applied to wing-body combinations is demonstrated with experimental results. These results indicate that the rule may prove useful for determining the aerodynamic characteristics of slender configurations at Mach numbers higher than those for which test equipment is readily available. In the second part of the study, the aerodynamic performance and static stability and control characteristics of a hypersonic glider are investigated in somewhat greater detail. Results for Mach numbers from 3 to 18 for performance and 0.6 to 12 for stability and control are obtained by standard test techniques, by application of the hypersonic similarity rule, and/or by use of helium as a test medium. Lift-drag ratios of about 5 for Mach numbers up to 18 are shown to be obtainable.
The glider studied is shown to have acceptable longitudinal and directional stability characteristics through the range of Mach numbers studied. Some roll instability (negative effective dihedral) is found at Mach numbers near 12.
Results of an Oncology Clinical Trial Nurse Role Delineation Study.
Purdom, Michelle A; Petersen, Sandra; Haas, Barbara K
2017-09-01
To evaluate the relevance of a five-dimensional model of clinical trial nursing practice in an oncology clinical trial nurse population, a Web-based cross-sectional survey was conducted online via Qualtrics with 167 oncology nurses throughout the United States, including 41 study coordinators, 35 direct care providers, and 91 dual-role nurses who provide both direct patient care and trial coordination. Principal components analysis was used to determine the dimensions of oncology clinical trial nursing practice from the self-reported frequency of 59 activities. The results did not support the original five-dimensional model of nursing care but revealed a more multidimensional model: an analysis of frequency data revealed an eight-dimensional model of oncology research nursing, comprising care, manage study, expert, lead, prepare, data, advance science, and ethics. This evidence-based model expands understanding of the multidimensional roles of oncology nurses caring for patients with cancer enrolled in clinical trials.
Development of a Skin Burn Predictive Model adapted to Laser Irradiation
NASA Astrophysics Data System (ADS)
Sonneck-Museux, N.; Scheer, E.; Perez, L.; Agay, D.; Autrique, L.
2016-12-01
Laser technology is increasingly used, and it is crucial for both safety and medical reasons that the impact of laser irradiation on human skin can be accurately predicted. This study is mainly focused on laser-skin interactions and potential lesions (burns). A mathematical model dedicated to heat transfer in skin exposed to infrared laser radiation has been developed. The model is validated by studying heat transfer in human skin while simultaneously performing experiments on an animal model (pig). For all experimental tests, the pig's skin surface temperature is recorded. Three laser wavelengths have been tested: 808 nm, 1940 nm and 10 600 nm. The first is a diode laser producing radiation absorbed deep within the skin. The second wavelength has a more superficial effect. For the third wavelength, skin is an opaque material. The validity of the developed models is verified by comparison with experimental results (in vivo tests) and the results of previous studies reported in the literature. The comparison shows that the models accurately predict the burn degree caused by laser radiation over a wide range of conditions. The results show that the important parameter for burn prediction is the extinction coefficient. For the 1940 nm wavelength especially, significant differences between modeling results and the literature have been observed, mainly due to this coefficient's value. This new model can be used as a predictive tool to estimate the injury induced by several types (power-time couples) of laser exposure on the arm, the face and the palm of the hand.
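Burn-degree prediction in skin thermal models of this kind is commonly based on the Henriques-Moritz damage integral, Ω = ∫ A·exp(−Ea/(R·T(t))) dt, accumulated over the simulated temperature history. The sketch below uses frequently cited kinetic constants for skin (A ≈ 3.1×10⁹⁸ s⁻¹, Ea ≈ 6.28×10⁵ J/mol); these are an assumption drawn from the general burn literature, not parameters reported by this study:

```python
import math

A = 3.1e98   # frequency factor, 1/s (assumed literature value)
EA = 6.28e5  # activation energy, J/mol (assumed literature value)
R = 8.314    # universal gas constant, J/(mol K)

def damage_integral(temps_kelvin, dt):
    """Accumulate the thermal damage parameter Omega over a sampled
    skin-temperature history; Omega >= 1 is the classical threshold
    for an irreversible (second-degree) burn."""
    return sum(A * math.exp(-EA / (R * T)) * dt for T in temps_kelvin)

# Two constant-temperature exposures, 60 s each at 1 s resolution
omega_50C = damage_integral([323.15] * 60, dt=1.0)  # 50 degrees C
omega_60C = damage_integral([333.15] * 60, dt=1.0)  # 60 degrees C
print(omega_50C < 1.0 < omega_60C)  # True: hotter exposure crosses the threshold
```

Because the exponent is so steep in temperature, a modest error in the predicted skin temperature (or, as the abstract notes, in the extinction coefficient that governs it) shifts Ω by orders of magnitude, which is why burn-degree predictions are so sensitive to that parameter.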
Ultrastructural study of Rift Valley fever virus in the mouse model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, Christopher; Steele, Keith E.; Honko, Anna
Detailed ultrastructural studies of Rift Valley fever virus (RVFV) in the mouse model are needed to develop and characterize a small animal model of RVF for the evaluation of potential vaccines and therapeutics. In this study, the ultrastructural features of RVFV infection in the mouse model were analyzed. The main changes in the liver included the presence of viral particles in hepatocytes and hepatic stem cells accompanied by hepatocyte apoptosis. However, viral particles were observed rarely in the liver; in contrast, particles were extremely abundant in the CNS. Despite extensive lymphocytolysis, direct evidence of viral replication was not observed in the lymphoid tissue. These results correlate with the acute-onset hepatitis and delayed-onset encephalitis that are dominant features of severe human RVF, but suggest that host immune-mediated mechanisms contribute significantly to pathology. The results of this study expand our knowledge of RVFV-host interactions and further characterize the mouse model of RVF.
A hierarchical (multicomponent) model of in-group identification: examining in Russian samples.
Lovakov, Andrey V; Agadullina, Elena R; Osin, Evgeny N
2015-06-03
The aim of this study was to examine the validity and reliability of Leach et al.'s (2008) model of in-group identification in two studies using Russian samples (overall N = 621). In Study 1, a series of multi-group confirmatory factor analysis revealed that the hierarchical model of in-group identification, which included two second-order factors, self-definition (individual self-stereotyping, and in-group homogeneity) and self-investment (satisfaction, solidarity, and centrality), fitted the data well for all four group identities (ethnic, religious, university, and gender) (CFI > .93, TLI > .92, RMSEA < .06, SRMR < .06) and demonstrated a better fit, compared to the alternative models. In Study 2, the construct validity and reliability of the Russian version of the in-group identification measure was examined. Results show that these measures have adequate psychometric properties. In short, our results show that Leach et al.'s model is reproduced in Russian culture. The Russian version of this measure can be recommended for use in future in-group research in Russian-speaking samples.
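The fit thresholds cited in this record (e.g. RMSEA < .06) are functions of the fitted model's chi-square statistic. As a hedged sketch of one common formulation (the chi-square and degrees-of-freedom values below are invented for illustration; only the sample size matches the pooled N = 621, and some software uses N rather than N − 1 in the denominator):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation, using the common
    sqrt((chi2 - df) / (df * (n - 1))) formulation; a model fitting
    no worse than expected by chance (chi2 <= df) yields 0."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical chi-square results for a sample of n = 621
print(round(rmsea(chi2=250.0, df=180, n=621), 4))  # about 0.025, well under .06
print(rmsea(chi2=150.0, df=180, n=621))            # 0.0
```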
Method of Curved Models and Its Application to the Study of Curvilinear Flight of Airships. Part II
NASA Technical Reports Server (NTRS)
Gourjienko, G A
1937-01-01
This report compares the results obtained by the aid of curved models with the results of tests made by the method of damped oscillations, and with flight tests. Consequently we shall be able to judge which method of testing in the tunnel produces results that are in closer agreement with flight test results.
A population exposure model for particulate matter (PM), called the Stochastic Human Exposure and Dose Simulation (SHEDS-PM) model, has been developed and applied in a case study of daily PM2.5 exposures for the population living in Philadelphia, PA. SHEDS-PM is a probabilisti...
A Workforce Design Model: Providing Energy to Organizations in Transition
ERIC Educational Resources Information Center
Halm, Barry J.
2011-01-01
The purpose of this qualitative study was to examine the change in performance realized by a professional services organization, which resulted in the Life Giving Workforce Design (LGWD) model through a grounded theory research design. This study produced a workforce design model characterized as an organizational blueprint that provides virtuous…
The Influencing and Effective Model of Early Childhood: Teachers' Job Satisfaction in China
ERIC Educational Resources Information Center
Jiang, Yong
2005-01-01
The purpose of this study was to explore the influencing and effective models of Chinese early childhood teachers' job satisfaction. Using a questionnaire of 317 teachers from 21 kindergartens in Shanghai, China, the present study established the influencing and effective structure model of teachers' job satisfaction. The results demonstrated that…
Drama and Routine in the Public Schools.
ERIC Educational Resources Information Center
Pondy, Louis R.; Huff, Anne S.
A case study of curricular change compares two leading models of organizational change. One model stresses the uncertainty and disorder of major changes and views them as dramatic events. The other model sees major organizational shifts as the result of ordinary day-to-day processes and emphasizes their routine nature. For this study, the…
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equations, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
The Aggregation of Single-Case Results Using Hierarchical Linear Models
ERIC Educational Resources Information Center
Van den Noortgate, Wim; Onghena, Patrick
2007-01-01
To investigate the generalizability of the results of single-case experimental studies, evaluating the effect of one or more treatments, in applied research various simultaneous and sequential replication strategies are used. We discuss one approach for aggregating the results for single-cases: the use of hierarchical linear models. This approach…
NASA Technical Reports Server (NTRS)
Bacmeister, Julio T.; Suarez, Max J.; Einaudi, Franco (Technical Monitor)
2001-01-01
This is the first of a two-part study examining the connection of the equatorial momentum budget in an AGCM (Atmospheric General Circulation Model) with simulated equatorial surface wind stresses over the Pacific. The AGCM used in this study forms part of a newly developed coupled forecasting system used at NASA's Seasonal-to-Interannual Prediction Project. Here we describe the model and present results from a 20-year (1979-1999) AMIP-type experiment forced with observed SSTs (Sea Surface Temperatures). Model results are compared with available observational data sets. The climatological pattern of extra-tropical planetary waves as well as their ENSO-related variability is found to agree quite well with re-analysis estimates. The model's surface wind stress is examined in detail, and reveals a reasonable overall simulation of seasonal and interannual variability, as well as seasonal mean distributions. However, an excessive annual oscillation in wind stress over the equatorial central Pacific is found. We examine the model's divergent circulation over the tropical Pacific and compare it with estimates based on re-analysis data. These comparisons are generally good, but reveal excessive upper-level convergence in the central Pacific. In Part II of this study a direct examination of individual terms in the AGCM's momentum budget is presented. We relate the results of this analysis to the model's simulation of surface wind stress.
Alimohammadi, Mona; Pichardo-Almarza, Cesar; Agu, Obiekezie; Díaz-Zuccarini, Vanessa
2017-01-01
Atherogenesis, the formation of plaques in the wall of blood vessels, starts as a result of lipid accumulation (low-density lipoprotein cholesterol) in the vessel wall. Such accumulation is related to the site of endothelial mechanotransduction, the endothelial response to mechanical stimuli and haemodynamics, which determines biochemical processes regulating vessel wall permeability. This interaction between biomechanical and biochemical phenomena is complex, spans different biological scales and is patient-specific, requiring tools able to capture such mathematical and biological complexity in a unified framework. Mathematical models offer an elegant and efficient way of doing this, by taking into account multifactorial and multiscale processes and mechanisms, in order to capture the fundamentals of plaque formation in individual patients. In this study, a mathematical model to understand plaque and calcification locations is presented: this model provides strong interpretability and physical meaning through a multiscale, complex index or metric (the penetration site of low-density lipoprotein cholesterol, expressed as volumetric flux). Computed tomography scans of the aortic bifurcation and iliac arteries are analysed and compared with the results of the multifactorial model. The results indicate that the model shows potential to predict the majority of the plaque locations, while not predicting plaques in regions where they are absent. The promising results from this case study provide a proof of concept that can be applied to a larger patient population. PMID:28427316
Statistical Time Series Models of Pilot Control with Applications to Instrument Discrimination
NASA Technical Reports Server (NTRS)
Altschul, R. E.; Nagel, P. M.; Oliver, F.
1984-01-01
This report contains a general description of the methodology used in obtaining the transfer function models and verifying model fidelity; frequency domain plots of the modeled transfer functions; numerical results from an analysis of poles and zeroes obtained from z-plane to s-plane conversions of the transfer functions; and the results of a study on the sequential introduction of other variables, both exogenous and endogenous, into the loop.
Yan, Xiaoyu; Lowe, Philip J.; Fink, Martin; Berghout, Alexander; Balser, Sigrid; Krzyzanski, Wojciech
2012-01-01
The aim of this study was to develop an integrated pharmacokinetic and pharmacodynamic (PK/PD) model and assess the comparability between epoetin alfa HEXAL/Binocrit (HX575) and a comparator epoetin alfa by a model-based approach. PK/PD data—including serum drug concentrations, reticulocyte counts, red blood cells, and hemoglobin levels—were obtained from 2 clinical studies. In total, 149 healthy men received multiple intravenous or subcutaneous doses of HX575 (100 IU/kg) and the comparator 3 times a week for 4 weeks. A population model based on pharmacodynamics-mediated drug disposition and cell maturation processes was used to characterize the PK/PD data for the 2 drugs. Simulations showed that due to target amount changes, total clearance may increase up to 2.4-fold compared with baseline. Further simulations suggested that once-weekly and thrice-weekly subcutaneous dosing regimens would result in similar efficacy. The findings from the model-based analysis were consistent with previous results using the standard noncompartmental approach, demonstrating PK/PD comparability between HX575 and the comparator. However, due to the complexity of the PK/PD model, control of random effects was not straightforward. Whereas population PK/PD model-based analyses are well suited for studying complex biological systems, such models have their statistical limitations, and their comparability results should be interpreted carefully. PMID:22162538
Biases in simulation of the rice phenology models when applied in warmer climates
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, T.; Yang, X.; Simelton, E.
2015-12-01
Current model inter-comparison studies highlight the differences in projections between crop models when they are applied to warmer climates, but these studies do not show how the accuracy of the models would change in these projections, because adequate observations under largely diverse growing season temperatures (GST) are often unavailable. Here, we investigate the potential changes in the accuracy of rice phenology models when applied to a significantly warmer climate. We collected phenology data from 775 trials with 19 cultivars in 5 Asian countries (China, India, Philippines, Bangladesh and Thailand). Each cultivar encompasses phenology observations under diverse GST regimes. For a given rice cultivar in different trials, the GST difference reaches 2.2 to 8.2°C, which allows us to calibrate the models under lower GST and validate them under higher GST (i.e., warmer climates). Four common phenology models, representing the major algorithms for simulating rice phenology, and three model calibration experiments were conducted. The results suggest that the bilinear and beta models resulted in gradually increasing phenology bias (Figure) and double the yield bias per percent increase in phenology bias, whereas the growing-degree-day (GDD) and exponential models maintained a comparatively constant bias when applied in warmer climates (Figure). Moreover, the bias of phenology estimated by the bilinear and beta models did not decrease with increasing GST when all data were used to calibrate the models. This suggests that variations in phenology bias are primarily attributable to intrinsic properties of the respective phenology model rather than to the calibration dataset. We therefore conclude that using the GDD and exponential models offers a better chance of correctly predicting rice phenology, and thus production, under warmer climates, supporting effective agricultural adaptation to and mitigation of climate change.
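The growing-degree-day model referred to here predicts a phenological stage by accumulating daily temperature above a base temperature until a cultivar-specific thermal requirement is met. A minimal sketch follows; the base temperature and thermal requirement are illustrative placeholders, not calibrated values from the study:

```python
def days_to_stage(daily_mean_temps, base_temp=8.0, gdd_requirement=1600.0):
    """Return the day on which accumulated growing degree-days reach the
    cultivar's thermal requirement, or None if it is never reached."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(temp - base_temp, 0.0)  # no credit below base temp
        if accumulated >= gdd_requirement:
            return day
    return None

cool_season = [24.0] * 200  # 16 effective degree-days per day
warm_season = [28.0] * 200  # 20 effective degree-days per day
print(days_to_stage(cool_season), days_to_stage(warm_season))  # 100 80
```

Because the GDD response is linear in temperature above the base, its bias tends to extrapolate more stably to warmer regimes than models with curvature fitted near the calibration range, which is consistent with the behaviour the abstract reports.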
Walden, Anita; Nahm, Meredith; Barnett, M. Edwina; Conde, Jose G.; Dent, Andrew; Fadiel, Ahmed; Perry, Theresa; Tolk, Chris; Tcheng, James E.; Eisenstein, Eric L.
2012-01-01
Background New data management models are emerging in multi-center clinical studies. We evaluated the incremental costs associated with decentralized vs. centralized models. Methods We developed clinical research network economic models to evaluate three data management models: centralized, decentralized with local software, and decentralized with shared database. Descriptive information from three clinical research studies served as inputs for these models. Main Outcome Measures The primary outcome was total data management costs. Secondary outcomes included data management costs for sites, local data centers, and central coordinating centers. Results Both decentralized models were more costly than the centralized model for each clinical research study; the decentralized with local software model was the most expensive. Decreasing the number of local data centers and casebook pages reduced cost differentials between models. Conclusion Decentralized vs. centralized data management in multi-center clinical research studies is associated with increases in data management costs. PMID:21335692
RESOLVING NEIGHBORHOOD-SCALE AIR TOXICS MODELING: A CASE STUDY IN WILMINGTON, CALIFORNIA
Air quality modeling is useful for characterizing exposures to air pollutants. While models typically provide results on regional scales, there is a need for refined modeling approaches capable of resolving concentrations on the scale of tens of meters, across modeling domains 1...
NASA Astrophysics Data System (ADS)
Leitão, J. P.; de Sousa, L. M.
2018-06-01
Newly available, more detailed and accurate elevation data sets, such as Digital Elevation Models (DEMs) generated on the basis of imagery from terrestrial LiDAR (Light Detection and Ranging) systems or Unmanned Aerial Vehicles (UAVs), can be used to improve flood-model input data and consequently increase the accuracy of the flood modelling results. This paper presents the first application of the MBlend merging method and assesses the impact of combining different DEMs on flood modelling results. It was demonstrated that different raster merging methods can have different and substantial impacts on these results. In addition to the influence associated with the method used to merge the original DEMs, the magnitude of the impact also depends on (i) the systematic horizontal and vertical differences of the DEMs, and (ii) the orientation between the DEM boundary and the terrain slope. The largest water depth and flow velocity differences between the flood modelling results obtained using the reference DEM and the merged DEMs ranged from -9.845 to 0.002 m and from 0.003 to 0.024 m s-1, respectively; differences of this magnitude can have a significant impact on flood hazard estimates. In most of the cases investigated in this study, the differences from the reference DEM results were smaller for the MBlend method than for the two conventional methods. This study highlighted the importance of DEM merging when conducting flood modelling and provided guidance on the best DEM merging methods to use.
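To see why the merging method matters, here is a minimal 1-D sketch (not the MBlend algorithm) contrasting an abrupt overwrite merge with a linear feathering blend when the two DEMs carry a systematic vertical offset. All elevation values are invented.

```python
def merge_overwrite(coarse, fine, start):
    """Drop the fine DEM profile onto the coarse one; any systematic
    vertical offset shows up as a step at the boundary."""
    out = list(coarse)
    for i, z in enumerate(fine):
        out[start + i] = z
    return out

def merge_feather(coarse, fine, start, ramp):
    """Blend linearly over `ramp` cells so the transition between the
    two data sets is gradual instead of a step."""
    out = list(coarse)
    for i, z in enumerate(fine):
        j = start + i
        w = min(1.0, (i + 1) / ramp)   # weight given to the fine DEM
        out[j] = w * z + (1.0 - w) * coarse[j]
    return out
```

With a 1 m offset between the data sets, overwriting creates an artificial 1 m scarp at the seam that a 2-D flood model would treat as a real terrain feature, while feathering spreads the discrepancy over the ramp.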
Novel model of an AlGaN/GaN high electron mobility transistor based on an artificial neural network
NASA Astrophysics Data System (ADS)
Cheng, Zhi-Qun; Hu, Sha; Liu, Jun; Zhang, Qi-Jun
2011-03-01
In this paper we present a novel approach to modeling the AlGaN/GaN high electron mobility transistor (HEMT) with an artificial neural network (ANN). The AlGaN/GaN HEMT device structure and its fabrication process are described. The circuit-based Neuro-space mapping (neuro-SM) technique is studied in detail. The EEHEMT model, implemented from the measurement results of the designed device, serves as a coarse model. An ANN built on this coarse model is proposed to model the AlGaN/GaN HEMT and is then optimized. The simulation results from the model are compared with the measurement results, showing that the ANN model of the AlGaN/GaN HEMT is more accurate than the EEHEMT model. Project supported by the National Natural Science Foundation of China (Grant No. 60776052).
Beta decay rates of neutron-rich nuclei
NASA Astrophysics Data System (ADS)
Marketin, Tomislav; Huther, Lutz; Martínez-Pinedo, Gabriel
2015-10-01
Heavy element nucleosynthesis models involve various properties of thousands of nuclei in order to simulate the intricate details of the process. By necessity, as most of these nuclei cannot be studied in a controlled environment, these models must rely on nuclear structure models for input. Among these properties, beta-decay half-lives are some of the most important due to their direct impact on the resulting abundance distributions. Currently, a single large-scale calculation is available, based on a QRPA calculation with a schematic interaction on top of the Finite Range Droplet Model. In this study we present the results of a large-scale calculation based on the relativistic nuclear energy density functional, in which both the allowed and the first-forbidden transitions are studied in more than 5000 neutron-rich nuclei.
NASA Astrophysics Data System (ADS)
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better was the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their regression-kriging extensions, MLRK and GWRK respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered.
Usually, including more than two or three of the most significantly correlated auxiliary variables does not improve the quality of the spatial model. The effects of introducing certain variables into the model were not climatologically justified and appeared on maps as unexpected and undesired artefacts. The results confirm, in accordance with previous studies, that the spatial process underlying air temperature distribution is non-stationary; thus, the local GWR model performs better than the global MLR model when both are specified using the same set of auxiliary variables. If, additionally, the GWR residuals are autocorrelated, the geographically weighted regression-kriging (GWRK) model seems to be optimal for air temperature spatial interpolation.
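The core of GWR, a locally weighted least-squares fit with a distance-decay kernel, can be sketched as follows. This is a one-predictor illustration with a Gaussian kernel and invented data, not the authors' implementation.

```python
import math

def gwr_fit(xs, ys, coords, target, bandwidth):
    """Weighted least-squares fit of y = a + b*x at one target location,
    with Gaussian distance-decay weights (the heart of GWR)."""
    w = [math.exp(-0.5 * ((c - target) / bandwidth) ** 2) for c in coords]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    slope = (swxy - swx * swy / sw) / (swxx - swx * swx / sw)
    intercept = (swy - slope * swx) / sw
    return intercept, slope
```

Because each location gets its own fit, the model can recover, say, a different temperature-elevation lapse rate in two distant regions, which a single global MLR cannot.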
Allen, John M; Elbasiouny, Sherif M
2018-06-01
Computational models often require tradeoffs, such as balancing detail with efficiency; yet optimal balance should incorporate sound design features that do not bias the results of the specific scientific question under investigation. The present study examines how model design choices impact simulation results. We developed a rigorously-validated high-fidelity computational model of the spinal motoneuron pool to study three long-standing model design practices which have yet to be examined for their impact on motoneuron recruitment, firing rate, and force simulations. The practices examined were the use of: (1) generic cell models to simulate different motoneuron types, (2) discrete property ranges for different motoneuron types, and (3) biological homogeneity of cell properties within motoneuron types. Our results show that each of these practices accentuates conditions of motoneuron recruitment based on the size principle, and minimizes conditions of mixed and reversed recruitment orders, which have been observed in animal and human recordings. Specifically, strict motoneuron orderly size recruitment occurs, but in a compressed range, after which mixed and reverse motoneuron recruitment occurs due to the overlap in electrical properties of different motoneuron types. Additionally, these practices underestimate the motoneuron firing rates and force data simulated by existing models. Our results indicate that current modeling practices increase conditions of motoneuron recruitment based on the size principle, and decrease conditions of mixed and reversed recruitment order, which, in turn, impacts the predictions made by existing models on motoneuron recruitment, firing rate, and force. Additionally, mixed and reverse motoneuron recruitment generated higher muscle force than orderly size motoneuron recruitment in these simulations and represents one potential scheme to increase muscle efficiency. 
The examined model design practices, as well as the present results, are applicable to neuronal modeling throughout the nervous system.
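A toy illustration of the recruitment-order point, assuming nothing from the authors' model beyond the idea itself: discrete, non-overlapping threshold ranges per motoneuron type force strict size-ordered recruitment, while overlapping continuous ranges permit mixed order. All threshold values below are invented.

```python
def recruitment_order(thresholds):
    """Indices of motoneurons sorted by recruitment threshold (size principle)."""
    return sorted(range(len(thresholds)), key=lambda i: thresholds[i])

def order_violations(types, order):
    """Count adjacent recruits that violate type order (S=0 < FR=1 < FF=2)."""
    ranked = [types[i] for i in order]
    return sum(1 for a, b in zip(ranked, ranked[1:]) if a > b)

types = [0] * 5 + [1] * 5 + [2] * 5

# Discrete, non-overlapping threshold ranges per type (common modeling practice):
discrete = [1.1, 1.3, 1.5, 1.7, 1.9,
            3.1, 3.3, 3.5, 3.7, 3.9,
            5.1, 5.3, 5.5, 5.7, 5.9]

# Overlapping, continuous ranges (closer to biological heterogeneity):
overlap = [1.0, 2.5, 3.8, 1.6, 3.2,
           2.1, 4.7, 2.9, 4.1, 3.5,
           3.0, 5.5, 4.4, 3.3, 5.9]
```

With discrete ranges, every S-type cell recruits before every FR cell and every FR before every FF; with overlapping ranges, mixed recruitment emerges naturally from the threshold overlap.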
NASA Astrophysics Data System (ADS)
Randles, C. A.; Kinne, S.; Myhre, G.; Schulz, M.; Stier, P.; Fischer, J.; Doppler, L.; Highwood, E.; Ryder, C.; Harris, B.; Huttunen, J.; Ma, Y.; Pinker, R. T.; Mayer, B.; Neubauer, D.; Hitzenberger, R.; Oreopoulos, L.; Lee, D.; Pitari, G.; Di Genova, G.; Quaas, J.; Rose, Fred G.; Kato, S.; Rumbold, S. T.; Vardavas, I.; Hatzianastassiou, N.; Matsoukas, C.; Yu, H.; Zhang, F.; Zhang, H.; Lu, P.
2012-12-01
In this study we examine the performance of 31 global model radiative transfer schemes in cloud-free conditions with prescribed gaseous absorbers and no aerosols (Rayleigh atmosphere), with prescribed scattering-only aerosols, and with more absorbing aerosols. Results are compared to benchmark results from high-resolution, multi-angular line-by-line radiation models. For purely scattering aerosols, model bias relative to the line-by-line models in the top-of-the atmosphere aerosol radiative forcing ranges from roughly -10 to 20%, with over- and underestimates of radiative cooling at higher and lower sun elevation, respectively. Inter-model diversity (relative standard deviation) increases from ~10 to 15% as sun elevation increases. Inter-model diversity in atmospheric and surface forcing decreases with increased aerosol absorption, indicating that the treatment of multiple-scattering is more variable than aerosol absorption in the models considered. Aerosol radiative forcing results from multi-stream models are generally in better agreement with the line-by-line results than the simpler two-stream schemes. Considering radiative fluxes, model performance is generally the same or slightly better than results from previous radiation scheme intercomparisons. However, the inter-model diversity in aerosol radiative forcing remains large, primarily as a result of the treatment of multiple-scattering. Results indicate that global models that estimate aerosol radiative forcing with two-stream radiation schemes may be subject to persistent biases introduced by these schemes, particularly for regional aerosol forcing.
NASA Astrophysics Data System (ADS)
Randles, C. A.; Kinne, S.; Myhre, G.; Schulz, M.; Stier, P.; Fischer, J.; Doppler, L.; Highwood, E.; Ryder, C.; Harris, B.; Huttunen, J.; Ma, Y.; Pinker, R. T.; Mayer, B.; Neubauer, D.; Hitzenberger, R.; Oreopoulos, L.; Lee, D.; Pitari, G.; Di Genova, G.; Quaas, J.; Rose, F. G.; Kato, S.; Rumbold, S. T.; Vardavas, I.; Hatzianastassiou, N.; Matsoukas, C.; Yu, H.; Zhang, F.; Zhang, H.; Lu, P.
2013-03-01
In this study we examine the performance of 31 global model radiative transfer schemes in cloud-free conditions with prescribed gaseous absorbers and no aerosols (Rayleigh atmosphere), with prescribed scattering-only aerosols, and with more absorbing aerosols. Results are compared to benchmark results from high-resolution, multi-angular line-by-line radiation models. For purely scattering aerosols, model bias relative to the line-by-line models in the top-of-the atmosphere aerosol radiative forcing ranges from roughly -10 to 20%, with over- and underestimates of radiative cooling at lower and higher solar zenith angle, respectively. Inter-model diversity (relative standard deviation) increases from ~10 to 15% as solar zenith angle decreases. Inter-model diversity in atmospheric and surface forcing decreases with increased aerosol absorption, indicating that the treatment of multiple-scattering is more variable than aerosol absorption in the models considered. Aerosol radiative forcing results from multi-stream models are generally in better agreement with the line-by-line results than the simpler two-stream schemes. Considering radiative fluxes, model performance is generally the same or slightly better than results from previous radiation scheme intercomparisons. However, the inter-model diversity in aerosol radiative forcing remains large, primarily as a result of the treatment of multiple-scattering. Results indicate that global models that estimate aerosol radiative forcing with two-stream radiation schemes may be subject to persistent biases introduced by these schemes, particularly for regional aerosol forcing.
Air Quality Modeling of Traffic-related Air Pollutants for the NEXUS Study
The paper presents the results of the model applications to estimate exposure metrics in support of an epidemiologic study in Detroit, Michigan. A major challenge in traffic-related air pollution exposure studies is the lack of information regarding pollutant exposure characteriz...
NASA Astrophysics Data System (ADS)
Al-Alawi, Baha Mohammed
Plug-in hybrid electric vehicles (PHEVs) are an emerging automotive technology that has the capability to reduce transportation environmental impacts, but at an increased production cost. PHEVs can draw and store energy from an electric grid and consequently show reductions in petroleum consumption, air emissions, ownership costs, and regulation compliance costs, and various other externalities. Decision makers in the policy, consumer, and industry spheres would like to understand the impact of HEV and PHEV technologies on the U.S. vehicle fleets, but to date, only the disciplinary characteristics of PHEVs been considered. The multidisciplinary tradeoffs between vehicle energy sources, policy requirements, market conditions, consumer preferences and technology improvements are not well understood. For example, the results of recent studies have posited the importance of PHEVs to the future US vehicle fleet. No studies have considered the value of PHEVs to automakers and policy makers as a tool for achieving US corporate average fuel economy (CAFE) standards which are planned to double by 2030. Previous studies have demonstrated the cost and benefit of PHEVs but there is no study that comprehensively accounts for the cost and benefits of PHEV to consumers. The diffusion rate of hybrid electric vehicle (HEV) and PHEV technology into the marketplace has been estimated by existing studies using various tools and scenarios, but results show wide variations between studies. There is no comprehensive modeling study that combines policy, consumers, society and automakers in the U.S. new vehicle sales cost and benefits analysis. The aim of this research is to build a potential framework that can simulate and optimize the benefits of PHEVs for a multiplicity of stakeholders. This dissertation describes the results of modeling that integrates the effects of PHEV market penetration on policy, consumer and economic spheres. 
A model of fleet fuel economy and CAFE compliance for a large US automaker will be developed. A comprehensive total cost of ownership model will be constructed to calculate and compare the cost and benefits of PHEVs, conventional vehicles (CVs) and HEVs. Then a comprehensive literature review of PHEVs penetration rate studies will be developed to review and analyze the primary purposes, methods, and results of studies of PHEV market penetration. Finally a multi-criteria modeling system will incorporate results of the support model results. In this project, the models, analysis and results will provide a broader understanding of the benefits and costs of PHEV technology and the parties to whom those benefits accrue. The findings will provide important information for consumers, automakers and policy makers to understand and define HEVs and PHEVs costs, benefits, expected penetration rate and the preferred vehicle design and technology scenario to meet the requirements of policy, society, industry and consumers.
A stochastic automata network for earthquake simulation and hazard estimation
NASA Astrophysics Data System (ADS)
Belubekian, Maya Ernest
1998-11-01
This research develops a model for simulation of earthquakes on seismic faults with available earthquake catalog data. The model allows estimation of the seismic hazard at a site of interest and assessment of the potential damage and loss in a region. There are two approaches for studying earthquakes: mechanistic and stochastic. In the mechanistic approach, seismic processes, such as changes in stress or slip on faults, are studied in detail. In the stochastic approach, earthquake occurrences are simulated as realizations of a certain stochastic process. In this dissertation, a stochastic earthquake occurrence model is developed that uses the results from dislocation theory for the estimation of slip released in earthquakes. The slip accumulation and release laws and the event scheduling mechanism adopted in the model result in a memoryless Poisson process for small and moderate events and in a time- and space-dependent process for large events. The minimum and maximum of the hazard are estimated by the model when the initial conditions along the faults correspond to a situation right after the largest event and after a long seismic gap, respectively. These estimates are compared with the ones obtained from a Poisson model. The Poisson model overestimates the hazard after the maximum event and underestimates it in the period of a long seismic quiescence. The earthquake occurrence model is formulated as a stochastic automata network. Each fault is divided into cells, or automata, that interact by means of information exchange. The model uses a statistical method called bootstrap for the evaluation of the confidence bounds on its results. The parameters of the model are adjusted to the target magnitude patterns obtained from the catalog. A case study is presented for the city of Palo Alto, where the hazard is controlled by the San Andreas, Hayward and Calaveras faults. The results of the model are used to evaluate the damage and loss distribution in Palo Alto.
The sensitivity analysis of the model results to the variation in basic parameters shows that the maximum magnitude has the most significant impact on the hazard, especially for long forecast periods.
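The memoryless property of the Poisson baseline is easy to make concrete: the probability of at least one event in t years is 1 - exp(-λt) regardless of what has just happened on the fault, which is exactly why it over- and underestimates hazard after a large event and during quiescence. A minimal sketch (rate values hypothetical):

```python
import math

def rate_from_return_period(return_period_years):
    """Mean annual event rate implied by a return period."""
    return 1.0 / return_period_years

def poisson_exceedance(rate_per_year, years):
    """P(at least one event in `years`) under a memoryless Poisson model."""
    return 1.0 - math.exp(-rate_per_year * years)
```

For a 475-year return period, the 50-year exceedance probability is the familiar ~10% used in building codes; a time-dependent model would instead lower this estimate immediately after the largest event and raise it after a long gap.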
Computational State Space Models for Activity and Intention Recognition. A Feasibility Study
Krüger, Frank; Nyolt, Martin; Yordanova, Kristina; Hein, Albert; Kirste, Thomas
2014-01-01
Background Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i. e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods A typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results The symbolic domain model was found to have more than states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure. Conclusions Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. 
This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance. PMID:25372138
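The marginal filtering idea, tracking the posterior state distribution directly rather than approximating it with particles, reduces in the plain-HMM case to the normalized forward pass. A minimal sketch with an invented two-state model (not the CSSM inference engine):

```python
def forward_filter(pi, A, B, observations):
    """Normalized HMM forward pass: returns P(state_t | obs_1..t) per step.
    pi: initial distribution; A: transition matrix; B: emission matrix."""
    n = len(pi)
    belief = [pi[s] * B[s][observations[0]] for s in range(n)]
    history = []
    for t, obs in enumerate(observations):
        if t > 0:
            predicted = [sum(belief[r] * A[r][s] for r in range(n))
                         for s in range(n)]
            belief = [predicted[s] * B[s][obs] for s in range(n)]
        z = sum(belief)
        belief = [b / z for b in belief]
        history.append(belief)
    return history
```

With a sticky transition matrix and informative emissions, repeated consistent observations sharpen the belief step by step, with no sampling variance to degrade it.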
Non-susceptible landslide areas in Italy and in the Mediterranean region
NASA Astrophysics Data System (ADS)
Marchesini, I.; Ardizzone, F.; Alvioli, M.; Rossi, M.; Guzzetti, F.
2014-08-01
We used landslide information for 13 study areas in Italy and morphometric information obtained from the 3-arcsecond Shuttle Radar Topography Mission digital elevation model (SRTM DEM) to determine areas where landslide susceptibility is expected to be negligible in Italy and in the landmasses surrounding the Mediterranean Sea. The morphometric information consisted of the local terrain slope, computed in a square 3 × 3-cell moving window, and the regional relative relief, computed in a circular 15 × 15-cell moving window. We tested three different models to classify the "non-susceptible" landslide areas: a linear model (LNR), a quantile linear model (QLR), and a quantile, non-linear model (QNL). We tested the performance of the three models using independent landslide information from the Italian Landslide Inventory (Inventario Fenomeni Franosi in Italia - IFFI). Best results were obtained using the QNL model. The corresponding zonation of non-susceptible landslide areas was intersected in a geographic information system (GIS) with geographical census data for Italy. The result determined that 57.5% of the population of Italy (in 2001) was located in areas where landslide susceptibility is expected to be negligible. We applied the QNL model to the landmasses surrounding the Mediterranean Sea, and we tested the synoptic non-susceptibility zonation using independent landslide information for three study areas in Spain. Results showed that the QNL model was capable of determining where landslide susceptibility is expected to be negligible in the validation areas in Spain. We expect our results to be applicable in similar study areas, facilitating the identification of non-susceptible landslide areas at the synoptic scale.
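The two morphometric predictors can be sketched directly. Below is a minimal pure-Python steepest-slope computation in a 3 × 3 window plus a windowed relative relief; the DEM values and cell size are invented, and this is not the authors' GIS workflow (which used quantile regression on SRTM data).

```python
import math

def local_slope_deg(dem, row, col, cellsize):
    """Steepest slope (degrees) to any of the 8 neighbours in a 3x3 window."""
    z0 = dem[row][col]
    best = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            dist = cellsize * math.hypot(dr, dc)
            grad = abs(dem[row + dr][col + dc] - z0) / dist
            best = max(best, grad)
    return math.degrees(math.atan(best))

def relative_relief(dem, row, col, half):
    """Elevation range (max - min) in a square window of half-width `half`."""
    vals = [dem[r][c]
            for r in range(row - half, row + half + 1)
            for c in range(col - half, col + half + 1)]
    return max(vals) - min(vals)
```

Cells with both low slope and low relief are the candidates for the "non-susceptible" class; the thresholds themselves are what the LNR/QLR/QNL models estimate.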
Use of a Latent Topic Model for Characteristic Extraction from Health Checkup Questionnaire Data.
Hatakeyama, Y; Miyano, I; Kataoka, H; Nakajima, N; Watabe, T; Yasuda, N; Okuhara, Y
2015-01-01
When patients complete questionnaires during health checkups, many of their responses are subjective, making topic extraction difficult. Therefore, the purpose of this study was to develop a model capable of extracting appropriate topics from subjective data in questionnaires conducted during health checkups. We employed a latent topic model to group the lifestyle habits of the study participants and represented their responses to items on health checkup questionnaires as a probability model. For the probability model, we used latent Dirichlet allocation to extract 30 topics from the questionnaires. According to the model parameters, a total of 4381 study participants were then divided into groups based on these topics. Results from laboratory tests, including blood glucose level, triglycerides, and estimated glomerular filtration rate, were compared between each group, and these results were then compared with those obtained by hierarchical clustering. If a significant (p < 0.05) difference was observed in any of the laboratory measurements between groups, it was considered to indicate a questionnaire response pattern corresponding to the value of the test result. A comparison between the latent topic model and hierarchical clustering revealed that, in the latent topic model method, the small group of participants who reported having subjective signs of urinary disorder was allocated to a single group. The latent topic model is thus useful for extracting the characteristics of small groups from questionnaires with a large number of items. These results show that, in addition to chief complaints and history of past illness, questionnaire data obtained during medical checkups can serve as useful judgment criteria for assessing the conditions of patients.
Valle, Denis; Lima, Joanna M Tucker; Millar, Justin; Amratia, Punam; Haque, Ubydul
2015-11-04
Logistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and propose simple Bayesian models to adequately address this issue. A systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false-negatives/false-positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon. A systematic literature review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved when using the proposed Bayesian models versus the standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and an important risk factor (i.e., participation in forest extractivism) is identified that would have been missed by standard logistic regression. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.
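The distortion caused by an imperfect test, and a standard back-correction, can be made concrete. The Rogan-Gladen estimator below is a textbook formula, not the paper's Bayesian model (which embeds sensitivity and specificity in the regression likelihood instead); all numbers are illustrative.

```python
def apparent_prevalence(true_p, sensitivity, specificity):
    """P(test positive) when the test misclassifies:
    Se * p + (1 - Sp) * (1 - p)."""
    return sensitivity * true_p + (1.0 - specificity) * (1.0 - true_p)

def rogan_gladen(observed_p, sensitivity, specificity):
    """Back-correct an observed prevalence for known test error."""
    return (observed_p + specificity - 1.0) / (sensitivity + specificity - 1.0)
```

Fitting logistic regression to the apparent (mismeasured) outcome biases the odds ratios toward the null, which is why risk factors can be missed when the misclassification is ignored.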
Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David
2015-01-01
Introduction Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix, evaluating model components and methodology. Conclusion When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes, test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases, (HCV and HIV at the same time) might prove important for decision makers. PMID:26689908
Tabrizi, Jafar-Sadegh; Farahbakhsh, Mostafa; Shahgoli, Javad; Rahbar, Mohammad Reza; Naghavi-Behzad, Mohammad; Ahadi, Hamid-Reza; Azami-Aghdash, Saber
2015-10-01
Excellence and quality models are comprehensive methods for improving the quality of healthcare. The aim of this study was to design an excellence and quality model for training centers of primary health care using the Delphi method. First, comprehensive information was collected through a literature review, in which 39 models from 34 countries were identified; the related sub-criteria and standards were extracted from 34 of these models. A primary pattern comprising 8 criteria, 55 sub-criteria, and 236 standards was then developed as a Delphi questionnaire and evaluated in four stages by 9 health system specialists in Tabriz and 50 specialists from around the country. After the four stages of evaluation, the model was finalized with 8 criteria, 45 sub-criteria, and 192 standards. The major criteria of the model are leadership, strategic and operational planning, resource management, information analysis, human resources management, process management, customer results, and functional results, with a total score of 1000 assigned by the specialists. Functional results had the maximum score of 195, whereas planning had the minimum score of 60. Furthermore, leadership had the most sub-criteria (10) and strategic planning the fewest (3). The model introduced in this research was designed following 34 reference models from around the world and could provide a proper frame for health system managers to improve quality.
Al-Quwaidhi, Abdulkareem J.; Pearce, Mark S.; Sobngwi, Eugene; Critchley, Julia A.; O’Flaherty, Martin
2014-01-01
Aims To compare the estimates and projections of type 2 diabetes mellitus (T2DM) prevalence in Saudi Arabia from a validated Markov model against other modelling estimates, such as those produced by the International Diabetes Federation (IDF) Diabetes Atlas and the Global Burden of Disease (GBD) project. Methods A discrete-state Markov model was developed and validated that integrates data on population, obesity and smoking prevalence trends in adult Saudis aged ≥25 years to estimate the trends in T2DM prevalence (annually from 1992 to 2022). The model was validated by comparing the age- and sex-specific prevalence estimates against a national survey conducted in 2005. Results Prevalence estimates from this new Markov model were consistent with the 2005 national survey and very similar to the GBD study estimates. Prevalence in men and women in 2000 was estimated by the GBD model respectively at 17.5% and 17.7%, compared to 17.7% and 16.4% in this study. The IDF estimates of the total diabetes prevalence were considerably lower at 16.7% in 2011 and 20.8% in 2030, compared with 29.2% in 2011 and 44.1% in 2022 in this study. Conclusion In contrast to other modelling studies, both the Saudi IMPACT Diabetes Forecast Model and the GBD model directly incorporated the trends in obesity prevalence and/or body mass index (BMI) to inform T2DM prevalence estimates. It appears that such a direct incorporation of obesity trends in modelling studies results in higher estimates of the future prevalence of T2DM, at least in countries where obesity has been rapidly increasing. PMID:24447810
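The core of such a discrete-state Markov projection can be sketched in a few lines. This is an illustrative two-state (non-diabetic/diabetic) sketch, not the study's calibrated model; the transition rates below are hypothetical placeholders.

```python
# Minimal two-state (non-diabetic / diabetic) Markov prevalence projection.
# Transition rates are illustrative placeholders, not the study's values.
def project_prevalence(p0, incidence, remission, years):
    """Annual update: inflow from the susceptible pool, small outflow back."""
    prevalence = [p0]
    p = p0
    for _ in range(years):
        p = p + incidence * (1.0 - p) - remission * p
        prevalence.append(p)
    return prevalence

trajectory = project_prevalence(p0=0.15, incidence=0.02, remission=0.001, years=30)
```

In the actual model, the incidence rate would itself be driven by the obesity and smoking prevalence trends rather than held constant as it is here.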
2017-01-01
Several reactions, known from other amine systems for CO2 capture, have been proposed for Lewatit VP OC 1065. The aim of this molecular modeling study is to elucidate the CO2 capture process: the physisorption that precedes capture, and the capture reactions themselves. Molecular modeling shows that the resin has a structure with benzyl amine groups on alternating positions in close vicinity of each other. Based on this structure, the preferred adsorption mode of CO2 and H2O was established. Next, using standard Density Functional Theory, two catalytic reactions responsible for the actual CO2 capture were identified: direct amine-catalyzed and amine-H2O-catalyzed formation of carbamic acid. The latter is a new type of catalysis. Other reactions are unlikely. Quantitative verification of the molecular modeling results against known experimental CO2 adsorption isotherms, applying a dual-site Langmuir adsorption isotherm model, further supports all results of this molecular modeling study. PMID:29142339
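The dual-site Langmuir isotherm used for the quantitative verification has a simple closed form, q(p) = q1·b1·p/(1 + b1·p) + q2·b2·p/(1 + b2·p). A minimal sketch, with purely hypothetical site capacities and affinities:

```python
def dual_site_langmuir(p, q1, b1, q2, b2):
    """CO2 loading q(p) as the sum of two independent Langmuir sites."""
    return q1 * b1 * p / (1.0 + b1 * p) + q2 * b2 * p / (1.0 + b2 * p)

# Hypothetical parameters: site capacities q1, q2 (mol/kg) and affinities b1, b2 (1/bar).
loadings = [dual_site_langmuir(p, q1=1.5, b1=8.0, q2=0.9, b2=0.3)
            for p in (0.01, 0.1, 1.0, 10.0, 100.0)]
```

Loading rises monotonically with pressure and saturates at q1 + q2, the behaviour a fit against measured adsorption isotherms would exploit.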
NASA Technical Reports Server (NTRS)
Shie, Chung-Lin; Tao, Wei-Kuo; Johnson, Dan; Simpson, Joanne; Li, Xiaofan; Sui, Chung-Hsiung; Einaudi, Franco (Technical Monitor)
2001-01-01
Coupling a cloud resolving model (CRM) with an ocean mixed layer (OML) model can provide a powerful tool for better understanding the impacts of atmospheric precipitation on sea surface temperature (SST) and salinity. The objective of this study is twofold. First, by using the three-dimensional (3-D) CRM-simulated (the Goddard Cumulus Ensemble model, GCE) diabatic source terms, radiation (longwave and shortwave), surface fluxes (sensible and latent heat, and wind stress), and precipitation as input for the OML model, the respective impacts of the individual components on upper-ocean heat and salt budgets are investigated. Secondly, a two-way air-sea interaction between tropical atmospheric climates (involving atmospheric radiative-convective processes) and the upper-ocean boundary layer is examined using a coupled two-dimensional (2-D) GCE and OML model. Results presented here, however, only involve the first aspect. Complete results will be presented at the conference.
A numerical study of linear and nonlinear kinematic models in fish swimming with the DSD/SST method
NASA Astrophysics Data System (ADS)
Tian, Fang-Bao
2015-03-01
Flow over two fish (modeled by two flexible plates) in a tandem arrangement is investigated by solving the incompressible Navier-Stokes equations numerically with the DSD/SST method, to understand the differences between the geometrically linear and nonlinear models. In the simulation, the motions of the plates are reconstructed from a vertically flowing soap film tunnel experiment with linear and nonlinear kinematic models. Based on the simulations, the drag, lift, power consumption, vorticity and pressure fields are discussed in detail. It is found that the linear and nonlinear models are both able to reasonably predict the forces and power consumption of a single plate in flow. If multiple plates are considered, however, the two models yield markedly different results, which implies that the nonlinear model should be used. The results presented in this work provide a guideline for future studies of fish swimming.
Study on the CO2 electric driven fixed swash plate type compressor for eco-friendly vehicles
NASA Astrophysics Data System (ADS)
Nam, Donglim; Kim, Kitae; Lee, Jehie; Kwon, Yunki; Lee, Geonho
2017-08-01
The purpose of this study is to test and analyze the performance of an electric-driven fixed swash plate compressor using the alternative refrigerant R744 (CO2). A comprehensive simulation model of an electric-driven CO2 compressor for eco-friendly vehicles is presented. The model consists of a compression model and a dynamic model. The compression model includes valve dynamics, leakage, and heat transfer models. The dynamic model includes frictional losses between the piston ring and cylinder wall, between the shoe and swash plate, and in the bearings, as well as electric efficiency. Because the efficiency of the electric parts (motor and inverter) in the compressor affects its losses, a dynamo test was performed. We built the designed compressor, tested its performance under various pressure conditions, and compared the performance analysis results with the test results.
Herding, minority game, market clearing and efficient markets in a simple spin model framework
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav; Vosvrda, Miloslav
2018-01-01
We present a novel approach to the financial Ising model. Most studies utilize the model to find settings which generate returns closely mimicking financial stylized facts such as fat tails, volatility clustering and persistence. We tackle the model's utility from the other side and look for the combination of parameters which yields the return dynamics of an efficient market in the sense of the efficient market hypothesis. Working with the Ising model, we are able to present readily interpretable results, as the model is based on only two parameters. Apart from showing the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that market frictions (up to a certain level) and herding behavior of market participants do not work against market efficiency; on the contrary, they are needed for the markets to be efficient.
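A minimal Metropolis simulation of 2D Ising dynamics can illustrate the kind of return series such studies work with; the lattice size, inverse temperature, and magnetisation-change return proxy below are illustrative assumptions, not the authors' exact specification.

```python
import math
import random

def ising_magnetisation_series(n=16, beta=0.5, steps=2000, seed=1):
    """Metropolis single-spin-flip dynamics on an n x n periodic lattice;
    successive magnetisation changes serve as a crude return proxy."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    returns = []
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        # Energy change of flipping spin (i, j) with periodic neighbours.
        nb = s[(i + 1) % n][j] + s[(i - 1) % n][j] + s[i][(j + 1) % n] + s[i][(j - 1) % n]
        d_energy = 2 * s[i][j] * nb
        if d_energy <= 0 or rng.random() < math.exp(-beta * d_energy):
            s[i][j] *= -1
            returns.append(2 * s[i][j] / (n * n))  # magnetisation change
        else:
            returns.append(0.0)
    return returns

rets = ising_magnetisation_series()
```

Sweeping beta (inverse temperature) and a herding/coupling parameter and testing the resulting series for predictability is the flavour of parameter search the abstract describes.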
A kinetic model of municipal sludge degradation during non-catalytic wet oxidation.
Prince-Pike, Arrian; Wilson, David I; Baroutian, Saeid; Andrews, John; Gapes, Daniel J
2015-12-15
Wet oxidation is a successful process for the treatment of municipal sludge. In addition, the resulting effluent from wet oxidation is a useful carbon source for subsequent biological nutrient removal processes in wastewater treatment. Owing to limitations of current kinetic models, this study produced a kinetic model which predicts the concentrations of key intermediate components during wet oxidation. The model was regressed from lab-scale experiments and subsequently validated using data from a wet oxidation pilot plant. The model was shown to be accurate in predicting the concentrations of each component, and produced good results when applied to a plant 500 times larger in size. A statistical study was undertaken to investigate the validity of the regressed model parameters. Finally, the usefulness of the model was demonstrated by suggesting optimum operating conditions such that volatile fatty acids were maximised.
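A lumped first-order reaction chain is a common minimal form for such kinetic models. The sketch below integrates A → B → C (e.g., solids → intermediates such as volatile fatty acids → end products) with forward Euler; the rate constants are hypothetical, not the regressed values from the study.

```python
def lumped_kinetics(k1, k2, a0, t_end, dt=0.001):
    """Forward-Euler integration of A -(k1)-> B -(k2)-> C, a lumped
    stand-in for solids -> volatile fatty acids -> end products."""
    a, b, c = a0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        ra, rb = k1 * a, k2 * b   # first-order reaction rates
        a -= ra * dt
        b += (ra - rb) * dt
        c += rb * dt
    return a, b, c

# Hypothetical rate constants (1/h) and initial concentration (mg/L).
a, b, c = lumped_kinetics(k1=0.5, k2=0.2, a0=100.0, t_end=10.0)
```

The intermediate pool B rises and then falls, which is what makes an optimum operating time for maximising volatile fatty acids exist at all.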
Mechanics of airflow in the human nasal airways.
Doorly, D J; Taylor, D J; Schroter, R C
2008-11-30
The mechanics of airflow in the human nasal airways is reviewed, drawing on the findings of experimental and computational model studies. Modelling inevitably requires simplifications and assumptions, particularly given the complexity of the nasal airways. The processes entailed in modelling the nasal airways (from defining the model, to its production and, finally, validating the results) are critically examined, both for physical models and for computational simulations. Uncertainty still surrounds the appropriateness of the various assumptions made in modelling, particularly with regard to the nature of flow. New results are presented in which high-speed particle image velocimetry (PIV) and direct numerical simulation are applied to investigate the development of flow instability in the nasal cavity. These illustrate some of the improved capabilities afforded by technological developments for future model studies. The need for further improvements in characterising airway geometry and flow, together with promising new methods, is briefly discussed.
Jet production and fragmentation properties in deep inelastic muon scattering
NASA Astrophysics Data System (ADS)
Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badelek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Conrad, J.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Drobnitzki, M.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Ftàčnik, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffre, M.; Jacholkowska, A.; Janata, F.; Jancso, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, A.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettingale, J.; Pietrzyk, B.; Pietrzyk, U.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlabböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.; Ziemons, K.
1987-12-01
Results are presented from a study of the transverse momenta and jet properties of final-state hadrons in deep inelastic 280 GeV muon-nucleon interactions. The results are analysed in a way which attempts to separate the contributions of hard and soft QCD effects from those arising from the fragmentation process. The fragmentation models with which the data are compared are the Lund string model, the independent jet model, the QCD parton shower model including soft gluon interference effects, and the firestring model. The discrimination between these models is discussed. Various methods of analysis of the data in terms of hard QCD processes are presented. From a study of the properties of the jet profiles, a leading-order value of αs is determined using the Lund string model, namely αs = 0.29 ± 0.01 (stat.) ± 0.02 (syst.), for Q² ≈ 20 GeV².
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2016-06-01
Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of subsequent landslide susceptibility maps, with particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km²), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated by different inventories, classifiers and predictors looked different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful to expose spatially varying inconsistencies in the modelling results, while additionally providing evidence for slightly overfitted machine learning-based models.
However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model might be misleading in cases where a predictor systematically relates to a spatially consistent bias in the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces, while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performances cannot be equated with a high plausibility and applicability of subsequent landslide susceptibility maps. We suggest that greater emphasis should be placed on identifying confounding factors and biases in landslide inventories. A joint discussion in the field between modelers and decision makers of the spatial patterns of the final susceptibility maps might increase their acceptance and applicability.
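The AUROC used for quantitative validation here reduces to the Mann-Whitney statistic: the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell. A minimal sketch with toy scores:

```python
def auroc(scores, labels):
    """AUROC as the probability that a random positive outscores a random
    negative, counting ties as half wins (Mann-Whitney formulation)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

The study's point is that this single number can be high even when the map is geomorphically implausible, which is why spatial cross-validation and field plausibility checks are advocated alongside it.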
NASA Astrophysics Data System (ADS)
Hazra, Gopal
2018-02-01
In this thesis, various studies leading to a better understanding of the 11-year solar cycle and its theoretical modeling with the flux transport dynamo model are performed. Although this is primarily a theoretical thesis, one part deals with the analysis of observational data. Various proxies of solar activity (e.g., sunspot number, sunspot area and 10.7 cm radio flux) from various observatories, including the sunspot area records of the Kodaikanal Observatory, have been analyzed to study the irregular aspects of solar cycles, and an analysis has been carried out on the correlation between the decay rate and the next cycle amplitude. The theoretical analysis starts by explaining how magnetic buoyancy has been treated in flux transport dynamo models, and the advantages and disadvantages of different treatments. It is found that some of the irregular properties of the solar cycle in the decaying phase can only be well explained using a particular treatment of magnetic buoyancy. Next, the behavior of the dynamo with different spatial structures of the meridional flow, based on recent helioseismology results, has been studied. A theoretical model is constructed considering the back reaction due to the Lorentz force on the meridional flows, which explains the observed variation of the meridional flow with the solar cycle. Finally, some results with 3D flux transport dynamo (FTD) models are presented. This 3D model is developed to handle the Babcock-Leighton mechanism and magnetic buoyancy more realistically than previous 2D models and can capture some important effects connected with the subduction of the magnetic field in polar regions, which are missed in 2D surface flux transport models. This 3D model is further used to study the evolution of the magnetic fields due to a turbulent non-axisymmetric velocity field and to compare the results with those obtained by using a simple turbulent diffusivity coefficient.
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO2 dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show the utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
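The averaged-model approach can be sketched as follows: average the fitted dose-response curves with model weights, then invert the extra-risk function for the BMD. Both component models and the weight below are hypothetical stand-ins, and the bootstrap BMDL step is omitted.

```python
import math

# Hypothetical fitted dose-response curves and a hypothetical AIC-style weight,
# standing in for the averaged dichotomous model described in the study.
def logistic_model(d):
    return 1.0 / (1.0 + math.exp(-(-2.0 + 1.2 * d)))

def quantal_linear_model(d):
    return 0.1 + 0.9 * (1.0 - math.exp(-0.35 * d))

def averaged_response(d, w=0.6):
    return w * logistic_model(d) + (1.0 - w) * quantal_linear_model(d)

def extra_risk(d):
    p0 = averaged_response(0.0)
    return (averaged_response(d) - p0) / (1.0 - p0)

def benchmark_dose(bmr=0.10, lo=0.0, hi=50.0):
    """Bisect for the dose whose extra risk equals the benchmark response."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

bmd = benchmark_dose()
```

In the paper's full method the BMDL would then come from refitting and re-averaging over bootstrap resamples and taking a lower percentile of the resulting BMD distribution.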
Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
The primary tasks performed are: (1) the development of a second order local thermodynamic nonequilibrium (LTNE) model for atoms; (2) the continued development of vibrational nonequilibrium models; and (3) the development of a new multicomponent diffusion model. In addition, studies comparing these new models with previous models and results were conducted and reported.
ERIC Educational Resources Information Center
Treagust, David F.; Chittleborough, Gail D.; Mamiala, Thapelo L.
2004-01-01
The purpose of the study was to investigate secondary students' understanding of the descriptive and predictive nature of teaching models used in representing compounds in introductory organic chemistry. Of interest were the relationships between teaching models, scientific models, and students' mental models and expressed models. The results from…
NASA Astrophysics Data System (ADS)
Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei
2017-09-01
The 3D model is an important part of simulated remote sensing for earth observation. At the small spatial scales handled by the DART software, both the detail of the model itself and the number of distributed models have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, this paper studies the effect of the P. australis model on canopy NDVI, building on previous studies of model precision, mainly with respect to the cell dimension of the DART software, the density distribution of the P. australis model in the scene, and the choice of model density given the computational cost of actual simulation. The DART cell dimensions and the scene model density were set using the optimal-precision model from existing research results. The simulated NDVI for different model densities under different cell dimensions was evaluated by error analysis. By studying the relationship between relative error, absolute error and time costs, we established a density selection method for the P. australis model in the simulation of small-scale scenes. Experiments showed that the number of P. australis plants in the simulated scene need not match that of the real environment, owing to the difference between the 3D model and real scenarios; the best simulation results were obtained by keeping the density at about 40 plants per square meter while preserving the visual effect.
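NDVI itself is the standard normalized band ratio of near-infrared and red reflectance. A one-line sketch (the reflectance values are illustrative, not from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Illustrative canopy reflectances: strong NIR, low red absorption band.
canopy_ndvi = ndvi(nir=0.45, red=0.09)
```

Dense healthy vegetation pushes the index toward 1, so under-populating the simulated scene with too few plant models systematically depresses the simulated canopy NDVI.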
Phenomenological and molecular-level Petri net modeling and simulation of long-term potentiation.
Hardy, S; Robillard, P N
2005-10-01
Petri net-based modeling methods have been used in many research projects to represent biological systems. Among these, the hybrid functional Petri net (HFPN) was developed especially for biological modeling, in order to provide biologists with a more intuitive Petri net-based method. In the literature, HFPNs are used to represent kinetic models at the molecular level. We present two models of long-term potentiation, previously represented by differential equations, which we have transformed into HFPN models: a phenomenological synapse model and a molecular-level model of the CaMKII regulation pathway. Through simulation, we obtained results similar to those of previous studies using these models. Our results open the way to a new type of modeling for systems biology where HFPNs are used to combine different levels of abstraction within one model. This approach can be useful for fully modeling a system at the molecular level when kinetic data are missing, or when a full molecular-level study of a system is not within the scope of the research.
Douglas, Steven; Dixon, Barnali; Griffin, Dale W.
2018-01-01
With continued population growth and increasing use of fresh groundwater resources, protection of this valuable resource is critical. A cost-effective means to assess the risk of groundwater contamination will provide a useful tool to protect these resources. Integrating geospatial methods offers a means to quantify the risk of contamination potential in cost-effective and spatially explicit ways. This research was designed to compare the ability of intrinsic (DRASTIC) and specific (Attenuation Factor; AF) vulnerability models to indicate groundwater vulnerability areas by comparing model results to the presence of pesticides in groundwater sample datasets. A logistic regression was used to assess the relationship between the environmental variables and the presence or absence of pesticides within regions of varying vulnerability. According to the DRASTIC model, more than 20% of the study area is very highly vulnerable. Approximately 30% is very highly vulnerable according to the AF model. When groundwater concentrations of individual pesticides were compared to model predictions, the results were mixed. Model predictability improved when concentrations of groups of similar pesticides were compared to model results. Compared to the DRASTIC model, the AF model more accurately predicts the distribution of the number of contaminated wells within each vulnerability class.
NASA Astrophysics Data System (ADS)
Bring, Arvid; Asokan, Shilpa M.; Jaramillo, Fernando; Jarsjö, Jerker; Levi, Lea; Pietroń, Jan; Prieto, Carmen; Rogberg, Peter; Destouni, Georgia
2015-06-01
The multimodel ensemble of the Coupled Model Intercomparison Project, Phase 5 (CMIP5) synthesizes the latest research in global climate modeling. The freshwater system on land, particularly runoff, has so far been of relatively low priority in global climate models, despite the societal and ecosystem importance of freshwater changes, and the science and policy needs for such model output on drainage basin scales. Here we investigate the implications of CMIP5 multimodel ensemble output data for the freshwater system across a set of drainage basins in the Northern Hemisphere. Results of individual models vary widely, with even ensemble mean results differing greatly from observations and implying unrealistic long-term systematic changes in water storage and level within entire basins. The CMIP5 projections of basin-scale freshwater fluxes differ considerably more from observations and among models for the warm temperate study basins than for the Arctic and cold temperate study basins. In general, the results call for concerted research efforts and model developments for improving the understanding and modeling of the freshwater system and its change drivers. Specifically, more attention to basin-scale water flux analyses should be a priority for climate model development, and an important focus for relevant model-based advice for adaptation to climate change.
Cleather, D I; Bull, A M J
2010-01-01
The calculation of the patellofemoral joint contact force using three-dimensional (3D) modelling techniques requires a description of the musculoskeletal geometry of the lower limb. In this study, the influence of the complexity of the muscle model was studied by considering two different muscle models, the Delp and Horsman models. Both models were used to calculate the patellofemoral force during standing, vertical jumping, and Olympic-style weightlifting. The patellofemoral forces predicted by the Horsman model were markedly lower than those predicted by the Delp model in all activities and represented more realistic values when compared with previous work. This was found to be a result of a lower level of redundancy in the Delp model, which forced a higher level of muscular activation in order to allow a viable solution. The higher level of complexity in the Horsman model resulted in a greater degree of redundancy and consequently lower activation and patellofemoral forces. The results of this work demonstrate that a well-posed muscle model must have an adequate degree of complexity to create a sufficient independence, variability, and number of moment arms in order to ensure adequate redundancy of the force-sharing problem such that muscle forces are not overstated.
NASA Astrophysics Data System (ADS)
Wang, Liang-Jie; Sawada, Kazuhide; Moriguchi, Shuji
2013-01-01
To mitigate the damage caused by landslide disasters, different mathematical models have been applied to predict landslide spatial distribution characteristics. Although some researchers have achieved excellent results around the world, few studies take the spatial resolution of the database into account. Four digital elevation models (DEMs) with resolutions ranging from 2 to 20 m, derived from light detection and ranging (LiDAR) technology, are used to analyze landslide susceptibility in Mizunami City, Gifu Prefecture, Japan. Fifteen landslide-causative factors are considered using a logistic-regression approach to create models for landslide potential analysis. Pre-existing landslide bodies are used to evaluate the performance of the four models. The results revealed that the 20-m model had the highest classification accuracy (71.9%), whereas the 2-m model had the lowest value (68.7%). In the 2-m model, 89.4% of the landslide bodies fell in the medium to very high categories. For the 20-m model, only 83.3% of the landslide bodies were concentrated in the medium to very high classes. When the cell size decreases from 20 to 2 m, the area under the relative operative characteristic increases from 0.68 to 0.77. Therefore, higher-resolution DEMs would provide better results for landslide-susceptibility mapping.
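A logistic-regression susceptibility model of this kind can be sketched with plain batch gradient descent; the single toy predictor and labels below are hypothetical, standing in for the fifteen causative factors rasterized from the DEMs.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on the log-loss for a small logistic model."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            gb += err
            for j, xj in enumerate(xi):
                gw[j] += err * xj
        b -= lr * gb / n
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
    return w, b

def predict(X, w, b):
    return [1 if b + sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0
            for xi in X]

# Toy factor values (e.g., a scaled slope-angle predictor) and landslide labels.
X = [[0.0], [0.5], [1.0], [1.5], [2.0], [2.5], [3.0], [3.5]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(X, y)
```

Applied per grid cell, the fitted probabilities are what get binned into the susceptibility classes (medium to very high) the abstract reports.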
Helgason, Benedikt; Viceconti, Marco; Rúnarsson, Tómas P; Brynjólfsson, Sigurður
2008-01-01
Pushout tests can be used to estimate the shear strength of the bone-implant interface, and numerous such experimental studies have been published in the literature. Despite this, researchers are still some way off from developing accurate numerical models to simulate implant stability. In the present work, a specific experimental pushout study from the literature was simulated using two different bone-implant interface models. The implant was a porous coated Ti-6Al-4V implant retrieved 4 weeks postoperatively from a dog model. The purpose was to find out which of the interface models could replicate the experimental results using physically meaningful input parameters. The results showed that a model based on partial bone ingrowth (ingrowth stability) is superior to an interface model based on friction and prestressing due to press fit (initial stability). Even though the present study is limited to a single experimental setup, the authors suggest that the presented methodology can be used to investigate implant stability from other experimental pushout models. This would eventually enhance the much-needed understanding of the mechanical response of the bone-implant interface and help to quantify how implant stability evolves with time.
Gottfredson, Nisha C; Bauer, Daniel J; Baldwin, Scott A; Okiishi, John C
2014-10-01
This study demonstrates how to use a shared parameter mixture model (SPMM) in longitudinal psychotherapy studies to accommodate missingness that is due to a correlation between rate of improvement and termination of therapy. Traditional growth models assume that such a relationship does not exist (i.e., they assume that data are missing at random) and produce biased results if this assumption is incorrect. We used longitudinal data from 4,676 patients enrolled in a naturalistic study of psychotherapy to compare results from a latent growth model and an SPMM. In this data set, estimates of the rate of improvement during therapy differed by 6.50%-6.66% across the two models, indicating that participants with steeper trajectories left psychotherapy earliest, thereby potentially biasing inference for the slope in the latent growth model. We conclude that reported estimates of change during therapy may be underestimated in naturalistic studies of therapy in which participants and their therapists determine the end of treatment. Because non-randomly missing data can also occur in randomized controlled trials or in observational studies of development, the utility of the SPMM extends beyond naturalistic psychotherapy data.
James, Susan; Harris, Sara; Foster, Gary; Clarke, Juanne; Gadermann, Anne; Morrison, Marie; Bezanson, Birdie Jane
2013-01-01
This article outlines a model for conducting psychotherapy with people of diverse cultural backgrounds. The theoretical foundation for the model is based on clinical and cultural psychology. Cultural psychology integrates psychology and anthropology in order to provide a complex understanding of both culture and the individual within his or her cultural context. The model proposed in this article is also based on our clinical experience and mixed-method research with the Portuguese community. The model demonstrates its value with ethnic minority clients by situating the clients within the context of their multi-layered social reality. The individual, familial, socio-cultural, and religio-moral domains are explored in two research projects, revealing the interrelation of these levels/contexts. The article is structured according to these domains. Study 1 is a quantitative study that validates the Agonias Questionnaire in Ontario. The results of this study are used to illustrate the individual domain of our proposed model. Study 2 is an ethnography conducted in the Azorean Islands, and the results of this study are integrated to illustrate the other three levels of the model, namely family, socio-cultural, and the religio-moral levels. PMID:23720642
Testing Software Development Project Productivity Model
NASA Astrophysics Data System (ADS)
Lipkin, Ilya
Software development is an increasingly influential factor in today's business environment, and a major issue affecting software development is how an organization estimates projects. If the organization underestimates cost, schedule, and quality requirements, the end results will not meet customer needs. On the other hand, if the organization overestimates these criteria, resources that could have been used more profitably will be wasted. No accurate model or measure is available to guide an organization's software development estimates, and existing estimation models often underestimate software development effort by as much as 500 to 600 percent. To address this issue, existing models are usually calibrated with local data of small sample size, but the resulting estimates do not offer improved cost analysis. This study presents a conceptual model for accurately estimating software development, based on an extensive literature review and theoretical analysis grounded in Sociotechnical Systems (STS) theory. The conceptual model serves as a bridge between organizational and technological factors and is validated using an empirical dataset provided by the DoD. Practical implications of this study allow practitioners to concentrate on the specific constructs of interest that provide the best value for the least amount of time. This study outlines the key contributing constructs that are unique to Software Size (E-SLOC), Man-hours Spent, and Quality of the Product, these being the constructs with the largest contribution to project productivity. This study also discusses customer characteristics and provides a framework for a simplified project analysis for source selection evaluation and audit task reviews for customers and suppliers.
Theoretical contributions of this study provide an initial theory-based hypothesized project productivity model that can be used as a generic overall model across several application domains, such as IT, Command and Control, and Simulation. This research validates findings from previous work on software project productivity and leverages those results. The hypothesized project productivity model provides statistical support for, and validation of, expert opinions used by practitioners in the field of software project estimation.
Conceptual Incoherence as a Result of the Use of Multiple Historical Models in School Textbooks
ERIC Educational Resources Information Center
Gericke, Niklas M.; Hagberg, Mariana
2010-01-01
This paper explores the occurrence of conceptual incoherence in upper secondary school textbooks resulting from the use of multiple historical models. Swedish biology and chemistry textbooks, as well as a selection of books from English speaking countries, were examined. The purpose of the study was to identify which models are used to represent…
Effect of Turbulence Modeling on Hovering Rotor Flows
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Chaderjian, Neal M.; Pulliam, Thomas H.; Holst, Terry L.
2015-01-01
The effect of turbulence models in the off-body grids on the accuracy of solutions for rotor flows in hover has been investigated. Results from the Reynolds-Averaged Navier-Stokes and Laminar Off-Body models are compared. Advection of turbulent eddy viscosity has been studied to find the mechanism leading to inaccurate solutions. A coaxial rotor result is also included.
ERIC Educational Resources Information Center
Johnson, Tristan E.; Lee, Youngmin
2008-01-01
In an effort to better understand learning teams, this study examines the effects of shared mental models on team and individual performance. The results indicate that each team's shared mental model changed significantly over the time that subjects participated in team-based learning activities. The results also showed that the shared mental…
Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.
Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W
2016-08-18
A fundamental challenge in the quantitation of biomolecules for cancer biomarker discovery arises from the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data, considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation of the data structure and infer sample-specific pure sources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models to capture mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis, as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources, with small average error ratios (<7%) between estimation and ground truth. By applying the topic model-based purification to mass spectrometric data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power, in terms of the area under the ROC curve, compared to the results found prior to purification.
We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed that incorporation of scan-level features has the potential to lead to more accurate purification results by alleviating the information loss that results from integrating peaks. We believe cancer biomarker discovery studies that use mass spectrometric analysis of human biospecimens can greatly benefit from topic model-based purification of the data prior to statistical and pathway analyses.
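As a toy stand-in for the full topic-model inference, a two-source least-squares deconvolution conveys the purification idea: each observed profile is modeled as a mixture of a pure cancerous source and a contaminant source, and the mixture proportion is inferred. The profiles and noise level below are invented, and this linear sketch omits the probabilistic machinery of the actual models:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 50
cancer = rng.gamma(2.0, 1.0, n_features)   # hypothetical "pure tumour" profile
normal = rng.gamma(2.0, 1.0, n_features)   # hypothetical contaminant profile

def estimate_purity(sample, cancer, normal):
    """Least-squares estimate of the cancer fraction w in
    sample ~ w*cancer + (1-w)*normal, clipped to [0, 1]."""
    d = cancer - normal
    w = np.dot(sample - normal, d) / np.dot(d, d)
    return float(np.clip(w, 0.0, 1.0))

true_w = 0.7
sample = true_w * cancer + (1 - true_w) * normal + rng.normal(0, 0.05, n_features)
w_hat = estimate_purity(sample, cancer, normal)
print(round(w_hat, 3))
```

With known source profiles and modest noise, the inferred proportion lands close to the true mixing weight, mirroring the small error ratios the synthetic-data evaluation reports.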
Modeling of Salmonella Contamination in the Pig Slaughterhouse.
Swart, A N; Evers, E G; Simons, R L L; Swanenburg, M
2016-03-01
In this article we present a model for Salmonella contamination of pig carcasses in the slaughterhouse. This model forms part of a larger QMRA (quantitative microbial risk assessment) of Salmonella in slaughter and breeder pigs, which uses a generic model framework that can be parameterized for European member states to describe the entire chain from farm to consumption and the resultant human illness. We focus on model construction, giving mathematical formulae that describe Salmonella concentrations on individual pigs and on slaughter equipment at different stages of the slaughter process. Variability among individual pigs and across slaughterhouses is incorporated using statistical distributions and simulated by Monte Carlo iteration. We present the results over the various slaughter stages and show that such a framework is especially suitable for investigating the effect of various interventions. In this article we present the results of the slaughterhouse module for two case-study member states. The model predicts an increase in average prevalence of Salmonella contamination and in Salmonella numbers at dehairing, and a decrease in Salmonella numbers at scalding. These results agree well with several other QMRAs and microbiological studies. © 2016 Society for Risk Analysis.
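The Monte Carlo treatment of between-carcass variability can be sketched in a few lines. The stage effects and distribution parameters below are hypothetical placeholders, not the QMRA's fitted values; they merely reproduce the qualitative pattern of an increase at dehairing and a reduction at scalding:

```python
import numpy as np

rng = np.random.default_rng(42)
n_carcasses = 10_000

# Hypothetical log10 CFU per carcass at successive slaughter stages
# (illustrative only): dehairing tends to add contamination, scalding
# reduces it. Variability among carcasses enters via the distributions.
log_n = rng.normal(2.0, 1.0, n_carcasses)                        # after bleeding
log_n_dehair = log_n + rng.normal(0.8, 0.3, n_carcasses)         # increase
log_n_scald = log_n_dehair - rng.normal(1.5, 0.4, n_carcasses)   # reduction

for name, x in [("bleeding", log_n), ("dehairing", log_n_dehair),
                ("scalding", log_n_scald)]:
    print(f"{name}: mean log10 CFU = {x.mean():.2f}")
```

An intervention (say, hotter scald water) would be modeled by shifting one stage's distribution and re-running the iteration, which is why the framework lends itself to intervention studies.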
Evaluation of Disaster Preparedness Based on Simulation Exercises: A Comparison of Two Models.
Rüter, Andres; Kurland, Lisa; Gryth, Dan; Murphy, Jason; Rådestad, Monica; Djalali, Ahmadreza
2016-08-01
The objective of this study was to highlight two models, the Hospital Incident Command System (HICS) and the Disaster Management Indicator model (DiMI), for evaluating the in-hospital management of a disaster situation through simulation exercises. Two disaster exercises, A and B, with similar scenarios were performed. Both exercises were evaluated with regard to actions, processes, and structures. After the exercises, the results were calculated and compared. In exercise A the HICS model indicated that 32% of the required positions for the immediate phase were taken into consideration, with an average performance of 70%. For exercise B, the corresponding scores were 42% and 68%, respectively. According to the DiMI model, the results for exercise A were a score of 68% for management processes and 63% for management structure (staff skills). In exercise B the results were 77% and 86%, respectively. Both models demonstrated acceptable results in relation to previous studies. More research in this area is needed to validate which of these methods best evaluates disaster preparedness based on simulation exercises, or whether the methods are complementary and should therefore be used together. (Disaster Med Public Health Preparedness. 2016;10:544-548).
Jones, Eric W; Carlson, Jean M
2018-02-01
In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and the transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments.
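A generalized Lotka-Volterra community of the kind described above can be sketched as follows. The growth rates and interaction matrix are illustrative inventions, not the parameters fitted to the mouse experiments, and the integrator is plain forward Euler rather than the paper's framework:

```python
import numpy as np

# Minimal gLV sketch: dx_i/dt = x_i * (r_i + sum_j A[i, j] * x_j)
# Three "species": two commensals plus an invader (hypothetical values).
r = np.array([1.0, 0.8, 1.2])               # intrinsic growth rates
A = np.array([[-1.0, -0.3, -0.2],
              [-0.4, -1.0, -0.3],
              [-0.5, -0.6, -1.0]])           # competitive interactions

def simulate_glv(x0, r, A, dt=0.01, steps=10_000):
    """Forward-Euler integration of the gLV equations."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * x * (r + A @ x)
        x = np.maximum(x, 0.0)               # abundances stay nonnegative
    return x

# A small inoculum of the third species added to an established community.
final = simulate_glv([0.5, 0.5, 0.01], r, A)
print(np.round(final, 3))
```

Susceptibility questions like those in the study amount to asking, for a given antibiotic-perturbed initial condition, whether trajectories such as this one carry the invader to a positive steady-state abundance or back to zero.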
Interdependency in Multimodel Climate Projections: Component Replication and Result Similarity
NASA Astrophysics Data System (ADS)
Boé, Julien
2018-03-01
Multimodel ensembles are the main way to deal with model uncertainties in climate projections. However, the interdependencies between models that often share entire components make it difficult to combine their results in a satisfactory way. In this study, how the replication of components (atmosphere, ocean, land, and sea ice) between climate models impacts the proximity of their results is quantified precisely, in terms of climatological means and future changes. A clear relationship exists between the number of components shared by climate models and the proximity of their results. Even the impact of a single shared component is generally visible. These conclusions are true at both the global and regional scales. Given available data, it cannot be robustly concluded that some components are more important than others. Those results provide ways to estimate model interdependencies a priori rather than a posteriori based on their results, in order to define independence weights.
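The a priori interdependency estimate suggested above, counting shared components between model pairs, can be sketched directly. The model names and component labels here are made up; real CMIP-style genealogies would substitute actual atmosphere, ocean, land, and sea-ice component names:

```python
# Hypothetical component inventories for four invented climate models.
models = {
    "M1": {"atmos": "A1", "ocean": "O1", "land": "L1", "ice": "I1"},
    "M2": {"atmos": "A1", "ocean": "O2", "land": "L1", "ice": "I2"},
    "M3": {"atmos": "A2", "ocean": "O2", "land": "L2", "ice": "I2"},
    "M4": {"atmos": "A2", "ocean": "O1", "land": "L2", "ice": "I1"},
}

def shared_components(a, b):
    """Number of components (atmosphere, ocean, land, ice) two models share."""
    return sum(models[a][k] == models[b][k] for k in models[a])

names = list(models)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(names[i], names[j], shared_components(names[i], names[j]))
```

Regressing the pairwise proximity of model results against this count is the kind of analysis that would yield independence weights before any output is inspected.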
The ambiguity of drought events, a bottleneck for Amazon forest drought response modelling
NASA Astrophysics Data System (ADS)
De Deurwaerder, Hannes; Verbeeck, Hans; Baker, Timothy; Christoffersen, Bradley; Ciais, Philippe; Galbraith, David; Guimberteau, Matthieu; Kruijt, Bart; Langerwisch, Fanny; Meir, Patrick; Rammig, Anja; Thonicke, Kirsten; Von Randow, Celso; Zhang, Ke
2016-04-01
Considering the important role of the Amazon forest in the global water and carbon cycles, the prognosis of altered hydrological patterns resulting from climate change provides a strong incentive to understand the direct implications of drought for the vegetation of this ecosystem. Dynamic global vegetation models (DGVMs) can provide a useful tool to study drought impacts on various spatial and temporal scales. This, however, assumes that the models are able to properly represent drought impact mechanisms. But how well do the models succeed in meeting this assumption? In this study, meteorological driver data and model output data of four different DGVMs, i.e. ORCHIDEE, JULES, INLAND and LPJmL, are examined. Using the Palmer Drought Severity Index (PDSI) and the mean cumulative water deficit (MWD), the temporal and spatial representation of drought events is studied in the driver data and referenced to historical extreme drought events in the Amazon. Subsequently, within the resulting temporal and spatial frame, we study the drought impact on above-ground biomass (AGB) and gross primary production (GPP) fluxes. Flux tower data, field inventory data and the JUNG data-driven GPP product for the Amazon region are used for validation. Our findings not only suggest that the current state of the studied DGVMs is inadequate for representing Amazon droughts in general, but also highlight strong inter-model differences in drought responses. Using scatterplot studies and input-output correlations, we provide insight into the origin of these inter-model differences. In addition, we present directives for model development and improvement in the scope of Amazon forest drought response modelling.
Beard, Brian B; Kainz, Wolfgang
2004-10-13
We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.
Statistical thermodynamics of a two-dimensional relativistic gas.
Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood
2009-03-01
In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system to study relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as in the moving frame (Γ′). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter β for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, relying on statistical methods such as distribution functions or the equipartition theorem is ultimately inconclusive in deciding on a correct temperature transformation law (if any).
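For reference, the two-dimensional Jüttner speed distribution that such simulations confirm can be evaluated numerically. In 2D, f(v) ∝ γ(v)⁴ v exp(−β m c² γ(v)) with γ(v) = 1/√(1 − v²/c²); the parameter values below are arbitrary natural units, not those of the paper's simulations:

```python
import numpy as np

c, m, beta = 1.0, 1.0, 5.0        # hypothetical natural units; beta = 1/(k_B T)

v = np.linspace(0.0, 0.999, 20_000)
dv = v[1] - v[0]
gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
# 2D Juttner speed density: the gamma^4 factor is the momentum-space
# Jacobian (p = gamma*m*v, d^2p = m^2 gamma^4 v dv dtheta).
f = gamma ** 4 * v * np.exp(-beta * m * c ** 2 * gamma)
f /= f.sum() * dv                  # normalize numerically on the grid

v_mean = (v * f).sum() * dv
print(round(v_mean, 3))
```

Unlike a naive Maxwell-Boltzmann density, this distribution has support strictly inside |v| < c, so the mean speed it yields is always subluminal.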
Women's Endorsement of Models of Sexual Response: Correlates and Predictors.
Nowosielski, Krzysztof; Wróbel, Beata; Kowalczyk, Robert
2016-02-01
Few studies have investigated endorsement of female sexual response models, and no single model has been accepted as a normative description of women's sexual response. The aim of the study was to establish how women from a population-based sample endorse current theoretical models of the female sexual response--the linear models and circular model (partial and composite Basson models)--as well as predictors of endorsement. Accordingly, 174 heterosexual women aged 18-55 years were included in a cross-sectional study: 74 women diagnosed with female sexual dysfunction (FSD) based on DSM-5 criteria and 100 non-dysfunctional women. The description of sexual response models was used to divide subjects into four subgroups: linear (Masters-Johnson and Kaplan models), circular (partial Basson model), mixed (linear and circular models in similar proportions, reflective of the composite Basson model), and a different model. Women were asked to choose which of the models best described their pattern of sexual response and how frequently they engaged in each model. Results showed that 28.7% of women endorsed the linear models, 19.5% the partial Basson model, 40.8% the composite Basson model, and 10.9% a different model. Women with FSD endorsed the partial Basson model and a different model more frequently than did non-dysfunctional controls. Individuals who were dissatisfied with a partner as a lover were more likely to endorse a different model. Based on the results, we concluded that the majority of women endorsed a mixed model combining the circular response with the possibility of an innate desire triggering a linear response. Further, relationship difficulties, not FSD, predicted model endorsement.
Generation of a modeling and simulation system for a semi-closed plant growth chamber
NASA Technical Reports Server (NTRS)
Blackwell, A. L.; Maa, S.; Kliss, M.; Blackwell, C. C.
1993-01-01
The fluid and thermal dynamics of the environment of plants in a small controlled-environment system have been modeled. The results of the simulation under two scenarios have been compared to measurements taken during tests on the actual system. The motivation for the modeling effort and the status of the modeling exercise and system scenario studies are described. An evaluation of the model and a discussion of future studies are included.
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), and this presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we fitted seemingly unrelated regression (SUR)-based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and further compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the finite-sample properties of the estimated coefficients for the different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate, without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly similar to that of MRM with classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRM in the case of intermittently observed time-dependent covariates. Thus, they can be used as an alternative to MRM.
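When every equation uses the same regressor set, the SUR estimator coincides with equation-by-equation OLS, which makes the per-observation-time idea easy to sketch. The data are simulated and the coefficients invented; a full SUR fit with cross-equation error covariance would need a dedicated library:

```python
import numpy as np

rng = np.random.default_rng(7)
n, waves = 300, 3
x = rng.normal(size=(n, waves))            # time-dependent covariate per wave
true_beta = np.array([0.5, 0.8, 1.1])      # wave-specific effects (invented)
y = 2.0 + x * true_beta + rng.normal(0, 0.5, (n, waves))

# One regression per observation time; with identical regressor sets in
# each equation, equation-by-equation OLS equals the SUR estimator.
betas = []
for t in range(waves):
    X = np.column_stack([np.ones(n), x[:, t]])
    coef, *_ = np.linalg.lstsq(X, y[:, t], rcond=None)
    betas.append(float(coef[1]))
print(np.round(betas, 2))
```

An intermittently observed covariate would simply shrink the rows available to each wave's regression, which is where the comparison with imputation-based MRM fits becomes interesting.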
Models of science-policy interaction: exploring approaches to Bisphenol A management in the EU.
Udovyk, O
2014-07-01
This study investigated science-policy interaction models and their limitations under conditions of uncertainty. Specifically, it looked at the management of the suspected endocrine-disrupting chemical Bisphenol A (BPA). Despite growing evidence that BPA is hazardous to human and environmental health, the level of scientific uncertainty is still high and, as a result, there is significant disagreement on the actual extent and type of risk. Analysis of decision-making processes at different regulatory levels (the EU, Sweden, and the Swedish municipality of Gothenburg) shed light on chemicals risk management and the associated science-policy interaction under uncertainty. The results of the study show that chemicals management and the associated science-policy interaction follow the modern model of science-policy interaction, in which science is assumed to 'speak truth to policy', and they highlight the existing limitations of this model under conditions of uncertainty. The study not only explores alternative models (precautionary, consensus, science-policy demarcation, and extended participation) but also shows their limitations. The study concludes that all models come with their particular underlying assumptions, strengths, and limitations. At the same time, by exposing serious limitations of the modern model, the study calls for a rethinking of the relationship between science, policy, and management. Copyright © 2014 Elsevier B.V. All rights reserved.
Phan, Huy P
2008-03-01
Although extensive research has examined epistemological beliefs, reflective thinking and learning approaches, very few studies have looked at these three theoretical frameworks in their totality. This research tested two separate structural models of epistemological beliefs, learning approaches, reflective thinking and academic performance among tertiary students over a period of 12 months. Participants were first-year Arts (N=616; 271 females, 345 males) and second-year Mathematics (N=581; 241 females, 341 males) university students. Students' epistemological beliefs were measured with the Schommer epistemological questionnaire (EQ, Schommer, 1990). Reflective thinking was measured with the reflective thinking questionnaire (RTQ, Kember et al., 2000). Student learning approaches were measured with the revised study process questionnaire (R-SPQ-2F, Biggs, Kember, & Leung, 2001). LISREL 8 was used to test two structural equation models - the cross-lag model and the causal-mediating model. In the cross-lag model involving Arts students, structural equation modelling showed that epistemological beliefs influenced student learning approaches rather than the contrary. In the causal-mediating model involving Mathematics students, the results indicate that both epistemological beliefs and learning approaches predicted reflective thinking and academic performance. Furthermore, learning approaches mediated the effect of epistemological beliefs on reflective thinking and academic performance. Results of this study are significant as they integrated the three theoretical frameworks within the one study.
Numerical simulation of damage evolution for ductile materials and mechanical properties study
NASA Astrophysics Data System (ADS)
El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.
2015-12-01
This paper presents the results of numerical modelling of ductile fracture and failure of elements made of 5182-H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in the ABAQUS software. The modelling difficulty in predicting ductile fracture arises mainly because there is a tremendous span of length scales, from the structural problem down to the micro-mechanics problem governing the material separation process. This study used the experimental results to calibrate a simple crack propagation criterion for shell elements, which has often been used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.
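For orientation, the Johnson-Cook flow stress has the closed form σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ), with T* the homologous temperature. The sketch below uses placeholder constants, not calibrated 5182-H111 values:

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T, A=300.0, B=440.0, n=0.3,
                        C=0.015, m=1.0, eps_rate0=1.0,
                        T_room=293.0, T_melt=893.0):
    """Johnson-Cook flow stress in MPa. All material constants here are
    illustrative placeholders, not fitted values for any real alloy."""
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return ((A + B * eps ** n)                  # strain hardening
            * (1.0 + C * np.log(eps_rate / eps_rate0))  # rate sensitivity
            * (1.0 - T_star ** m))              # thermal softening

# Strain hardening raises the stress; heating softens it.
s_cold = johnson_cook_stress(0.1, 100.0, 293.0)
s_hot = johnson_cook_stress(0.1, 100.0, 600.0)
print(round(s_cold, 1), round(s_hot, 1))
```

In a dynamic-traction simulation the same constitutive evaluation runs at every integration point, with a damage criterion deleting shell elements once a critical state is reached.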
Tsareva, Daria A; Osolodkin, Dmitry I; Shulga, Dmitry A; Oliferenko, Alexander A; Pisarev, Sergey A; Palyulin, Vladimir A; Zefirov, Nikolay S
2011-03-14
Two fast empirical charge models, the Kirchhoff Charge Model (KCM) and Dynamic Electronegativity Relaxation (DENR), had been developed in our laboratory previously for widespread use in drug design research. Both models are based on the electronegativity relaxation principle (Adv. Quantum Chem. 2006, 51, 139-156) and parameterized against ab initio dipole/quadrupole moments and molecular electrostatic potentials, respectively. As 3D QSAR studies comprise one of the most important fields of applied molecular modeling, they naturally became the first topic on which to test our charges and thus, indirectly, the assumptions underlying the charge model theories, in a case study. Here these charge models are used in the CoMFA and CoMSIA methods and tested on five glycogen synthase kinase 3 (GSK-3) inhibitor datasets, relevant to our current studies, and one steroid dataset. For comparison, eight other charge models, ranging from ab initio through semiempirical to empirical, were tested on the same datasets. A complex analysis, including correlation and cross-validation, charge robustness and predictability, as well as visual interpretability of the 3D contour maps generated, was carried out. As a result, our new electronegativity relaxation-based models both showed stable results, which, in conjunction with the other benefits discussed, render them suitable for building reliable 3D QSAR models. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Aktan, Mustafa B.
The purpose of this study was to investigate prospective science teachers' knowledge and understanding of models and modeling, and their attitudes towards the use of models in science teaching through the following research questions: What knowledge do prospective science teachers have about models and modeling in science? What understandings about the nature of models do these teachers hold as a result of their educational training? What perceptions and attitudes do these teachers hold about the use of models in their teaching? Two main instruments, semi-structured in-depth interviewing and an open-item questionnaire, were used to obtain data from the participants. The data were analyzed from an interpretative phenomenological perspective and grounded theory methods. Earlier studies on in-service science teachers' understanding about the nature of models and modeling revealed that variations exist among teachers' limited yet diverse understanding of scientific models. The results of this study indicated that variations also existed among prospective science teachers' understanding of the concept of model and the nature of models. Apparently the participants' knowledge of models and modeling was limited and they viewed models as materialistic examples and representations. I found that the teachers believed the purpose of a model is to make phenomena more accessible and more understandable. They defined models by referring to an example, a representation, or a simplified version of the real thing. I found no evidence of negative attitudes towards use of models among the participants. Although the teachers valued the idea that scientific models are important aspects of science teaching and learning, and showed positive attitudes towards the use of models in their teaching, certain factors like level of learner, time, lack of modeling experience, and limited knowledge of models appeared to be affecting their perceptions negatively. 
Implications for the development of science teaching and teacher education programs are discussed. Directions for future research are suggested. Overall, based on the results, I suggest that prospective science teachers should engage in more modeling activities through their preparation programs, gain more modeling experience, and collaborate with their colleagues to better understand and implement scientific models in science teaching.
NASA Astrophysics Data System (ADS)
Vitillo, F.; Vitale Di Maio, D.; Galati, C.; Caruso, G.
2015-11-01
A CFD analysis has been carried out to study the thermal-hydraulic behavior of liquid metal coolant in a fuel assembly of triangular lattice. In order to obtain fast and accurate results, the isotropic two-equation RANS approach is often used in nuclear engineering applications. A different approach is provided by Non-Linear Eddy Viscosity Models (NLEVM), which try to take anisotropic effects into account through a nonlinear formulation of the Reynolds stress tensor. This approach is very promising, as it results in very good numerical behavior and in a potentially better fluid flow description than classical isotropic models. An Anisotropic Shear Stress Transport (ASST) model, implemented into a commercial software, has been applied in previous studies, showing very trustworthy results for a large variety of flows and applications. In the paper, the ASST model has been used to perform an analysis of the fluid flow inside the fuel assembly of the ALFRED lead-cooled fast reactor. Then, a comparison between the results of wall-resolved conjugated heat transfer computations and the results of a decoupled analysis using a suitable thermal wall-function previously implemented into the solver has been performed and presented.
Simulation and analysis of a model dinoflagellate predator-prey system
NASA Astrophysics Data System (ADS)
Mazzoleni, M. J.; Antonelli, T.; Coyne, K. J.; Rossi, L. F.
2015-12-01
This paper analyzes the dynamics of a model dinoflagellate predator-prey system and uses simulations to validate theoretical and experimental studies. A simple model for predator-prey interactions is derived by drawing upon analogies from chemical kinetics. This model is then modified to account for inefficiencies in predation. Simulation results are shown to closely match the model predictions. Additional simulations are then run which are based on experimental observations of predatory dinoflagellate behavior, and this study specifically investigates how the predatory dinoflagellate Karlodinium veneficum uses toxins to immobilize its prey and increase its feeding rate. These simulations account for complex dynamics that were not included in the basic models, and the results from these computational simulations closely match the experimentally observed predatory behavior of K. veneficum and reinforce the notion that predatory dinoflagellates utilize toxins to increase their feeding rate.
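The chemical-kinetics analogy described above can be sketched as a mass-action predator-prey system with an efficiency factor for imperfect predation; all rate constants below are illustrative, not the fitted values for K. veneficum:

```python
def predator_prey(prey0, pred0, growth, attack, efficiency, death,
                  dt=0.001, steps=20000):
    """Euler integration of a mass-action (chemical-kinetics style)
    predator-prey model in which only a fraction `efficiency` of
    captures fuels predator growth."""
    x, y = prey0, pred0
    history = []
    for _ in range(steps):
        dx = growth * x - attack * x * y              # prey: growth minus predation
        dy = efficiency * attack * x * y - death * y  # predator: inefficient conversion
        x += dx * dt
        y += dy * dt
        history.append((x, y))
    return history

hist = predator_prey(prey0=10.0, pred0=2.0, growth=1.0, attack=0.1,
                     efficiency=0.5, death=0.5)
```

Lowering `efficiency` mimics inefficient predation; a toxin effect could be modeled as raising `attack`, though the actual immobilization mechanism in the study is more detailed.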
A framework for multi-criteria assessment of model enhancements
NASA Astrophysics Data System (ADS)
Francke, Till; Foerster, Saskia; Brosinsky, Arlena; Delgado, José; Güntner, Andreas; López-Tarazón, José A.; Bronstert, Axel
2016-04-01
Modellers are often faced with unsatisfactory model performance for a specific setup of a hydrological model. In these cases, the modeller may try to improve the setup by addressing selected causes of the model errors (i.e. data errors, structural errors). This leads to adding certain "model enhancements" (MEs), e.g. climate data based on more monitoring stations, improved calibration data, or modifications to process formulations. However, deciding which MEs to implement remains a matter of expert knowledge, guided at best by some sensitivity analysis. When multiple MEs have been implemented, a resulting improvement in model performance is not easily attributed, especially when considering different aspects of this improvement (e.g. better performance dynamics vs. reduced bias). In this study we present an approach for comparing the effect of multiple MEs in the face of multiple improvement aspects. A stepwise selection approach and structured plots help in addressing the multidimensionality of the problem. The approach is applied to a case study, which employs the meso-scale hydrosedimentological model WASA-SED for a sub-humid catchment. The results suggest that the effects of the MEs are quite diverse: some MEs (e.g. augmented rainfall data) cause improvements in almost all aspects, while the effect of other MEs is restricted to a few aspects or even deteriorates some. These specific results may not be generalizable. However, we suggest that studies like this one may facilitate identifying the most promising MEs to implement.
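The stepwise selection idea can be sketched as a greedy loop over candidate MEs scored across several performance aspects; the aspect names, scores, and additive-effect assumption below are hypothetical, not the WASA-SED setup:

```python
def stepwise_select(baseline, enhancements):
    """Greedy stepwise selection of model enhancements (MEs).

    `baseline` maps aspect -> score (higher is better, e.g. NSE, -|bias|).
    `enhancements` maps ME name -> {aspect: score change if applied}.
    Effects are assumed additive, a simplification for illustration."""
    current = dict(baseline)
    remaining = dict(enhancements)
    order = []
    while remaining:
        # score each candidate by its mean improvement across aspects
        gains = {name: sum(delta.values()) / len(delta)
                 for name, delta in remaining.items()}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no remaining candidate helps on average
        for aspect, d in remaining[best].items():
            current[aspect] += d
        order.append(best)
        del remaining[best]
    return order, current

order, final = stepwise_select(
    baseline={"dynamics": 0.55, "bias": -0.20},
    enhancements={
        "denser_rain_gauges": {"dynamics": 0.15, "bias": 0.05},
        "new_process_formulation": {"dynamics": 0.02, "bias": -0.04},
    })
```

A real application would re-run the model after each selection instead of assuming additivity, and would normalize the aspects before averaging.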
Ohhara, Yoshihito; Oshima, Marie; Iwai, Toshinori; Kitajima, Hiroaki; Yajima, Yasuharu; Mitsudo, Kenji; Krdy, Absy; Tohnai, Iwai
2016-02-04
Patient-specific modelling in clinical studies requires a realistic simulation to be performed within a reasonable computational time. The aim of this study was to develop simple but realistic outflow boundary conditions for patient-specific blood flow simulation which can be used to clarify the distribution of the anticancer agent in intra-arterial chemotherapy for oral cancer. In this study, the boundary conditions are expressed as a zero-dimensional (0D) resistance model of the peripheral vessel network based on the fractal characteristics of branching arteries combined with knowledge of the circulatory system and the energy minimization principle. This resistance model was applied to four patient-specific blood flow simulations at the region where the common carotid artery bifurcates into the internal and external carotid arteries. Results of these simulations with the proposed boundary conditions were compared with the results of ultrasound measurements for the same patients. The pressure was found to be within the physiological range. The difference in velocity in the superficial temporal artery resulted in an error of 5.21 ± 0.78% between the numerical results and the measurement data. The proposed outflow boundary conditions, therefore, constitute a simple resistance-based model and can be used for performing accurate simulations with commercial fluid dynamics software.
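The resistance-model idea can be sketched by combining Hagen-Poiseuille segment resistances over an idealized bifurcating tree; the symmetric branching, Murray-law exponent, and length-to-radius ratio below are simplifying assumptions, not the patient-specific values of the study:

```python
import math

def poiseuille_resistance(radius, length, mu=3.5e-3):
    """Hagen-Poiseuille resistance of one vessel segment,
    R = 8*mu*L / (pi*r^4), with mu ~ blood viscosity in Pa*s (SI units)."""
    return 8.0 * mu * length / (math.pi * radius ** 4)

def tree_resistance(radius, generations, length_ratio=20.0, murray_exp=3.0):
    """Total 0D resistance of a symmetric bifurcating tree in which each
    parent splits into two equal daughters obeying Murray's law,
    r_parent^3 = 2 * r_child^3, with segment length proportional to radius."""
    length = length_ratio * radius
    r_here = poiseuille_resistance(radius, length)
    if generations == 0:
        return r_here
    r_child = radius * 0.5 ** (1.0 / murray_exp)
    # two identical daughter subtrees act in parallel
    return r_here + tree_resistance(r_child, generations - 1,
                                    length_ratio, murray_exp) / 2.0

R_total = tree_resistance(radius=2e-3, generations=5)
```

The recursion terminates at a chosen generation depth; in practice the terminal resistance would be tuned so that flow splits match measured physiological values.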
Modeling of high-strength concrete-filled FRP tube columns under cyclic load
NASA Astrophysics Data System (ADS)
Ong, Kee-Yen; Ma, Chau-Khun; Apandi, Nazirah Mohd; Awang, Abdullah Zawawi; Omar, Wahid
2018-05-01
The behavior of high-strength concrete (HSC)-filled fiber-reinforced polymer (FRP) tube (HSCFFT) columns subjected to cyclic lateral loading is presented in this paper. As experimental study is costly and time-consuming, finite element analysis (FEA) was chosen for the study. Most previous studies have focused on examining the axial load behavior of HSCFFT columns rather than their seismic behavior, yet the seismic behavior of HSCFFT columns has been of main interest in the industry. The key objective of this research is to develop a reliable non-linear numerical FEA model to represent the seismic behavior of such columns. An FEA model was developed using the Concrete Damaged Plasticity Model (CDPM) available in the finite element software package ABAQUS. Comparisons between experimental results from previous research and the predicted results were made based on load-displacement relationships and the ultimate strength of the column. The results showed that the column increased in ductility and was able to deform to a greater extent with an increase in the FRP confinement ratio. With increased confinement ratio, the HSCFFT column achieved a higher moment resistance, indicating a higher failure strength of the column under cyclic lateral load. It was found that the proposed FEA model can reproduce the experimental results with adequate accuracy.
Assessment of the performance of rigid pavement back-calculation through finite element modeling
NASA Astrophysics Data System (ADS)
Shoukry, Samir N.; William, Gergis W.; Martinelli, David R.
1999-02-01
This study focuses on examining the behavior of rigid pavement layers during the Falling Weight Deflectometer (FWD) test. Factors affecting the design of a concrete slab, such as whether the joints are doweled or undoweled and the spacing between the transverse joints, were considered in this study. Explicit finite element analysis was employed to investigate the pavement layers' responses to the impulse load of the FWD test. Models of various dimensions were developed to satisfy the factors under consideration. The accuracy of the finite element models developed in this investigation was verified by comparing the finite element-generated deflection basin with one experimentally measured during an actual test. The results showed that the measured deflection basin can be reproduced through finite element modeling of the pavement structure. The deflection basins resulting from FE modeling were then processed to backcalculate pavement layer moduli. This approach provides a method for evaluating the performance of existing backcalculation programs, which are based on static elastic layer analysis. Based upon previous studies conducted for the selection of software, three different backcalculation programs were chosen for the evaluation: MODULUS5.0, EVERCALC4.0, and MODCOMP3. The results indicate that ignoring the dynamic nature of the load may lead to crude results, especially during backcalculation procedures.
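The principle of backcalculation (inverting a forward deflection model for layer stiffness) can be sketched with the static Boussinesq point-load solution for a homogeneous half-space; this is a deliberate simplification of the layered, dynamic forward models the cited programs use, and the load and modulus values are invented:

```python
import math

def boussinesq_deflection(load, modulus, offset, nu=0.15):
    """Static surface deflection of an elastic half-space at radial
    distance `offset` from a point load P: w = P*(1 - nu^2)/(pi*E*r)."""
    return load * (1.0 - nu ** 2) / (math.pi * modulus * offset)

def backcalculate_modulus(load, measured_w, offset, nu=0.15):
    """Invert the closed-form deflection for the modulus E."""
    return load * (1.0 - nu ** 2) / (math.pi * measured_w * offset)

# synthetic "measurement": generate a deflection with a known modulus,
# then recover it by backcalculation
E_true = 30e9                                    # ~30 GPa, concrete-like
w = boussinesq_deflection(40000.0, E_true, offset=0.3)
E_back = backcalculate_modulus(40000.0, w, offset=0.3)
```

With a multi-layer forward model no closed-form inverse exists, so real programs iterate moduli until the computed basin matches the measured one; using a static forward model against a dynamic FWD basin is exactly the mismatch the study criticizes.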
Mehrian, Mohammad; Guyot, Yann; Papantoniou, Ioannis; Olofsson, Simon; Sonnaert, Maarten; Misener, Ruth; Geris, Liesbet
2018-03-01
In regenerative medicine, computer models describing bioreactor processes can assist in designing optimal process conditions leading to robust and economically viable products. In this study, we started from a 3D mechanistic model describing the growth of neotissue, composed of cells and extracellular matrix, in a perfusion bioreactor set-up influenced by the scaffold geometry, flow-induced shear stress, and a number of metabolic factors. Subsequently, we applied model reduction by reformulating the problem from a set of partial differential equations into a set of ordinary differential equations. The quality of the reduction step was assessed by comparing the reduced model results to the mechanistic model results and to dedicated experimental results. The obtained homogenized model is 10^5-fold faster than the 3D version, allowing the application of rigorous optimization techniques. Bayesian optimization was applied to find the medium refreshment regime, in terms of frequency and percentage of medium replaced, that would maximize neotissue growth kinetics during 21 days of culture. The simulation results indicated that maximum neotissue growth will occur for a high frequency and medium replacement percentage, a finding that is corroborated by reports in the literature. This study demonstrates an in silico strategy for bioprocess optimization paying particular attention to the reduction of the associated computational cost.
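The homogenized-model idea can be sketched as a 0D ODE system: logistic neotissue growth limited by a nutrient pool, with periodic medium refreshment, and a brute-force sweep over the refreshment regime standing in for the Bayesian optimization. All rate constants are hypothetical, not the paper's calibrated values:

```python
def grow(days=21, refresh_every_days=2, refresh_frac=0.5, dt=0.01):
    """Toy homogenized (0D) culture model: logistic neotissue growth
    limited by a shared nutrient pool; refreshing the medium replaces
    a fraction of it with fresh (nutrient = 1) medium."""
    tissue, nutrient = 0.01, 1.0
    steps = int(days / dt)
    refresh_step = int(refresh_every_days / dt)
    for i in range(1, steps + 1):
        if refresh_step and i % refresh_step == 0:
            nutrient = nutrient * (1.0 - refresh_frac) + refresh_frac
        growth = 0.3 * tissue * (1.0 - tissue) * nutrient
        tissue += growth * dt
        nutrient = max(nutrient - 2.0 * growth * dt, 0.0)
    return tissue

# brute-force sweep over the refreshment regime (a cheap stand-in
# for the Bayesian optimization used in the study)
regimes = [(f, p) for f in (1, 2, 3, 7) for p in (0.25, 0.5, 1.0)]
best = max(regimes,
           key=lambda fp: grow(refresh_every_days=fp[0], refresh_frac=fp[1]))
```

Even this toy model reproduces the qualitative finding: frequent, high-percentage refreshment keeps the nutrient level high and maximizes final tissue volume.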
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of the Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD generated from a mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and the distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD within a mixture IRT framework to understand its effects on item parameters and examinee ability.
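The invariance violation at the heart of the study can be illustrated with the two-parameter logistic (2PL) item response function; the class proportion and drifted difficulty below are invented numbers, not TIMSS estimates:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT item response function:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mixture_p(theta, a, b_class1, b_class2, pi1):
    """Marginal response probability under a two-class mixture in which
    the item's difficulty has drifted for class 2."""
    return (pi1 * p_correct(theta, a, b_class1)
            + (1.0 - pi1) * p_correct(theta, a, b_class2))

p_invariant = p_correct(0.0, a=1.2, b=0.0)   # single-group calibration
p_mixed = mixture_p(0.0, a=1.2, b_class1=0.0, b_class2=0.8, pi1=0.6)
```

A unidimensional calibration that assumes one difficulty for everyone will misfit the mixture: at the same ability level, the marginal probability is pulled down by the class for which the item drifted harder.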
Identifiability Results for Several Classes of Linear Compartment Models.
Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa
2015-08-01
Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
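Unidentifiability can be demonstrated numerically: the sketch below simulates a two-compartment model with leaks from both compartments for two distinct parameter sets deliberately chosen so that the input-output transfer-function coefficients coincide. This is an illustrative construction in the spirit of the paper, not its algebraic machinery:

```python
def impulse_response(k01, k21, k12, k02, dt=0.001, steps=5000):
    """Impulse response (output y = x1) of a two-compartment model with
    leaks k01, k02 and exchange rates k21 (1 -> 2) and k12 (2 -> 1),
    integrated with the explicit Euler method."""
    x1, x2 = 1.0, 0.0
    out = []
    for _ in range(steps):
        dx1 = -(k01 + k21) * x1 + k12 * x2
        dx2 = k21 * x1 - (k12 + k02) * x2
        x1 += dx1 * dt
        x2 += dx2 * dt
        out.append(x1)
    return out

# both sets give numerator constant k12 + k02 = 4, trace 7, and
# determinant-like coefficient 6, so the outputs are indistinguishable:
y_a = impulse_response(k01=1.0, k21=2.0, k12=3.0, k02=1.0)
y_b = impulse_response(k01=9/7, k21=12/7, k12=3.5, k02=0.5)
```

Removing one of the leaks (e.g. setting k02 = 0), as the paper's modification results suggest, leaves three identifiable coefficients for three parameters and restores identifiability.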
Software development predictors, error analysis, reliability models and software metric analysis
NASA Technical Reports Server (NTRS)
Basili, Victor
1983-01-01
The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. From a study on software errors and complexity, it was shown that meaningful results can be obtained which allow insight into software traits and the environment in which it is developed. Reliability models were studied. The research included the field of program testing because the validity of some reliability models depends on the answers to some unanswered questions about testing. In studying software metrics, data collected from seven software engineering laboratory (FORTRAN) projects were examined and three effort reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.
Starting points for the study of non-Fermi liquid-like properties of FeCrAs
NASA Astrophysics Data System (ADS)
O'Brien, Patrick James
FeCrAs exhibits an odd combination of thermodynamic, transport, and magnetic properties indicative of non-Fermi liquid-like behavior. In particular, the resistivity of FeCrAs is characteristic of neither a metal nor an insulator and so remains a mystery. In this thesis, we seek a model to describe its properties. In FeCrAs, local moments reside on the Cr sites, and there is some conduction. We study the simplest possible model on the kagome lattice that features local moments and itinerant electrons, the kagome Kondo Lattice Model. We present the phase diagram of this model, which features a host of complex spin orders, one of which is the √3 × √3 order, the experimentally observed magnetic ground state in FeCrAs. The kagome Kondo Lattice Model, having one itinerant d-orbital band on the kagome lattice, does not fully capture the microscopic physics of FeCrAs. The kagome Kondo Lattice Model also will not describe the mutilation of the Fermi surface. To investigate the microscopic properties, we calculated LDA and LDA+U results. These results, and GGA results from another group, all exhibit a high d-orbital density of states at the Fermi energy as well as a low p-orbital density of states at the Fermi energy. The DFT results motivated us to construct a model based on the chemistry and full geometry of the FeCrAs crystal. The model we construct is an effective hopping model consisting of only d-orbital operators that we call the Optimal Overlap Hopping Model (OOHM). We calculate the band structure that results from the OOHM, and this band structure can be compared to ARPES measurements. As an example of how one can use the OOHM, we calculate a dynamic spin structure factor within the OOHM and compare it to neutron scattering data.
We consider both the OOHM and the Kondo Lattice Model on the kagome lattice as starting points from which we can launch studies of FeCrAs, and we present the existing theories for FeCrAs on a metallicity spectrum to illustrate the various perspectives from which FeCrAs is studied.
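The Kondo Lattice Model referred to above has a standard second-quantized form; on the kagome lattice the sites i run over the three-site unit cell, with hopping t and Kondo coupling J_K (the symbols follow common convention and are not necessarily the thesis's notation):

```latex
H = -t \sum_{\langle ij \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
  + J_K \sum_i \mathbf{S}_i \cdot \mathbf{s}_i ,
\qquad
\mathbf{s}_i = \tfrac{1}{2} \sum_{\alpha\beta} c^{\dagger}_{i\alpha}\, \boldsymbol{\sigma}_{\alpha\beta}\, c_{i\beta}
```

Here S_i are the local moments (Cr sites in FeCrAs) and s_i is the spin density of the itinerant electrons; the competition between t and J_K generates the complex spin orders of the phase diagram.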
NASA Astrophysics Data System (ADS)
Afkhamipour, Morteza; Mofarahi, Masoud; Borhani, Tohid Nejad Ghaffar; Zanganeh, Masoud
2018-03-01
In this study, artificial neural network (ANN) and thermodynamic models were developed for predicting the heat capacity (C_P) of amine-based solvents. For the ANN model, independent variables such as amine concentration, temperature, molecular weight and CO2 loading were selected as the inputs of the model. The significance of the input variables of the ANN model for the C_P values was investigated statistically by analyzing the correlation matrix. A thermodynamic model based on the Redlich-Kister equation was used to correlate the excess molar heat capacity (C_P^E) data as a function of temperature. In addition, the effects of temperature and CO2 loading at different concentrations of conventional amines on the C_P values were investigated. Both models were validated against experimental data, and very good agreement was obtained between the two models and the experimental C_P data collected from the literature. The AARD between the ANN model results and the experimental C_P data for the 47 amine-based solvent systems studied was 4.3%. For conventional amines, the AARD for the ANN model and the thermodynamic model in comparison with experimental data were 0.59% and 0.57%, respectively. The results showed that both the ANN and Redlich-Kister models can be used as practical tools for the simulation and design of CO2 removal processes using amine solutions.
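The Redlich-Kister correlation has a compact closed form that is easy to sketch; the coefficients and pure-component heat capacities below are illustrative numbers, not the fitted values of the study:

```python
def redlich_kister_excess(x1, coeffs):
    """Excess property from a Redlich-Kister expansion:
    E = x1*x2 * sum_k A_k * (x1 - x2)**k.  `coeffs` holds the A_k,
    here taken as already evaluated at the temperature of interest."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(coeffs))

def mixture_heat_capacity(x1, cp1, cp2, coeffs):
    """Molar heat capacity of a binary mixture:
    ideal (mole-fraction-weighted) mixing plus the excess term."""
    return x1 * cp1 + (1.0 - x1) * cp2 + redlich_kister_excess(x1, coeffs)

cp = mixture_heat_capacity(0.3, cp1=165.0, cp2=75.3,
                           coeffs=[12.0, -4.0, 1.5])
```

In a temperature-dependent fit, each A_k is itself a polynomial in T; the excess term vanishes at both pure-component limits by construction.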
Adopting adequate leaching requirement for practical response models of basil to salinity
NASA Astrophysics Data System (ADS)
Babazadeh, Hossein; Tabrizi, Mahdi Sarai; Darvishi, Hossein Hassanpour
2016-07-01
Several mathematical models are used for assessing plant response to salinity of the root zone. The objectives of this study included quantifying the yield salinity threshold of basil plants with respect to irrigation water salinity and investigating the possibility of using irrigation water salinity instead of saturated extract salinity in the available mathematical models for estimating yield. To achieve these objectives, an extensive greenhouse experiment was conducted with 13 irrigation water salinity levels, namely 1.175 dS m-1 (control treatment) and 1.8 to 10 dS m-1. The results indicated that, among these models, the modified discount model (one of the best-known root water uptake models, which is based on statistics) produced more accurate results in simulating the basil yield reduction function from irrigation water salinities. Overall, the statistical modified discount model of Steppuhn et al. and the math-empirical model of van Genuchten and Hoffman provided the best results. In general, all of the statistical models produced very similar results, and their results were better than those of the math-empirical models. It was also concluded that, provided there is sufficient leaching, there is no significant difference between models using soil saturated extract salinity and models using irrigation water salinity.
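Two of the response-function families compared in such studies can be sketched directly; the exponent, threshold and slope values below are placeholders, not the basil parameters fitted in this study:

```python
def relative_yield(ec, ec50, p=3.0):
    """S-shaped salinity response of the van Genuchten-Hoffman type:
    Yr = 1 / (1 + (EC/EC50)**p), where EC50 is the salinity at which
    yield is halved. EC may be saturated-extract salinity or, as this
    study suggests for basil, irrigation water salinity."""
    return 1.0 / (1.0 + (ec / ec50) ** p)

def maas_hoffman(ec, threshold, slope):
    """Threshold-and-slope linear response (Maas-Hoffman style),
    clipped to the physically meaningful range [0, 1]."""
    return max(0.0, min(1.0, 1.0 - slope * (ec - threshold)))
```

Fitting either form to the 13-level greenhouse data amounts to estimating two or three parameters (EC50 and p, or threshold and slope) by nonlinear regression.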
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
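The weighting scheme at issue can be sketched in a few lines: information-criterion differences are converted to weights via exp(-ΔIC/2). The IC values below are invented to show the dominance pathology and its resolution, not numbers from the uranium transport study:

```python
import math

def ic_weights(ic_values):
    """Model-averaging weights from an information criterion (AIC, AICc,
    BIC or KIC): w_i proportional to exp(-(IC_i - IC_min)/2)."""
    ic_min = min(ic_values)
    raw = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# widely separated IC values give the best model essentially all the
# weight -- the pathology attributed to ignoring correlated total errors
w_dominant = ic_weights([100.0, 130.0, 152.0])
# closer IC values, as obtained with the inferred error covariance,
# spread the weight across the alternative models
w_spread = ic_weights([100.0, 101.2, 102.5])
```

Because the weights are exponential in IC differences, even a 10-unit gap concentrates more than 99% of the weight on one model, which is why the choice of error covariance matters so much.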
Modelling the social and structural determinants of tuberculosis: opportunities and challenges
Boccia, D.; Dodd, P. J.; Lönnroth, K.; Dowdy, D. W.; Siroka, A.; Kimerling, M. E.; White, R. G.; Houben, R. M. G. J.
2017-01-01
INTRODUCTION: Despite the close link between tuberculosis (TB) and poverty, most mathematical models of TB have not addressed underlying social and structural determinants. OBJECTIVE: To review studies employing mathematical modelling to evaluate the epidemiological impact of the structural determinants of TB. METHODS: We systematically searched PubMed and personal libraries to identify eligible articles. We extracted data on the modelling techniques employed, research question, types of structural determinants modelled and setting. RESULTS: From 232 records identified, we included eight articles published between 2008 and 2015; six employed population-based dynamic TB transmission models and two used non-dynamic analytic models. Seven studies focused on proximal TB determinants (four on nutritional status, one on wealth, one on indoor air pollution, and one examined overcrowding, socioeconomic and nutritional status), and one focused on macro-economic influences. CONCLUSIONS: Few modelling studies have attempted to evaluate structural determinants of TB, resulting in key knowledge gaps. Despite the challenges of modelling such a complex system, models must broaden their scope to remain useful for policy making. Given the intersectoral nature of the interrelations between structural determinants and TB outcomes, this work will require multidisciplinary collaborations. A useful starting point would be to focus on developing relatively simple models that can strengthen our knowledge regarding the potential effect of the structural determinants on TB outcomes. PMID:28826444
Gallagher, J
2016-04-15
Personal measurement studies and modelling investigations are used to examine pollutant exposure for pedestrians in the urban environment, each presenting various strengths and weaknesses in relation to labour and equipment costs, sufficient sampling periods and the accuracy of results. This modelling exercise considers the potential benefits of modelling results over personal measurement studies and aims to demonstrate how variations in fleet composition affect exposure results (presented as mean concentrations along the centre of both footpaths) in different traffic scenarios. A model of Pearse Street in Dublin, Ireland was developed by combining a computational fluid dynamics (CFD) model and a semi-empirical equation to simulate pollutant dispersion in the street. Using local NOx concentrations, traffic and meteorological data from a two-week period in 2011, the model was validated and showed a good fit. To explore long-term variations in personal exposure due to variations in fleet composition, synthesised traffic data were used to compare short-term personal exposure data (over a two-week period) with the results for an extended one-year period. Personal exposure during the two-week period underestimated the one-year results by between 8% and 65% on adjacent footpaths. The findings demonstrate the potential for relative differences in pedestrian exposure to exist between the north and south footpaths due to changing wind conditions in both peak and off-peak traffic scenarios. This modelling approach may help overcome potential under- or over-estimation of concentrations in personal measurement studies on the footpaths. Further research aims to measure pollutant concentrations on adjacent footpaths in different traffic and wind conditions and to develop a simpler modelling system to identify pollutant hotspots on city footpaths so that urban planners can implement strategies to improve urban air quality.
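The semi-empirical side of such a setup can be sketched with a steady-state box model of the street canyon; the exchange coefficient, background level, and emission rates below are hypothetical, not the Pearse Street values:

```python
def canyon_concentration(q, wind_speed, width,
                         exchange_coeff=0.1, background=0.02):
    """Steady-state box model of a street canyon (per unit street length):
    traffic line-source emissions q are balanced by roof-level ventilation
    proportional to wind speed and street width, on top of an urban
    background concentration."""
    u_eff = max(wind_speed, 0.5)        # floor for near-calm conditions
    return background + q / (exchange_coeff * u_eff * width)

# pedestrian-level concentration under two fleet-composition scenarios
c_peak = canyon_concentration(q=0.004, wind_speed=2.0, width=20.0)
c_off_peak = canyon_concentration(q=0.001, wind_speed=2.0, width=20.0)
```

A CFD model replaces the single exchange coefficient with a resolved flow field, which is what produces the leeward/windward (north vs. south footpath) asymmetry the study reports.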
Modeling and design of challenge tests: Inflammatory and metabolic biomarker study examples.
Gabrielsson, Johan; Hjorth, Stephan; Vogg, Barbara; Harlfinger, Stephanie; Gutierrez, Pablo Morentin; Peletier, Lambertus; Pehrson, Rikard; Davidsson, Pia
2015-01-25
Given the complexity of pharmacological challenge experiments, it is perhaps not surprising that design and analysis, and in turn interpretation and communication of results from a quantitative point of view, is often suboptimal. Here we report an inventory of common designs sampled from anti-inflammatory, respiratory and metabolic disease drug discovery studies, all of which are based on animal models of disease involving pharmacological and/or patho/physiological interaction challenges. The corresponding data are modeled and analyzed quantitatively, the merits of the respective approach discussed and inferences made with respect to future design improvements. Although our analysis is limited to these disease model examples, the challenge approach is generally applicable to the vast majority of pharmacological intervention studies. In the present five Case Studies, results from pharmacodynamic effect models from different therapeutic areas were explored and analyzed according to five typical designs. Plasma exposures of test compounds were assayed by either liquid chromatography/mass spectrometry or ligand binding assays. To describe how drug intervention can regulate diverse processes, turnover models of test compound-challenger interaction, transduction processes, and biophase time courses were applied for biomarker response in eosinophil count, IL6 response, paw-swelling, TNFα response and glucose turnover in vivo. Case Study 1 shows results from intratracheal administration of Sephadex, which is a glucocorticoid-sensitive model of airway inflammation in rats. Eosinophils in bronchoalveolar fluid were obtained at different time points via destructive sampling and then regressed by mixed-effects modeling. A biophase function of the Sephadex time course was inferred from the modeled eosinophil time courses. In Case Study 2, a mouse model showed that the time course of the cytokine response to an IL1β challenge was altered with or without drug intervention.
Anakinra reversed the IL1β-induced cytokine IL6 response in a dose-dependent manner. This Case Study contained time courses of test compound (drug), challenger (IL1β) and cytokine response (IL6), which resulted in high parameter precision. Case Study 3 illustrates collagen-induced arthritis progression in the rat. Swelling scores (based on severity of hind paw swelling) were used to describe arthritis progression after the challenge and the inhibitory effect of two doses of an orally administered test compound. In Case Study 4, a cynomolgus monkey model of lipopolysaccharide (LPS)-induced TNFα synthesis and/or release was investigated. This model provides integrated information on pharmacokinetics and in vivo potency of the test compounds. Case Study 5 contains data from an oral glucose tolerance test in rats, where the challenger is the same as the pharmacodynamic response biomarker (glucose). It is therefore convenient to model the extra input of glucose simultaneously with baseline data and during intervention of a glucose-lowering compound at different dose levels. Typically, time-series analyses of challenger- and biomarker-time data are necessary if an accurate and precise estimate of the pharmacodynamic properties of a test compound is sought. Erosion of data, resulting in single-point assessment of drug action after a challenge test, should generally be avoided. This is particularly relevant for situations where one expects time-curve shifts, tolerance/rebound, impact of disease, or hormetic concentration-response relationships to occur.
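The turnover (indirect-response) models named above share a common core, dR/dt = k_in·(1 − I(C)) − k_out·R; the sketch below uses a mono-exponential drug profile and invented parameter values rather than any of the Case Study fits:

```python
import math

def turnover_response(dose, ic50, kin=10.0, kout=0.5, ke=0.3,
                      imax=1.0, dt=0.01, t_end=48.0):
    """Indirect-response (turnover) model: the test compound inhibits
    production of biomarker R via an Imax model, with drug concentration
    C(t) = dose * exp(-ke * t). Integrated by explicit Euler."""
    r = kin / kout                  # baseline: production balances loss
    trace = []
    for i in range(int(t_end / dt)):
        c = dose * math.exp(-ke * i * dt)
        inhibition = imax * c / (ic50 + c)
        r += (kin * (1.0 - inhibition) - kout * r) * dt
        trace.append(r)
    return trace

resp = turnover_response(dose=5.0, ic50=1.0)
```

The full biomarker time course (suppression followed by return to baseline as the drug washes out) is exactly what single-point sampling discards, which is the design pitfall the paper warns against.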
Modeling Requirements for Cohort and Register IT.
Stäubert, Sebastian; Weber, Ulrike; Michalik, Claudia; Dress, Jochen; Ngouongo, Sylvie; Stausberg, Jürgen; Winter, Alfred
2016-01-01
The project KoRegIT (funded by TMF e.V.) aimed to develop a generic catalog of requirements for research networks such as cohort studies and registers (KoReg). The catalog supports such research networks in building up and managing their organizational and IT infrastructure. The aim was to make transparent the complex relationships between requirements, which are described as use cases in a given text catalog. By analyzing and modeling the requirements, a better understanding and optimization of the catalog are intended. There were two subgoals: a) to investigate one cohort study and two registers and to model the current state of their IT infrastructure; b) to analyze the current-state models and to find simplifications within the generic catalog. Processing the generic catalog was performed by means of text extraction, conceptualization and concept mapping. Then, methods of enterprise architecture planning (EAP) were used to model the extracted information. To address objective a), questionnaires were developed utilizing the model. They were used for semi-structured interviews, whose results were evaluated via qualitative content analysis. Afterwards, the current state was modeled. Objective b) was pursued by model analysis. A given generic text catalog of requirements was transferred into a model. As the result of objective a), current-state models of one existing cohort study and two registers were created and analyzed. An optimized model called the KoReg-reference-model is the result of objective b). It is possible to use methods of EAP to model requirements. This enables a better overview of the partly connected requirements by means of visualization. The model-based approach also enables the analysis and comparison of the empirical data from the current-state models. Information managers could reduce the effort of planning the IT infrastructure by utilizing the KoReg-reference-model.
Modeling the current state and the generation of reports from the model, which could be used as requirements specification for bids, is supported, too.
A brain-region-based meta-analysis method utilizing the Apriori algorithm.
Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao
2016-05-18
Brain network connectivity modeling is a crucial method for studying the brain's cognitive functions. Meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs) which has shown that meta-analyses can be used to discover brain network connectivity. In this paper, we propose a new meta-analysis method for finding network connectivity models based on the Apriori algorithm, which has the potential to derive brain network connectivity models from activation information in the literature without requiring ROIs. This method first extracts activation information from experimental studies that use cognitive tasks of the same category, then maps the activation information to the corresponding brain areas using the automated anatomical labeling (AAL) atlas, after which the activation rate of these brain areas is calculated. Finally, using these brain areas, a potential brain network connectivity model is computed based on the Apriori algorithm. The present study used this method to conduct a mining analysis on the citations in a language review article by Price (Neuroimage 62(2):816-847, 2012). The results showed that the obtained network connectivity model was consistent with that reported by Price. The proposed method is helpful for finding brain network connectivity by mining the co-activation relationships among brain regions. Furthermore, the results of the co-activation relationship analysis can be used as a priori knowledge for a corresponding dynamic causal modeling analysis, possibly achieving a significant dimension-reducing effect and thus increasing the efficiency of the dynamic causal modeling analysis.
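The first two passes of such an Apriori-style co-activation mining step can be sketched as follows. This is a minimal sketch, not the authors' implementation; the study sets, region labels, and support threshold are hypothetical stand-ins for activation data extracted from the literature.

```python
from itertools import combinations

def frequent_coactivations(studies, min_support):
    """First two Apriori passes over per-study activated-region sets:
    keep regions whose activation rate meets min_support, then keep
    region pairs whose joint activation rate meets min_support."""
    n = len(studies)
    # Pass 1: single-region activation rates
    counts = {}
    for regions in studies:
        for r in set(regions):
            counts[r] = counts.get(r, 0) + 1
    frequent = {r for r, c in counts.items() if c / n >= min_support}
    # Pass 2: candidate pairs built only from frequent regions
    pair_counts = {}
    for regions in studies:
        active = sorted(frequent & set(regions))
        for pair in combinations(active, 2):
            pair_counts[pair] = pair_counts.get(pair, 0) + 1
    return {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

# Each entry lists brain areas reported active in one study (hypothetical labels).
studies = [
    {"IFG_L", "STG_L", "MTG_L"},
    {"IFG_L", "STG_L"},
    {"IFG_L", "MTG_L", "Cerebellum_R"},
    {"STG_L", "MTG_L", "IFG_L"},
]
print(frequent_coactivations(studies, min_support=0.5))
```

Frequent pairs surviving both passes are candidate edges of the connectivity model; infrequent regions (here `Cerebellum_R`) are pruned before pair counting, which is what gives Apriori its efficiency.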
Testing flow diversion in animal models: a systematic review.
Fahed, Robert; Raymond, Jean; Ducroux, Célina; Gentric, Jean-Christophe; Salazkin, Igor; Ziegler, Daniela; Gevry, Guylaine; Darsaut, Tim E
2016-04-01
Flow diversion (FD) is increasingly used to treat intracranial aneurysms. We sought to systematically review published studies to assess the quality of reporting and summarize the results of FD in various animal models. Databases were searched to retrieve all animal studies on FD from 2000 to 2015. Extracted data included species and aneurysm models, aneurysm and neck dimensions, type of flow diverter, occlusion rates, and complications. Articles were evaluated using a checklist derived from the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Forty-two articles reporting the results of FD in nine different aneurysm models were included. The rabbit elastase-induced aneurysm model was the most commonly used, with 3-month occlusion rates of 73.5% (95% CI 61.9-82.6%). FD of surgical sidewall aneurysms, constructed in rabbits or canines, resulted in high occlusion rates (100% [65.5-100%]). FD resulted in modest occlusion rates (15.4% [8.9-25.1%]) when tested in six complex canine aneurysm models designed to reproduce more difficult clinical contexts (large necks, bifurcation, or fusiform aneurysms). Adverse events, including branch occlusion, were rarely reported. There were no hemorrhagic complications. Articles complied with 20.8 ± 3.9 of 41 ARRIVE items; only a small number used randomization (3/42 articles [7.1%]) or a control group (13/42 articles [30.9%]). Preclinical studies on FD have shown varied results. Occlusion of elastase-induced aneurysms was common after FD; the model is standardized across many laboratories but is not challenging. Failures of FD can be reproduced in less standardized but more challenging surgical canine constructions. The quality of reporting could be improved.
Study of vibrational modes and specific heat of wurtzite phase of BN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Daljit, E-mail: daljit.jt@gmail.com; Sinha, M. M.
2016-05-06
In the era of nanotechnology, materials like BN are of utmost importance; in its hexagonal phase, BN is among the hardest known materials. The study of phonon modes is a key factor in determining structural and thermodynamical properties. The de Launey angular force (DAF) constant model is well suited to studying phonons, as it accounts for many-particle interactions. Therefore, in this presentation we have studied the lattice dynamical properties and specific heat of BN in the wurtzite phase using the DAF model. The obtained results are in excellent agreement with existing results.
Artificial Neural Network Modeling of Pt/C Cathode Degradation in PEM Fuel Cells
NASA Astrophysics Data System (ADS)
Maleki, Erfan; Maleki, Nasim
2016-08-01
Computational modeling combined with a small number of experiments is a useful way to obtain the best possible result for a final product without performing expensive and time-consuming experimental campaigns. Proton exchange membrane fuel cells (PEMFCs) can produce clean electricity but still require further study. An oxygen reduction reaction (ORR) takes place at the cathode, and carbon-supported platinum (Pt/C) is commonly used as the electrocatalyst. The harsh conditions during PEMFC operation result in Pt/C degradation. Observing changes in the Pt/C layer under operating conditions provides a tool for studying the lifetime of PEMFCs and overcoming durability issues. Recently, artificial neural networks (ANNs) have been used to solve, predict, and optimize a wide range of scientific problems. In this study, several rates of change at the cathode were modeled using ANNs. The backpropagation (BP) algorithm was used to train the networks, and experimental data were employed for network training and testing. Two different models were constructed. In the first, the potential cycles, temperature, and humidity are used as inputs to predict the Pt dissolution rate of the Pt/C at the cathode as the network's output. In the second, the Pt dissolution rate and Pt ion diffusivity are used as inputs to obtain the Pt particle radius change rate, Pt mass loss rate, and surface area loss rate as outputs. The networks are finely tuned, and the modeling results agree well with the experimental data. The modeled responses of the ANNs are acceptable for this application.
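A one-hidden-layer network trained by plain backpropagation, of the kind the abstract describes, can be sketched in a few lines. The synthetic data below merely stand in for the (potential cycles, temperature, humidity) → Pt dissolution rate mapping and are not the paper's measurements; architecture and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: three inputs scaled to [0, 1]; the target is a
# smooth hypothetical "dissolution rate" surface.
X = rng.uniform(size=(200, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2]).reshape(-1, 1)

# One hidden layer of tanh units, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)           # forward pass
    pred = H @ W2 + b2
    err = pred - y                     # gradient of 0.5*MSE w.r.t. pred
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H**2)       # backprop through tanh
    dW1 = X.T @ dH / len(X)
    W2 -= lr * dW2; b2 -= lr * err.mean(0)
    W1 -= lr * dW1; b1 -= lr * dH.mean(0, keepdims=True)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"training MSE: {mse:.5f}")
```

The same loop extends directly to the second model in the abstract (dissolution rate and diffusivity in, three degradation rates out) by changing the input and output dimensions.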
Suggestion of a Numerical Model for the Blood Glucose Adjustment with Ingesting a Food
NASA Astrophysics Data System (ADS)
Yamamoto, Naokatsu; Takai, Hiroshi
In this study, we present a numerical model of the time dependence of the blood glucose value after ingesting a meal. Two numerical models are proposed: one describing the digestion mechanism and one describing the blood glucose adjustment mechanism in the body. The models are expressed using simple equations with a transfer function and a block diagram. Additionally, the time dependence of blood glucose was measured when subjects ingested sucrose or starch. The results show that the computed output of the models fits the measured time dependence of blood glucose very well. We therefore consider the digestion model and the adjustment model useful for estimating blood glucose values after meals.
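A minimal discrete-time sketch of such a two-block model, assuming (not taken from the paper) first-order dynamics for both digestion and glucose regulation; all parameter values and units are illustrative.

```python
import numpy as np

def simulate_glucose(dose, tau_d=30.0, tau_r=60.0, basal=90.0,
                     k=0.5, dt=1.0, t_end=240.0):
    """Euler-integrate a two-block model: a first-order digestion block
    (time constant tau_d, minutes) feeds a first-order regulation block
    that relaxes blood glucose toward the basal level (time constant tau_r)."""
    gut = dose                    # carbohydrate remaining in the gut (a.u.)
    g = basal                     # blood glucose (mg/dL)
    trace = []
    for _ in range(int(t_end / dt)):
        absorbed = gut / tau_d    # first-order gastric emptying/absorption
        gut -= absorbed * dt
        # glucose rises with absorption and relaxes toward basal
        g += (k * absorbed - (g - basal) / tau_r) * dt
        trace.append(g)
    return np.array(trace)

trace = simulate_glucose(dose=200.0)
print(f"peak {trace.max():.0f} mg/dL at t = {int(np.argmax(trace))} min")
```

The simulated curve rises to a single peak roughly 40 minutes after ingestion and then decays back toward the basal level, the qualitative shape a postprandial glucose measurement would be fitted against.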
Replication of genetic associations as pseudoreplication due to shared genealogy.
Rosenberg, Noah A; Vanliere, Jenna M
2009-09-01
The genotypes of individuals in replicate genetic association studies have some level of correlation due to shared descent in the complete pedigree of all living humans. As a result of this genealogical sharing, replicate studies that search for genotype-phenotype associations using linkage disequilibrium between marker loci and disease-susceptibility loci can be considered as "pseudoreplicates" rather than true replicates. We examine the size of the pseudoreplication effect in association studies simulated from evolutionary models of the history of a population, evaluating the excess probability that both of a pair of studies detect a disease association compared to the probability expected under the assumption that the two studies are independent. Each of nine combinations of a demographic model and a penetrance model leads to a detectable pseudoreplication effect, suggesting that the degree of support that can be attributed to a replicated genetic association result is less than that which can be attributed to a replicated result in a context of true independence.
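The pseudoreplication effect can be illustrated with a toy simulation (not the authors' coalescent-based evolutionary models): each simulated "world" draws an effect shared by both replicate studies, standing in for shared genealogy, so the probability that both studies detect the association exceeds what independence would predict. All distributions and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_pair(n_worlds=4000, n=400, crit=1.96):
    """Simulate pairs of replicate association studies that share a
    world-level effect size, and tabulate detection probabilities."""
    both = first = second = 0
    for _ in range(n_worlds):
        effect = rng.normal(0.25, 0.15)   # shared across the study pair
        hits = []
        for _ in range(2):
            # study-level z-score: shared signal plus independent sampling noise
            z = effect * np.sqrt(n) / 2 + rng.normal()
            hits.append(abs(z) > crit)
        first += hits[0]; second += hits[1]; both += hits[0] and hits[1]
    return first / n_worlds, second / n_worlds, both / n_worlds

p1, p2, p12 = detection_pair()
print(f"P(both detect) = {p12:.3f} vs independence prediction {p1 * p2:.3f}")
```

The gap between `p12` and `p1 * p2` is the excess joint-detection probability the abstract describes: replicated detection is less informative than two truly independent studies would be.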
Multicategorical Spline Model for Item Response Theory.
ERIC Educational Resources Information Center
Abrahamowicz, Michal; Ramsay, James O.
1992-01-01
A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)
NASA Astrophysics Data System (ADS)
Ma, Yuanxu; Huang, He Qing
2016-07-01
Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained elusive. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures is used to evaluate the goodness of fit of these models: root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), the percentage of data with a relative error ≤ 50% and ≤ 25% (P50, P25), and the percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can explain only a small proportion of the variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are included, the new model (7) performs better than the previous ones. The MRE of model (7) is generally < 20%, which is clearly better than that reported by previous studies. This model is validated using data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than in calibration. This probably resulted from a temporal shift in the dominant controls, caused by channel change under a varying flow regime.
With the advancements of earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can serve as an effective surrogate for estimating flow resistance in large sand-bed rivers such as the lower Yellow River.
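Several of the goodness-of-fit and model-selection measures named above have standard definitions that can be sketched directly; the paper's exact variants (e.g. of MSE and POE) may differ, and the sample data here are illustrative, not hydrometric records.

```python
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def nash(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    return float(1 - np.sum((obs - pred) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

def mre(obs, pred):
    """Mean relative error, in percent."""
    return float(np.mean(np.abs(obs - pred) / obs) * 100)

def p_within(obs, pred, tol):
    """Fraction of data with relative error <= tol (e.g. P50, P25)."""
    return float(np.mean(np.abs(obs - pred) / obs <= tol))

def aic(obs, pred, k):
    """Gaussian-error AIC for a model with k fitted parameters."""
    n = len(obs)
    rss = float(np.sum((obs - pred) ** 2))
    return n * np.log(rss / n) + 2 * k

obs = np.array([1.0, 1.2, 0.9, 1.5, 1.1])   # illustrative observations
pred = np.array([1.1, 1.1, 1.0, 1.4, 1.2])  # illustrative model output
print(rmse(obs, pred), nash(obs, pred), mre(obs, pred),
      p_within(obs, pred, 0.25), aic(obs, pred, 2))
```

Comparing candidate resistance models then amounts to evaluating each function on the same calibration and validation sets, with AIC/BIC penalizing the extra parameters of the wider models.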
2013-01-01
Background The volume of influenza pandemic modelling studies has increased dramatically in the last decade. Many models now incorporate sophisticated parameterization and validation techniques, economic analyses, and the behaviour of individuals. Methods We reviewed trends in these aspects in models for influenza pandemic preparedness that aimed to generate policy insights for epidemic management and were published from 2000 to September 2011, i.e. before and after the 2009 pandemic. Results We find that many influenza pandemic models rely on parameters from previous modelling studies, that models are rarely validated using observed data, and that they are seldom applied to low-income countries. Mechanisms for international data sharing would be necessary to facilitate a wider adoption of model validation. The variety of modelling decisions makes it difficult to compare and evaluate models systematically. Conclusions We propose a model Characteristics, Construction, Parameterization and Validation aspects protocol (CCPV protocol) to contribute to the systematisation of the reporting of models, with an emphasis on the incorporation of economic aspects and host behaviour. Model reporting, as already exists in many other fields of modelling, would increase confidence in model results and transparency in their assessment and comparison. PMID:23651557
Azadi, Sama; Karimi-Jashni, Ayoub
2016-02-01
Predicting the mass of solid waste generation plays an important role in integrated solid waste management plans. In this study, the performance of two predictive models, Artificial Neural Network (ANN) and Multiple Linear Regression (MLR), was verified for predicting the mean Seasonal Municipal Solid Waste Generation (SMSWG) rate. The accuracy of the proposed models is illustrated through a case study of 20 cities located in Fars Province, Iran. Four performance measures (MAE, MAPE, RMSE, and R) were used to evaluate the models. The MLR, as a conventional model, showed poor prediction performance. On the other hand, the results indicated that the ANN model, as a non-linear model, has higher predictive accuracy for the mean SMSWG rate. As a result, in order to develop a more cost-effective strategy for waste management in the future, the ANN model could be used to predict the mean SMSWG rate.
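The MLR baseline of such a comparison can be sketched with ordinary least squares; the city-level predictors and waste rates below are synthetic, not the Fars Province data, and the two hypothetical predictors merely stand in for whatever socio-economic inputs a study of this kind would use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: 20 "cities" with 2 predictors each, and a
# noisy linear waste-generation rate (arbitrary units).
X = rng.uniform(size=(20, 2))
true_w = np.array([3.0, 1.5])
y = X @ true_w + 0.5 + rng.normal(scale=0.1, size=20)

A = np.column_stack([X, np.ones(20)])          # add intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)      # ordinary least squares
pred = A @ w

R = float(np.corrcoef(y, pred)[0, 1])          # correlation coefficient
mape = float(np.mean(np.abs((y - pred) / y)) * 100)
print(f"R = {R:.3f}, MAPE = {mape:.1f}%")
```

An ANN competitor would be fitted on the same split and scored with the same four measures; MLR's limitation, as the abstract notes, is that it cannot capture non-linear dependence between predictors and the generation rate.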
NASA Astrophysics Data System (ADS)
Al Janaideh, Mohammad; Aljanaideh, Omar
2018-05-01
Apart from output-input hysteresis loops, magnetostrictive actuators also exhibit asymmetry and saturation, particularly under moderate to large magnitude inputs and at relatively high frequencies. Such nonlinear input-output characteristics can be effectively characterized by a rate-dependent Prandtl-Ishlinskii model in conjunction with a function of deadband operators. In this study, an inverse model is formulated to seek real-time compensation of the rate-dependent and asymmetric hysteresis nonlinearities of a Terfenol-D magnetostrictive actuator. The inverse model combines the inverse of the rate-dependent Prandtl-Ishlinskii model, satisfying the threshold dilation condition, with the inverse of the deadband function. The inverse model was subsequently applied as a feedforward compensator, first to the hysteresis model and then to the actuator hardware, to study its potential for compensating rate-dependent and asymmetric hysteresis loops. Experimental results obtained under harmonic and complex harmonic inputs further revealed that the inverse compensator can substantially suppress the hysteresis and output asymmetry nonlinearities over the entire frequency range considered in the study.
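The building block of a Prandtl-Ishlinskii model is the play (backlash) operator. The sketch below is rate-independent, with illustrative thresholds and weights; the paper's model additionally includes rate dependence and deadband operators, which are not reproduced here.

```python
import numpy as np

def play(u, r, y0=0.0):
    """Play (backlash) operator with threshold r applied to an input
    sequence u: the output follows u but lags inside a band of width 2r."""
    y = np.empty_like(u)
    prev = y0
    for i, ui in enumerate(u):
        prev = min(max(prev, ui - r), ui + r)   # clamp into [ui - r, ui + r]
        y[i] = prev
    return y

def prandtl_ishlinskii(u, thresholds, weights):
    """Weighted superposition of play operators."""
    return sum(w * play(u, r) for r, w in zip(thresholds, weights))

t = np.linspace(0, 2 * np.pi, 400)
u = np.sin(t)
y = prandtl_ishlinskii(u, thresholds=[0.0, 0.1, 0.2], weights=[1.0, 0.5, 0.25])
# Hysteresis: on the rising and falling branches the same input value
# maps to different outputs, tracing a loop in the (u, y) plane.
```

An exact inverse of such a model, when the weights admit one, is itself a Prandtl-Ishlinskii model with transformed (dilated) thresholds and weights, which is what makes the feedforward-compensation scheme in the abstract practical.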